Readit News
digikata commented on AI doesn’t reduce work, it intensifies it   simonwillison.net/2026/Fe... · Posted by u/walterbell
digikata · 16 hours ago
A couple of historical notes that come to mind.

When washing machines were introduced, the number of hours spent on the chore of laundry did not necessarily decrease until some 40 years after their introduction.

When project management software was introduced, it made the task of managing project tasks easier. One could produce an order of magnitude or more of detailed plans in the same amount of time. Used poorly, this decreased the odds of project success by eating up everyone's time. And the software itself has not moved the needle on the classic success factors of completing within the planned budget, schedule, and resources.

digikata commented on Ask HN: Who wants to be hired? (February 2026)    · Posted by u/whoishiring
digikata · 8 days ago
Location: Portugal

Remote: Yes

Willing to relocate: No

Technologies: Rust, Python, C/C++, TypeScript, LLM APIs, Distributed Systems, Embedded Systems, DevOps, Linux Kernel

Resume: https://uplinklabs.com

Email: alan@uplinklabs.com

Hands-on builder, fractional CTO/architect. 25+ years of US tech experience. Full stack, with data-intensive backend experience. Multi-domain expertise: 0 -> 1 startup stacks, AI prototype cleanup for production, cloud, storage, embedded, autonomous vehicles, regulated industries. Problem solver combining technical and team-leadership skills. Open to fractional and contract opportunities. US B2B invoicing available.

digikata commented on Ask HN: How do you find the "why" behind old code decisions?    · Posted by u/siddhibansal9
pella · 19 days ago
Retroactively, create Lightweight Architecture Decision Records (ADRs) by reconstructing key decisions from the available sources, then make it a habit to maintain them for all future changes.

- https://github.com/peter-evans/lightweight-architecture-deci...

- https://adr.github.io/

- https://www.thoughtworks.com/radar/techniques/lightweight-ar...

digikata · 19 days ago
The easiest approach is to put short notes in comments, and longer explanations in some sort of document referenced from the comments.

Lightweight ADRs are a good recommendation. I've put similar practices in place with teams I've worked with, though I prefer the term "Technical Memo", of which some contain architectural decisions. Retroactive documentation is a little misaligned with the term ADR, in that it isn't really recording a decision being made. I've found that misalignment sometimes makes team members hesitant to record the information.

As for retroactively discovering the why, code archaeology skills in the form of git blame and git log, plus general search skills, are very helpful.
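As a sketch of that kind of archaeology, here is a minimal, illustrative Python wrapper around the relevant git invocations. The file paths, line ranges, and search string in the comments are placeholders, not anything from a real project:

```python
import subprocess

def run_git(repo: str, *args: str) -> str:
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(
        ["git", "-C", repo, *args],
        capture_output=True, text=True, check=True,
    ).stdout

# Who last touched each line, following moves/copies, ignoring whitespace:
#   run_git(".", "blame", "-w", "-C", "src/module.py")
# Every commit that changed a specific line range ("why is this here?"):
#   run_git(".", "log", "-L", "10,20:src/module.py")
# Commits whose diffs added or removed a given string (the "pickaxe"):
#   run_git(".", "log", "-S", "MAGIC_TIMEOUT", "--oneline")
```

git log -L and -S are often the fastest route from a puzzling line to the commit message (and any linked ticket) that explains it.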

digikata commented on TimeCapsuleLLM: LLM trained only on data from 1800-1875   github.com/haykgrigo3/Tim... · Posted by u/admp
digikata · a month ago
A fun use of this kind of approach would be to see whether conversational game NPCs could be generated that stick to the lore of the game and their character.
digikata commented on “Erdos problem #728 was solved more or less autonomously by AI”   mathstodon.xyz/@tao/11585... · Posted by u/cod1r
mjevans · a month ago
I think the question is, how can humans have verification that the problem statement was correctly encoded into that Lean specification?
digikata · a month ago
To borrow some definitions from systems engineering, this question is one of validation. Verification is performed by Lean's syntax and logic enforcement. But validation is the question of whether the Lean spec encodes a true representation of the problem statement (was the right thing specced). Validation at the highest levels is probably an irreplaceable human activity.

Also, on the verification side, there is a window of failure where Lean itself has a hidden bug. And with automated systems that seek correctness, the odds are slightly elevated that some missed crack of a bug gets exploited in the dev-check-dev loop run by the AI.
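A toy Lean 4 sketch of the distinction (the claim and theorem names are invented for illustration): both theorems pass verification, but only the first is a valid encoding of the prose claim "adding one to a natural number makes it strictly larger". The second quietly weakens "strictly larger" to "at least as large", and only a human comparing prose to spec can catch that.

```lean
-- Prose claim to encode: "n + 1 is strictly larger than n".

-- Valid encoding: verification (type checking) and validation both pass.
theorem intended (n : Nat) : n + 1 > n := Nat.lt_succ_self n

-- Mis-encoding: Lean verifies it happily, but it states a weaker
-- property than the prose claim, so validation fails.
theorem misencoded (n : Nat) : n + 1 ≥ n := Nat.le_succ n
```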

digikata commented on What Does a Database for SSDs Look Like?   brooker.co.za/blog/2025/1... · Posted by u/charleshn
esperent · 2 months ago
Don't some SSDs have 512b page size?
digikata · 2 months ago
I would guess that by now none have that internally. As a rule of thumb, every major flash density increase (SLC, MLC, TLC, QLC) also tended to double the internal page size; there were also internal transfer-performance reasons for large sizes. Low-level 16k-64k flash "pages" are common, sometimes with even larger stripes of pages due to the internal firmware sw/hw design.
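As a rough, back-of-the-envelope illustration (all figures below are assumptions for the sketch, not vendor data), the effective write granularity the firmware works with can dwarf a 512-byte logical sector:

```python
# All figures are illustrative assumptions, not vendor specifications.
page_kib = 16        # low-level NAND page size (16-64 KiB is common today)
planes_per_die = 4   # planes programmed in parallel within one die
dies_per_stripe = 8  # dies the firmware stripes a write across

# Effective write stripe: one page from every plane of every die in the
# stripe, programmed together for internal transfer throughput.
stripe_kib = page_kib * planes_per_die * dies_per_stripe
print(f"{stripe_kib} KiB stripe vs a 0.5 KiB logical sector")
```

So even if the host-visible logical block size is 512 bytes, the device internally coalesces writes at a granularity three orders of magnitude larger.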

digikata commented on MinIO is now in maintenance-mode   github.com/minio/minio/co... · Posted by u/hajtom
dardeaup · 2 months ago
I've done some preliminary testing with garage and I was pleasantly surprised. It worked as expected and didn't run into any gotchas.
digikata · 2 months ago
Garage is really good for core S3; the only thing I ran into was that it didn't support object tagging. That's arguably a more esoteric corner of the S3 API, but minio does support it. If you're just standing it in for a test API, object tagging is most likely an unneeded feature anyway.

It's a "Misc" endpoint in the Garage docs here: https://garagehq.deuxfleurs.fr/documentation/reference-manua...

digikata commented on MinIO stops distributing free Docker images   github.com/minio/minio/is... · Posted by u/LexSiga
digikata · 4 months ago
Incidentally, there is an open source S3 project in Rust that I have been following. About a year ago, I swapped in Garage images to replace some minio instances used in CI pipelines - lighter weight and faster to come up.

https://github.com/deuxfleurs-org/garage

digikata commented on Show HN: FeOx – Fast embedded KV store in Rust   github.com/mehrantsi/FeOx... · Posted by u/mehrant
emschwartz · 6 months ago
Sounds interesting, though that durability tradeoff is not one that I’d think most people/applications want to make. When you save something to the DB, you generally want that to mean it’s been durably stored.

Are there specific applications you’re targeting where latency matters more than durability?

digikata · 6 months ago
This seems to be around the durability that most databases can reach. Aside from more specialized hardware arrangements, with a single-computer, embedded database there is always a window of data loss. The durability expectation is that some in-flight window of data will be lost, but on restart it should recover to a consistent state as of the last settled operation, if at all possible.

A related question is whether the code base is mature enough, when configured for higher durability, to work as intended. Even with Rust, there needs to be some hard systems testing, and it's often not just a matter of sprinkling flushes around. Further optimization can try to close the window tighter, maybe with a transaction log, but then you obviously trade some speed for it.
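A minimal Python sketch of what "sprinkling flushes" actually buys and costs: an explicit fsync narrows the loss window from "whenever the kernel writes back the page cache" down to the device's own write-back behavior, at the price of a synchronous round trip per call. The function name here is invented for illustration, not anything from FeOx:

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and force it to stable storage before returning.

    os.write alone only reaches the OS page cache; a crash before
    writeback loses the data. os.fsync blocks until the kernel reports
    the bytes have been handed to the device.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, data)
        os.fsync(fd)  # the durability point: nothing is promised before this
    finally:
        os.close(fd)
```

A transaction log amortizes this cost by letting many logical operations share one append-plus-fsync, which is exactly the speed-for-durability trade being discussed.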

u/digikata

Karma: 2438 · Cake day: September 19, 2010
About
Fractional advisement and software development w/ Uplink Labs

Reach me on Fedi/Mastodon @digikata@fosstodon.org
