Readit News
thatoneengineer commented on AI will make formal verification go mainstream   martin.kleppmann.com/2025... · Posted by u/evankhoury
pron · 14 hours ago
> it’s not hard to extrapolate and imagine that process becoming fully automated in the next few years. And when that happens, it will totally change the economics of formal verification.

There is a problem with this argument, similar to one made about imagining the future possibilities of vibe coding [1]: once we imagine AI doing this task, i.e. automatically proving software correct, we can just as easily imagine it not having to do it (for us) in the first place. If AI can do the hardest things, the ones it is currently not very good at, there's no reason to assume it won't be able to do the easier things, or the things it currently does better. In particular, we won't need it to verify our software for us, because there's no reason to believe it won't be able to come up with what software we need better than we can in the first place. It will come up with the idea, implement it, and then decide to what extent to verify it. Formal verification, or programming for that matter, will not become mainstream (as a human activity) but go extinct.

Indeed, it is far easier for humans to design and implement a proof assistant than it is to use one to verify a substantial computer program. A machine that can effectively use a proof checker will surely be able to come up with a novel proof checker on its own.

I agree it's not hard to extrapolate technological capabilities, but such extrapolation has a name: science fiction. Without a clear understanding of what makes things easier or harder for AI (in the near future), any prediction is based on arbitrary guesses that AI will be able to do X yet not Y. We can imagine any conceivable capability or limitation we like. In science fiction we see technology that's both capable and limited in some rather arbitrary ways.

It's like trying to imagine what problems computers can and cannot efficiently solve before discovering the notion of computational complexity classes.

[1]: https://news.ycombinator.com/item?id=46207505

thatoneengineer · 13 hours ago
I disagree. Right now, feedback on correctness is a major practical limitation on the usefulness of AI coding agents. They can fix compile errors on their own, they can _sometimes_ fix test errors on their own, but fixing functionality / architecture errors takes human intervention. Formal verification basically turns (a subset of) functionality errors into compile errors, making the feedback loop much stronger. "Come up with what software we need better than us in the first place" is much higher on the ladder than that.
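
To make the feedback-loop point concrete, here's a toy Rust sketch (all names hypothetical, and only a faint shadow of real formal verification): encoding a protocol invariant in the type system turns "send before connect", which would otherwise be a functionality bug found at runtime, into a compile error an agent can fix from compiler output alone.

```rust
use std::marker::PhantomData;

// Typestate pattern: the connection state lives in the type parameter.
struct Disconnected;
struct Connected;

struct Channel<State> {
    _state: PhantomData<State>,
}

impl Channel<Disconnected> {
    fn new() -> Self {
        Channel { _state: PhantomData }
    }
    // Consumes the disconnected channel, returns a connected one.
    fn connect(self) -> Channel<Connected> {
        Channel { _state: PhantomData }
    }
}

impl Channel<Connected> {
    // `send` only exists on a connected channel.
    fn send(&self, msg: &str) -> usize {
        msg.len() // stand-in for real I/O
    }
}

fn main() {
    let chan = Channel::new().connect();
    assert_eq!(chan.send("hello"), 5);
    // The bug class is now unrepresentable; uncommenting this fails to compile:
    // Channel::new().send("hello"); // no method `send` on `Channel<Disconnected>`
}
```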

TL;DR: We don't need to be radically agnostic about the capabilities of AI: we already have enough experience with the software value chain (with and without AI) for formal verification to be an appealing next step, for the reasons this author lays out.

thatoneengineer commented on Tesla Is Testing Robotaxis Without Safety Drivers – Or Riders   gizmodo.com/tesla-is-test... · Posted by u/HiroProtagonist
thatoneengineer · 2 days ago
Isn't this just a normal step in self-driving testing? Weird that Gizmodo is being so negative about it.
thatoneengineer commented on Patterns for Defensive Programming in Rust   corrode.dev/blog/defensiv... · Posted by u/PaulHoule
thatoneengineer · 12 days ago
Aside from just being immensely satisfying, these patterns of defensive programming may be a big part of how we get to quality GAI-written code at scale. Clippy (and the Rust compiler proper) can provide so much of the concrete iterative feedback an agent needs to stay on track and not gloss over mistakes.
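
One sketch of such a pattern (a made-up `Port` type, not taken from the article): "parse, don't validate". A fallible constructor is the only way to build the type, so invalid values are rejected once at the boundary, and both downstream code and the compiler/Clippy feedback an agent sees can rely on the invariant.

```rust
// Hypothetical example: a port number validated at construction time.
#[derive(Debug, PartialEq)]
struct Port(u16);

impl Port {
    // Reject out-of-range values once, here, instead of checking everywhere.
    fn new(raw: u32) -> Result<Self, String> {
        u16::try_from(raw)
            .map(Port)
            .map_err(|_| format!("{raw} is not a valid port"))
    }
}

fn main() {
    assert_eq!(Port::new(8080), Ok(Port(8080)));
    assert!(Port::new(70_000).is_err()); // out of u16 range
}
```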
thatoneengineer commented on Anthropic acquires Bun   bun.com/blog/bun-joins-an... · Posted by u/ryanvogel
bblaylock · 15 days ago
This reads more like Anthropic wanted to hire Jarred, and Jarred wanted to work with AI rather than build a SaaS product around Bun. I doubt it has anything to do with what is best for Bun the project. Considering Bun always seemed to value performance above all else, the only real way for them to keep pursuing that value would be to move into actual JS engine design. This seems like a good pivot for Jarred personally, and likely a loss for Bun.
thatoneengineer · 15 days ago
Nah, it reads like the normal logic behind the consulting model for open source monetization, except that Bun was able to make it work with just one customer. Good for them, though it comes with some risks, especially when structured as an acquisition.

thatoneengineer commented on Cloudflare outage on November 18, 2025 post mortem   blog.cloudflare.com/18-no... · Posted by u/eastdakota
thatoneengineer · a month ago
The unwrap: not great, but understandable. Better to keep running on a partial config while paging oncall through some other channel, but that's a lot of engineering for a case that apparently is supposed to be "can't happen".

The lack of canary: cause for concern, but I more or less believe Cloudflare when they say this is unavoidable given the use case. Good reason to be extra careful though, which in some ways they weren't.

The slowness to root cause: sheer bad luck, with the status page down and Azure's DDoS yesterday all over the news.

The broken SQL: this is the one that I'd be up in arms about if I worked for Cloudflare. For a system with the power to roll out config to ~all of prod at once while bypassing a lot of the usual change tracking, having this escape testing and review is a major miss.

thatoneengineer commented on Cloudflare outage on November 18, 2025 post mortem   blog.cloudflare.com/18-no... · Posted by u/eastdakota
rafaelmn · a month ago
That's such a bad take after reading the article. If you're going to write a system that preallocates and is based on hard assumptions about max size, the panic/unwrap approach is reasonable.

The config bug reaching prod without this being caught and pinpointed immediately is the strange part.

thatoneengineer · a month ago
I agree there's no way to soft-error this, though "truncate and raise an alert" is arguably the better pattern.
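
A minimal sketch of that pattern, with hypothetical names, for a preallocated fixed-capacity config: keep serving with the first `MAX_FEATURES` entries and surface the overflow, instead of panicking.

```rust
const MAX_FEATURES: usize = 200;

// Returns the (possibly truncated) feature list plus an optional alert.
// In production the alert would page oncall; here we just hand it back.
fn load_features(mut features: Vec<String>) -> (Vec<String>, Option<String>) {
    if features.len() > MAX_FEATURES {
        let dropped = features.len() - MAX_FEATURES;
        features.truncate(MAX_FEATURES);
        return (
            features,
            Some(format!("config overflow: dropped {dropped} entries")),
        );
    }
    (features, None)
}

fn main() {
    let oversized: Vec<String> = (0..250).map(|i| i.to_string()).collect();
    let (kept, alert) = load_features(oversized);
    assert_eq!(kept.len(), MAX_FEATURES);
    assert!(alert.is_some());
}
```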
thatoneengineer commented on Google Antigravity   antigravity.google/... · Posted by u/Fysi
sorokod · a month ago
"junior developers" is a convenient label. It's incorrect, but it will take a while until we come up with something that describes entities that:

- can write code

- are tireless

- have no aspirations

- have no stylistic or architectural preferences

- have a massive, but at the same time not well defined, body of knowledge

- have no intrinsic memories of past interactions.

- change in unexpected ways when underlying models change

- ...

Edit: Drones? Drains?

thatoneengineer · a month ago
"brooms"

u/thatoneengineer · Karma: 108 · Cake day: November 14, 2025