QuadrupleA · 2 months ago
I don't think formal verification really addresses most day-to-day programming problems:

    * A user interface is confusing, or the English around it is unclear
    * An API you rely on changes, is deprecated, etc.
    * Users use something in unexpected ways
    * Updates forced by vendors or open source projects cause things to break
    * The customer isn't clear what they want
    * Complex behavior between interconnected systems, out of the purview of the formal language (OS + database + network + developer + VM + browser + user + web server)
For some mathematically pure task, sure, it's great. Or a low-level library like a regular expression parser or a compression codec. But I don't think that represents a lot of what most of us are tasked with, and those low-level "mathematically pure" libraries are generally pretty well handled by now.

Byamarro · 2 months ago
In fact, automated regression tests done by AI with visual capabilities may have a bigger impact than formal verification. You can have an army of testers now, painstakingly going through every corner of your software.
petesergeant · 2 months ago
In practice it ends up being a bit like static analysis, though, in that you get a ton of false positives.

All said, I’m now running all commits through Codex (which is the only thing it’s any good at), and it’s really pretty good at code reviews.

Maxion · 2 months ago
This will only work somewhat when customers expect features to work in a standard way. When customers spec things to work in non-standard ways, you'll just end up with a bunch of false positives.
adrianN · 2 months ago
TBH most day-to-day programming problems are barely worth having tests for. But if we had formal specs, and even just hand-wavy correspondences between the specs and the implementation, for the low-level things everybody depends on, that would be a huge improvement for the reliability of the whole ecosystem.
gizmo686 · 2 months ago
A limited form of formal verification is already mainstream. It is called type systems. The industry in general has been slowly moving to encode more invariants into the type system, because every invariant that is in the type system is something you can stop thinking about until the type checker yells at you.

A lot of libraries document invariants that are either not checked at all, checked only at runtime, or somewhere in between. For instance, the requirement that a collection not be modified during iteration. Or that two regions of memory do not overlap, or that a variable is not modified without owning a lock. These are all things that, in principle, can be formally verified.
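
To make the lock example concrete, here is a minimal Rust sketch (the Counter type and its field are made up for illustration): the data lives inside the Mutex, so "modified without owning the lock" simply cannot be written in safe code.

    use std::sync::Mutex;

    struct Counter {
        // The value is only reachable through the Mutex, so every
        // modification must first acquire the lock.
        value: Mutex<u64>,
    }

    impl Counter {
        fn increment(&self) {
            // The guard is the compile-time evidence of lock ownership.
            let mut guard = self.value.lock().unwrap();
            *guard += 1;
        } // Guard dropped here; the lock is released.
    }

    fn main() {
        let c = Counter { value: Mutex::new(0) };
        c.increment();
        println!("{}", *c.value.lock().unwrap());
    }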

No one claims that good type systems prevent buggy software. But, they do seem to improve programmer productivity.

For LLMs, there is an added benefit. If you can formally specify what you want, you can make that specification your entire program. Then have an LLM driven compiler produce a provably correct implementation. This is a novel programming paradigm that has never before been possible; although every "declarative" language is an attempt to approximate it.

elbear · 2 months ago
> No one claims that good type systems prevent buggy software.

That's exactly what languages with advanced type systems claim. To be more precise, they claim to eliminate entire classes of bugs. So they reduce bugs, they don't eliminate them completely.

skissane · 2 months ago
> For LLMs, there is an added benefit. If you can formally specify what you want, you can make that specification your entire program. Then have an LLM driven compiler produce a provably correct implementation. This is a novel programming paradigm that has never before been possible; although every "declarative" language is an attempt to approximate it.

The problem is there is always some chance a coding agent will get stuck and be unable to produce a conforming implementation in a reasonable amount of time. And then you are back in a similar place to what you were with those pre-LLM solutions - needing a human expert to work out how to make further progress.

kreetx · 2 months ago
Some type systems (e.g. Haskell's) are closing in on becoming formal verification languages themselves.
blub · 2 months ago
> No one claims that good type systems prevent buggy software. But, they do seem to improve programmer productivity.

To me it seems they reduce productivity. In fact, for Rust, which seems to match the examples you gave about locks or regions of memory, the common wisdom is that it takes longer to start a project, but one reaps the benefits later thanks to more confidence when refactoring or adding code.

However, even that weaker claim hasn’t been proven.

In my experience, the more information is encoded in the type system, the more effort is required to change code. My initial enthusiasm for the idea of Ada and Spark evaporated when I saw how much ceremony the code required.

Marazan · 2 months ago
> For LLMs, there is an added benefit. If you can formally specify what you want, you can make that specification your entire program. Then have an LLM driven compiler produce a provably correct implementation. This is a novel programming paradigm that has never before been possible; although every "declarative" language is an attempt to approximate it.

That is not novel and every declarative language precisely embodies it.

devin · 2 months ago
> No one claims that good type systems prevent buggy software. But, they do seem to improve programmer productivity.

They really don’t. How did you arrive at such a conclusion?

nwah1 · 2 months ago
> Complex behavior between interconnected systems, out of the purview of the formal language (OS + database + network + developer + VM + browser + user + web server)

Isn't this what TLA+ was meant to deal with?

skydhash · 2 months ago
Not really. Some components have a lot of properties that are very difficult to model. Take network latency, or storage performance in the OS.
ScottBurson · 2 months ago
Actually, formal verification could help massively with four of those problems — all but the first (UI/UX) and fifth (requirements will always be hard).

A change in the API of a dependency should be detected immediately and handled silently.

Reliance on unspecified behavior shouldn't happen in the first place; the client's verification would fail.

Detecting breakage caused by library changes should be where verification really shines; when you get the update, you try to re-run your verification, and if that fails, it tells you what the problem is.

As for interconnected systems, again, that's pretty much the whole point. Obviously, achieving this dream will require formalizing pretty much everything, which is well beyond our capabilities now. But eventually, with advances in AI, I think it will be possible. It will take something fundamentally better than today's LLMs, though.

raxxorraxor · 2 months ago
That has been the problem with unit and integration tests all the time. Especially for systems that tend to be distributed.

AI makes creating mock objects much easier in some cases, but it still creates a lot of busy work and makes configuration more difficult. And at this point it is often difficult configuration management that causes the issues in the first place. Putting everything in some container doesn't help either; on the contrary.

ErroneousBosh · 2 months ago
> But I don't think that represents a lot of what most of us are tasked with

Give me a list of all the libraries you work with that don't have some sort of "okay but not that bit" rule in the business logic, or "all of those functions are f(src, dst) but the one you use most is f(dst, src) and we can't change it now".

I bet it's a very short list.

Really we need to scrap every piece of software ever written and start again from scratch with all these weirdnesses written down so we don't do it again, but we never will.

bluGill · 2 months ago
Scrapping everything wouldn't help. 15 years ago the project I'm on did that - for a billion dollars. We fixed the old mistakes but made plenty of new ones along the way. We are trying to fix those now, and I can't help but wonder what new mistakes we are making that in 15 years we will regret.
wolfgangbabad · 2 months ago
Yeah, there have been about 5 or 10 videos in the past few weeks about this "complexity" and the unpredictability of third parties and moving parts that AI doesn't control and even forgets - small context window. I am sure you have seen at least one of them ;)

But it's true. AI is still super narrow and dumb. It doesn't even understand basic prompts.

Look at computer games now - they still don't look real despite almost 30 years since Half-Life 1 started the revolution, I would claim. Damn, I think I even ran it on a 166 MHz computer on the lowest details.

Yes, it keeps getting better and better, but it still looks super uncanny - at least to me. And it's been basically 30 years of constant improvements. Heck, Roomba is going bankrupt.

I am not saying things don't improve, but the hype and AI bubble are insane and the reality doesn't match the expectations and predictions at all.

est · 2 months ago
> An API you rely on changes, is deprecated, etc

Formal verification will eventually lead to good, stable API design.

> Users use something in unexpected ways

> Complex behavior between interconnected systems

It happens when there's no formal verification during the design stage.

Formal verification literally means covering 100% of state changes, and for every possible input/output, every execution branch should be tested.

Almondsetat · 2 months ago
Formal verification has nothing to do with the quality of the API.

Given the spec, formal verification can tell you if your implementation follows the spec. It cannot tell you if the spec is good.

Joker_vD · 2 months ago
> Formal verification will eventually lead to good, stable API design.

Why? Has it ever happened like this? Because to me it would seem that if the system is verified to work, then it works no matter how the API is shaped, so there is no incentive to change it to something better.

ehnto · 2 months ago
100% of the state changes in business software are unknowable on a long horizon, and covering them relies on thoroughly understanding business logic that is often fuzzy, not discrete and certain.
bkettle · 2 months ago
I think formal verification shines in areas where implementation is much more complex than the spec, like when you’re writing incomprehensible bit-level optimizations in a cryptography implementation or compiler optimization phases. I’m not sure that most of us, day-to-day, write code (or have AI write code) that would benefit from formal verification, since to me it seems like high-level programming languages are already close to a specification language. I’m not sure how much easier to read a specification format that didn’t concern itself with implementation could be, especially when we currently use all kinds of frameworks and libraries that already abstract away implementation details.

Sure, formal verification might give stronger guarantees about various levels of the stack, but I don’t think most of us care about having such strong guarantees now and I don’t think AI really introduces a need for new guarantees at that level.

pron · 2 months ago
> to me it seems like high-level programming languages are already close to a specification language

They are not. The power of rich and succinct specification languages (like TLA+) comes from the ability to succinctly express things that cannot be efficiently computed, or at all. That is because a description of what a program does is necessarily at a higher level of abstraction than the program (i.e. there are many possible programs or even magical oracles that can do what a program does).

To give a contrived example, let's say you want to state that a particular computation terminates. To do it in a clear and concise manner, you want to express the property of termination (and prove that the computation satisfies it), but that property is not, itself, computable. There are some ways around it, but as a rule, a specification language is more convenient when it can describe things that cannot be executed.
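
For a flavor of what that looks like, here is a tiny Lean 4 sketch (my own toy example, not from the thread): the property quantifies over every possible input, so no amount of execution or testing can establish it, yet it is short and precise as a statement.

    -- The property of being sorted, as a proposition rather than a program.
    def Sorted : List Nat → Prop
      | [] => True
      | [_] => True
      | a :: b :: rest => a ≤ b ∧ Sorted (b :: rest)

    -- A specification for any candidate sorting function. It quantifies over
    -- all inputs, something no finite test suite or execution can cover.
    def SortSpec (sort : List Nat → List Nat) : Prop :=
      ∀ xs : List Nat, Sorted (sort xs) ∧ (sort xs).length = xs.length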

nyrikki · 2 months ago
TLA+ is not a silver bullet, and like all temporal logic, has constraints.

You really have to be able to reduce your models to: “at some point in the future, this will happen," or "it will always be true from now on”

Have probabilistic outcomes? Or even floats [0]? It becomes challenging, and strings are a mess.

> Note there is not a float type. Floats have complex semantics that are extremely hard to represent. Usually you can abstract them out, but if you absolutely need floats then TLA+ is the wrong tool for the job.

TLA+ works for the problems it is suited to; try to extend past that and it simply fails.

[0] https://learntla.com/core/operators.html

eru · 2 months ago
What you said certainly works, but I'm not sure computability is actually the biggest issue here?

Have a look at how SAT solvers or Mixed Integer Linear Programming solvers are used.

There you specify a clear goal (with your code), and then you let the solvers run. You can, but you don't need to, let the solvers run all the way to optimality. And the solvers are also allowed to use all kinds of heuristics to find their answers, but that doesn't impact the statement of your objective.

Compare that to how many people write code without solvers: the objective of what your code is trying to achieve is seldom clearly spelled out, and is instead mixed up with the how-to-compute bits, including all the compromises and heuristics you make to get a reasonable runtime or to accommodate some changes in the spec your boss asked for at the last minute.

Using a solver ain't formal verification, but it shows the same separation between spec and implementation.

Another benefit of formal verification, that you already imply: your formal verification doesn't have to determine the behaviour of your software, and you can have multiple specs simultaneously. But you can only have a single implementation active at a time (even if you use a high level implementation language.)

So you can add 'handling a user request must terminate in finite time' as a (partial) spec. It's an important property, but it tells you almost nothing about the required business logic. In addition you can add "users shouldn't be able to withdraw more than they deposited" (and other more complicated rules), and you only have to review these rules once, and don't have to touch them again, even when you implement a clever new money transfer routine.
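
That second rule is also the kind of thing you can approximate cheaply with property-based tests while waiting for a full formal spec. A rough Rust sketch using the proptest crate (the Account type and the operation encoding are made up for illustration):

    use proptest::prelude::*;

    // A toy account: deposits and withdrawals, nothing else.
    #[derive(Default)]
    struct Account {
        balance: u64,
    }

    impl Account {
        fn deposit(&mut self, amount: u64) {
            self.balance = self.balance.saturating_add(amount);
        }
        // Withdrawals that exceed the balance are rejected.
        fn withdraw(&mut self, amount: u64) -> Result<(), ()> {
            if amount > self.balance {
                return Err(());
            }
            self.balance -= amount;
            Ok(())
        }
    }

    proptest! {
        // The rule as a property: over arbitrary operation sequences, the
        // total successfully withdrawn never exceeds the total deposited.
        #[test]
        fn cannot_withdraw_more_than_deposited(
            ops in prop::collection::vec((any::<bool>(), 0u64..1_000), 0..50)
        ) {
            let mut account = Account::default();
            let (mut deposited, mut withdrawn) = (0u64, 0u64);
            for (is_deposit, amount) in ops {
                if is_deposit {
                    account.deposit(amount);
                    deposited += amount;
                } else if account.withdraw(amount).is_ok() {
                    withdrawn += amount;
                }
            }
            prop_assert!(withdrawn <= deposited);
        }
    }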

avmich · 2 months ago
Peter Norvig once proposed to consider a really large grammar, with a trillion rules, which could simulate some practically small applications of more complex systems. Many programs in practice don't need to be written in Turing-complete languages, and can be proven to terminate.
mrkeen · 2 months ago
Can TLA+ prove anything about something you specify but don't execute?
anon-3988 · 2 months ago
> To give a contrived example, let's say you want to state that a particular computation terminates. To do it in a clear and concise manner, you want to express the property of termination (and prove that the computation satisfies it), but that property is not, itself, computable. There are some ways around it, but as a rule, a specification language is more convenient when it can describe things that cannot be executed.

Do you really think it is going to be easier for the average developer to write a specification that their program terminates

vs

giving them a framework or a language that does not have a for loop?

Edit: If by formal verification you mean type checking, then I very much agree.

socketcluster · 2 months ago
Yes. I feel like people who are trying to push software verification have never worked on typical real-world software projects where the spec is like 100 pages long and still doesn't fully cover all the requirements and you still have to read between the lines and then requirements keep changing mid-way through the project... Implementing software to meet the spec takes a very long time and then you have to invest a lot of effort and deep thought to ensure that what you've produced fits within the spec so that the stakeholder will be satisfied. You need to be a mind-reader.

It's hard even for a human who understands the full business, social and political context to disambiguate the meaning and intent of the spec; to try to express it mathematically would be an absolute nightmare... and extremely unwise. You would literally need some kind of super intelligence... And the amount of stream-of-thought tokens which would have to be generated to arrive at a correct, consistent, unambiguous formal spec is probably going to cost more than just hiring top software engineers to build the thing with 100% test coverage of all main cases and edge cases.

Worst part is: after you do all the expensive work of formal verification, you end up proving the 'correctness' of a solution that the client doesn't want.

The refactoring required will invalidate the entire proof from the beginning. We haven't even figured out the optimal way to formally architect software that is resilient to requirement changes; in fact, the industry is REALLY BAD at this. Almost nobody is even thinking about it. I am, but I sometimes feel like I may be the only person in the world who cares about designing optimal architectures to minimize line count and refactoring diff size. We'd have to solve this problem first before we even think about formal verification of 'most software'.

Without a hypothetical super-intelligence which understands everything about the world, the risk of misinterpreting any given 'typical' requirement is almost 100%... And once we have such super-intelligence, we won't need formal verification because the super-intelligence will be able to code perfectly on the first attempt; no need to verify.

And then there's the fact that most software can tolerate bugs... If operationally important big tech software which literally has millions of concurrent users can tolerate bugs, then most software can tolerate bugs.

DennisP · 2 months ago
Software verification has gotten some use for smart contracts. The code is fairly simple, it's certain to be attacked by sophisticated hackers who know the source, and the consequence of failure is theft of funds, possibly in large amounts. 100% test coverage is no guarantee that an attack can't be found.

People spend gobs of money on human security auditors who don't necessarily catch everything either, so verification easily fits in the budget. And once deployed, the code can't be changed.

Verification has also been used in embedded safety-critical code.

robot-wrangler · 2 months ago
The whole perspective of this argument is hard for me to grasp. I don't think anyone is suggesting that formal specs are an alternative to code, they are just an alternative to informal specs. And actually with AI the new spin is that they aren't even a mutually exclusive alternative.

A bidirectional bridge that spans multiple representations from informal spec to semiformal spec to code seems ideal. You change the most relevant layer that you're interested in and then see updates propagating semi-automatically to other layers. I'd say the jury is out on whether this uses extra tokens or saves them, but a few things we do know. Chain of code works better than chain of thought, and chain-of-spec seems like a simple generalization. Markdown-based planning and task-tracking agent workflows work better than just YOLOing one-shot changes everywhere, and so intermediate representations are useful.

It seems to me that you can't actually get rid of specs, right? So to shoot down the idea of productive cooperation between formal methods and LLM-style AI, one really must successfully argue that informal specs are inherently better than formal ones. Or even stronger: having only informal specs is better than having informal+formal.

ad_hockey · 2 months ago
I agree that trying to produce this sort of spec for the entire project is probably a fool's errand, but I still see the value for critical components of the system. Formally verifying the correctness of balance calculation from a ledger, or that database writes are always persisted to the write ahead log, for example.
qingcharles · 2 months ago
I used to work adjacent to a team who worked from closely-defined specs for web sites, and it used to infuriate the living hell out of me. The specs had all sorts of horrible UI choices and bugs and stuff that just plain wouldn't work when coded. I tried my best to get them to implement the intent of the spec, not the actual spec, but they had been trained in one method only and would not deviate at any cost.
marcosdumay · 2 months ago
There are many really important properties to enforce even on the most basic CRUD system. You can easily say things like "an anonymous user must never edit any data, except for the create account form", or "every user authorized to see a page must be listed on the admin page that lists what users can see a page".

People don't verify those because it's hard, not for lack of value.
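
You can get part of the way with types alone, though. A minimal sketch (hypothetical types, not any real framework's API): the only way to obtain an AuthenticatedUser value is through the session check, so every data-editing handler that requires one is unreachable for anonymous requests by construction, with account creation as the one deliberate exception.

    // The only way to construct this value is via verify_session below.
    struct AuthenticatedUser {
        id: u64,
    }

    // Returns None for anonymous visitors; Some(..) only for valid sessions.
    fn verify_session(token: &str) -> Option<AuthenticatedUser> {
        if token.is_empty() {
            None
        } else {
            Some(AuthenticatedUser { id: 42 }) // stand-in for a real session lookup
        }
    }

    // Mutating endpoints demand the proof-of-authentication value.
    fn edit_profile(user: &AuthenticatedUser, new_name: &str) -> String {
        format!("user {} renamed to {}", user.id, new_name)
    }

    // The create-account form is the stated exception: no AuthenticatedUser needed.
    fn create_account(email: &str) -> String {
        format!("created account for {}", email)
    }

    fn main() {
        if let Some(user) = verify_session("session-token") {
            println!("{}", edit_profile(&user, "Ada"));
        }
        println!("{}", create_account("ada@example.com"));
    }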

nextos · 2 months ago
Yes, in fact there is research on type systems to ensure information flow control, avoiding unauthorized data access by construction.

Concrete Semantics [1] has a little example in §9.2.

[1] http://concrete-semantics.org/concrete-semantics.pdf

bkettle · 2 months ago
Yeah fair enough. I can definitely see the value of property-based verification like this and agree that useful properties could be easy to express and that LLMs could feasibly verify them. I think full verification that an implementation implements an entire spec and nothing else seems much less practical even with AI, but of course that is just one flavor of verification.
Maxion · 2 months ago
Even

> "an anonymous user must never edit any data, except for the create account form"

Can quickly end up being

> "an anonymous user must never edit any data, except for the create account form, and the feedback form"

And a week later go to

> "an anonymous user must never edit any data, except for the create account form, the feedback form, and the error submission form if they end up with a specific type of error"

And then during Christmas

> "an anonymous user must never edit any data, except for the create account form, the feedback form, and the error submission form if they end up with a specific type of error, and the order submission form if they visit it from this magic link. Those visiting from the magic link should not be able to use the feedback form (Marge had a bad experience last Christmas going through feedback from the promotional campaign)"

rmah · 2 months ago
Yes, except their cookie preferences to comply with European law. Oh, and they should be able to change their theme from light/dark but only that. Oh, and maybe this other thing. Except in situations where it would conflict with current sales promotions. Unless they're referred by a reseller partner. Unless it's during a demo, of course. Etc, etc, etc.

This is the sort of reality that a lot of developers in the business world deal with.

amw-zero · 2 months ago
Compare the spec with the application here: https://concerningquality.com/model-based-testing/

I think we've become used to the complexity in typical web applications, but there's a difference between familiar and simple (simple vs. easy, as it were). The behavior of most business software can be very simply expressed using simple data structures (sets, lists, maps) and simple logic.

No matter how much we simplify it, via frameworks and libraries or what have you, things like serialization, persistence, asynchrony, concurrency, and performance end up complicating the implementation. Comparing this against a simpler spec is quite nice in practice - and a huge benefit is that now you can consult a simple in-memory spec vs. worrying about distributed system deployments.
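
As a rough illustration of the shape of this (a made-up key-value example, not the one from the linked post): the model is just a map in memory, and model-based testing replays the same operations against both the model and the real system and compares what they return.

    use std::collections::HashMap;

    // The "spec": what a key-value store means, with no persistence,
    // serialization, or concurrency concerns.
    #[derive(Default)]
    struct ModelStore {
        data: HashMap<String, String>,
    }

    impl ModelStore {
        fn put(&mut self, key: &str, value: &str) {
            self.data.insert(key.to_string(), value.to_string());
        }
        fn get(&self, key: &str) -> Option<String> {
            self.data.get(key).cloned()
        }
    }

    fn main() {
        // In model-based testing, this same sequence of operations would also
        // be sent to the real implementation and the results compared.
        let mut model = ModelStore::default();
        model.put("user:1", "Ada");
        assert_eq!(model.get("user:1"), Some("Ada".to_string()));
        assert_eq!(model.get("user:2"), None);
    }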

giancarlostoro · 2 months ago
> especially when we currently use all kinds of frameworks and libraries that already abstract away implementation details.

This is my issue with algorithm driven interviewing. Even the creator of Homebrew got denied by Google because he couldn't do some binary sort or whatever it even was. He made a tool used by millions of developers, but apparently that's not good enough.

StilesCrisis · 2 months ago
Google denies qualified people all the time. They would much rather reject a great hire than take a risk on accepting a mediocre one. I feel for him but it's just the nature of the beast. Not everyone will get in.
UltraSane · 2 months ago
AWS has said that having formal verification of code lets them be more aggressive in optimization while being confident it still adheres to the spec. They claim they were able to double the speed of IAM API auth code this way.
simonw · 2 months ago
I'm convinced now that the key to getting useful results out of coding agents (Claude Code, Codex CLI etc) is having good mechanisms in place to help those agents exercise and validate the code they are writing.

At the most basic level this means making sure they can run commands to execute the code - easiest with languages like Python, with HTML+JavaScript you need to remind them that Playwright exists and they should use it.

The next step up from that is a good automated test suite.

Then we get into quality of code/life improvement tools - automatic code formatters, linters, fuzzing tools etc.

Debuggers are good too. These tend to be less coding-agent friendly due to them often having directly interactive interfaces, but agents can increasingly use them - and there are other options that are a better fit as well.

I'd put formal verification tools like the ones mentioned by Martin on this spectrum too. They're potentially a fantastic unlock for agents - they're effectively just niche programming languages, and models are really good at even niche languages these days.

If you're not finding any value in coding agents but you've also not invested in execution and automated testing environment features, that's probably why.

roadside_picnic · 2 months ago
I very much agree, and believe using languages with powerful types systems could be a big step in this direction. Most people's first experience with Haskell is "wow this is hard to write a program in, but when I do get it to compile, it works". If this works for human developers, it should also work for LLMs (especially if the human doesn't have to worry about the 'hard to write a program' part).

> The next step up from that is a good automated test suite.

And if we're going for a powerful type system, then we can really leverage the power of property tests which are currently grossly underused. Property tests are a perfect match for LLMs because they allow the human to create a small number of tests that cover a very wide surface of possible errors.

The "thinking in types" approach to software development in Haskell allows the human user to keep at a level of abstraction that still allows them to reason about critical parts of the program while not having to worry about the more tedious implementation parts.

Given how much interest there has been in using LLMs to improve Lean code for formal proofs in the math community, maybe there's a world where we make use of an even more powerful type system than Haskell's. If LLMs with the right language can help prove complex mathematical theorems, then it should certainly be possible to write better software with them.

nextos · 2 months ago
That's my opinion as well. Some functional language that also offers access to imperative features when needed, plus an expressive type system, might be the future.

My bet is on refinement types. Dafny fits that bill quite well, it's simple, it offers refinement types, and verification is automated with SAT/SMT.

In fact, there are already serious industrial efforts to generate Dafny using LLMs.

Besides, some of the largest verification efforts have been achieved with this language [1].

[1] https://www.andrew.cmu.edu/user/bparno/papers/ironfleet.pdf

astrostl · 2 months ago
This is why I use Go as much as reasonably possible with vibe coding: types, plus great quality-checking ecosystem, plus adequate training data, plus great distribution story. Even when something has stuff like JS and Python SDKs, I tend to skip them and go straight to the API with Go.
justatdotin · 2 months ago
I love ML types, but I've concluded they serve humans more than they do agents. I'm sure they affect the agent, maybe just not as much as other choices.

I've noticed real advantages of functional languages to agents, for disposable code. Which is great, cos we can leverage those without dictating the human's experience.

I think the correct way forward is to choose whatever language the humans on your team agree is most useful. For my personal projects, that means a beautiful language for the bits I'll be touching, and whatever gets the job done elsewhere.

bcrosby95 · 2 months ago
Ada when?

It even lets you separate implementation from specification.

ManuelKiessling · 2 months ago
Isn't it funny how that's exactly the kind of stuff that helps a human developer be successful and productive, too?

Or, to put it the other way round, what kind of tech leads would we be if we told our junior engineers "Well, here's the codebase, that's all I'll give you. No debuggers, linters, or test runners for you. Using a browser on your frontend implementation? Nice try buddy! Now good luck getting those requirements implemented!"

Wowfunhappy · 2 months ago
> Isn't it funny how that's exactly the kind of stuff that helps a human developer be successful and productive, too?

I think it's more nuanced than that. As a human, I can manually test code in ways an AI still can't. Sure, maybe it's better to have automated test suites, but I have other options too.

mccoyb · 2 months ago
Claude Code was a big jump for me. Another large-ish jump was multi-agents and following the tips from Anthropic’s long running harnesses post.

I don't go into Claude without everything already set up. Codex helps me curate the plan and curate the issue tracker (one instance). Claude gets a command to fire up into context, grab an issue, and implement it, and then Codex and Gemini review independently.

I’ve instructed Claude to go back and forth for as many rounds as it takes. Then I close the session (\new) and do it again. These are all the latest frontier models.

This is incredibly expensive, but it’s also the most reliable method I’ve found to get high-quality progress — I suspect it has something to do with ameliorating self-bias, and improving the diversity of viewpoints on the code.

I suspect rigorous static tooling is yet another layer to improve the distribution over program changes, but I do think that there is a big gap in folk knowledge already between “vanilla agents” and something fancy with just raw agents, and I’m not sure if just the addition of more rigorous static tooling (beyond the compiler) closes it.

idiotsecant · 2 months ago
How expensive is incredibly expensive?
rmah · 2 months ago
I think the main limitation is not code validation but assumption verification. When you ask an LLM to write some code based on a few descriptive lines of text, it is, by necessity, making a ton of assumptions. Oddly, none of the LLMs I've seen ask for clarification when multiple assumptions might all be likely. Moreover, from the behavior I've seen, they don't really backtrack to select a new assumption based on further input (I might be wrong here, it's just a feeling).

What you don't specify, it must assume. And therein lies a huge landscape of possibilities. And since the AI can't read your mind (yet), its assumptions will probably not precisely match your assumptions unless the task is very limited in scope.

crazygringo · 2 months ago
> Oddly, none of the LLM's I've seen ask for clarification when multiple assumptions might all be likely.

It's not odd, they've just been trained to give helpful answers straight away.

If you tell them not to make assumptions and to rather first ask you all their questions together with the assumptions they would make because you want to confirm before they write the code, they'll do that too. I do that all the time, and I'll get a list of like 12 things to confirm/change.

That's the great thing about LLMs -- if you want them to change their behavior, all you need to do is ask.

Davidzheng · 2 months ago
OK, but if the verification loop really makes the agents MUCH more useful, then this usefulness difference can be used as a training signal to improve the agents themselves. So this means the current capability levels are certainly not going to remain for very long (which is also what I expect, but I would like to point out that it's also supported by this).
hamiecod · 2 months ago
That's a strong RL technique that could equal the quality of RLHF.
oxag3n · 2 months ago
Where would they get training data?

Source code generation is possible due to large training set and effort put into reinforcing better outcomes.

I suspect debugging is not that straightforward to LLM'ize.

It's a non-sequential interaction: when something happens, it hasn't necessarily caused the problem, and the timeline may be shuffled. An LLM would need tons of examples where something happens in a debugger or in logs and it has to associate that with another abstraction.

I was debugging something in gdb recently and it was a pretty challenging bug. Out of interest I tried ChatGPT, and it was hopeless - try this, add this print, etc. That's not how you debug multi-threaded and async code. When I found the root cause, I was analyzing how I did it and where I learned that specific combination of techniques, each individually well documented, but never in combination - it was learning from other people and my own experience.

jimmaswell · 2 months ago
How long ago was this? I've had outstandingly impressive results asking Copilot Chat with Sonnet 4.5 or ChatGPT to debug difficult multithreaded C++.
simonw · 2 months ago
Have you tried running gdb from a Claude Code or Codex CLI session?
fragmede · 2 months ago
> Where they'd get training data?

They generated it, and had a compiler compile it, and then had it examine the output. Rinse, repeat.

RA_Fisher · 2 months ago
LLMs are okay at bisecting programs and identifying bugs in my experience. Sometimes they require guidance but often enough I can describe the symptom and they identify the code causing the issue (and recommend a fix). They’re fairly methodical, and often ask me to run diagnostic code (or do it themselves).
anon-3988 · 2 months ago
> I suspect debugging is not that straightforward to LLM'ize.

Debugging is not easy but there should be a lot of training corpus for "bug fixing" from all the commits that have ever existed.

christophilus · 2 months ago
Debugging has been excellent for me with Opus 4.5 and Claude Code.
QuercusMax · 2 months ago
I've only done a tiny bit of agent-assisted coding, but without the ability to run tests the AI will really go off the rails super quick, and it's kinda hilarious to watch it say "Aha! I know what the problem is!" over and over as it tries different flavors until it gives up.
CobrastanJorji · 2 months ago
I might go further and suggest that the key to getting useful results out of HUMAN coding agents is also to have good mechanisms in place to help them exercise and validate the code.

We valued automated tests and linters and fuzzers and documentation before AI, and that's because it serves the same purpose.

rodphil · 2 months ago
> At the most basic level this means making sure they can run commands to execute the code - easiest with languages like Python, with HTML+JavaScript you need to remind them that Playwright exists and they should use it.

So I've been exploring the idea of going all-in on this "basic level" of validation. I'm assembling systems out of really small "services" (written in Go) that Claude Code can immediately run and interact with using curl, jq, etc. Plus when building a particular service I already have all of the downstream services (the dependencies) built and running so a lot of dependency management and integration challenges disappear. Only trying this out at a small scale as yet, but it's fascinating how the LLMs can potentially invert a lot of the economics that inform the current conventional wisdom.

(Shameless plug: I write about this here: https://twilightworld.ai/thoughts/atomic-programming/)

My intuition is that LLMs will for many use cases lead us away from things like formal verification and even comprehensive test suites. The cost of those activities is justified by the larger cost of fixing things in production; I suspect that we will eventually start using LLMs to drive down the cost of production fixes, to the point where a lot of those upstream investments stop making sense.

jcranmer · 2 months ago
> My intuition is that LLMs will for many use cases lead us away from things like formal verification and even comprehensive test suites. The cost of those activities is justified by the larger cost of fixing things in production; I suspect that we will eventually start using LLMs to drive down the cost of production fixes, to the point where a lot of those upstream investments stop making sense.

There is still a cost to having bugs, even if deploying fixes becomes much cheaper. Especially if your plan is to wait until they actually occur in practice to discover that you have a bug in the first place.

Put differently: would you want the app responsible for your payroll to be developed in this manner? Especially considering that the bug in question would be "oops, you didn't get paid."

simianwords · 2 months ago
Claude code and other AI coding tools must have a * mandatory * hook for verification.

For front end - the verification is making sure that the UI looks as expected (can be verified by an image model) and that clicking certain buttons results in certain things (can be verified by a ChatGPT agent, but it's not public I guess).

For back end it will involve firing API requests one by one and verifying the results.

To make this easier, we need to somehow give an environment for claude or whatever agent to run these verifications on and this is the gap that is missing. Claude Code, Codex should now start shipping verification environments that make it easy for them to verify frontend and backend tasks and I think antigravity already helps a bit here.

------

The thing about backend verification is that it is different in different companies and requires a custom implementation that can't easily be shared across companies. Each company has its own way to deploy stuff.

Imagine a concrete task like creating a new service that reads from a data stream, runs transformations, puts it in another data stream where another new service consumes the transformed data and puts it into an AWS database like Aurora.

``` stream -> service (transforms) -> stream -> service -> Aurora ```

To one-shot this with Claude Code, it must know everything about the company:

- how does one consume streams in the company? Schema registry?

- how does one create a new service and register dependencies? how does one deploy it to test environment and production?

- how does one even create an Aurora DB? request approvals and IAM roles etc?

My question is: what would it take for Claude Code to one shot this? At the code level it is not too hard and it can fit in context window easily but the * main * problem is the fragmented processes in creating the infra and operations behind it which is human based now (and need not be!).

-----

My prediction is that companies will make something like a new "agent" environment where all these processes (that used to require a human) can be done by an agent without human intervention.

I'm thinking of other solutions here, but if anyone can figure it out, please tell!

pron · 2 months ago
Maybe in the short term, but that doesn't solve some fundamental problems. Consider NP problems: problems whose solutions can be easily verified. But the fact that they can all be easily verified does not (as far as we know) mean they can all be easily solved. If we look at the P subset of NP, the problems that can be easily solved, then easy verification is no longer their key feature. Rather, it is something else that distinguishes them from the harder problems in NP.

So let's say that, similarly, there are programming tasks that are easier and harder for agents to do well. If we know that a task is in the easy category, of course having tests is good, but since we already know that an agent does it well, the test isn't the crucial aspect. On the other hand, for a hard task, all the testing in the world may not be enough for the agent to succeed.

Longer term, I think it's more important to understand what's hard and what's easy for agents.

jijijijij · 2 months ago
> At the most basic level this means making sure they can run commands to execute the code

Yeah, it's gonna be fun waiting for compilation cycles when those models "reason" with themselves about a semicolon. I guess we just need more compute...

zahlman · 2 months ago
One objection: all the "don't use --yolo" training in the world is useless if a sufficiently context-poisoned LLM starts putting malware in the codebase and getting to run it under the guise of "unit tests".
planckscnst · 2 months ago
For now, this is mitigated by only including trusted content in the context; for instance, absolutely do not allow it to access general web content.

I suspect that as it becomes more economical to play with training your own models, people will get better at including obscured malicious content in data that will be used during training, which could cause the LLM to intrinsically carry a trigger/path that would cause malicious content to be output by the LLM under certain conditions.

And of course we have to worry about malicious content being added to sources that we trust, but that already exists - we as an industry typically pull in public repositories without a complete review of what we're pulling. We outsource the verification to the owners of the repository. Just as we currently have cases of malicious code sneaking into common libraries, we'll have malicious content targeted at LLMs.

thomasfromcdnjs · 2 months ago
shameless plug: I'm working on an open source project https://blocksai.dev/ to attempt to solve this. (and just added a note for me to add formal verification)

Elevator pitch: "Blocks is a semantic linter for human-AI collaboration. Define your domain in YAML, let anyone (humans or AI) write code freely, then validate for drift. Update the code or update the spec, up to human or agent."

(you can add traditional linters to the process if you want but not necessary)

The gist being you define a bunch of validators for a collection of modules you're building (with agentic coding), with a focus on qualifying semantic things:

- domain / business rules/measures

- branding

- data flow invariants — "user data never touches analytics without anonymization"

- accessibility

- anything you can think of

Then you just tell your agentic coder to use the cli tool before committing, so it keeps the code in line with your engineering/business/philosophical values.

(boring) example of it detecting if blog posts have humour in them, running in Claude Code -> https://imgur.com/diKDZ8W

baq · 2 months ago
Reminder: YAML is a serialization format. IaC standardizing on it (HashiCorp being an outlier) was a mistake. It's a good compilation target, but please add a higher-level language for whatever you're doing.
akrauss · 2 months ago
Quick feedback: both the "learn more" link at the very top and the "Explore all examples" link lead to a 404.

apitman · 2 months ago
That's bad news for C++, Rust, and other slow compilers.
WhyOhWhyQ · 2 months ago
I've tried getting claude to set up testing frameworks, but what ends up happening is it either creates canned tests, or it forgets about tests, or it outright lies about making tests. It's definitely helpful, but feels very far from a robust thing to rely on. If you're reviewing everything the AI does then it will probably work though.
simonw · 2 months ago
Something I find helps a lot is having a template for creating a project that includes at least one passing test. That way the agent can run the tests at the start using the correct test harness and then add new tests as it goes along.

I use cookiecutter for this, here's my latest Python library template: https://github.com/simonw/python-lib

planckscnst · 2 months ago
LLMs are very good at looking at a change set and finding untested paths. As a standard part of my workflow, I always pass the LLM's work through a "reviewer", which is a fresh LLM session with instructions to review the uncommitted changes. I include instructions for reviewing test coverage.

I've also found that LLMs typically just partially implement a given task/story/spec/whatever. The reviewer stage will also notice a mismatch between the spec and the implementation.

I have an orchestrator bounce the flow back and forth between developing and reviewing until the review comes back clean, and only then do I bother to review its work. It saves so much time and frustration.

ramoz · 2 months ago
Claude Code hooks are a great way to integrate these things.
htrp · 2 months ago
Better question is which tools at what level
dionian · 2 months ago
You've done some great articles on this topic and my experience aligns with your view completely.
agumonkey · 2 months ago
Gemini and Claude already do that IIUC: self-debugging iterations.

formerly_proven · 2 months ago
Not so sure about formal verification though. IME with Rust, LLM agents tend to struggle with semi-complex ownership or trait issues and will typically reach for unnecessary/dangerous escape hatches ("unsafe impl Send for ..." instead of using the correct locks, for example) fairly quickly. Or just conclude the task is impossible.

> automatic code formatters

I haven't tried this because I assumed it would destroy agent productivity and massively increase the number of tokens needed, because you're changing the file out from under the LLM and it ends up constantly re-reading the changed bits to generate the correct str_replace JSON. Or are they smart enough that this quickly trains them to generate code with zero diff under autoformatting?

But in general of course anything that's helpful for human developers to be more productive will also help LLMs be more productive. For largely identical reasons.

planckscnst · 2 months ago
I've directly faced this problem with automatic code formatters, but it was back around Claude 3.5 and 3.7. It would consistently write nonconforming code - regardless of having context demanding proper formatting. This caused both extra turns/invocations with the LLM and would cause context issues - both filling the context with multiple variants of the file and also having a confounding/polluting/poisoning effect due to having these multiple variations.

I haven't had this problem in a while, but I expect current LLMs would probably handle those formatting instructions more closely than the 3.5 era.

simonw · 2 months ago
I'm finding my agents generate code that conforms to Black quite effectively, I think it's probably because I usually start them in existing projects that were already formatted using Black so they pick up those patterns.
alexgotoi · 2 months ago
The funny part of “AI will make formal verification go mainstream” is that it skips over the one step the industry still refuses to do: decide what the software is supposed to do in the first place.

We already have a ton of orgs that can’t keep a test suite green or write an honest invariant in a code comment, but somehow we’re going to get them to agree on a precise spec in TLA+/Dafny/Lean and treat it as a blocking artifact? That’s not an AI problem, that’s a culture and incentives problem.

Where AI + “formal stuff” probably does go mainstream is at the boring edges: property-based tests, contracts, refinement types, static analyzers that feel like linters instead of capital‑P “Formal Methods initiatives”. Make it look like another checkbox in CI and devs will adopt it; call it “verification” and half the org immediately files it under “research project we don’t have time for”.

Will include this thread in my https://hackernewsai.com/ newsletter.

TimTheTinker · 2 months ago
> it skips over the one step the industry still refuses to do: decide what the software is supposed to do in the first place.

Not only that, but it's been well-established that a significant challenge with formally verified software is to create the right spec -- i.e. one that actually satisfies the intended requirements. A formally verified program can still have bugs, because the spec (which requires specialized skills to read and understand) may not satisfy the intent of the requirements in some way.

So the fundamental issue/bottleneck that emerges is the requirements <=> spec gap, which closing the spec <=> executable gap does nothing to address. Translating people's needs to an empirical, maintainable spec of one type or another will always require skilled humans in the loop, regardless of how easy everything else gets -- at minimum as a responsibility sink, but even more as a skilled technical communicator. I don't think we realize how valuable it is to PMs/executives and especially customers to be understood by a skilled, trustworthy technical person.

suspended_state · 2 months ago
> A formally verified program can still have bugs, because the spec (which requires specialized skills to read and understand) may not satisfy the intent of the requirements in some way.

That's not a bug, that's a misunderstanding, or at least an error of translation from natural language to formal language.

Edit:

I agree that one can categorize incorrect program behavior as a bug (apparently there's such a thing as "behavioral bug"), but to me it seems to be a misnomer.

I also agree that it's difficult to tell that to a customer when their expectations aren't met.

jandrese · 2 months ago
The danger of this is people start asking about formally verified specs, and down that road lies madness.

"If you can formally verify the spec the code can be auto-generated from it."

strbean · 2 months ago
> decide what the software is supposed to do in the first place.

That's where the job security is (and always has been). This has been my answer to "are you afraid for your job because of AI?"

Writing the code is very rarely the hard part. The hard part is getting a spec from the PM, or gathering requirements from stakeholders. And then telling them why the spec / their requirements don't make sense or aren't feasible, and figuring out ones that will actually achieve their goals.

svat · 2 months ago
There are some basic invariants like "this program should not crash on any input" or "this service should be able to handle requests that look like X up to N per second" — though I expect those will be the last to be amenable to formal verification, they are also very simple ones that (when they become possible) will be easy to write down.
yencabulator · 2 months ago
> "this program should not crash on any input" [...] though I expect those will be the last to be amenable to formal verification,

In the world of Rust, this is actually the easiest level of formal proof to achieve.

Simple lints can eliminate panics and potentially-panicking operations (forcing you/the LLM to use variants with runtime error handling, e.g. `s[i]` can become `s.get(i).ok_or(MyError::RuhRoh)?`, or more purpose-specific handling; same thing for e.g. enforcing that arithmetic never underflows/overflows).

Kani symbolically evaluates simple Rust functions and ensures that the function does not panic on any possible value of its input, and on top of that you can add invariants to be enforced (e.g. search for an item in an array always returns either None or a valid index, and the value at that index fulfills the search criteria).
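
For a concrete feel, a minimal sketch of what such a harness can look like (assuming cargo-kani is installed; the function and names are made up):

    // An ordinary function we want to check.
    fn find_first(haystack: &[u32], needle: u32) -> Option<usize> {
        for (i, &x) in haystack.iter().enumerate() {
            if x == needle {
                return Some(i);
            }
        }
        None
    }

    // A Kani proof harness: the inputs are symbolic, so every possible
    // value is covered, not just sampled ones.
    #[cfg(kani)]
    #[kani::proof]
    fn find_first_returns_valid_index() {
        let haystack: [u32; 4] = kani::any();
        let needle: u32 = kani::any();
        match find_first(&haystack, needle) {
            // Either nothing is found...
            None => {}
            // ...or the index is in bounds and points at the needle.
            Some(i) => {
                assert!(i < haystack.len());
                assert_eq!(haystack[i], needle);
            }
        }
    }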

(The real challenge with e.g. Kani is structuring a codebase such that it has those simple-enough subparts where formal methods are feasible.)

Verdex · 2 months ago
Yeah, the vast majority of the history of "getting things done" has been: find some guy who can translate "make the crops grow" into a pile of food.

The people who care about the precise details have always been relegated to a tiny minority, even in our modern technological world.

anovick · 2 months ago
The OP does not seem broadly applicable to corporate software development.

Rather, it's directed at the kind of niche, mission-critical things, not all of which are getting the formal verification they need, and/or that don't get considered due to high costs (from the specialized skill required).

I read the OP as a realization that the costs have fallen, and thus we should see formal verification used more than before.

fulafel · 2 months ago
This is the article's message as well:

"That doesn’t mean software will suddenly be bug-free. As the verification process itself becomes automated, the challenge will move to correctly defining the specification: that is, how do you know that the properties that were proved are actually the properties that you cared about? Reading and writing such formal specifications still requires expertise and careful thought. But writing the spec is vastly easier and quicker than writing the proof by hand, so this is progress."

General security properties come to mind as one area that could have good reusability for specs.

fsloth · 2 months ago
"decide what the software is supposed to do in the first place."

After 20 years of software development I think that is because most of the software out there is itself the method of finding out what it's supposed to do.

The incomplete specs are not lacking feature requirements due to lack of discipline. It's because nobody can even know without trying it out what the software should be.

I mean of course there is a subset of all software that can be specified beforehand - but a lot of it is not.

Knuth could be that forward-thinking with TeX, for example, only because he had 500 years of book-printing tradition to fall back on, to backport the specs to math.

adverbly · 2 months ago
This smells like a Principia Mathematica take to me...

Reducing the problem to "ya just create a specification to formally verify" doesn't move the needle enough to me.

When it comes to real-world, pragmatic, boots-on-the-ground engineering and design, we are so far from even knowing the right questions to ask. I just don't buy it that we'd see huge mainstream productivity changes even if we had access to a crystal ball.

It's hilarious how close we're getting to The Hitchhiker's Guide to the Galaxy, though. We're almost at that phase where we ask what the question is supposed to be.

rramadass · 2 months ago
Nope; you are quite wrong here. Most people have no idea what Formal Specification/Verification via the use of Formal Methods really means.

It is first and foremost about learning a way of thinking. Tools only exist to augment and systematize this thinking into a methodology. There are different levels of "Formal Methods Thinking" starting with informal all the way to completely rigorous. Understanding and using these methods of thinking as the "interface" to specify a problem to an AI agent/LLM is what is important to ensure "correctness by construction to a specification".

Everybody should read this excellent (and accessible) paper On Formal Methods Thinking in Computer Science Education which details the above approach - https://research.tue.nl/en/publications/on-formal-methods-th...

Excerpts:

One may ask What good is FM? Who needs it? Millions of programmers work everyday without it. Many think that FM in a CS curriculum is peddling the idea that Formal Logic (e.g.,propositional or predicate logic) is required for everyday programmers, that they need it to write programs that are more likely to be correct, and correspondingly less likely to fail the tests to which they subsequently (of course) must still be subjected. However, this degree of formality is not necessarily needed. What is required of everyday programmers is that, as they write their programs, they think — and code — in a way that respects a correctness-oriented point of view. Assertions can be written informally, in natural language: just the “thinking of what those assertions might be” guides the program-construction process in an astonishingly effective way. What is also required are the engineering principles referred to above. Connecting programs with their specifications through assertions provides training on abstraction, which, in turn, encourages simplicity and focus, helping build more robust, flexible and usable systems.

The answer to “Who needs it?” is that everyday programmers and software developers indeed may not need to know the theory of FM. But what they do need to know is how to practise it, even if with a light touch, benefiting from its precepts. FM theory, which is what explains — to the more mathematically inclined — why FM works, has become confused with the FM practice of using the theory’s results to benefit from what it assures. Any “everyday programmer” can do that...except that most do not.

The paper posits 3 levels of "Formal Methods Thinking" viz.

a) Level 1 (“What’s True Here”). Level 1 of FM thinking is the application of FM in its most basic form. Students develop abilities to understand their programs and reason about their correctness using informal descriptions. By “What’s True Here”, we mean including natural language prose or informal diagrams to describe the properties that are true at different points of a program’s execution rather than the operations that brought them about.

b) Level 2 (Formal Assertions). Level 2 introduces greater precision to Level 1 by teaching students to write assertions that incorporate arithmetic and logical operators to capture FM thinking more rigorously. This may be accompanied by lightweight tools that can be used to test or check that their assertions hold.

c) Level 3 (Full Verification). This level enables students to prove program properties using tools such as a theorem prover, model checker or SMT solver. But in addition to tool-based checking of properties (now written using a formal language), this level can formally emphasise other aspects of system-level correctness, such as structural induction and termination.
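
As a toy Lean 4 illustration of the three levels applied to one tiny function (my sketch, not from the paper):

    -- Level 1 ("What's True Here"), stated informally: "sumList xs is the sum
    -- of the elements of xs, and summing a concatenation gives the same total
    -- as summing the two lists separately."
    def sumList : List Nat → Nat
      | [] => 0
      | x :: xs => x + sumList xs

    -- Level 2 (Formal Assertions): the claim as an assertion, checked on
    -- concrete inputs by a lightweight tool (#guard).
    #guard sumList ([1, 2, 3] ++ [4, 5]) == sumList [1, 2, 3] + sumList [4, 5]

    -- Level 3 (Full Verification): the claim proved for all inputs.
    theorem sumList_append (xs ys : List Nat) :
        sumList (xs ++ ys) = sumList xs + sumList ys := by
      induction xs with
      | nil => simp [sumList]
      | cons x xs ih => simp [sumList, ih, Nat.add_assoc]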

amw-zero · 2 months ago
When you go to write a line of code, how do you decide what to write?
signa11 · 2 months ago
> When you go to write a line of code, how do you decide what to write?

depends of course, what am i writing for? a feature, a bugfix, a refactor ...?

adverbly · 2 months ago
Honestly? I usually look at the previous implementation and try to make some changes to fix an issue that I discovered during testing. Rarely an actual bug - usually we just changed our mind about what the intent should be.
infruset · 2 months ago
I was waiting for a post like this to hit the front page of Hacker News any day. Ever since Opus 4.5 and GPT 5.2 came out (mere weeks ago), I've been writing tens of thousands of lines of Lean 4 in a software engineering job and I feel like we are on the eve of a revolution. What used to take me 6 months of work when I was doing my PhD in Coq (now Rocq), now takes from a few hours to a few days. Whole programming languages can get formalized executable semantics in little time. Lean 4 already has a gigantic amount of libraries for math but also for computer science; I expect open source projects to sprout with formalizations of every language, protocol, standard, algorithm you can think of.

Even if you have never written formal proofs but are intrigued by them, try asking a coding agent to do some basic verification. You will not regret it.

Formal proof is not just about proving stuff, it's also about disproving stuff, by finding counterexamples. Once you have stated your property, you can let quickcheck/plausible attack it, possibly helped by a suitable generator which does not have to be random: it can be steered by an LLM as well.
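
For a sense of what that looks like, a minimal sketch (assuming the Plausible library, the Lean 4 successor to slim_check, is available): state a property, deliberately false here, and let the tactic hunt for a counterexample.

    import Plausible

    -- List append is not commutative; `plausible` generates random inputs and
    -- reports a counterexample such as xs = [0], ys = [1] (the tactic then
    -- fails with that counterexample, which is exactly the feedback we want).
    example (xs ys : List Nat) : xs ++ ys = ys ++ xs := by
      plausible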

Even further, I'm toying with the idea of including LLMs inside the formalization itself. There is an old and rich idea in the domain of formal proof, that of certificates: rather than proving that the algorithm that produces a result is correct, just compute a checkable certificate with untrusted code and verify it is correct. Checkable certificates can be produced by unverified programs, humans, and now LLMs. Properties, invariants, can all be "guessed" without harm by an LLM and would still have to pass a checker. We have truly entered an age of oracles. It's not halting-problem-oracle territory of course, but it sometimes feels pretty close for practical purposes. LLMs are already better at math than most of us and certainly than me, and so any problem I could plausibly solve on my own, they will do faster without my having to wonder if there is a subtle bug in the proof. I still need to look at the definitions and statements, of course, but my role has changed from finding to checking. Exploring the space of possible solutions is now mostly done better and faster by LLMs. And you can run as many in parallel as you can keep up with, in attention and in time (and money).
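
A minimal sketch of the certificate idea (a hypothetical example of mine, not the commenter's code): the untrusted oracle, an LLM say, only has to propose a factorization; the trusted part is a few lines of checker.

    -- The untrusted side (LLM, human, or unverified solver) proposes (p, q)
    -- as a certificate that n is composite. Only this checker is trusted.
    def checkComposite (n p q : Nat) : Bool :=
      decide (2 ≤ p) && decide (2 ≤ q) && decide (p * q = n)

    #eval checkComposite 91 7 13    -- true: certificate accepted
    #eval checkComposite 97 7 13    -- false: bad certificate rejected

The oracle's output can be arbitrarily clever or arbitrarily wrong; correctness rests only on the checker.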

If anyone else is as excited about all this as I am, feel free to reach out in the comments, I'd love to hear about people's projects!

baq · 2 months ago
People are sleeping on the new models being capable of this, 100%. Been telling Opus to make Alloy specs recently and it… just does. Ensuring conformance is rapidly becoming affordable; folks in this thread need to update their priors!
dbdr · 2 months ago
Do you now use Lean instead of Rocq because your new employer happened to prefer that, or is it superior in your opinion? Which one would you recommend to look at first?
nomadygnt · 2 months ago
Where do you work that you get to write Lean? That sounds awesome!
infruset · 2 months ago
I can't disclose that, but what I can say is no one at my company writes Lean yet. I'm basically experimenting with formalizing in Lean stuff I normally do in other languages, and getting results exciting enough I hope to trigger adoption internally. But this is bigger than any single company!
jrowen · 2 months ago
This is perhaps only tangentially related to formal verification, but it made me wonder - what efforts are there, if any, to use LLMs to help with solving some of the tough questions in math and CS (P=NP, etc)? I'd be curious to know how a mathematician would approach that.
infruset · 2 months ago
So as for math of that level, (the best) humans are still kings by far. But things are moving quickly and there is very exciting human-machine collaboration, one need only look at recent interviews of Terence Tao!
qingcharles · 2 months ago
I agree. I think we've gotta get through the rough couple of "AI slop" years of code and we'll come out of it the other side with some incredible tools.

The reason we don't all write code to the level that can operate the Space Shuttle is that we don't have the resources, and the projects most of us work on allow some wiggle room for bugs since lives generally aren't at risk. But we'd all love to check in code that was verifiably bug-free, exploit-free, secure, etc. if we could get that at a low, low price.

MobiusHorizons · 2 months ago
At some level it's not really an engineering issue. "Bug free" requires some externally known goal, specified with enough fidelity that it can classify all behaviors as "bug" or "not bug". That simply doesn't exist in the vast majority of software projects. It is of course occasionally true that programmers write code that explicitly doesn't meet one of the requirements they were given, but most of the time the issue is that nothing was specified for certain cases, so the code does whatever was easiest to implement. It is only when those unspecified cases are encountered (via a user report, a product demo, or manual QA) that the behavior gets classified as "bug" or "not bug".

I don't see how AI would help with that even if it made writing code completely free. Even if the AI is writing the spec and fully specifies all possible outcomes, the human reviewing it will glance over the spec and approve it, only to change their mind when confronted with the actual behavior or user reports.

igornotarobot · 2 months ago
This sounds amazing! What kind of systems take you a few hours to a few days now? Just curious whether it works in a niche (like sequential code), or whether it works for concurrent and distributed systems as well.
riku_iki · 2 months ago
> I've been writing tens of thousands of lines of Lean 4 in a software engineering job

I am wondering what exactly you are doing? What tasks are you solving using generated Lean?

ajcp · 2 months ago
Having known nothing of this field before now I have to say your excitement has me excited!
pron · 2 months ago
> it’s not hard to extrapolate and imagine that process becoming fully automated in the next few years. And when that happens, it will totally change the economics of formal verification.

There is a problem with this argument similar to one made about imagining the future possibilities of vibe coding [1]: once we imagine AI doing this task, i.e. automatically proving software correct, we can just as easily imagine it not needing to do it (for us) in the first place. If AI can do the hardest things, those it is currently not very good at doing, there's no reason to assume it won't be able to do easier things/things it currently does better. In particular, we won't need it to verify our software for us, because there's no reason to believe that it won't be able to come up with what software we need better than we can in the first place. It will come up with the idea, implement it, and then decide to what extent to verify it. Formal verification, or programming for that matter, will not become mainstream (as a human activity) but go extinct.

Indeed, it is far easier for humans to design and implement a proof assistant than it is to use one to verify a substantial computer program. A machine that will be able to effectively use a proof checker, will surely be able to come up with a novel proof checker on its own.

I agree it's not hard to extrapolate technological capabilities, but such extrapolation has a name: science fiction. Without a clear understanding of what makes things easier or harder for AI (in the near future), any prediction is based on arbitrary guesses that AI will be able to do X yet not Y. We can imagine any conceivable capability or limitation we like. In science fiction we see technology that's both capable and limited in some rather arbitrary ways.

It's like trying to imagine what problems computers can and cannot efficiently solve before discovering the notion of computational complexity classes.

[1]: https://news.ycombinator.com/item?id=46207505

thatoneengineer · 2 months ago
I disagree. Right now, feedback on correctness is a major practical limitation on the usefulness of AI coding agents. They can fix compile errors on their own, they can _sometimes_ fix test errors on their own, but fixing functionality / architecture errors takes human intervention. Formal verification basically turns (a subset of) functionality errors into compile errors, making the feedback loop much stronger. "Come up with what software we need better than us in the first place" is much higher on the ladder than that.

TL;DR: We don't need to be radically agnostic about the capabilities of AI: we have enough experience already with the software value chain (with and without AI) for formal verification to be an appealing next step, for the reasons this author lays out.
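
To make the "functionality error becomes a compile error" point concrete, a toy Lean 4 sketch (my illustration, not the commenter's): when the spec is carried in the return type, an implementation that violates it fails to typecheck rather than failing in production.

    -- The return type says: the result m must satisfy m = 2 * n.
    def double (n : Nat) : { m : Nat // m = 2 * n } :=
      ⟨n + n, by omega⟩
    -- Returning ⟨n + 1, ...⟩ instead would be rejected at elaboration time,
    -- because the proof obligation n + 1 = 2 * n cannot be discharged.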

pron · 2 months ago
I completely agree it's appealing; I just don't see a reason to assume that an agent will be able to succeed at it and at the same time fail at other things that could make the whole exercise redundant. In other words, I also want agents to be able to consistently prove software correct, but if they're actually able to accomplish that, then they could just as likely do everything else in the production of that software (including gathering requirements and writing the spec) without me in the loop.
qingcharles · 2 months ago
It's getting better every day, though, at "closing the loop."

When I recently booted up Google Antigravity and had it make a change to a backend routine for a web site, I was quite surprised when it opened Chrome, navigated to the page, and started trying out the changes to see if they had worked. It was janky as hell, but a year from now it won't be.

pron · 2 months ago
To make this more constructive: I think that today AI tools are useful when they do things you already know how to do and can assess the quality of the output. So if you know how to read and write a formal specification, LLMs can already help translate natural-language descriptions into a formal spec.

It's also possible that LLMs can, by themselves, prove the correctness of some small subroutines, and produce a formal proof that you can check in a proof checker, provided you can at least read and understand the statement of the proposition.
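
As an example of the scale meant here (a hypothetical sketch; the function and theorem are mine), the human only needs to read and trust the statement, while the proof, however it was found, is checked mechanically:

    def myMax (a b : Nat) : Nat :=
      if a ≤ b then b else a

    -- The statement is what the human reads; the proof (perhaps produced by
    -- an LLM) is verified by the proof checker.
    theorem le_myMax_left (a b : Nat) : a ≤ myMax a b := by
      unfold myMax
      split <;> omega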

This can certainly make formal verification easier, but not necessarily more mainstream.

But once we extrapolate the existing abilities to something that can reliably verify real large or medium-sized programs for a human who cannot read and understand the propositions (and the necessary simplifying assumptions), it's hard to see a machine doing that and at the same time not being able to do everything else.

1121redblackgo · 2 months ago
The first human-robot war will be us telling the AI/robots 'no', and them insisting that [insert technology here] is good for us and is the direction we should take. Probably already been done, but yeah, this seems like the tipping point into something entirely different for humanity.
pron · 2 months ago
... if it's achievable at all in the near future! But we don't know that. It's just that if we assume AI can do X, why do we assume it cannot, at the same level of capability, also do Y? Maybe the tipping point where it can do both X and Y is near, but maybe in the near future it will be able to do neither.
rgmerk · 2 months ago
(sarcasm on)

Woohoo, we're almost all of the way there! Now all you need to do is ensure that the formal specification you are proving the software implements is a complete and accurate description of the requirements (which are likely incomplete and contradictory) as they exist in the minds of the set of stakeholders affected by your software.

(sarcasm off).

qingcharles · 2 months ago
I mean, I don't disagree. Specs are usually horrible, way off the mark, outdated, and written by folks who don't understand how the rest of the vertical works. But, that's a problem for another day :)
Keyframe · 2 months ago
That's a problem for Super Saiyan AGI to solve :)