We already have a ton of orgs that can’t keep a test suite green or write an honest invariant in a code comment, but somehow we’re going to get them to agree on a precise spec in TLA+/Dafny/Lean and treat it as a blocking artifact? That’s not an AI problem, that’s a culture and incentives problem.
Where AI + “formal stuff” probably does go mainstream is at the boring edges: property-based tests, contracts, refinement types, static analyzers that feel like linters instead of capital‑P “Formal Methods initiatives”. Make it look like another checkbox in CI and devs will adopt it; call it “verification” and half the org immediately files it under “research project we don’t have time for”.
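Concretely, the "checkbox in CI" version is just a property-based test sitting next to your unit tests. A minimal sketch, assuming Python with the `hypothesis` library; `normalize_path` is a made-up function standing in for whatever you'd actually check:

```python
# A property-based test that runs as an ordinary pytest case in CI.
# Uses the `hypothesis` library; `normalize_path` is a hypothetical
# function under test, not from any real codebase.
from hypothesis import given, strategies as st

def normalize_path(p: str) -> str:
    # Hypothetical example: collapse repeated slashes.
    while "//" in p:
        p = p.replace("//", "/")
    return p

@given(st.text(alphabet="abc/", max_size=30))
def test_normalize_is_idempotent(p):
    # The invariant, stated once: normalizing twice changes nothing.
    assert normalize_path(normalize_path(p)) == normalize_path(p)

@given(st.text(alphabet="abc/", max_size=30))
def test_no_double_slashes_remain(p):
    assert "//" not in normalize_path(p)
```

Nobody reviewing that diff needs to know what a "formal method" is; it fails in CI like any other test, which is exactly why it gets adopted.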
The people who care about precise details have always been a tiny minority, even in our modern technological world.
Security itself is a journey, not a destination. To say that you are secure is to say that you have been so clever that nobody, ever again, will be as clever as you just were. Even knowing that they can study exactly how you were clever.
Even a superintelligent AI might not be able to replace lawyers unless it is also dynamically going out into the world: investigating new legal theory, researching old legal theory, socializing with the powers that be to make sure they accept its approach, and carefully curating clients who can take advantage of the results.