That is significantly harder to do than writing an implementation from tests, especially for codebases that previously didn't have any testing infrastructure.
Obviously, the components should talk using MCP.
Office also used to come with its own clipboard viewer app, with history support. It is still there in recent versions, as an option within the Word/Excel/etc UIs, but if you go back far enough (97? 2000?) it was a separate EXE you could use without having any of the main Office apps open.
There have been many questions with no answers for literal centuries, with thousands trying, and failing, to crack them, and solutions were ultimately found despite that.
A new "math" might be needed, but an answer (affirmative or not) will be found.
What if there exists a proof that P!=NP, but the shortest possible proof of that proposition is a googolplex symbols long? Then P!=NP would be true, and provable and knowable in theory, yet eternally unprovable and unknowable in practice.
Or even a galactic algorithm: an algorithm for solving an NP-complete problem that is technically in P, but completely useless in practice, e.g. O(n^10000000).
My completely unscientific hunch is that someone will eventually prove that P=?=NP is independent of ZF(C). Maybe the universe just really wants to mess with complexity theorists.
When I first saw ServiceNow, I was impressed - because my point of comparison (I worked for a university at the time) was BMC Remedy, which was terrible. And some years later I did some consulting for a major bank that was using some 3270-based IBM solution (Tivoli something… I believe it has finally been discontinued), and ServiceNow is light years ahead of that too.
A lot of libraries document invariants that are either not checked at all, checked only at runtime, or somewhere in between. For instance, the requirement that a collection not be modified during iteration, that two regions of memory do not overlap, or that a variable is not modified without owning a lock. These are all things that, in principle, can be formally verified.
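As a small illustration, here is a Rust sketch (Rust picked only because its compiler happens to enforce all three of these example invariants; the point is that the checks are mechanical rather than left to documentation):

    use std::sync::Mutex;

    fn main() {
        // "Don't modify a collection while iterating it": the borrow checker
        // rejects the commented-out line at compile time instead of leaving
        // the rule as a note in the docs.
        let mut items = vec![1, 2, 3];
        for _x in &items {
            // items.push(4); // error[E0502]: cannot borrow `items` as mutable
        }
        items.push(4);

        // "These two regions of memory must not overlap": split_at_mut hands
        // out two disjoint mutable slices, so overlap is impossible by construction.
        let (left, right) = items.split_at_mut(2);
        left[0] = 10;
        right[0] = 20;

        // "Don't touch this variable without owning the lock": the data lives
        // inside the Mutex, so the only path to it goes through the guard.
        let counter = Mutex::new(0);
        *counter.lock().unwrap() += 1;

        println!("{:?} {}", items, *counter.lock().unwrap());
    }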
No one claims that good type systems prevent buggy software, but they do seem to improve programmer productivity.
For LLMs, there is an added benefit. If you can formally specify what you want, you can make that specification your entire program, and have an LLM-driven compiler produce a provably correct implementation. That is a programming paradigm that has never been possible before, although every "declarative" language is an attempt to approximate it.
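To make that concrete, here is a minimal Rust sketch of "the spec is the program"; the function names are made up for illustration, and a toy check over a few inputs stands in for an actual proof:

    /// The specification: `out` must contain exactly the elements of `input`,
    /// arranged in non-decreasing order. In the spec-first view, this is the program.
    fn satisfies_sort_spec(input: &[i32], out: &[i32]) -> bool {
        let mut a = input.to_vec();
        let mut b = out.to_vec();
        a.sort();
        b.sort();
        a == b && out.windows(2).all(|w| w[0] <= w[1])
    }

    /// A candidate implementation, standing in for whatever an LLM-driven
    /// "compiler" produced. Its only job is to satisfy the spec above.
    fn candidate_sort(input: &[i32]) -> Vec<i32> {
        let mut v = input.to_vec();
        v.sort_unstable();
        v
    }

    fn main() {
        // A real pipeline would use a proof assistant or exhaustive property
        // checking; a few sample inputs are enough to show the shape of it.
        for input in [vec![], vec![3, 1, 2], vec![5, 5, -1, 0]] {
            assert!(satisfies_sort_spec(&input, &candidate_sort(&input)));
        }
        println!("candidate satisfies the spec on all samples");
    }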
The problem is there is always some chance a coding agent will get stuck and be unable to produce a conforming implementation in a reasonable amount of time. And then you are back in a similar place to where you were with those pre-LLM solutions: needing a human expert to work out how to make further progress.