I agree with you that OpenAI seems much riskier in terms of its actual viability as a business, but the risk:reward must be there for Softbank.
This is sell-side idealist thinking and a blurred view of reality. We're not approaching it; we're not even seeing metrics to suggest that any sub-division of any business is making serious progress toward it at all.
Too many people are hyping something that will not happen in our lifetimes, and we risk looking past the terrible state of large global economies, poor business practice, and human exploitation at mass scale, toward a place we will never see. It's more fun to shape future possibilities for large profit that we'll probably never have to justify than to deal with current realities, which would mean going against the grain of today's investment trends for an uncertain benefit.
Next.js is a pile of garbage, and their platform is absurdly expensive and leans heavily on vendor lock-in.
Far too many smart people are pouring their energy into discussions like these, adding drag to the process of society and humanity moving forward for no net gain at all.
The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism of AI coding.
A software engineer's primary job isn't producing code; it's producing a functional software system. Most important to that is the extremely hard-to-convey "mental model" of how the code works, plus expertise in the domain it operates in. Code is a derived asset of this mental model. And for anything larger than a very small project, you will never know code as well as a reader as you would have as its author.
There are other consequences of not building this mental model of a piece of software. Reasoning at the level of syntax is proving to have limits that LLM-based coding agents are struggling to scale beyond.
> Sean proposes that in the AI future, the specs will become the real code. That in two years, you'll be opening python files in your IDE with about the same frequency that, today, you might open up a hex editor to read assembly.
> It was uncomfortable at first. I had to learn to let go of reading every line of PR code. I still read the tests pretty carefully, but the specs became our source of truth for what was being built and why.
This doesn't make sense as long as LLMs are non-deterministic. The prompt could be perfect, but there's no way to guarantee that the LLM will turn it into a reasonable implementation.
With compilers, I don't need to crack open a hex editor on every build to check the assembly. The compiler is deterministic and well-understood, not to mention well-tested. Even if there's a bug in it, the bug will be deterministic and debuggable. LLMs are neither.
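The distinction can be sketched as a reproducibility check, the same idea behind reproducible builds: hash the output of repeated runs and compare. This is a toy illustration, not any real toolchain; `compile_like` and `llm_like` are hypothetical stand-ins for a deterministic compiler and a sampling-based generator.

```python
import hashlib
import random

def compile_like(source: str) -> str:
    # Stand-in for a deterministic transform: same input -> same output, always.
    return source.upper()

def digest(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

# A deterministic toolchain is byte-for-byte reproducible: five builds, one hash.
runs = {digest(compile_like("let x = 1;")) for _ in range(5)}
assert len(runs) == 1

def llm_like(prompt: str) -> str:
    # Stand-in for sampled generation: same prompt, potentially different output.
    return prompt + random.choice([" impl A", " impl B", " impl C"])

# Repeated "generations" usually diverge, so there is no single golden output
# to diff against -- every run has to be re-reviewed on its own.
outputs = {llm_like("write a parser") for _ in range(20)}
```

The point of the check is that with a deterministic tool you can verify once and trust every subsequent build; with a sampler, verification doesn't transfer between runs.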
This seems like typical "engineer forgets people aren't machines" thinking.