I agree with you that OpenAI seems much riskier in terms of its actual viability as a business, but the risk:reward must be there for SoftBank.
This is sell-side idealist thinking and a blurred view of reality. We're not approaching it; we're not even seeing metrics to suggest that any sub-division of any business is making serious progress toward it at all.
Too many people are hyping something that will not happen in our lifetimes, and in doing so we risk looking past the terrible state of large global economies, poor business practice and mass-scale human exploitation toward a place we will never see. It's more fun to shape future possibilities for large profits that we'll probably never have to justify than to deal with current realities, which would mean going against the grain of today's investment trends for an uncertain benefit.
NextJS is a pile of garbage, and their platform is absurdly expensive and leans heavily on vendor lock-in.
Far too many smart people are putting their energy into discussions like these, which add a lot of drag to society and humanity moving forward, for no net gain at all.
The article sort of goes sideways with this idea, but pointing out that AI coding robs you of a deep understanding of the code it produces is a valid and important criticism.
A software engineer's primary job isn't producing code, but producing a functional software system. Most important to that is the extremely hard-to-convey "mental model" of how the code works, along with expertise in the domain it operates in. Code is a derived asset of this mental model. And for anything larger than a very small project, you will never know code as well as a reader as you would have as its author.
There are other consequences of not building this mental model of a piece of software. Reasoning at the level of syntax is proving to have limits that LLM-based coding agents are having trouble scaling beyond.
> Sean proposes that in the AI future, the specs will become the real code. That in two years, you'll be opening python files in your IDE with about the same frequency that, today, you might open up a hex editor to read assembly.
> It was uncomfortable at first. I had to learn to let go of reading every line of PR code. I still read the tests pretty carefully, but the specs became our source of truth for what was being built and why.
This doesn't make sense as long as LLMs are non-deterministic. The prompt could be perfect, but there's no way to guarantee that the LLM will turn it into a reasonable implementation.
With compilers, I don't need to crack open a hex editor on every build to check the assembly. The compiler is deterministic and well-understood, not to mention well-tested. Even if there's a bug in it, the bug will be deterministic and debuggable. LLMs are neither.
This seems like a typical case of "engineer forgets people aren't machines" thinking.
2. People value convenience over privacy and security
3. Cloud is easy.
I can't believe so many replies are struggling with the easy answer: privacy, security, "local first", "open source", "distributed", "open format" etc etc etc are developer goals projected onto a majority cohort of people who have never cared and never will, yet who hold all the potential revenue you need.
One thing I did notice, though, from looking through the examples is this:
Uncaught errors automatically cause retries of tasks using your settings. Plus there are helpers for granular retrying inside your tasks.
This feels like one of those gotchas that is absolutely prone to a benign refactoring causing huge screwups, or at least someone will find they pinged a pay-for service 50 times by accident without realising.
Ergonomics like your await retry.onThrow helper feel like they should be the developer-friendly default "safe" approach rather than just an optional helper, though granted it's not as magic-feeling when you're trying to convert eyeballs into users.
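To make the concern concrete, here's a minimal sketch of the contrast as I understand it from the examples; the import path, option names, and the two stand-in functions are assumptions for illustration, not the actual SDK surface:

```ts
// Sketch only: import path and option names are assumptions.
import { task, retry } from "@trigger.dev/sdk/v3";

// Hypothetical stand-ins for a billable call and a flaky one.
async function chargePaidApi(invoiceId: string): Promise<void> { /* ... */ }
async function deliverWebhook(invoiceId: string): Promise<void> { /* ... */ }

export const syncInvoice = task({
  id: "sync-invoice",
  // Task-level setting: ANY uncaught error re-runs the whole body.
  retry: { maxAttempts: 5 },
  run: async (payload: { invoiceId: string }) => {
    // If anything later in this function throws, this billable call
    // gets repeated on every one of those attempts.
    await chargePaidApi(payload.invoiceId);

    // Scoped alternative: only the flaky call is retried, so a benign
    // refactor that adds a throw elsewhere can't multiply the charge above.
    await retry.onThrow(
      async () => deliverWebhook(payload.invoiceId),
      { maxAttempts: 3 }
    );
  },
});
```

The task-level setting silently re-runs everything in the body, while the scoped helper keeps the retry blast radius somewhere you can see it.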