Step 1: Some upstarts create a new way of doing something. It’s clunky and unrefined.
Step 2: "Experts" and senior folks in the field dismiss it as a "toy." It doesn't follow their established rules or best practices and seems amateurish. They wouldn't recommend it to anyone serious.
Step 3: The "toy" gets adopted by a small group of outsiders or newcomers who aren't burdened by the "right way" of doing things. They play with it, improve it, and find new applications for it.
Step 4: The "toy" becomes so effective and widespread that it becomes the new standard. The original experts are left looking out of touch, their deep knowledge now irrelevant to the new way of doing things.
We're at step 2, bordering on 3.
* Executives at Nokia and BlackBerry saw the first iPhone, with its lack of a physical keyboard, as an impractical toy for media consumption, not a serious work device.
* Professional photographers viewed the first low-resolution digital cameras as flimsy gadgets, only for digital photography to gut the film industry.
Many "new ways" of doing something die before becoming the norm. Using the examples where it prevailed without looking at all the times it failed is just bad rationale.
"vibe coding" (what a horrid jargon) may be the new digital camera. It also may be the new metaverse (just to use a recent example still fresh in people's minds).
Unlike the digital camera and the iPhone, "vibe coding" is muddied by an army of people deeply invested in Gen AI adoption (directly or indirectly) who want it to succeed whether or not it makes sense.
Even Apple can’t get around that. The Mac sticks around for this very reason: as a dev platform.
Now suddenly the problem is that AI can't write a lot of code.
LLMs have an unsolvable problem of "hallucination". That's a poor name for it, because hallucination is all they do; the output just happens to be correct in many cases. The larger the codebase or the problem space, the less accurate LLMs tend to be.
And developers do a lot more than generate LOC.
What's not clear to me is whether they'll get expensive enough to stop being worth it for a company.
A good engineer costs a lot of money and comes with limitations derived from being human.
Let’s say AI manages to be a 2x productivity multiplier. The price of that multiplier can rise a lot before it hits $120k/year for 40 hours a week, the point at which you’re better off just hiring another engineer.
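To make the break-even arithmetic explicit, here's a minimal back-of-envelope sketch using the numbers above (a 2x multiplier and roughly $120k/year per engineer; both are the comment's assumptions, not market data, and real comparisons would need fully loaded costs):

```python
# Back-of-envelope break-even: if AI makes one engineer as productive as
# `multiplier` engineers, paying up to the cost of the extra engineers it
# replaces is still worth it.

def breakeven_ai_price(engineer_cost_per_year: float, multiplier: float) -> float:
    """Annual AI tooling price at which it stops beating hiring:
    (multiplier - 1) * cost of one engineer."""
    return (multiplier - 1) * engineer_cost_per_year

# Assumed figures from the comment: $120k/year engineer, 2x multiplier.
print(breakeven_ai_price(120_000, 2.0))  # 120000.0 -> below this, the tool wins
```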
That's extremely optimistic.
It's only trendy now to say those things publicly, without the PR and media-training filter.
Why would anyone interested in programming use anything else?
I am forced to use a Mac at work, but I digress.