If you state “in 6 months AI will not require that much knowledge to be effective” every year and it hasn’t happened yet, then every time it has been stated it has been false up to this point.
In 6 months we can come back to this thread and determine the truth value for the premise. I would guess it will be false as it has been historically so far.
I think this has been true, though maybe not quite as strongly as your quote words it.
The original statement was "Maybe GP is right that at first only skilled developers can wield them to full effect, but it's obviously not going to stop there."
"full effect" is a pretty squishy term.
My more concrete claim (and similar to "Ask again in 6 months. A year.") is the following.
With every new frontier model released [0]:
1. the level of technical expertise required to achieve a given task decreases, or
2. the difficulty/complexity/size of a task that an inexperienced user can accomplish increases.
I think either of these two versions is objectively true looking back and will continue being true going forward. And the amount by which it increases is not trivial.
[0] or every X months to account for tweaks, new tooling (Claude Code is not even a year old yet!), and new approaches.
What do you recommend for getting a message out that people can see?
But there are new things like sweep [0] that you can now do locally.
And 2-3 years ago capable open models weren’t even a thing. Now we’ve made progress on that front. And I believe they’ll keep improving (both on accessibility and competency).
That’s the thing: us as programmers are supposed to be creators/makers, not mere consumers/users. But I do agree that has been changing as of late.
> us as programmers are supposed to be creators/makers, not mere consumers/users
But that's a false dichotomy. As a programmer I am very much a consumer of the language I use, the IDE, the compiler, and most of my dependencies (to say nothing of the OS and the hardware).
I, and I'd wager most people around here, haven't been and aren't individually building at all layers of that stack at once.
It's more cost effective for someone to pay $20 to $100 a month for a Claude subscription than to buy a 512 GB Mac Studio for $10K. We won't discuss the cost of the Nvidia rig.
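The break-even arithmetic is easy to sketch, using only the figures quoted above ($20–$100/month subscription vs. a $10K machine) and ignoring electricity, depreciation, and resale value:

```python
# Months until a $10K local rig costs less than a paid subscription.
# Figures are the ones from the comment above; power and resale ignored.
RIG_COST = 10_000  # 512 GB Mac Studio

for monthly in (20, 100):
    months = RIG_COST / monthly
    print(f"${monthly}/month: rig breaks even after {months:.0f} months "
          f"(~{months / 12:.1f} years)")
# At $20/month the rig takes 500 months (~41.7 years) to pay off;
# even at $100/month it is 100 months (~8.3 years).
```

And that's before accounting for the subscription's model quality improving over those years while the local weights stay fixed.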
I mess around with local AI all the time. It's a fun hobby, but the quality is still night and day.
1. You are not forced to use the AI in the first place.
2. If you want to use one, you can self-host one of the open models.
That at any moment in time the open models are not equivalent in capabilities to the SOTA paid models is beside the point.
How they run their business is none of my business. I can download the weights right now and use them as I see fit under the open source license terms.
Google Maps was never a self contained binary you could download. But even now it remains free to use.
Standing desk, while it's working I do a couple squats or pushups or just wander around the house to stretch my legs. Much more enjoyable than sitting at my desk, hands on keyboard, all day long. And taking my eyes off the screen also makes it easier to think about the next thing.
Moving around does help, but even so, the mental fatigue is real!
1. It costs $100K in hardware to run Kimi 2.5 with a single session at decent tok/s, and it's still not capable of anything serious.
2. I want whatever you're smoking if you think anyone is going to spend billions training models that can outcompete them and are affordable to run, and then open-source them.
How much serious work can it do versus chatgpt3 (SOTA only a few years ago)?