I think just as hard; I just type less. I specify precisely and I review.
If anything, all that's changed is that we're working at a higher level. The product is the same.
But people just keep mixing things up, like "wow, I got a Ferrari now, watch it fly off the road!"
Yeah, so you got a tool upgrade: it's faster, it's more powerful. Keep it on the road or give up driving!
We went from auto-completing keywords, to auto-completing symbols, to auto-completing statements, to auto-completing paragraphs, to auto-completing entire features.
Because it happened so fast, people feel the need to rename programming every week. We're either vibe coders now, or agentic coders, or ... or just programmers, hey. You know why? I write in C, I get machine code. I didn't write the machine code! It was all an abstraction!
Oh, but it's not the same, you say: it changes every time you ask. Yes, for now, it's still wonky and janky in places. It's just a stepping stone.
Just chill, it's programming. The tools just got even better.
You can still jump on a camel and cross the desert in 3 days. Have at it; you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.
We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger fewer typos today than ever before.
Here are some clipped comments that I pulled from the overall post:
> I don't get it.
> I'm using LLMs to code and I'm still thinking hard.
> I don't. I miss being outside, in the sun, living my life. And if there's one thing AI has done, it's saved me time.
> Then think hard? Have a level of self-discipline and don’t consistently turn to AI to solve your problems.
> I am thinking harder than ever due to vibe coding.
> Skill issue
> Maybe this is just me, but I don't miss thinking so much.
The last comment pasted is pure gold, a great one to put up on a wall. Gave me a right chuckle, thanks!!!
Many people are throwing around that they don't "think", that they aren't "conscious", that they don't "reason", but I don't see those people sharing interesting heuristics to use LLMs well. The "they don't reason" people tend to, in my opinion/experience, underestimate them by a lot, often claiming that they will never be able to do <thing that LLMs have been able to do for a year>.
To be fair, the "they reason/are conscious" people tend to, in my opinion/experience, overestimate how much an LLM being able to "act" a certain way in a certain situation says about that LLM, or about LLMs as a whole. ("Act" is not a perfect word here; another way of looking at it is that they visit only the coast of a country and conclude that the whole country must be sailors with a sailing culture.)
It's an algorithm and a completely mechanical process which you can quite literally copy time and time again. Unless, of course, you think 'physical' computers have magical powers that a pen-and-paper Turing machine doesn't?
> Many people are throwing around that they don't "think", that they aren't "conscious", that they don't "reason", but I don't see those people sharing interesting heuristics to use LLMs well.
My digital thermometer doesn't think. Imbuing LLMs with thought will start leading to some absurd conclusions.
A cursory read of basic philosophy would help elucidate why casually saying LLMs think, reason, etc. is not good enough.
What is thinking? What is intelligence? What is consciousness? These questions are difficult to answer; there is NO clear definition. Some things are so hard to define (and people have tried for centuries), e.g. consciousness, that they are a problem set in themselves; see the 'hard problem of consciousness'.
But you could make the exact same argument for a human mind? (You could just simulate all those neural interactions with pen and paper.)
The only way to get out of it is to basically admit magic (or some other metaphysical construct with a different name).
It would be an argument, and you are free to make it. What the human mind is remains an open scientific and philosophical problem that many are working on.
The point is that LLMs are NOT the same, because we DO know what LLMs are. Please see the myriad of 'write an LLM from scratch' tutorials.
They can be and are improved (papered over) over time, for example by improving and tweaking the training data; adding in new data sets is the usual fix. A prime example: 'count the number of R's in strawberry' caused quite a debacle at a time when LLMs were meant to be intelligent. Because they aren't, they can trip up over simple problems like this. Continue to use an army of people to train them and these edge cases may shrink over time, but fundamentally the LLM tech hasn't changed.
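A common explanation for the strawberry stumble, for what it's worth, is tokenization: the model never sees individual letters, only subword tokens. A minimal sketch using the tiktoken library (assuming it is installed; the exact pieces depend on which tokenizer you pick) makes the split visible:

```python
# Minimal sketch: show that a model works on subword tokens, not letters --
# one common explanation for the "count the R's in strawberry" stumble.
# Assumes the tiktoken library is installed (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era tokenizer
token_ids = enc.encode("strawberry")

# Decode each id on its own to see the pieces the model actually works with.
pieces = [enc.decode([t]) for t in token_ids]
print(token_ids)  # a short list of integers
print(pieces)     # subword pieces; letter boundaries don't align with tokens
```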
I am not saying that LLMs aren't amazing; they absolutely are. But WHAT they are is an understood thing, so let's not confuse ourselves.
Because that's exactly what they are. An LLM is just a big optimization function with the objective "return the most probabilistically plausible sequence of words in a given context".
There is no higher thinking. They were literally built as a mimicry of intelligence.
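To make that objective concrete, here is a toy sketch of greedy next-token selection. Everything here is invented for illustration (a real LLM scores tens of thousands of subword tokens, and the scores come from a huge learned network, not a hand-written list):

```python
# Toy sketch of "return the most plausible next word": softmax over scores,
# then greedy selection. Vocabulary and logits are made up for illustration.
import math

vocab = ["road", "sky", "desert", "keyboard"]
logits = [2.1, 0.3, 1.2, -0.5]  # hypothetical raw scores for the next word

# Softmax turns raw scores into a probability distribution over the vocab.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Greedy decoding: pick the single most probable continuation.
best = max(range(len(vocab)), key=lambda i: probs[i])
print(vocab[best], round(probs[best], 3))  # "road" plus its probability
```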
LLMs do not think, understand, reason, reflect, or comprehend, and they never shall. I have commented elsewhere, but this bears repeating:
If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model. Then, once you had trained the model, you could use even more pen and paper to step through the correct prompts and arrive at the answer. All of this would be a completely mechanical process. This really does bear thinking about. It's amazing the results that LLMs are able to achieve, but let's not kid ourselves and start throwing about terms like AGI or emergence just yet. That makes a mechanical process seem magical (as do computers in general).
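That "completely mechanical" claim is easy to make concrete: inference is fixed arithmetic on fixed weights, so the same input always yields the identical output, exactly as it would with pen and paper. A toy 2x2 "model" (all numbers invented) shows the idea:

```python
# Toy illustration that inference is deterministic arithmetic: fixed
# "weights" plus a fixed input always produce the identical output.
W = [[0.5, -1.0], [2.0, 0.25]]  # hypothetical weights
x = [1.0, 3.0]                  # hypothetical input

def forward(W, x):
    # One matrix-vector multiply -- the step an LLM repeats billions of times.
    return [sum(w * v for w, v in zip(row, x)) for row in W]

print(forward(W, x))                    # [-2.5, 2.75]
print(forward(W, x) == forward(W, x))   # True: completely mechanical
```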
I should add, it also makes sense why it would: just look at the volume of human knowledge in the training data. It's the training data, carrying quite literally the mass of mankind's knowledge, genius, logic, inferences, language, and intellect, that does the heavy lifting.
But we already know that LLMs can do much better than that. See the famous “grokking” paper (Power et al., “Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets”, arXiv:2201.02177), which demonstrates that with sufficient training, a transformer can learn a deep generalization of its training data that isn’t just a probabilistic interpolation or extrapolation from previous inputs.
Many of the supposed “fundamental limitations” of LLMs have already been disproven in research. And this is with a standard transformer architecture; it doesn’t even require any theoretical innovation.
LLMs are known quantities in that they are an algorithm! Humans are not. PLEASE, at the very least, grant that the jury is STILL out on what humans actually are in terms of their intelligence; that is, after all, what neuroscience is still figuring out.
But then OP says stuff like:
> I am not sure if there will ever be a time again when both needs can be met at once.
In my head that translates to "I don't think there will ever be a time again when I can actually ride my bike for more than 100 feet." At which point you probably start getting responses more like "I don't get it", because there's only so much empathy you can give someone before you start getting a little frustrated and saying "c'mon, it's not THAT bad, just keep trying, we've all been there".
> I keep trying to ride a bike but I keep falling off
I do not think this analogy is apt.
The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do.
The article is lamenting the disappearance of something meaningful to the OP. One can feel sad about this on its own; it is not an equation to balance, where X is gone but Y is now available. The lament stands alone. As the OP indicates with his 'pragmatism', we now collectively have little choice about the use of AI. The flood waters do not ask; they take everyone in their path.