encyclopedism commented on I miss thinking hard   jernesto.com/articles/thi... · Posted by u/jernestomg
johnfn · 8 days ago
When I read the article, I feel the same emotions that I feel if someone were to tell me "I keep trying to ride a bike but I keep falling off". My experience with LLMs is that the "lack of thinking" is mostly a quick trough you fall into before you come out the other side understanding how to deal with LLMs better. And yes, there's nothing wrong with relating to someone's experience, but mostly I just want to tell that guy, just keep trying, it'll get better, and you'll be back to thinking hard if you keep at it.

But then OP says stuff like:

> I am not sure if there will ever be a time again when both needs can be met at once.

In my head that translates to "I don't think there will ever be a time again when I can actually ride my bike for more than 100 feet." At which point you probably start getting responses more like "I don't get it" because there's only so much empathy you can give someone before you start getting a little frustrated and being like "cmon it's not THAT bad, just keep trying, we've all been there".

encyclopedism · 8 days ago
If I can 'speak' for the OP:

> I keep trying to ride a bike but I keep falling off

I do not think this analogy is apt.

The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do.

The article is lamenting the disappearance of something meaningful to the OP. One can feel sad about this alone. It is not an equation to balance: X is gone but Y is now available. The lament stands on its own. As the OP indicates with his 'pragmatism', we now collectively have little choice about the use of AI. The flood waters do not ask; they take everyone in their path.

encyclopedism commented on I miss thinking hard   jernesto.com/articles/thi... · Posted by u/jernestomg
keyle · 8 days ago
I don't get it.

I think just as hard, I type less. I specify precisely and I review.

If anything, all we've changed is working at a higher level. The product is the same.

But these people just keep mixing things up, like "wow I got a Ferrari now, watch it fly off the road!"

Yeah so you got a tools upgrade; it's faster, it's more powerful. Keep it on the road or give up driving!

We went from auto completing keywords, to auto completing symbols, to auto completing statements, to auto completing paragraphs, to auto completing entire features.

Because it happened so fast, people feel the need to rename programming every week. We're either vibe coders now, or agentic coders, or ... just programmers, hey. You know why? I write in C, I get machine code, but I didn't write the machine code! It was all an abstraction!

Oh but it's not the same you say, it changes every time you ask. Yes, for now, it's still wonky and janky in places. It's just a stepping stone.

Just chill, it's programming. The tools just got even better.

You can still jump on a camel and cross the desert in 3 days. Have at it, you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.

We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger fewer typos today than ever before.

encyclopedism · 8 days ago
I find it interesting, in the comments on this post (not just this particular comment per se), the sheer inability to relate, or even ATTEMPT to relate, to another person's experience or feeling. The post itself articulated a viewpoint and experience; your having a different one does not negate the other's. Nor does your perspective mean the other's does not exist. I'm dumbfounded at many of the comments.

Here are some clipped comments that I pulled from the overall post:

> I don't get it.

> I'm using LLMs to code and I'm still thinking hard.

> I don't. I miss being outside, in the sun, living my life. And if there's one thing AI has done it's save my time.

> Then think hard? Have a level of self discipline and don’t consistently turn to AI to solve your problems.

> I am thinking harder than ever due to vibe coding.

> Skill issue

> Maybe this is just me, but I don't miss thinking so much.

The last comment pasted is pure gold, a great one to put up on a wall. Gave me a right chuckle, thanks!!!

encyclopedism commented on Scaling long-running autonomous coding   simonwillison.net/2026/Ja... · Posted by u/srameshc
Zababa · 23 days ago
Can you give examples of how "LLMs do not think, understand, reason, reflect, comprehend and they never shall" or the "completely mechanical process" framing helps you understand better when LLMs work and when they don't?

Many people are throwing around that they don't "think", that they aren't "conscious", that they don't "reason", but I don't see those people sharing interesting heuristics to use LLMs well. The "they don't reason" people tend to, in my opinion/experience, underestimate them by a lot, often claiming that they will never be able to do <thing that LLMs have been able to do for a year>.

To be fair, the "they reason/are conscious" people tend to, in my opinion/experience, overestimate how much a LLM being able to "act" a certain way in a certain situation says about the LLM/LLMs as a whole ("act" is not a perfect word here, another way of looking at it is that they visit only the coast of a country and conclude that the whole country must be sailors and have a sailing culture).

encyclopedism · 23 days ago
We know what an LLM is; in fact, you can build one from scratch if you like, e.g. https://www.manning.com/books/build-a-large-language-model-f...

It's an algorithm and a completely mechanical process which you can quite literally copy time and time again. Unless, of course, you think 'physical' computers have magical powers that a pen-and-paper Turing machine doesn't?

> Many people are throwing around that they don't "think", that they aren't "conscious", that they don't "reason", but I don't see those people sharing interesting heuristics to use LLMs well.

My digital thermometer doesn't think. Imbuing LLMs with thought will start leading to some absurd conclusions.

A cursory read of basic philosophy would help elucidate why casually saying LLMs think, reason, etc. is not good enough.

What is thinking? What is intelligence? What is consciousness? These questions are difficult to answer. There is NO clear definition. Some things are so hard to define (and people have tried for centuries) that they are a problem set in themselves; e.g., what is consciousness? Please see the hard problem of consciousness.

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

encyclopedism commented on Scaling long-running autonomous coding   simonwillison.net/2026/Ja... · Posted by u/srameshc
myrmidon · 23 days ago
> If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model.

But you could make the exact same argument for a human mind? (could just simulate all those neural interactions with pen and paper)

The only way to get out of it is to basically admit magic (or some other metaphysical construct with a different name).

encyclopedism · 23 days ago
> But you could make the exact same argument for a human mind?

It would be an argument, and you are free to make it. What the human mind is remains an open scientific and philosophical problem that many are working on.

The point is that LLMs are NOT the same, because we DO know what LLMs are. Please see the myriad of 'write an LLM from scratch' tutorials.

encyclopedism commented on Scaling long-running autonomous coding   simonwillison.net/2026/Ja... · Posted by u/srameshc
gjadi · 23 days ago
Have these shortcomings of LLMs been addressed by better models or by better integration with other tools? Like, are they better at coding because the models are truly better, or because the agentic loops are better designed?

encyclopedism · 23 days ago
Fundamentally these shortcomings cannot be addressed.

They can be, and are, improved (papered over) over time, for example by improving and tweaking the training data. Adding in new data sets is the usual fix. A prime example: 'count the number of R's in strawberry' caused quite a debacle at a time when LLMs were meant to be intelligent. Because they aren't, they can trip up over simple problems like this. Continue to use an army of people to train them and these edge cases may become smaller over time. Fundamentally the LLM tech hasn't changed.
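To make the strawberry example concrete: counting letters is a trivially mechanical operation on characters, but an LLM predicts over subword tokens, so individual characters are not directly visible to it. A minimal sketch (the token split shown is invented for illustration, not the output of any real tokenizer):

```python
def count_letters(word: str, letter: str) -> int:
    # The mechanical, character-level answer is trivial to compute.
    return word.lower().count(letter.lower())

# A hypothetical subword split of "strawberry" -- real tokenizers differ,
# but the point stands: the model sees chunks like these, not characters,
# so "how many r's?" is not a simple lookup for it.
tokens = ["str", "aw", "berry"]

print(count_letters("strawberry", "r"))  # -> 3
```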

I am not saying that LLMs aren't amazing; they absolutely are. But WHAT they are is an understood thing, so let's not confuse ourselves.

encyclopedism commented on Scaling long-running autonomous coding   simonwillison.net/2026/Ja... · Posted by u/srameshc
Gazoche · 23 days ago
> they don't feel intelligence but rather an attempt at mimicking it

Because that's exactly what they are. An LLM is just a big optimization function with the objective "return the most probabilistically plausible sequence of words in a given context".

There is no higher thinking. They were literally built as a mimicry of intelligence.

encyclopedism · 23 days ago
I don't understand why this point is NOT getting across to so many on HN.

LLMs do not think, understand, reason, reflect, or comprehend, and they never shall. I have commented elsewhere, but this bears repeating:

If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model. Then, once you have trained the model, you could use even more pen and paper to step through the correct prompts to arrive at the answer. All of this would be a completely mechanical process. This really does bear thinking about. It's amazing the results that LLMs are able to achieve. But let's not kid ourselves and start throwing about terms like AGI or emergence just yet. It makes a mechanical process seem magical (as do computers in general).

I should add that it also makes sense why it would: just look at the volume of human knowledge (the training data). It's the training data, with quite literally the mass of mankind's knowledge, genius, logic, inferences, language and intellect, that does the heavy lifting.
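The pen-and-paper claim above can be illustrated with a toy next-token model. This is a bigram counter, not a transformer, and the corpus and function names are invented for illustration; but it shows how both "training" and "prediction" can reduce to pure bookkeeping you could, in principle, do by hand:

```python
from collections import Counter, defaultdict

def train_bigram(corpus: list[str]) -> dict:
    """Count which word follows which -- pure tallying,
    doable with pen and paper given enough patience."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict(model: dict, word: str) -> str:
    # "Most probabilistically plausible" next word = the most frequent one.
    return model[word].most_common(1)[0][0]

corpus = ["the cat sat", "the cat ran", "the dog sat"]
model = train_bigram(corpus)
print(predict(model, "the"))  # -> "cat" (follows "the" twice, vs once for "dog")
```

A real LLM replaces the counting with gradient descent over billions of parameters, but the training-then-sampling loop is the same kind of mechanical procedure, just vastly larger.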

encyclopedism commented on Two Concepts of Intelligence   cacm.acm.org/blogcacm/two... · Posted by u/1970-01-01
tucnak · 24 days ago
I really wish all these LessWrong, "what is the meaning of intelligence" types cared enough to study Wittgenstein a bit rather than hear themselves talk; it would save us all a lot of time.
encyclopedism · 24 days ago
I fully agree with your sentiments. People really need to study a little!
encyclopedism commented on Two Concepts of Intelligence   cacm.acm.org/blogcacm/two... · Posted by u/1970-01-01
p-e-w · 24 days ago
> The Turkey was an LLM. It predicted the future based entirely on the distribution of the past. It had no "understanding" of the purpose of the farmer.

But we already know that LLMs can do much better than that. See the famous “grokking” paper[1], which demonstrates that with sufficient training, a transformer can learn a deep generalization of its training data that isn’t just a probabilistic interpolation or extrapolation from previous inputs.

Many of the supposed “fundamental limitations” of LLMs have already been disproven in research. And this is a standard transformer architecture; it doesn’t even require any theoretical innovation.

[1] https://arxiv.org/abs/2301.02679

encyclopedism · 24 days ago
LLMs have surpassed being Turing machines? Turing machines now think?

LLMs are a known quantity in that they are an algorithm! Humans are not. PLEASE at the very least grant that the jury is STILL out on what humans actually are in terms of their intelligence; that is, after all, what neuroscience is still figuring out.
