roenxi · 3 months ago
> On the one hand, we’re pretty sure these systems don’t do anything like what humans do to produce or classify language or images. They use massive amounts of data, whereas we seem to use relatively little;

This isn't entirely correct: humans work with a roughly 16-hour-per-day audio-visual feed at very high resolution, which seems to be more data than ChatGPT was trained on. We spend less time looking at character glyphs, but glyphs are the end point of a long process of building up language. When we say that cats sit on mats, that statement is linked to our having seen cats, mats, and a lot of physics.
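
A rough back-of-envelope sketch makes the scale comparison concrete. Every constant below is my own loose assumption (compressed visual bitrate, years of exposure, corpus size), not a measured figure:

    # Rough comparison of raw human sensory input vs. LLM training data.
    # All numbers are loose assumptions, not measurements.

    SECONDS_AWAKE_PER_DAY = 16 * 3600
    # Assume the visual stream is worth ~1 MB/s after heavy compression
    # (the optic nerve is often estimated at ~10 Mbit/s, i.e. ~1.25 MB/s).
    VISUAL_BYTES_PER_SEC = 1e6
    YEARS = 20

    human_bytes = SECONDS_AWAKE_PER_DAY * VISUAL_BYTES_PER_SEC * 365 * YEARS
    # ~15 trillion tokens at ~4 bytes/token is a plausible scale for a
    # modern LLM's text-only training corpus.
    llm_bytes = 15e12 * 4

    print(f"human sensory input over {YEARS} years: ~{human_bytes:.1e} bytes")
    print(f"LLM training text: ~{llm_bytes:.1e} bytes")
    print(f"ratio: ~{human_bytes / llm_bytes:.0f}x")

Even with conservative assumptions, the raw human feed comes out several times larger than a trillion-token text corpus; the comparison only favors the LLM if you count text alone.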

That said, this strongly supports the idea that humans learn differently from LLMs, and humans seem to have a novelty-seeking strategy that I don't think the major LLMs have cracked yet. But we use more data than they do.

globnomulous · 3 months ago
Where a direct numerical comparison is possible, my understanding is that Weatherby is correct: both children and adults are exposed to far fewer words than an LLM before achieving comparable fluency, and LLMs have, in aggregate, effectively perfect recall, whereas humans do not.
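
To put rough numbers on the word-exposure gap (every figure here is a loose, commonly cited estimate of my own choosing, not anything from the interview):

    # Loose comparison of word exposure; every constant is an assumption.
    WORDS_HEARD_PER_DAY = 15_000   # common estimates for child-directed speech
    YEARS_TO_FLUENCY = 10

    child_words = WORDS_HEARD_PER_DAY * 365 * YEARS_TO_FLUENCY   # ~5.5e7
    llm_tokens = 1.5e13   # trillions of tokens, a plausible modern corpus

    print(f"child: ~{child_words:.1e} words; LLM: ~{llm_tokens:.1e} tokens")
    print(f"LLM sees roughly {llm_tokens / child_words:,.0f}x more text")

Under those assumptions an LLM sees on the order of a hundred thousand times more text than a child does on the way to fluency.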
joe_the_user · 3 months ago
I would claim that any reasonable "bright line" critique of AI is going to be a "remainder" theory. If one tightly models and articulates a thing that AI can't do, one has basically created a benchmark that systems will gradually (or quickly) surpass. But the ability to surpass benchmarks isn't necessarily the ability to do anything in particular, and one can still sketch which remainders tend to persist.

The thing is, high-theory social scientists like the person interviewed want to claim a positive theory rather than a remainder theory, because a positive theory seems more substantial. But for the reason above, I think that substance is basically an illusion.

skhameneh · 3 months ago
Anecdotally, LLMs as a whole haven't made my life noticeably better. I see some great use cases and some impressive demos, but they are just that. Looking at how many things LLMs have noticeably made worse, my impression is that the harms outweigh the improvements.

- I asked when a software EOL would be, and the LLM's response (incorrectly) used the past tense for an event yet to happen.
- The replacement of Google Assistant with Gemini broke using my phone while locked, and the home automation is noticeably less reliable.
- I asked an LLM whether a device "phones home" and the answer was wrong.
- I asked an LLM to generate some boilerplate code with very specific instructions, and the generated code was unusable.
- I gave critical feedback to a company that works with LLMs regarding a poor experience (along with some suggestions), and they seemed to have no interest in making adjustments.
- I've seen LLM note-takers produce incorrect notes, often skipping important or nuanced details.

I have had good experiences with LLMs and other ML models, but most of those were years ago, before LLMs were being unnecessarily shoved into every possible scenario. At the end of the day, it doesn't matter whether the experience is powered by an LLM; it matters whether the experience is effective overall (by many different measures).

gametorch · 3 months ago
My experience is the opposite.

I have an extensive, strong traditional CS background. I built and shipped a production-grade SaaS in 2 months that has paying users. I've built things in a day that would have taken me 3+ days manually. Through all of that, I hardly wrote a single line of code; it was all GPT-4.1 and o3.

Granted, I think you need quite a lot of knowledge and experience to come up with coherent prompts and to do the surgery necessary to get yourself out of a jam. But LLMs have easily 3x'd my productivity by very quantifiable metrics, like the number of features shipped.

I've noticed that people who actually build stuff tend to agree with me; that's because the tools add such tremendous value to our lives. Armchair speculators seem to see only the negative side.
