jqpabc123 · a year ago
LLMs turn traditional computing upside down.

Instead of very accurate results at low cost, they produce inaccurate results at high cost.

Generalized intelligence and reasoning are not achievable by brute force statistical simulation --- regardless of the amount of money and hope invested/wasted.

elpocko · a year ago
Finally, the highly reputable science publication LA Times provides proof that LLMs are in fact large language models, rather than large math solvers or large fact models.
jqpabc123 · a year ago
...large language models, rather than large math solvers or large fact models.

And why not?

Math is the most logical and precise language ever invented.

If LLMs can truly think and reason and understand, I would expect them to excel at math problems. Or at least admit that they can't do math and logic.

elpocko · a year ago
>If LLMs can truly think and reason and understand

But they can't, which was kind of my point. They are clever token predictors that know language, which makes them really good text generators ("stochastic parrots"), but even a trivial task like counting the letters in a word is hit-or-miss, especially if the solution is not found in their training data.

I don't understand why people find this surprising. It's remarkable that LLMs can solve some problems at all, not the other way around.
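
(For illustration, a minimal sketch of why letter counting is awkward for a token-level model; it assumes the tiktoken library, but any BPE tokenizer shows the same thing.)

    # Why counting letters is hard for a model that only sees tokens.
    # Assumes `tiktoken` is installed (pip install tiktoken).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    word = "strawberry"
    token_ids = enc.encode(word)

    # The model never sees individual characters, only these chunks.
    pieces = [enc.decode([t]) for t in token_ids]
    print(pieces)  # e.g. ['str', 'aw', 'berry']
    print(len(word), "letters,", len(token_ids), "tokens")

    # Counting a letter means reasoning across token boundaries the model
    # cannot directly inspect; at the character level it is trivial.
    print(word.count("r"), "r's in", word)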

McBainiel · a year ago
I wonder if AI being really useful when it comes to programming caused some to miscalculate its usefulness in general.
jqpabc123 · a year ago
I wonder if watching too much science fiction caused some to miscalculate its usefulness in general.

Expecting real intelligence to "emerge" from a binary logic playback device (aka a computer as we know it) is just a variation on the Infinite Monkey Theorem in my opinion. In other words, the odds are not quite zero --- but they are very near it.

https://www.sciencealert.com/scientists-confirm-monkeys-do-n...

rsynnott · a year ago
I think a lot of people overestimate their usefulness there, too, tbh, possibly because they’re new and shiny. In actual use the AI things feel more like having a massively over-confident intern; trouble is, interns learn (that’s kind of the whole point). The magic robot does not. One could question how useful having an eternally overconfident yet incompetent intern is.
