Readit News
SubiculumCode · 2 months ago
I definitely would be okay if we hit an AI winter; our culture and world cannot adapt fast enough for the change we are experiencing. In the meantime, the current level of AI is just good enough to make us more productive, but not so good as to make us irrelevant.
kookamamie · 2 months ago
I hope this happens, too. I think it might, as soon as investors realize that LLMs will not become the AGI they were sold as.
bitmasher9 · 2 months ago
I think negative feedback loops from AIs trained on AI-generated data might lead to a point where AI quality peaks and then slides backwards.
energy123 · 2 months ago
I would not bet against synthetic data. AlphaZero was trained only on synthetic data, and it's better than any human and keeps getting better with more training compute. There is no negative feedback loop in the narrow cases we have tried so far. There may be trade-offs, but on net we are moving forward.
adventured · 2 months ago
AI will radically leap forward in specialized function gain over the next decade. That's what everybody should be focusing on. It'll rapidly splinter and acquire dominance over the vast minutia. The intricacy of the endeavor will be led by the AI itself, as it'll fly-wheel itself on becoming an expert at every little thing far faster than we can. We're just seeding that possibility now. Not only will it not slide backwards, it'll leap a great distance forward from where it's at now.

Mainframes -> desktop computers -> a computer in every hand

Obese LLMs you visit -> agents riding with you wherever you are, integrated into your life and things -> everything everywhere, max specialization and distribution into every crevice, dominance over most tasks whether you're there or active or not

They haven't even really started working together yet. They're still largely living in sandboxes. We're barely out of the first inning. Pick a field you can name, e.g. aircraft/flight, and it's likely hardly even at the first pitch.

In hindsight people will (jokingly?) wonder whether AI self-selected software development as one of its first conquests, as the ultimate foot in the door so it could pursue dominion over everything else (of course it had to happen in that progression; it'll prompt some chicken or the egg debates 30-50 years out).

sgt101 · 2 months ago
Thank goodness we have version control systems then.
Paradigma11 · 2 months ago
We are just at the beginning of integrating external tools into the process and developing complex cognitive structures. The LLM is just one part of it. Until now it was cheaper and easier to improve that part, especially since other work would have been rendered obsolete by LLM improvements anyway.
player1234 · 2 months ago
Please show the evidence for "more productive". How did you measure it?
whamlastxmas · 2 months ago
The amount of human suffering and death that advanced AI could massively mitigate is, in my opinion, overwhelmingly worth the unknown risk. If you had people close to you die of something where medicine or healthcare resources were close to, but not quite capable of, allowing them to survive, you might feel the same.
hn_throwaway_99 · 2 months ago
I hate this argument, because all you have to do is look around the world today to see that massively powerful technology controlled by only a few sure ain't leading to the "think of all the diseases we can cure!" utopia you describe.

Many, many people around the world die all the time from easily curable and preventable diseases; we just choose not to prevent them. This is largely not a technology problem. Just look at PEPFAR, which saved tens of millions of lives from HIV/AIDS. We just decided to stop funding it: https://en.wikipedia.org/wiki/President%27s_Emergency_Plan_f...

voidhorse · 2 months ago
The whole thing is silly. Look, we know that LLMs are just really good word predictors. Any argument that they are thinking is essentially predicated on marketing materials that embrace anthropomorphic metaphors to an extreme degree.

Is it possible that reason could emerge as a byproduct of being really good at predicting words? Maybe, but this depends on the antecedent claim that much, if not all, of reason is strictly representational and strictly linguistic. It's not obvious to me that this is the case. Many people think in images, as direct sense data, and it's not clear that a digital representation of this is equivalent to the thing in itself.

To use an example another HN'er suggested: we don't claim that submarines are swimming. Why are we so quick to claim that LLMs are "reasoning"?

Velorivox · 2 months ago
> Is it possible that reason could emerge as the byproduct of being really good at predicting words?

Imagine we had such marketing behind wheels: they move, so they must be like legs on the inside. Then we run around imagining what the blood vessels and bones must look like inside the wheel. Never mind that neither the structure nor the procedure has anything to do with legs whatsoever.

Sadly, whoever named it artificial intelligence and neural networks likely knew exactly what they were doing.

SubiculumCode · 2 months ago
I was having a discussion with Gemini. It claimed that because Gemini, as a large language model, cannot experience emotion, that the output of Gemini is less likely to be emotionally motivated. I countered that the experience of emotion is irrelevant. Gemini was trained on data written by humans who do experience emotion, who often wrote to express that emotion, and thus Gemini's output can be emotionally motivated, by proxy.
rented_mule · 2 months ago
> this depends on the antecedent claim that much if not all of reason is strictly representational and strictly linguistic. It's not obvious to me that this is the case

I'm with you on this. Software engineers talk about being in the flow when they are at their most productive. For me, the telltale sign of being in the flow is that I'm no longer thinking in English, but I'm somehow navigating the problem / solution space more intuitively. The same thing happens in many other domains. We learn to walk long before we have the language for all the cognitive processes required. I don't think we deeply understand what's going on in these situations, so how are we going to build something to emulate it? I certainly don't consciously predict the next token, especially when I'm in the flow.

And why would we try to emulate how we do it? I'd much rather have technology that complements. I want different failure modes and different abilities so that we can achieve more with these tools than we could by just adding subservient humans. The good news is that everything we've built so far is succeeding at this!

We'll know that society is finally starting to understand these technologies and how to apply them when we are able to get away from using science fiction tropes to talk about them. The people I know who develop LLMs for a living, and the others I know that are creating the most interesting applications of them, already talk about them as tools without any need to anthropomorphize. It's sad to watch their frustration as they are slowed down every time a person in power shows up with a vision based on assumptions of human-like qualities rather than a vision informed by the actual qualities of the technology.

Maybe I'm being too harsh or impatient? I suppose we had to slowly come to understand the unique qualities of a "car" before we could stop limiting our thinking by referring to it as a "horseless carriage".

voidhorse · 2 months ago
Couldn't agree more. I look forward to the other side of this current craze where we actually have reasonable language around what these machines are best for.

On a more general level, I also never understood this urge to build machines that are "just like us". Like you, I want machines that, arguably, are best characterized by the ways in which they are not like us: more reliable, more precise, serving a specific function. It's telling that critiques of the failures of LLMs are often met with "humans have the same problems". Why are humans the bar? We have plenty of humans. We don't need more humans. If we're investing so much time and energy, shouldn't the bar be better than humans? And if it isn't, why isn't it? Oh, right, it's because human error is actually good enough, and the real benefit of these tools is that they are humans that can work without break, don't have autonomy, and that you don't need to listen to or pay. The main beneficiaries of this path are capital owners who just want free labor. That's literally all this is. People who actually want to build stuff want precision machines that are tailored for the task at hand, not some grab bag of sort-of-works-sometimes stochastic doohickeys.

trainerxr50 · 2 months ago
I think more importantly there is this stupid argument that because the submarine is not swimming it will never be able to "swim" as fast as us.

This is true of course in a pointlessly rhetorical sense.

Completely absurd though once we change "swimming" to the more precise "moving through water".

The solution is not to put arms and legs on the submarine so it can ACTUALLY swim.

It would be quite trivial to make a Gary Marcus style argument that humans still can't fly. We would need much longer and wider arms, much less core body mass, feathers.

cageface · 2 months ago
> but this depends on the antecedent claim that much if not all of reason is strictly representational and strictly linguistic.

Most of these newer models are multi-modal, so tokens aren't necessarily linguistic.

comp_throw7 · 2 months ago
What use of the word "reasoning" are you trying to claim that current language models knowably fail to qualify for, except that it wasn't done by a human?
sgt101 · 2 months ago
Well - all of them.

The mechanism by which they work prohibits reasoning.

This is easy to see if you look at a transformer architecture and think through what each step is doing.

The amazing thing is that they produce coherent speech, but they literally can't reason.

etaioinshrdlu · 2 months ago
I don't think it's accurate anymore to say LLMs are just really good word predictors. Especially in the last year, they are trained with reinforcement learning to solve specific problems. They are functions that predict next tokens, but the function they are trained to approximate doesn't have to be plain internet-text continuation.
voidhorse · 2 months ago
Yeah, that's fair. It's probably more accurate to call them sequence predictors or general data predictors than to limit it to words (unless we mean "words" in the broad, mathematical sense, in which case they are free monoid emulators).
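The "functions that predict next tokens" framing is easy to make concrete. A toy sketch, purely illustrative and nothing like a real transformer: a count-based bigram model over a made-up corpus, exposing the same interface of "given context, emit the most probable next token":

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count token -> next-token transitions in a token list."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Argmax over the learned next-token distribution."""
    return counts[token].most_common(1)[0][0]

tokens = "the cat sat on the mat and the cat ran".split()
model = train_bigram(tokens)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

The point upthread still holds under this framing: RL post-training changes which function is being approximated, but the next-token interface stays the same.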
extr · 2 months ago
I find Gary's arguments increasingly semantic and unconvincing. He lists several examples of how LLMs "fail to build a world model", but his definition of "world model" is an informal hand-wave ("a computational framework that a system (a machine, or a person or other animal) uses to track what is happening in the world"). His examples are lifted from a variety of unclear or obsolete models - what is his opinion of o3? Why doesn't he create or propose a benchmark that researchers could use to measure progress of "world model creation"?

What's more, his actual point is unclear. Even if you simply grant, "okay, even SOTA LLMs don't have world models", why do I as a user of these models care? Because the models could be wrong? Yes, I'm aware. Nevertheless, I'm still deriving substantial personal and professional value from the models as they stand today.

voidhorse · 2 months ago
I think the point is that category errors or misinterpreting what a tool does can be dangerous.

Both statistical data generators and actual reasoning are useful in many circumstances, but there are also circumstances in which thinking that you are doing the latter when you are only doing the former can have severe consequences (example: building a bridge).

If nothing else, his perspective is a counterbalance to what is clearly an extreme hype machine that is doing its utmost to force adoption through overpromising, false advertising, etc. These are bad things even if the tech does actually have some useful applications.

As for benchmarks, if you fundamentally don't believe that stochastic data generation leads to reason as an emergent property, developing a benchmark is pointless. Also, not everyone has to be on the same side. It's clear that Marcus is not a fan of the current wave. Asking him to produce a substantive contribution that would help them continue to achieve their goals is preposterous. This game is highly political too. If you think the people pushing this stuff are less than estimable or morally sound, you wouldn't really want to empower them or give them more ideas.

NitpickLawyer · 2 months ago
> If nothing else, his perspective is a counterbalance to what is clearly an extreme hype machine that is doing its utmost to force adoption through overpromising, false advertising, etc. These are bad things even if the tech does actually have some useful applications.

In other words, overhyped in the short term, underhyped in the long term. Where short and long term are extremely volatile.

Take programming as an example. 2.5 years ago, gpt3.5 was seen as "cute" in the programming world. Oh, look, it does poems and e-mails, and the code looks like Python but it's wrong 9 times out of 10. But now a 24B model can handle end-to-end SWE tasks zero-shot a lot of the time.

squirrel · 2 months ago
He cites o3 and o4-mini as examples of LLMs that play illegal chess moves.
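Claims like this are easy to test mechanically, which is part of why chess keeps coming up: keep an authoritative board state and check each move the model emits against the rules. A toy sketch of that harness idea, restricted to a lone knight on an empty board (a real evaluation would use a full rules engine such as the python-chess library):

```python
def knight_targets(square):
    """All on-board destinations for a knight on `square` (e.g. "g1")."""
    f = ord(square[0]) - ord("a")   # file 0..7
    r = int(square[1]) - 1          # rank 0..7
    deltas = [(1, 2), (2, 1), (-1, 2), (-2, 1),
              (1, -2), (2, -1), (-1, -2), (-2, -1)]
    return {chr(nf + ord("a")) + str(nr + 1)
            for df, dr in deltas
            for nf, nr in [(f + df, r + dr)]
            if 0 <= nf < 8 and 0 <= nr < 8}

def is_legal_knight_move(src, dst):
    """The harness check: does a proposed move obey the piece's rules?"""
    return dst in knight_targets(src)

print(is_legal_knight_move("g1", "f3"))  # True
print(is_legal_knight_move("g1", "g3"))  # False: knights don't move like that
```

The harness is deterministic and cheap, so a model emitting even occasional illegal moves is easy to catch at scale.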
Lerc · 2 months ago
I don't understand the reasoning behind concluding that, because something fails a task that requires reasoning, it cannot reason.

To use chess as an example: humans sometimes play illegal moves. That does not mean humans cannot reason. It is an instance of failing to show proof of reasoning, not a proof of the inability to reason.

seanhunter · 2 months ago
But really, so what? We already have specialised chess engines (Stockfish, Leela, AlphaZero etc) that are far far stronger than humans will ever be, so insofar as that's an interesting goal, we achieved it with Deep Blue and have gone way, way beyond it since. The fact that a large language model isn't able to discern legal chess moves seems to me to be neither here nor there. Most humans can't do that either. I don't see it as evidence of lack of a world model either (because most people with a real chess board in front of them and a mental model of the world can't reliably play legal chess moves).

I find it astonishing that people pay any attention to Gary Marcus and doubly so here. Whether or not you are an “AI optimist”, he clearly is just a bloviator.

energy123 · 2 months ago
Why was Anthropic's interpretability work not discussed? Inconvenient for the conclusion?

https://www.anthropic.com/news/tracing-thoughts-language-mod...

lossolo · 2 months ago
The same work in which they show that the LLM doesn't know what it "thinks" or how it arrives at its conclusions? Where they demonstrate that it outputs what is statistically most probable, even though the logits indicate it was something else?
tim333 · 2 months ago
I usually disagree with Gary Marcus, but his basic point seems fair enough, if not surprising: large language models model language about the world, not the world itself. For a human-like understanding of the world you need some understanding of concepts like space, time, emotion, other creatures' thoughts, and so on, all things we pick up as kids.

I don't see much reason why future AI couldn't do that rather than just focusing on language though.

code51 · 2 months ago
The underlying assumption is that language and symbols are enough to represent phenomena. Maybe we are falling for this one in our own heads as well.

Understanding may not be a static symbolic representation. The contexts of the world are infinite and continuously redefined. We believed we could represent all contexts tied to information, but that's a tough call.

Yes, we can approximate. No, we can't completely say we can represent every essential context at all times.

Some things might not be representable at all by their very chaotic nature.

tim333 · 2 months ago
I did think that human mental modeling of the world is also quite rough and often inaccurate. I don't see why AI can't become human-like in its abilities, but accurately modeling all the relativistic quarks in an atom is a bit beyond anything just now.
sdenton4 · 2 months ago
"A wandering ant, for example, tracks where it is through the process of dead reckoning. An ant uses variables (in the algebraic/computer science sense) to maintain a readout of its location, even as it wanders, constantly updated, so that it can directly return to its home."

Hm.

Dead reckoning is a terrible way to navigate; it famously led to lots of ships crashing on the shores of France before good clocks allowed tracking longitude accurately.

Ants lay down pheromone trails and use smell to find their way home... There's likely some additional tracking going on, but I would be surprised if it looked anything like symbolic GOFAI.

deadbabe · 2 months ago
Even if you find a pheromone trail, it doesn’t tell you what direction is home, or what path to take at branching paths. You need dead reckoning. The trail just helps you reduce the complexity of what you have to remember.
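The "variables" in Marcus's ant example amount to a running displacement vector, i.e. path integration. A minimal sketch under simplifying assumptions (flat 2D plane, perfect odometry, no pheromone cues):

```python
import math

def integrate_path(steps):
    """Sum (heading_radians, distance) steps into a net displacement (x, y)."""
    x = y = 0.0
    for heading, dist in steps:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return x, y

def home_vector(steps):
    """Dead reckoning: home is just the negated net displacement."""
    x, y = integrate_path(steps)
    return math.hypot(x, y), math.atan2(-y, -x)

# Wander 3 units east, then 4 units north: home is 5 units away.
dist, heading = home_vector([(0.0, 3.0), (math.pi / 2, 4.0)])
print(round(dist, 6))  # 5.0
```

Two scalars of state are enough to point straight home after an arbitrarily long wander, which is why the example is attractive to the symbolic-representation camp.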
viraptor · 2 months ago
The lack of information in ant trails (beyond "it exists here") leads to death spirals https://en.m.wikipedia.org/wiki/Ant_mill
sdenton4 · 2 months ago
You take the branch with the stronger smell to get home. The branching point is where the trail divides, as different groups branch out, and thus the way home has more pheromones. Follow the trail and you don't need to remember the direction...

Many animals detect and interpret smells as chemical gradients. We don't have the hardware for it, but plenty of others do.

cma · 2 months ago
The trail also leads the other ants to food; it would be hard for them to use your own dead reckoning.

vunderba · 2 months ago
Speaking of chess, a fun experiment is setting up a few positions on e.g. Lichess, taking a screenshot, and asking a state-of-the-art VLM to count the number of pieces on the board. In my experience, it had a much higher error rate on unlikely or impossible board positions (three kings on the board, etc).
Animats · 2 months ago
Note that this is the same problem engineers have talking to managers. The manager may lack a mental model of the task, but tries to direct it anyway.