Readit News
d4rkn0d3z · 3 days ago
In physics, the change from classical to quantum theory was a change from determinism to probabilistic determinism. There is not one physicist on earth that would ever exhort you to use quantum theory where classical theory will do. Furthermore, when you study physics you must learn the classical theory first or you will be hopelessly lost, just like the author of this article.

The central analogy of the article is entirely bogus.

This article does not rise to the level of being wrong.

hearsathought · 3 days ago
> In physics, the change from classical to quantum theory was a change from determinism to probabilistic determinism.

Don't you mean from determinism to nondeterminism?

> There is not one physicist on earth that would ever exhort you to use quantum theory where classical theory will do.

That's being practical.

d4rkn0d3z · 3 days ago
> "Don't you mean from determinism to nondeterminism?"

No, I mean exactly what I said. Given a system's state, one evolves it using the wave equation du jour; nondeterminism does not occur.
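To spell out the standard textbook statement behind "probabilistic determinism": the state evolves deterministically under the Schrödinger equation, and probability enters only through the Born rule at measurement.

```latex
i\hbar\,\frac{\partial}{\partial t}\,\lvert\psi(t)\rangle = \hat{H}\,\lvert\psi(t)\rangle,
\qquad
\lvert\psi(t)\rangle = e^{-i\hat{H}t/\hbar}\,\lvert\psi(0)\rangle,
\qquad
P(a) = \bigl\lvert\langle a \vert \psi(t)\rangle\bigr\rvert^{2}
```

Given the initial state and the Hamiltonian, the state at any later time is uniquely determined; only measurement outcomes are distributed, and their distribution is itself fixed by the state.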

ares623 · 3 days ago
> This article does not rise to the level of being wrong.

Amazing

meindnoch · 3 days ago
What a bunch of pretentious nonsense. It is always a red flag when an author tries to shoehorn mathematical notation into an article that has nothing mathematical about it whatsoever. Gives off "igon value problem"-vibes.
ankit219 · 3 days ago
Building with non-deterministic systems isn't new, and it doesn't take a scientist, though fewer people have experience with such systems today. You saw the same thing in TCP/IP development, where we ended up building systems that assumed randomness and made sure it wasn't passed on to the next layer. And given the latency of earlier networks, there is no way networked games were strictly deterministic either.
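The TCP-style pattern alluded to here can be sketched as a stop-and-wait retransmit loop: the lower layer drops packets at random, and the retry loop absorbs that randomness so the layer above sees a reliable interface. The names and loss model below are purely illustrative.

```python
import random

def lossy_send(packet, rng, loss_rate=0.3):
    """Simulate an unreliable lower layer: drops packets at random."""
    return packet if rng.random() > loss_rate else None

def reliable_send(packet, rng, max_retries=50):
    """Stop-and-wait: retransmit until the packet gets through.

    The caller sees a deterministic interface (the packet arrives);
    the channel's randomness is absorbed by the retry loop.
    """
    for _ in range(max_retries):
        received = lossy_send(packet, rng)
        if received is not None:
            return received
    raise TimeoutError("link appears to be down")

rng = random.Random(42)
message = [reliable_send(chunk, rng) for chunk in ["he", "ll", "o"]]
assert "".join(message) == "hello"
```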
golergka · 3 days ago
Doesn't any kind of human in the loop make a system non-deterministic?
pdhborges · 4 days ago
I will believe this theory if someone shows me that the ratio of scientists to engineers of leading teams of the leading companies deploying AI products is bigger than 1.
layer8 · 3 days ago
I don’t think the dichotomy between scientists and engineers being established here makes much sense in the first place. Applied science is applied science.
therobots927 · 3 days ago
This is pure sophistry and the use of formal mathematical notation just adds insult to injury here:

“Think about it: we’ve built a special kind of function F' that for all we know can now accept anything — compose poetry, translate messages, even debug code! — and we expect it to always reply with something reasonable.”

This forms the axiom from which the rest of this article builds its case. At each step further fuzzy reasoning is used. Take this for example:

“Can we solve hallucination? Well, we could train perfect systems to always try to reply correctly, but some questions simply don't have "correct" answers. What even is the "correct" when the question is "should I leave him?".”

Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?

The most disturbing part of my tech career has been witnessing the ability that many highly intelligent and accomplished people have to apparently fool themselves with faulty yet complex reasoning. The fact that this article is written in defense of chatbots that ALSO have complex and flawed reasoning just drives home my point. We’re throwing away determinism just like that? I’m not saying future computing won’t be probabilistic but to say that LLMs are probabilistic, so they are the future of computing can only be said by someone with an incredibly strong prior on LLMs.

I’d recommend Baudrillard’s work on hyperreality. This AI conversation could not be a better example of the loss of meaning. I hope this dark age doesn’t last as long as the last one. I mean, just read this conclusion:

“It's ontologically different. We're moving away from deterministic mechanicism, a world of perfect information and perfect knowledge, and walking into one made of emergent unknown behaviors, where instead of planning and engineering we observe and hypothesize.”

I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?

That’s called the scientific method. Which is a PRECURSOR to planning and engineering. That’s how we built the technology we have today. I’ll stop now because I need to keep my blood pressure low.

bubblyworld · 3 days ago
You seem to be having strong emotions about this stuff, so I'm a little nervous that I'm going to get flamed in response, but my best take at a well-intentioned response:

I don't think the author is arguing that all computing is going to become probabilistic. I don't get that message at all - in fact they point out many times that LLMs can't be trusted for problems with definite answers ("if you need to add 1+1 use a calculator"). Their opening paragraph was literally about not blindly trusting LLM output.

> I don’t actually think the above paragraph makes any sense, does anyone disagree with me?

Yes - it makes perfect sense to me. Working with LLMs requires a shift in perspective. There isn't a formal semantics you can use to understand what they are likely to do (unlike programming languages). You really do need to resort to observation and hypothesis testing, which yes, the scientific method is a good philosophy for! Two things can be true.

> the use of formal mathematical notation just adds insult to injury here

I don't get your issue with the use of a function symbol and an arrow. I'm a published mathematician - it seems fine to me? There's clearly no serious mathematics here, it's just an analogy.

> This AI conversation could not be a better example of the loss of meaning.

The "meaningless" sentence you quote after this is perfectly fine to me. It's heavy on philosophy jargon, but that's more a taste thing no? Words like "ontology" aren't that complicated or nonsensical - in this case it just refers to a set of concepts being used for some purpose (like understanding the behaviour of some code).

AgentMatt · 3 days ago
> I’d recommend Baudrillards work on hyperreality.

Any specific piece of writing you can recommend? I tried reading Simulacra and Simulation (English translation) a while ago and I found it difficult to follow.

therobots927 · 3 days ago
I would actually recommend the YouTube channel Plastic Pills. This is a great video to start with: https://youtu.be/S96e6TdJlNE?si=gSVzXyyBq7t_q0Xp
nutjob2 · 3 days ago
> I’m not saying future computing won’t be probabilistic

Current and past computing has always been probabilistic in part; that doesn't mean it will become 100% so. Almost all of the implementation of LLMs is deterministic except the part that is deliberately randomized, and its output is used in the same way. Humans combine the two approaches as well. Even reality is a combination of quantum uncertainty at a low level and very deterministic physics everywhere else.
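That split is concrete in how LLM decoding typically works: everything up to the logits is deterministic, and randomness enters only at the sampling step. A schematic sketch (not any particular model's code):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from raw logits.

    Everything before this call (the forward pass producing logits)
    is deterministic; only the draw below is random.
    """
    if temperature == 0:
        # Greedy decoding: fully deterministic argmax.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Temperature-scaled softmax, numerically stabilized.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the categorical distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]
greedy = sample_token(logits, 0, random.Random(0))
assert greedy == 0  # argmax: always token 0, regardless of the RNG
```

With temperature 0 the whole pipeline is a pure function; with temperature above 0 only this one draw varies, and fixing the RNG seed makes even that reproducible.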

> We're moving away from deterministic mechanicism, a world of perfect information and perfect knowledge, and walking into one made of emergent unknown behaviors, where instead of planning and engineering we observe and hypothesize.

The hype machine always involves pseudo-scientific babble, and this is a particularly cringey example. The idea that seems to be promoted, that AI will be god-like and that in it we'll find all truth and knowledge, is beyond delusional.

It's a tool, like all other tools. Just as we see faces in everything, we're also very susceptible to language (especially our own, consumed and regurgitated back to us) coming from a very neat chatbot.

AI hype is borderline mass hysteria at this point.

therobots927 · 3 days ago
“The hype machine always involves pseudo-scientific babble and this is a particularly cringey example.”

Thanks for confirming. As crazy as the chatbot fanatics are, hearing them talk makes ME feel crazy.

hgomersall · 3 days ago
There's another couple of principles underlying most uses of science: consistency and smoothness. That is, extrapolation and interpolation make sense. Also, if an experiment works now, it will work forever. Critically, the physical world is knowable.
voidhorse · 3 days ago
It's already wrong at the first step. A probabilistic system is by definition not a function (it is a relation). This is such a basic mistake I don't know how anyone can take this seriously. Many existing systems are also not strictly functions (internal state can make them return different outputs for a given input). People love to abuse mathematics and employ its concepts hastily and irresponsibly.
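The function-versus-relation distinction is easy to demonstrate: a sampled system maps one input to many possible outputs, which is exactly what disqualifies it as a function in the mathematical sense. A toy illustration (not drawn from the article):

```python
import random

def stochastic_reply(prompt, rng):
    """One input, several possible outputs: a relation, not a function."""
    return rng.choice([prompt.upper(), prompt.lower(), prompt[::-1]])

rng = random.Random(1)
outputs = {stochastic_reply("Hello", rng) for _ in range(100)}
assert len(outputs) > 1  # the same input yields multiple distinct outputs
```

Note the flip side: treating the RNG state as part of the input (i.e., fixing the seed) recovers a genuine function, which is also how stateful systems are modeled as functions of (input, state).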
therobots927 · 3 days ago
The fact that the author is a data scientist at Anthropic should start ringing alarm bells for anyone paying attention. Isn’t Claude supposed to be at the front of the pack? To be honest, I have a suspicion that Claude wrote the lion’s share of this essay. It’s that incomprehensible, soaked in jargon and formulas used completely out of context and incorrectly.
rexer · 3 days ago
I read the full article (really resonated with it, fwiw), and I'm struggling to understand the issues you're describing.

> Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?

Can you say more? It seems to me the article says the same thing you are.

> I don’t actually think the above paragraph makes any sense, does anyone disagree with me? “Instead of planning we observe and hypothesize”?

I think the author is drawing a connection to the world of science, specifically quantum mechanics, where the best way to make progress has been to describe and test theories (as opposed to math where we have proofs). Though it's not a great analog since LLMs are not probabilistic in the same way quantum mechanics is.

In any case, I appreciated the article because it talks through a shift from deterministic to probabilistic systems that I've been seeing in my work.

voidhorse · 3 days ago
Sure, but it's overblown. People have been reasoning about and building probabilistic systems formally since the birth of information theory back in the 1940s. Many systems we already rely on today are highly stochastic in their own ways.

Yes, LLMs are a bit of a new beast in terms of the use of stochastic processes as producers—but we do know how to deal with these systems. Half the "novelty" is just people either forgetting past work or being ignorant of it in the first place.

falcor84 · 3 days ago
> Yes of course relationship questions don’t have a “correct” answer. But physics questions do. Code vulnerability questions do. Math questions do. I mean seriously?

But as per Gödel's incompleteness theorem and the Halting Problem, math questions (and consequently physics and CS questions) don't always have an answer.

therobots927 · 3 days ago
Providing examples of questions without correct answers does not prove that no questions have correct answers, or that hallucinations aren’t problematic when they provide explicitly incorrect answers. The author is simply avoiding the hallucination problem altogether by saying “well, sometimes there is no correct answer.”
layer8 · 3 days ago
There is a truth of the matter regarding whether a program will eventually halt or not, even when there is no computable proof for either case. Similar for the incompleteness theorems. The correct response in such cases is “I don’t know”.

selinkocalar · a day ago
This hits on something we think about constantly at Delve.

The key is building systems that are transparent about their confidence levels and gracefully handle edge cases.

The companies that will win in AI aren't the ones with perfect algorithms - they're the ones who design for human understanding and real-world messiness.

camillomiller · 3 days ago
It seems to me that probabilistic approaches are more akin to magical AI thinking right now, so defending that as the new paradigm sounds quite egregious and reeks of (maybe involuntary?) inevitabilism.

Even if the assumption is correct, forcing a probabilistic system on a strongly deterministic society won't end well. Maybe for society, but mostly for the companies drumming up their probabilistic systems and their investors.

Also, anyone who wants to make money probabilistically is better off going to the casino. Baccarat is a good one. European Roulette also has a better house margin than chatGPT's error margin.

ath3nd · 3 days ago
> It seems to me that probabilistic approaches are more akin to magical AI thinking right now, so defending that as the new paradigm sounds quite egregious and reeks of (maybe involuntary?) inevitabilism

Thank you for saying that!

I read it as: "our product is fickle and unreliable and you have to get used to it and love it because we tell you that is the future".

But it's not the future, it's just one of many possible futures, and not one that I and a large part of society want to be a part of. These "leaders" are talking and talking, but they are just salesmen, trying to frame what they are selling you as good or inevitable. It's not.

Look, for example, at the ex-CEO of GitHub and his clownish statements:

- 2nd of August: Developers, either embrace AI or leave the industry https://www.businessinsider.com/github-ceo-developers-embrac...

- 11th of August: Resigns. https://www.techradar.com/pro/github-ceo-resigns-is-this-the...

Tell me this is not pitiful; tell me this is the person I gotta believe in, the one who knows the future of tech?

Tell me I gotta believe Sama when he tells me for the 10th time that AGI is nearly there when his latest features were "study mode", "announcing OpenAI office suite" and ChatGPT5 (aka ChatGPT4.01).

Or Musk and his full self driving cars which he promised since 2019? The guy who bought Twitter, tanked its value in half so he can win the election for Trump and then got himself kicked out of the government? The guy making Nazi salutes?

Are those the guys telling what's the future and why are we even listening to them?

camillomiller · 3 days ago
Unfortunately the answer is “because they are rich beyond comprehension and no regulation was put in place at the right time to avoid that“.
bithive123 · 3 days ago
It became evident to me while playing with Stable Diffusion that it's basically a slot machine: a Skinner box with a variable reinforcement schedule.

Harmless enough if you are just making images for fun. But probably not an ideal workflow for real work.

diggan · 3 days ago
> It became evident to me while playing with Stable Diffusion that it's basically a slot machine.

It can be, and usually is by default. If you set the seeds to deterministic numbers, and everything else remains the same, you'll get deterministic output. A slot machine implies you keep putting in the same thing and get random good/bad outcomes, that's not really true for Stable Diffusion.
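The seed point generalizes beyond Stable Diffusion: any pipeline whose only randomness comes from a seeded PRNG is a pure function of (inputs, seed). A minimal generic sketch, with a stand-in sampling loop rather than a real diffusion model:

```python
import random

def generate(prompt, seed):
    """Stand-in for a generative pipeline: all randomness flows
    from one seeded RNG, so output is determined by (prompt, seed)."""
    rng = random.Random(seed)
    # Placeholder for a diffusion sampling loop: draw some "noise".
    return [round(rng.gauss(0, 1), 6) for _ in prompt]

a = generate("castle at dusk", 42)
b = generate("castle at dusk", 42)
c = generate("castle at dusk", 43)
assert a == b  # same prompt and seed: identical output
assert a != c  # different seed: (almost surely) different output
```

This mirrors the real API pattern in diffusers, where passing a seeded `generator` to the pipeline makes runs reproducible.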

bithive123 · 3 days ago
Strictly speaking, yes, but there is so much variability introduced by prompting that even keeping the seed value static doesn't change the "slot machine" feeling, IMHO. While prompting is something one can get better at, you're still just rolling the dice and waiting to see whether the output is delightful or dismaying.
A4ET8a8uTh0_v2 · 3 days ago
<< But probably not an ideal workflow for real work.

Hmm. Ideal is rarely an option, so I have to assume you are being careful about phrasing.

Still, despite it being a black box, one can still tip the odds in one's favor, so the real question is what counts as 'real work'. I personally would define that as whatever you are being paid to do. If that premise is accepted, then the tool is not the issue, despite its obvious handicaps.