Readit News
lossolo commented on Hong Kong pro-democracy tycoon Jimmy Lai gets 20 years' jail   bbc.com/news/articles/c8d... · Posted by u/tartoran
komali2 · 2 days ago
The PRC will happily sell chips to the West. I live in Taiwan, I don't want it to happen, but people need to stop acting like countries will prevent an invasion because it means the CPC will control chip manufacturing.

The choice is between possible nuclear war, or, the 5090s are more expensive and sometimes Americans can't buy them when the PRC is punishing the west for something.

lossolo · 2 days ago
Honestly, this is the most reasonable comment here, especially coming from someone in Taiwan. I hear similar views when I'm in Asia, which are very different from what I hear back in the West.
lossolo commented on Experts Have World Models. LLMs Have Word Models   latent.space/p/adversaria... · Posted by u/aaronng91
famouswaffles · 2 days ago
>This is a deranged and factually and tautologically (definitionally) false claim.

Strong words for a weak argument. LLMs are trained on data generated by physical processes (keystrokes, sensors, cameras), not on telepathically extracted "mental models." The text itself is an artifact of reality, not just a description of someone's internal state. If a sensor records the temperature and writes it to a log, is the log a "model of a model"? No, it's a data trace of physical reality.

>All this removal and all these intermediate representational steps make LLMs a priori obviously even more distant from reality than humans.

You're conflating mediation with distance. A photograph is "mediated" but can capture details invisible to human perception. Your eye mediates photons through biochemical cascades, equally "removed" from raw reality. Proximity isn't measured by steps in a causal chain.

>The model humans use is embodied, not the textbook summaries - LLMs only see the diminished form

You need to stop thinking that a textbook is a "corruption" of some pristine embodied understanding. Most human physics knowledge also comes from text, equations, and symbolic manipulation - not direct embodied experience with quantum fields. A physicist's understanding of QED is symbolic, not embodied. You've never felt a quark.

The "embodied" vs "symbolic" distinction doesn't privilege human learning the way you think. Most abstract human knowledge is also mediated through symbols.

>It's not clear LLMs learn to actually do physics - they just learn to write about it

This is testable and falsifiable - and increasingly falsified. LLMs:

- Solve novel physics problems they've never seen

- Debug code implementing physical simulations

- Derive equations using valid mathematical reasoning

- Make predictions that match experimental results

If they "only learn to write about physics," they shouldn't succeed at these tasks. The fact that they do suggests they've internalized the functional relationships, not just surface-level imitation.

>They can't run labs or interpret experiments like humans

Somewhat true. They can do it to a degree, just not very well - but that's irrelevant to whether they learn physics models. A paralyzed theoretical physicist who's never run a lab still understands physics. The ability to physically manipulate equipment is orthogonal to understanding the mathematical structure of physical law. You're conflating "understanding physics" with "having a body that can do experimental physics" - those aren't the same thing.

>humans actually verify that the things they learn and say are correct and provide effects, and update models accordingly. They do this by trying behaviours consistent with the learned model, and seeing how reality (other people, the physical world) responds (in degree and kind). LLMs have no conception of correctness or truth (not in any of the loss functions), and are trained and then done.

Gradient descent is literally "trying behaviors consistent with the learned model and seeing how reality responds":

- The model makes predictions

- The data provides feedback (the actual next token)

- The model updates based on prediction error

- This repeats billions of times

That's exactly the verify-update loop you describe for humans. The loss function explicitly encodes "correctness" as prediction accuracy against real data.
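The loop above can be sketched in a few lines. This is an illustrative toy, not a real LLM training setup: a tiny bigram "language model" trained by gradient descent on next-token prediction over a perfectly predictable sequence. Each inner step is exactly predict, get feedback from the actual next token, update on the error.

```python
import math
import random

random.seed(0)
VOCAB = 4
data = [0, 1, 2, 3] * 100          # a toy, perfectly predictable corpus

# Next-token logits: a VOCAB x VOCAB table, W[cur][nxt].
W = [[random.gauss(0, 0.1) for _ in range(VOCAB)] for _ in range(VOCAB)]

def softmax(row):
    m = max(row)
    exps = [math.exp(z - m) for z in row]
    s = sum(exps)
    return [e / s for e in exps]

lr = 0.5
for epoch in range(50):
    for cur, nxt in zip(data, data[1:]):
        probs = softmax(W[cur])          # 1. the model makes a prediction
        for j in range(VOCAB):           # 2. the data provides feedback
            grad = probs[j] - (1.0 if j == nxt else 0.0)
            W[cur][j] -= lr * grad       # 3. update on the prediction error
                                         # 4. ...repeated over the whole corpus

best = max(range(VOCAB), key=lambda j: W[0][j])
print(best)  # → 1: the model learned that token 0 is followed by token 1
```

The cross-entropy gradient here is the same "correctness signal" the comment describes: prediction accuracy against real data, nothing more.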

>No serious researcher thinks LLMs are the way to AGI... accepted by people in the field

Appeal to authority, and overstated. Plenty of researchers do think so, and claiming consensus for your position is just false. LeCun has been on that train for years, so he's not an example of a change of heart. So far, nothing has actually come out of it. Even Meta isn't using V-JEPA to actually do anything, never mind anyone else. Call me when these architectures actually beat transformers.

lossolo · 2 days ago
> Plenty of researchers do think so and claiming consensus for your position is just false

Can you name a few? Demis Hassabis (DeepMind CEO) claims in a recent interview that LLMs will not get us to AGI, Ilya Sutskever also says something fundamental is missing, same with LeCun obviously, etc.

lossolo commented on Bye Bye Humanity: The Potential AMOC Collapse   thatjoescott.com/2026/02/... · Posted by u/rolph
roenxi · 3 days ago
It isn't actually all that scary; humans cope pretty well over a wide variety of temperatures. If the change caught everyone by surprise it'd be a huge problem but it seems to be fairly well understood and there is lots of time to adjust.

Worst case scenario seems to be that people will stop migrating to Europe.

lossolo · 3 days ago
Europe is one of the world's largest agricultural producers and exporters. France alone is one of the top grain exporters globally. The EU exports massive quantities of wheat, barley, dairy, and processed food to North Africa, the Middle East, and Sub-Saharan Africa. Countries like Egypt, Algeria, and Nigeria are heavily dependent on European grain imports. An AMOC collapse would devastate growing seasons, slash yields, and potentially make large parts of Northern Europe unsuitable for current agriculture.

And it's not just food. Europe is a major producer and exporter of fertilizers. If European industrial and agricultural output collapses, the ripple effects hit global food supply chains hard. Countries that depend on those imports will face famine.

Then there's the knock-on, hundreds of millions of people in food-insecure regions losing a key supply source, simultaneous disruption to Atlantic weather patterns affecting rainfall in West Africa and the Amazon, potential shifts in monsoon systems affecting South and East Asia. It's a cascading global food security crisis.

> lots of time to adjust

This assumes a gradual slowdown, but paleoclimate evidence suggests AMOC transitions can happen within a decade or even less. The idea that we'd just smoothly adapt to one of the most dramatic climate shifts in human civilization is not supported by what we know about how these systems behave.

lossolo commented on Orchestrate teams of Claude Code sessions   code.claude.com/docs/en/a... · Posted by u/davidbarker
ttoinou · 6 days ago
If it was so obvious and easy, why didn't we have this a year ago? Models were mature enough back then to make this work.
lossolo · 5 days ago
Because gathering training data and doing post-training takes time. I agree with OP that this is the obvious next step given context length limitations. Humans work the same way in organizations: you have different people specializing in different things because everyone has a limited "context length".
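The specialization point can be sketched as an orchestration pattern. Everything here is hypothetical (the names `worker`, `orchestrate`, and the character-based "context" are stand-ins, not any real agent API): an orchestrator splits work across workers precisely because no single worker can hold the whole input in its limited context.

```python
CONTEXT_LIMIT = 20  # max characters one "agent" can see at once (illustrative)

def worker(chunk: str) -> str:
    # Stand-in for an LLM call: each agent only ever handles what
    # fits in its own context window.
    return f"summary({len(chunk)} chars)"

def orchestrate(text: str) -> list[str]:
    # Split the oversized input into context-sized chunks, delegate
    # each to a worker, and collect the results for a final pass.
    chunks = [text[i:i + CONTEXT_LIMIT]
              for i in range(0, len(text), CONTEXT_LIMIT)]
    return [worker(c) for c in chunks]

print(orchestrate("x" * 45))
# → ['summary(20 chars)', 'summary(20 chars)', 'summary(5 chars)']
```

The design mirrors the organizational analogy: bounded individual context, with coordination (the orchestrator) doing the integration.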
lossolo commented on We tasked Opus 4.6 using agent teams to build a C Compiler   anthropic.com/engineering... · Posted by u/modeless
NitpickLawyer · 6 days ago
It's a bit disappointing that people are still rehashing the same "it's in the training data" line from three years ago. It's not like any LLM could regurgitate millions of LoC 1-for-1 from any training set... That's not how it works.

A pertinent quote from the article (which is a really nice read, I'd recommend reading it fully at least once):

> Previous Opus 4 models were barely capable of producing a functional compiler. Opus 4.5 was the first to cross a threshold that allowed it to produce a functional compiler which could pass large test suites, but it was still incapable of compiling any real large projects. My goal with Opus 4.6 was to again test the limits.

lossolo · 5 days ago
They couldn't do it because they weren't fine-tuned for multi-agent workflows, which basically means they were constrained by their context window.

How many agents did they use with previous Opus? 3?

You've chosen an argument that works against you, because they actually could do that if they were trained to.

Give them the same post-training (recipes/steering) and the same datasets, and voila, they'll be capable of the same thing. What do you think is happening there? Did Anthropic inject magic ponies?

lossolo commented on We tasked Opus 4.6 using agent teams to build a C Compiler   anthropic.com/engineering... · Posted by u/modeless
RobMurray · 5 days ago
How often do you need to invent novel algorithms or data structures? Most human written code is just rehashing existing ideas as well.
lossolo · 5 days ago
They're very good at reiterating, that's true. The issue is that without the people outside of "most humans" there would be no code and no civilization. We'd still be sitting in trees. That is real intelligence.
lossolo commented on We tasked Opus 4.6 using agent teams to build a C Compiler   anthropic.com/engineering... · Posted by u/modeless
Philpax · 6 days ago
What Rust-based compiler is it plagiarising from?
lossolo · 5 days ago
Language doesn't really matter; that's not how things are mapped in latent space. It only needs to know how to do it in one language.
lossolo commented on GPT-5.3-Codex   openai.com/index/introduc... · Posted by u/meetpateltech
trilogic · 6 days ago
When two multi-billion-dollar giants advertise on the same day, it is not competition but rather a sign of struggle for survival. With all the power of the "best artificial intelligence" at your disposal, a lot of capital, and all the brilliant minds, THIS IS WHAT YOU COULD COME UP WITH?

Interesting

lossolo · 6 days ago
What's funny is that most of this "progress" is new datasets + post-training shaping the model's behavior (instruction + preference tuning). There is no moat besides that.
lossolo commented on Battle-Testing Lynx at Allegro   blog.allegro.tech/2026/02... · Posted by u/tgebarowski
renegat0x0 · 6 days ago
It feels strange to read a comment in Polish on a site like this one
lossolo · 6 days ago
At first I tried reading it as English, got to "Ale był.." and then nothing made sense anymore haha
lossolo commented on Battle-Testing Lynx at Allegro   blog.allegro.tech/2026/02... · Posted by u/tgebarowski
self_awareness · 6 days ago
The background story is that Allegro defaults the selection of infrastructure from their competitors to their own, even if the user uses competitor all the time. Sometimes the user forgets to check, and it will result in using Allegro's infrastructure even if the user didn't want it.

It's called "a dark pattern".

lossolo · 6 days ago
> Sometimes the user forgets to check, and it will result in using Allegro's infrastructure even if the user didn't want it.

Terribly annoying, it happened to me too.

u/lossolo

Karma: 3317 · Member since January 6, 2016