Vanguard has an odd corporate structure where it’s owned by the funds that it manages, so it’s effectively a co-op owned by its customers.
Yes we do, we literally built them.
> We understand how we brought them about via setting up an optimization problem in a specific way, that isn't the same at all as knowing how they work.
You're mistaking "knowing how they work" for "understanding all of their emergent behaviors."
If I build a physics simulation, then I know how it works. But that's a separate question from whether I can mentally model and explain the precise way that a ball will bounce given a set of initial conditions within the physics simulation, which is what you seem to be talking about.
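As a toy illustration of that distinction (my own sketch, not anything from the thread): a fully deterministic 2D "billiard" with a circular bumper. The update rule below is the entire system, so in that sense we know exactly how it works, yet a tiny nudge to the initial conditions typically sends the ball somewhere completely different after enough bounces:

```python
import math

def simulate(x, y, vx, vy, steps, dt=0.001):
    """Ball in the unit box [0,1]^2 with a circular bumper at the center.

    Elastic reflections off the walls and the bumper; fully deterministic.
    """
    cx, cy, r = 0.5, 0.5, 0.15  # bumper center and radius (arbitrary choices)
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        # Reflect off the walls (only when actually moving outward).
        if (x < 0 and vx < 0) or (x > 1 and vx > 0):
            vx = -vx
        if (y < 0 and vy < 0) or (y > 1 and vy > 0):
            vy = -vy
        # Reflect off the bumper (only when moving into it).
        dx, dy = x - cx, y - cy
        d = math.hypot(dx, dy)
        if 0 < d < r:
            nx, ny = dx / d, dy / d  # outward normal of the bumper
            dot = vx * nx + vy * ny
            if dot < 0:              # moving inward: specular reflection
                vx -= 2 * dot * nx
                vy -= 2 * dot * ny
    return x, y

a = simulate(0.2, 0.31, 1.0, 0.6, 200_000)
b = simulate(0.2, 0.31 + 1e-6, 1.0, 0.6, 200_000)  # tiny nudge to the start
print(a, b)  # same rules, same code, yet the endpoints are typically far apart
```

Knowing every line of this program doesn't let you predict, in your head, where either run ends up; you have to actually run it. That is the sense in which "we built it" and "we can explain its behavior" come apart.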
By "knowing how they work" I specifically mean understanding the emergent capabilities and behaviors, but I don't see how that is a mistake. If you understood physics but knew nothing about cars, you couldn't claim to understand how a car works: "simple, it's just atoms interacting according to the laws of physics." That would not let you, e.g., explain its engineering principles or its capabilities and limitations in any meaningful way.
We also don't understand exactly how humans reason, so any claims that humans are capable of reasoning are also mostly observations about abilities/capabilities.
The question to me seems to be: if we expand on this architecture (in some direction: compute, size, etc.), will we get something much more powerful? Whereas if you gave nature more time to iterate on the parrot, you'd probably still end up with a parrot.
There's a giant impedance mismatch here (time scaling being one). Unless people want to think of parrots being a subset of all animals, and so 'stochastic animal' is what they mean. But then it's really the difference of 'stochastic human' and 'human'. And I don't think people really want to face that particular distinction.
You don't seem to understand how they work: they recurse on their solution, meaning that if they have remembered components, they parrot back sub-solutions. It's a bit like a natural-language computer; that way you can get them to do math etc., although the instruction set isn't that of a Turing-complete language.
They can't recurse on sub-sub-parts they haven't seen, but problems that have similar sub-parts can of course be solved; anyone understands that.
I don't think anyone understands how they work; these types of explanations aren't very complete or accurate. Such explanations/models should let one reason out what kinds of things the models are capable of vs. incapable of in principle, regardless of scale or algorithm tweaks, yet those predictions and arguments never match reality and require constant goal-post shifting as the models are scaled up.
We understand how we brought them about via setting up an optimization problem in a specific way, that isn't the same at all as knowing how they work.
I tend to think, in the totally abstract philosophical sense and independent of the type of model, that at the limit of an increasingly capable function approximator trained on an increasingly large and diverse set of real-world cause/effect time-series data, you eventually develop an increasingly accurate and general predictive model of reality organically within the model. Some model types do have fundamental limits on their ability to scale like this, but we haven't yet found one with these models.
It is more appropriate to objectively test what they can and cannot do, and to avoid trying to infer what to expect from how we think they work.
What they are certainly capable of is a wide variety of computations that simulate reasoning, and maybe that's good enough for your use case. But it is unpredictably brittle unless you spend a lot on o1-pro (and even then...). Raschka has a line about how "whether and how an LLM actually 'thinks' is a separate discussion", but this isn't about semantics. R1 clearly sucks at deductive reasoning, and you will not understand "reasoning" LLMs if you take DeepSeek's claims at face value.
It seems especially incurious for him to copy-paste the "a-ha moment" from Deepseek's technical report without critically investigating it. DeepSeek's claims are unscientific, without real evidence, and seem focused on hype and investment:
> This moment is not only an "aha moment" for the model but also for the researchers observing its behavior. It underscores the power and beauty of reinforcement learning: rather than explicitly teaching the model how to solve a problem, we simply provide it with the right incentives, and it autonomously develops advanced problem-solving strategies.
> The "aha moment" serves as a powerful reminder of the potential of RL to unlock new levels of intelligence in artificial systems, paving the way for more autonomous and adaptive models in the future.
Perhaps it was able to solve that tricky Olympiad problem, but there are an infinite variety of 1st-grade math problems it is not able to solve. I doubt it's even reliably able to solve simple variations of that root problem. Maybe it is! But it's frustrating how little skepticism there is about CoT, reasoning traces, etc.

But the link demonstrates the opposite: these models absolutely are able to reason out of distribution, just not with perfect fidelity. The fact that they can do better than random is itself really impressive. And o1-preview does impressively well, only very rarely getting the wrong answer on variants of that Alice in Wonderland problem.
If you listened to most of the people critical of LLMs who call them a "stochastic parrot", it should be impossible for them to do better than random on any out-of-distribution problem. Even just changing one number to create a novel math problem should totally stump them and result in entirely random output, but it does not.
Overall, poor reasoning that is better than random but frequently gives the wrong answer is fundamentally and categorically different from being incapable of reasoning.
I had read and heard that many places had started using preliminary tools like LucaProt, which scan viral dark matter retrieved with nanopore sequencers to identify the sequences and common secondary structures of proteins that all viruses need to replicate, to automate detection of new viruses. Is this not widespread?
I'm aware of Pollack's research, but as you said, he's suffered reputational harm, which started when he began that research. The stories surrounding Luc Montagnier and Benveniste were pretty poorly handled, and they both were somewhat discredited merely for pointing out undiscovered anomalies that merited further investigation.
Nature sent their hatchet man James Randi, who has been known for discrediting people, sometimes without sound basis, especially in cases where the underlying mechanism is not understood.
There is something to be said here: when you suddenly can't get any funding because you published something no one else had found, done in a methodical, scientific way that could be duplicated, that tends to give teeth to those calling it a conspiracy theory, when it looks more like conspiracy practice.
Every little quirk we find can potentially be used in an engineered solution to reach some amazing outcome not previously considered. Quantum-dot-based technologies are an example of this, from what I've read of their history.
That’s why I’m saying we can’t easily rule out a virus here, not until some other cause is proven.
You can also have more complex mechanisms that involve a virus plus genetic or environmental factors; for example, the recent finding that implicates HSV in Alzheimer's, despite the fact that most people with the virus never get Alzheimer's.
The real world was worse than this fiction. It's easy to find pictures of old advertisements for radioactive products in the years following the discovery of radioactivity: radioactive pills "to boost your energy", radioactive cosmetics, radioactive false teeth... Now imagine what happened to the people who used these.
Even that is nothing close to the fiction I am talking about: a class of slaves operating a fully open nuclear reactor with hand tools.
As a solution to never have unencrypted files in iCloud.