Readit News
UniverseHacker commented on U.K. orders Apple to let it spy on users’ encrypted accounts   washingtonpost.com/techno... · Posted by u/Despegar
sharpshadow · 7 months ago
Question: Would it be technically feasible to make an Apple app which encrypts/decrypts the files used in iCloud and is able to use iCloud itself?

As a solution to never have unencrypted files in iCloud.

UniverseHacker · 7 months ago
Apple basically already has this built into macOS - you can create an encrypted disk image (with Disk Utility or `hdiutil`) and mount it to access the files. I'm not sure if it is possible to open these on iOS.
UniverseHacker commented on Open source AI: Red Hat's point-of-view   redhat.com/en/blog/open-s... · Posted by u/alexrustic
mattkrause · 7 months ago
It’s even weirder than that!

Vanguard has an odd corporate structure where it’s owned by the funds that it manages, so it’s effectively a co-op owned by its customers.

UniverseHacker · 7 months ago
I don’t understand why customer owned co-ops aren’t ubiquitous. Vanguard is amazing- low fees, and great services- they beat all of the competition. I had to call their support line today and it was the most professional customer service I’ve ever experienced.
UniverseHacker commented on Avoiding outrage fatigue while staying informed   scientificamerican.com/po... · Posted by u/headalgorithm
esafak · 7 months ago
You do have influence. How much is up to your ingenuity and effort. You may choose not to exercise it.
UniverseHacker · 7 months ago
Of course you do, that's the whole point: to focus on what you actually can control- your own actions, which absolutely includes using your own ingenuity and effort to influence things for the better.
UniverseHacker commented on Understanding Reasoning LLMs   magazine.sebastianraschka... · Posted by u/sebg
jakefromstatecs · 7 months ago
> I don't think anyone understands how they work

Yes we do, we literally built them.

> We understand how we brought them about via setting up an optimization problem in a specific way, that isn't the same at all as knowing how they work.

You're mistaking "knowing how they work" with "understanding all of the emergent behaviors of them"

If I build a physics simulation, then I know how it works. But that's a separate question from whether I can mentally model and explain the precise way that a ball will bounce given a set of initial conditions within the simulation, which is what you seem to be talking about.
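The physics-simulation analogy can be made concrete. Below is a minimal sketch (all parameters made up for illustration): the update rule for a bouncing ball is a one-liner, yet answering "what are the peak heights?" still requires actually running it - knowing the rule is not the same as holding the whole trajectory in your head.

```python
def bounce_heights(h0, restitution=0.8, floor=0.01):
    """Peak height after each bounce of an idealized ball dropped from h0.

    The rule is trivial: each bounce retains restitution**2 of the previous
    peak height (speed scales by the restitution coefficient, so height
    scales by its square). Predicting the full sequence of peaks still
    means iterating the rule, step by step."""
    heights = [h0]
    h = h0
    while h * restitution**2 > floor:
        h *= restitution**2
        heights.append(h)
    return heights
```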

UniverseHacker · 7 months ago
> You're mistaking "knowing how they work" with "understanding all of the emergent behaviors of them"

By knowing how they work I specifically mean understanding the emergent capabilities and behaviors, but I don't see how it is a mistake. If you understood physics but knew nothing about cars, you couldn't claim to understand how a car works: "simple, it's just atoms interacting according to the laws of physics." That would not let you, e.g., explain its engineering principles or its capabilities and limitations in any meaningful way.

UniverseHacker commented on Understanding Reasoning LLMs   magazine.sebastianraschka... · Posted by u/sebg
codr7 · 7 months ago
Exactly, we don't understand, but we want to believe it's reasoning, which would be magic.
UniverseHacker · 7 months ago
There's no belief or magic required, the word 'reasoning' is used here to refer to an observed capability, not a particular underlying process.

We also don't understand exactly how humans reason, so any claim that humans are capable of reasoning is also mostly an observation about abilities/capabilities.

UniverseHacker commented on Understanding Reasoning LLMs   magazine.sebastianraschka... · Posted by u/sebg
gsam · 7 months ago
I don't like wading into this debate when semantics are very personal/subjective. But to me, it seems like almost a sleight of hand to add the stochastic part, when actually they're possibly weighted more on the parrot part. Parrots are much more concrete, whereas the term LLM could refer to the general architecture.

The question to me seems: If we expand on this architecture (in some direction, compute, size etc.), will we get something much more powerful? Whereas if you give nature more time to iterate on the parrot, you'd probably still end up with a parrot.

There's a giant impedance mismatch here (time scale being one of them). Unless people want to think of parrots as a subset of all animals, and so 'stochastic animal' is what they mean. But then it's really the difference between 'stochastic human' and 'human'. And I don't think people really want to face that particular distinction.

UniverseHacker · 7 months ago
I'm sure both of you know this, but "stochastic parrot" refers to the title of a research article that contained a particular argument about LLM limitations that had very little to do with parrots.
UniverseHacker commented on Understanding Reasoning LLMs   magazine.sebastianraschka... · Posted by u/sebg
Jensson · 7 months ago
> If you would listen to most of the people critical of LLMs saying they're a "stochastic parrot" - it should be impossible for them to do better than random on any out of distribution problem. Even just changing one number to create a novel math problem should totally stump them and result in entirely random outputs, but it does not.

You don't seem to understand how they work: they recurse on their solution, meaning that if they have remembered components they parrot back sub-solutions. It's a bit like a natural-language computer; that way you can get them to do math etc., although the instruction set isn't that of a Turing-complete language.

They can't recurse into sub-sub-parts they haven't seen, but problems that have similar sub-parts can of course be solved; anyone understands that.

UniverseHacker · 7 months ago
> You don't seem to understand how they work

I don't think anyone understands how they work- these types of explanations aren't very complete or accurate. Such explanations/models allow one to reason out what they should be capable of vs. incapable of in principle, regardless of scale or algorithm tweaks, and those predictions and arguments never match reality and require constant goalpost shifting as the models are scaled up.

We understand how we brought them about via setting up an optimization problem in a specific way, but that isn't the same at all as knowing how they work.

I tend to think that, in the totally abstract philosophical sense and independent of the type of model, at the limit of an increasingly capable function approximator trained on an increasingly large and diverse set of real-world cause/effect time-series data, you eventually develop an increasingly accurate and general predictive model of reality organically within the model. Some model types do have fundamental limits in their ability to scale like this, but we haven't yet found one with these models.

It is more appropriate to objectively test what they can and cannot do, and avoid trying to infer what we expect from how we think they work.

UniverseHacker commented on Understanding Reasoning LLMs   magazine.sebastianraschka... · Posted by u/sebg
aithrowawaycomm · 7 months ago
I like Raschka's writing, even if he is considerably more optimistic about this tech than I am. But I think it's inappropriate to claim that models like R1 are "good at deductive or inductive reasoning" when that is demonstrably not true; they are incapable of even the simplest "out-of-distribution" deductive reasoning: https://xcancel.com/JJitsev/status/1883158738661691878

What they are certainly capable of doing is a wide variety of computations that simulate reasoning, and maybe that's good enough for your use case. But it is unpredictably brittle unless you spend a lot on o1-pro (and even then...). Raschka has a line about "whether and how an LLM actually 'thinks' is a separate discussion," but this isn't about semantics. R1 clearly sucks at deductive reasoning, and you will not understand "reasoning" LLMs if you take DeepSeek's claims at face value.

It seems especially incurious for him to copy-paste the "a-ha moment" from Deepseek's technical report without critically investigating it. DeepSeek's claims are unscientific, without real evidence, and seem focused on hype and investment:

  This moment is not only an "aha moment" for the model but also for the researchers observing its behavior. It underscores the power and beauty of reinforcement learning: rather than explicitly teaching the model on how to solve a problem, we simply provide it with the right incentives, and it autonomously develops advanced problem-solving strategies. 

  The "aha moment" serves as a powerful reminder of the potential of RL to unlock new levels of intelligence in artificial systems, paving the way for more autonomous and adaptive models in the future.
Perhaps it was able to solve that tricky Olympiad problem, but there are an infinite variety of 1st grade math problems it is not able to solve. I doubt it's even reliably able to solve simple variations of that root problem. Maybe it is! But it's frustrating how little skepticism there is about CoT, reasoning traces, etc.

UniverseHacker · 7 months ago
> they are incapable of even the simplest "out-of-distribution" deductive reasoning

But the link demonstrates the opposite- these models absolutely are able to reason out of distribution, just not with perfect fidelity. The fact that they can do better than random is itself really impressive. And o1-preview does impressively well, only very rarely getting the wrong answer on variants of that Alice in Wonderland problem.

If you would listen to most of the people critical of LLMs saying they're a "stochastic parrot" - it should be impossible for them to do better than random on any out of distribution problem. Even just changing one number to create a novel math problem should totally stump them and result in entirely random outputs, but it does not.

Overall, poor reasoning that is better than random but frequently gives the wrong answer is fundamentally, categorically different from being incapable of reasoning.
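The "changing one number to create a novel math problem" test above can be sketched as a tiny eval harness. Everything here is hypothetical: the template follows the Alice-in-Wonderland-style problem discussed in the linked thread, and `model` is just a placeholder callable standing in for an actual LLM call.

```python
import random

def aiw_variant(rng):
    """Generate a number-perturbed variant of the problem: Alice has B
    brothers and S sisters; how many sisters does Alice's brother have?
    Ground truth is S + 1, since the brother's sisters include Alice."""
    b, s = rng.randint(1, 9), rng.randint(1, 9)
    prompt = (f"Alice has {b} brothers and she also has {s} sisters. "
              "How many sisters does Alice's brother have?")
    return prompt, s + 1

def score(model, n=100, seed=0):
    """Fraction of variants a model answers correctly.
    `model` is any callable prompt -> int; a real eval would call an LLM."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        prompt, truth = aiw_variant(rng)
        correct += (model(prompt) == truth)
    return correct / n
```

A model that has genuinely learned the relation scores 1.0; one that parrots a single memorized answer scores near chance, which is the distinction the "better than random" argument turns on.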

UniverseHacker commented on Mystery brain disease patients in New Brunswick say they welcome investigation   ctvnews.ca/atlantic/new-b... · Posted by u/luu
trod1234 · 7 months ago
> You cannot just "check for it"...

I had read and heard that many places had started using preliminary tools like LucaProt, which scan viral dark matter retrieved with nanopore sequencers to identify the sequences and common secondary structures of the proteins all viruses need to replicate, automating the detection of new viruses. Is this not widespread?

I'm aware of Pollack's research, but as you said he's suffered reputational harm, which started when he began that research. The stories surrounding Luc Montagnier and Benveniste were pretty poorly handled, and both were somewhat discredited for merely pointing out undiscovered anomalies that merited further investigation.

Nature sent their hatchet man James Randi, who has been known for discrediting people, sometimes without sound basis, especially in cases where the underlying mechanism is not understood.

There is something to be said for the fact that when you suddenly can't get any funding because you published something no one else had found, in a methodical, scientific way that could be duplicated, that tends to give teeth to those calling it a conspiracy theory, when it looks more like conspiracy practice.

Every little quirk we find can potentially be used in an engineered solution to reach some amazing outcome not previously considered. Quantum-dot-based technologies are an example of this, from what I've read of their history.

UniverseHacker · 7 months ago
Yes, that is basically the process for new virus discovery- sequencing, then looking for similarity to known viral sequences. It is still an expensive and time-consuming research project, and it fails if the virus is too different to identify any sequence homology. We still find a lot of DNA and RNA we can't make any sense of in almost every sequencing experiment- there's a ton of stuff out there undiscovered and unexplained. I suspect a lot of currently mysterious diseases and health problems may have viral origins.

That’s why I’m saying we can’t rule out a virus here easily- not until some other cause is proven.

You can also have more complex mechanisms involving a virus plus genetic or environmental factors- for example the recent finding implicating HSV in Alzheimer's, despite the fact that most people with the virus never get Alzheimer's.
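The homology-based detection described above can be illustrated with a toy model. This is a sketch only- real pipelines use alignment tools and profile HMMs rather than exact k-mer matching- but it shows why a sufficiently divergent virus produces essentially no signal against a reference database.

```python
def kmers(seq, k=8):
    """All overlapping k-mers (length-k substrings) of a nucleotide sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_similarity(query, reference, k=8):
    """Jaccard similarity of k-mer sets: a crude stand-in for homology search.
    Related but highly divergent sequences share almost no exact k-mers,
    so they score near zero and the relationship goes undetected."""
    a, b = kmers(query, k), kmers(reference, k)
    return len(a & b) / len(a | b) if a | b else 0.0
```

An identical sequence scores 1.0, while a sequence with no shared k-mers scores 0.0 even if it were evolutionarily related- which is exactly the failure mode that leaves divergent viruses in the "dark matter" pile.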

UniverseHacker commented on Natural fission reactors in the Franceville basin, Gabon   sciencedirect.com/science... · Posted by u/nickcotter
idoubtit · 7 months ago
> I once read a horrifying fiction story about a pre-industrial culture that used nuclear reactions in open piles of uranium ore

The real world was worse than this fiction. It's easy to find pictures of old advertisements for radioactive products, in the years following the discovery of radioactivity. Radioactive pills "to boost your energy", radioactive cosmetics, radioactive false teeth... Now imagine what happened to the people that used these.

UniverseHacker · 7 months ago
Those real-world early uses of nuclear technology were indeed awful- nuclear testing at Bikini Atoll comes to mind, where researchers ran intentional treatment vs. non-treatment experiments on the Marshall Islanders.

Even that is nothing close to the fiction I am talking about: a class of slaves operating a fully open nuclear reactor with hand tools.

u/UniverseHacker

Karma: 7133 · Cake day: March 5, 2020