Readit News
100ideas commented on Dropbox discontinuing Vault, moving it to a normal folder    · Posted by u/vinnyglennon
jwds · 7 months ago
I got a response to my Dropbox ticket asking why they're discontinuing their Vault…

"We discontinue Dropbox Vault primarily due to significant technical risks that could compromise the security we provide to our users.

We want to concentrate our efforts on further enhancing our existing security features."

100ideas · 7 months ago
what a non-answer answer!
100ideas commented on UnitedHealth overcharged cancer patients for drugs by over 1,000%   fortune.com/2025/01/15/ft... · Posted by u/this_weekend
NickC25 · 8 months ago
>In other words. Health insurance firms have capped profits in the US. But in this case one conglomerate can own both an insurer and a PBM, so it can just overcharge consumers for insurance and then launder its profits through the PBM.

Most insightful comment in this thread. THIS is the crux of the issue, and we've allowed the likes of UHC to buy PBMs and other pieces of the supply chain / customer lifecycle because UHC lobbyists claim it would reduce costs across the board and also improve efficiency. Load of absolute bullshit obviously but here we are.

100ideas · 8 months ago
You just answered my question:

Is it the case that UnitedHealth and Cigna each own (or control) one of the "big three" PBMs? If so, that is just crazy: they control insurance premium pricing, benefit decisions, AND the pricing of covered medications?

yadaebo wrote below "Medical Loss Ratio (MLR) is capped at 85% in the US which means 85% of revenue must go to patients". Does controlling a big PBM allow an insurance company a loophole?

Deleted Comment

100ideas commented on Truly portable C applications   lwn.net/Articles/997238/... · Posted by u/signa11
100ideas · 10 months ago
Very interesting comments and moderation discussion on this article.
100ideas commented on Solomonic learning: Large language models and the art of induction   amazon.science/blog/solom... · Posted by u/100ideas
talldayo · 10 months ago
The article is bullshit shrouded in turboencabulator-speak. He's trying to roll the same Sisyphean boulder that crushed the computational linguists, thinking that AI makes it different this time. Conveniently, he also waved away any attempt at providing proof for his claims.

>> If the training data subtend latent logical structures, as do sensory data such as visual or acoustic data, models trained as optimal predictors are forced to capture their statistical structure.

There are a lot of red flags to pick from in this article, but this one stood out to me as the most absurd. AI doesn't get magical multimodal powers from reading secondhand accounts describing a sensation. You can say it in as fancy of a phrasing as you want, but the proof is in the pudding. The "statistical structure" of that text doesn't propagate a meaningful understanding of almost anything in the real world.

> And really, a person will never grasp machine learning and AI as long as they keep drawing unbased parallels to humans and machines.

I think you're right on the money with this one.

100ideas · 10 months ago
I think you both make valid points, but I also get the sense that the article is articulating insights gained from pure-math explorations into the theoretical limitations of learning, which can sound like "turboencabulator-speak" when compressed into words.

Maybe I should have just linked to the research paper:

[B'MOJO: Hybrid state space realizations of foundation models with eidetic and fading memory](https://www.arxiv.org/abs/2407.06324)

100ideas commented on Solomonic learning: Large language models and the art of induction   amazon.science/blog/solom... · Posted by u/100ideas
gnabgib · 10 months ago
Per the guidelines: please use the original title https://news.ycombinator.com/newsguidelines.html
100ideas · 10 months ago
Oops, thanks. I changed it.
100ideas commented on Solomonic learning: Large language models and the art of induction   amazon.science/blog/solom... · Posted by u/100ideas
100ideas · 10 months ago
I found the opening quote of this article to be intriguing, especially since it was from a 1992 research lab:

“One year of research in neural networks is sufficient to believe in God.” The writing on the wall of John Hopfield’s lab at Caltech made no sense to me in 1992. Three decades later, and after years of building large language models, I see its sense if one replaces sufficiency with necessity: understanding neural networks as we teach them today requires believing in an immanent entity.

100ideas · 10 months ago
Basically, as LLMs scale up, the author (Soatto, a VP at AWS) suggests they begin to resemble Solomonoff inference: a hypothetically optimal but computationally unbounded approach that executes all possible programs to match the observed data. By definition this yields the best possible answer to any query, yet it requires no learning, since the entire process can simply be rerun for each new query (given unlimited computation).
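For context, here's my own sketch of the standard Solomonoff formulation (not from the article, so take the notation with a grain of salt): the universal prior weights every program p that a universal machine U could run to reproduce the observed data x as a prefix of its output,

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}
```

Prediction is then just conditioning: the probability that the next symbol is b is M(xb)/M(x). The sum over all programs is exactly what makes it uncomputable in practice, which is why any resemblance in LLMs can only be an approximation.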

The article develops a theoretical framework contrasting traditional inductive learning (which emphasizes generalization over memorization) with transductive inference (which embraces memorization and reasoning). Here's a quote:

"What matters is that LLMs are inductively trained transductive-inference engines and can therefore support both forms of inference.[2] They are capable of performing inference by inductive learning, like any trained classifier, akin to Daniel Kahneman’s “system 1” behavior — the fast thinking of his book title Thinking Fast and Slow. But LLMs are also capable of rudimentary forms of transduction, such as in-context-learning and chain of thought, which we may call system 2 — slow-thinking — behavior. The more sophisticated among us have even taught LLMs to do deduction — the ultimate test for their emergent abilities."

Sadly, the opening quote is not elucidated.

100ideas commented on Solomonic learning: Large language models and the art of induction   amazon.science/blog/solom... · Posted by u/100ideas
gnabgib · 10 months ago
Blog title: Solomonic learning: Large language models and the art of induction
100ideas · 10 months ago
Yes, I should have made that clear in my first comment. Thanks for doing so. I used the quote in my title because I found it a fascinating way to start a technical blog post, and it made me want to read the article to understand what the author was planning to write from such a beginning.
100ideas commented on Solomonic learning: Large language models and the art of induction   amazon.science/blog/solom... · Posted by u/100ideas
100ideas · 10 months ago
I found the opening quote of this article to be intriguing, especially since it was from a 1992 research lab:

“One year of research in neural networks is sufficient to believe in God.” The writing on the wall of John Hopfield’s lab at Caltech made no sense to me in 1992. Three decades later, and after years of building large language models, I see its sense if one replaces sufficiency with necessity: understanding neural networks as we teach them today requires believing in an immanent entity.

u/100ideas

Karma: 403 · Cake day: May 5, 2009
About
tools that make biotech fun and accessible