gdiamos commented on A message from Intel CEO Lip-Bu Tan to all company employees   newsroom.intel.com/corpor... · Posted by u/rntn
tengwar2 · 17 days ago
Partly home-made. Arm Holdings is British-based, but owned by SoftBank Group (Japanese).
gdiamos · 17 days ago
Arm makes a specification and standard (the ARM ISA).

Apple licenses that and designs its own chips, which are then manufactured by TSMC.

So I guess if Intel dies, the US will still have a few good CPU design firms, but no leading-edge manufacturing.

Also note that Foxconn (a Taiwanese company, with assembly plants in mainland China) assembles the iPhones.

E.g. https://www.businessinsider.com/apple-iphone-factory-foxconn...

gdiamos commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
highfrequency · 18 days ago
It is frequently suggested that once one of the AI companies reaches an AGI threshold, it will take off ahead of the rest. It's interesting to note that, at least so far, the trend has been the opposite: as time goes on and the models get better, the performance of the different companies' models clusters closer together. Right now GPT-5, Claude Opus, Grok 4, and Gemini 2.5 Pro all seem quite good across the board (i.e., they can all basically solve moderately challenging math and coding problems).

As a user, it feels like the race has never been as close as it is now. Perhaps dumb to extrapolate, but it makes me lean more skeptical about the hard take-off / winner-take-all mental model that has been pushed.

Would be curious to hear the take of a researcher at one of these firms - do you expect the AI offerings across competitors to become more competitive and clustered over the next few years, or less so?

gdiamos · 17 days ago
Scaling laws justified the investment in capital and GPU R&D that delivered 10,000x faster training.

That took the world from autocomplete to Claude and GPT.

Another 10,000x would do it again, but who has that kind of money or R&D breakthrough?

The way scaling laws work, 5,000x and 10,000x give a pretty similar result. So why is it surprising that competitors land in the same range? It seems hard enough to beat your competitor by 2x, let alone 10,000x.
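
A minimal sketch of why a 2x compute gap barely shows up under a power-law scaling law. The functional form and the exponent (alpha = 0.05) are illustrative assumptions, not figures from this thread:

    # Chinchilla-style power law: loss falls as a small power of compute.
    # The exponent below is an assumed, illustrative value.
    def loss(compute, a=1.0, alpha=0.05):
        return a * compute ** -alpha

    base = 1.0  # reference compute budget
    for multiplier in (5_000, 10_000):
        print(f"{multiplier:>6}x compute -> loss {loss(base * multiplier):.3f}")
    #   5000x compute -> loss 0.653
    #  10000x compute -> loss 0.631

Under this assumption, doubling compute from 5,000x to 10,000x improves loss by only about 3%, which is roughly the spread you would expect between labs with similar budgets.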

gdiamos commented on Tokens are getting more expensive   ethanding.substack.com/p/... · Posted by u/admp
gdiamos · 21 days ago
The NVIDIA stock price keeps going up because accuracy/intelligence is more valuable than efficiency

Scaling laws let you spend more transistors and watts on intelligence

Do you want more tokens or smarter tokens?

gdiamos commented on Ozzy Osbourne has died   bbc.co.uk/news/live/cn0qq... · Posted by u/fantunes
sillysaurusx · a month ago
I’m coming in from a position of ignorance here, so I was hoping the community would help me understand: the only thing I know about Ozzy is that he’s bitten the heads off of various animals, including doves and bats. That happened before I was even born. But, looking over the comments, no one seems to be talking about it.

My question is: is it just not a big deal? If someone did that today, they'd be crucified in the court of public opinion.

One could argue that it's disrespectful to bring this up on his death thread, but two points: one, I hope that people will bring up my mistakes when I pass, so that others can learn from them; and two, this is the only opportunity to talk about it, since Ozzy has rarely been a topic on HN.

Ozzy fans, can you help me understand why few people seem to care? It’s hard to wrap my head around the idea that someone can decapitate some animals with their own teeth and then still build a loyal following. Was he just that good at music?

I’m posting this from a place of curiosity, not malice, for what it’s worth.

EDIT: Even if the bat was a mistake, what about the doves? https://kiisfm.iheart.com/content/2022-01-24-theres-another-...

gdiamos · a month ago
As someone who listened to the music, it’s surprising to see this as the top comment.

Yes their lyrics are dark. That was the point.

Eating animals isn't what comes to mind for me. I also rationalize it by noting that hundreds of millions of animals are slaughtered every day, especially birds. Which of those facts is darker?

It’s surprising to see what people are remembered for.

gdiamos commented on Felix Baumgartner, who jumped from stratosphere, dies in Italy   theinternational.at/felix... · Posted by u/signa11
jamwil · a month ago
Yes but you’re answering the wrong question. It’s not, “what is the probability of death on my next jump?”. It’s “what is the accumulated probability of death by jumping repeatedly.”

The way you answer it is by flipping it upside down (what is the probability of surviving a single jump?) and multiplying that value by itself n times, where n is the number of jumps.

.99999 * .99999 * .99999 * …

gdiamos · a month ago
That’s only if you are planning to jump 100 more times.
gdiamos commented on Felix Baumgartner, who jumped from stratosphere, dies in Italy   theinternational.at/felix... · Posted by u/signa11
Simon_O_Rourke · a month ago
There's a concept I read about before called micromorts, where activities are given a danger rating something like the expected number of fatalities per million events.

So riding a motorbike 100 miles is 8 micromorts.

Hang-gliding is 9 micromorts.

Base jumping is 430 micromorts.

And summiting Everest is 37,000 micromorts.

Incidentally, of those - I know of two guys who died either on Everest or at base camp there over the past 15 years. The first guy fell on the descent, and the second guy developed health issues at altitude (apparently related to an Israeli team immediately ahead of them stealing their oxygen bottles).

gdiamos · a month ago
Probability is memoryless.

If you have been base jumping for 20 years, you have the same risk on your next jump as someone trying it for the first time.
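
To make the units in the parent comment concrete, here is a tiny sketch converting the quoted micromort ratings into per-event probabilities; treating each rating as a fixed, independent per-attempt risk is my assumption for illustration, not something stated in the thread:

    # 1 micromort = a one-in-a-million chance of death per event.
    MICROMORT = 1e-6

    activities = {
        "motorbike, 100 miles": 8,
        "hang-gliding": 9,
        "base jump": 430,
        "Everest summit": 37_000,
    }

    for name, micromorts in activities.items():
        print(f"{name}: ~{micromorts * MICROMORT:.4%} chance of death per event")
    # base jump      -> ~0.0430% per jump
    # Everest summit -> ~3.7000% per attempt

Under the memoryless view above, that ~0.043% applies equally to a first jump and a thousandth jump; only the accumulated total over a long career grows.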

gdiamos commented on Hierarchical Modeling (H-Nets)   cartesia.ai/blog/hierarch... · Posted by u/lukebechtel
gdiamos · a month ago
How does it handle images?
gdiamos commented on Smollm3: Smol, multilingual, long-context reasoner LLM   huggingface.co/blog/smoll... · Posted by u/kashifr
gdiamos · 2 months ago
Nice work, Anton et al.

I hope you continue the 50-100M parameter models.

I think there is a case for models that finish quickly on CPUs for solve-by-LLM test cases.

gdiamos commented on Mercury: Ultra-fast language models based on diffusion   arxiv.org/abs/2506.17298... · Posted by u/PaulHoule
nradclif · 2 months ago
A million trillion operations per second is literally an exaflop. That's one hell of a GPU you have.
gdiamos · 2 months ago
Thanks, I missed a factor of 1000x; it should be a million billion.
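
A two-line sketch of the unit arithmetic behind this exchange (just restating the figures already in the thread):

    million, billion, trillion = 10**6, 10**9, 10**12

    print(million * trillion)  # 10**18 ops/s, i.e., an exaFLOP/s
    print(million * billion)   # 10**15 ops/s, i.e., a petaFLOP/s, 1000x less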
gdiamos commented on Mercury: Ultra-fast language models based on diffusion   arxiv.org/abs/2506.17298... · Posted by u/PaulHoule
mathiaspoint · 2 months ago
You can absolutely tune causal LLMs. In fact the original idea with GPTs was that you had to tune them before they'd be useful for anything.
gdiamos · 2 months ago
Yes, I agree you can tune autoregressive LLMs.

You can also tune diffusion LLMs.

After doing so, the diffusion LLM will be able to generate more tokens/sec during inference.

u/gdiamos

Karma: 534 · Cake day: December 18, 2014