Readit News
Ologn commented on Nvidia results show spending on A.I. infrastructure remains robust   nytimes.com/2025/08/27/te... · Posted by u/cuttothechase
adidoit · a day ago
Amazing how high expectations were that a 56% jump in sales still led to a 3% drop in the stock.

It's very clear that the ramp for GPUs continues.

Ologn · a day ago
NVDA had a surprise earnings report on May 24, 2023, closing at $305 that day and opening at $385 the next (before the 10-for-1 split). Pretty much every earnings day since then has been the same - the numbers come out, they beat estimates, and the stock goes down a little. People have been dooming and glooming every earnings call - you can read the threads here from May 2023; people were saying it was a bubble then, and it's over four times what it was then.
Ologn commented on When did AI take over Hacker News?   zachperk.com/blog/when-di... · Posted by u/zachperkel
Ologn · 11 days ago
It sure wasn't when AlexNet won the ImageNet challenge 13 years ago

https://news.ycombinator.com/item?id=4611830

Ologn commented on Outside of the top stocks, S&P 500 forward profits haven't grown in 3 years   insight-public.sgmarkets.... · Posted by u/Terretta
Ologn · 17 days ago
The ten most valuable S&P 500 companies are, in order of market cap:

Nvidia, Microsoft, Apple, Alphabet/Google, Amazon, Meta/Facebook, Broadcom, Tesla, Berkshire Hathaway, and Walmart.

A commonality to most of them (and to a lesser extent, all of them) is that they write software.

If a company not on the list like Ford has an F-150 truck come off the assembly line, some of that $40,000 cost is in the capital expenditure for the plant, any automation it has, the software in the car and so on. But Ford has to pay for the aluminum, steel and glass for each truck. It has to pay for thousands of workers on the assembly line to attach and assemble parts for each truck.

Meanwhile, at Apple a team writes iOS 18, mostly based on iOS 17, and it ships with the devices. Once it is written, that's it for what ships on the iPhone 16, aside from some additional tweaks up through iOS 18.6. The relatively small team working on iOS sees its work go out on tens of millions of units. Their work is not as connected to the process of production as the assembly line workers attaching and assembling parts for the F-150. If some inessential feature is not done as a phone is being made, it gets punted to the next release. That can't be done with an F-150 truck.

Software properly done is just much more profitable than non-software work. We can see this here. Yes, some of the latest boost is due to AI hype (which may or may not come to fruition in the near future), but these companies got to this position before all of that.

I was watching a speech by Gabe Newell talking about the (smaller) software industry of the 1990s, and the idea back then to outsource and try to save on salary costs. He said he and his partners went the other way and decided to look for the most expensive and best programmers they could find, and Valve has had great success with that.

Over the past 2 1/2 years we've seen a lot of outsourcing to cheaper foreign labor, FAANG layoffs (including Microsoft's recent Xbox layoffs), and more recently attempts to lower costs by having software produced by less experienced vibe coders using "AI". I have seen myself at Fortune 100 companies, especially non-tech ones, that the lessons of the late 1960s NATO software engineering conferences, or the lessons learned by Fred Brooks while managing the OS/360 project in the 1960s, haven't been learned.

Software can be a very, very profitable enterprise, and it is sometimes done right, but companies are still often approaching projects the same way they were attempted in the early 1960s. Even attempts to fix things, like agile and scrum, get twisted around as window dressing on the old-fashioned corporate way of doing things.

Ologn commented on OpenAI claims gold-medal performance at IMO 2025   twitter.com/alexwei_/stat... · Posted by u/Davidzheng
mikert89 · a month ago
The cynicism/denial on HN about AI is exhausting. Half the comments are some weird form of explaining away the ever increasing performance of these models

I've been reading this website for probably 15 years, it's never been this bad. Many threads are completely unreadable, all the actual educated takes are on X, it's almost like there was a talent drain

Ologn · a month ago
> I've been reading this website for probably 15 years, it's never been this bad.

People here were pretty skeptical about AlexNet, when it won the ImageNet challenge 13 years ago.

https://news.ycombinator.com/item?id=4611830

Ologn commented on Nvidia Becomes First Company to Reach $4T Market Cap   cnbc.com/2025/07/09/nvidi... · Posted by u/mfiguiere
ra7 · 2 months ago
I understand Nvidia is in a very dominant position. But $4T market cap still seems absolutely insane to me. I've only read about the Cisco boom and bust during the Internet era, and this feels eerily similar (people who actually experienced it might feel differently though).

What could actually drag Nvidia down and make them spend decades in the dark like Cisco still does? So far the two things I've come up with are: (a) general disillusionment in AI and companies not being able to monetize enough to justify spending on GPUs. (b) Big companies designing their own chips in-house lowering demand for Nvidia GPUs.

I don't think Nvidia can counter (a), but can they overcome (b) by also offering custom chip design services instead of insisting on selling a proprietary AI stack?

Ologn · 2 months ago
Cisco stock (which I thought about buying in 1992 and didn't, unfortunately) doubled in 1990, tripled in 1991, doubled in 1992, and kept going up every year - in 1995 it doubled, in 1998 it doubled, in 1999 it doubled. So it had a long run (and is also still worth over $250 billion).
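Compounding those rough multiples shows how long that run really was (the multiples are the approximate round numbers above, not exact CSCO annual returns, and the flat years in between are ignored):

```python
# Approximate annual multiples mentioned above - illustrative round
# numbers, not exact Cisco returns.
multiples = {1990: 2, 1991: 3, 1992: 2, 1995: 2, 1998: 2, 1999: 2}

value = 1.0  # $1 invested before 1990
for year, m in sorted(multiples.items()):
    value *= m

print(f"{value:.0f}x")  # 96x, even ignoring the unlisted years
```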

The monetary push is very LLM-based. One thing being pushed that I am familiar with is LLM-assisted programming. LLMs are being pushed to do other things as well. If LLMs don't improve further, or if companies don't see the monetary benefits of using them in the short-to-medium term, that would drag Nvidia down.

Nvidia has a lot of network effects. Probably only Google has some immunity to that (with its TPUs). I doubt Nvidia will have competition in training LLMs for a while. It is possible a competitor could start taking market share on the low end for inference, but even that would take a while. People have been talking about AMD competition for over two years, and I haven't seen anything that even seems like it might have potential yet, especially on the high end.

Ologn commented on Nvidia Becomes First Company to Reach $4T Market Cap   cnbc.com/2025/07/09/nvidi... · Posted by u/mfiguiere
nativeit · 2 months ago
Time for some grossly oversimplified back-of-the-proverbial-envelope value crunching! I’ll assume the average GPU price, for the sake of argument, is $1000. Let’s also assume their per-unit profit margin is roughly 30% (I found conflicting numbers for this on a casual search, esp. between figures that measure quarterly and annual income, I suppose it isn’t a surprise that their accountants frequently pull rabbits from hats).

Nvidia would need to move on the order of 4,000,000,000 units to hit $4T in revenue, more than triple that to realize $4T in profits. Even if the average per-unit costs are 2-3x my estimated $1k, as near as I’ve been able to tell they “only” move a few million units each year for a given sku.
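As a quick sanity check of that arithmetic (all inputs are the assumptions above, not real Nvidia figures):

```python
# Back-of-envelope check of the unit math - every input here is an
# assumption from the comment above, not an actual Nvidia number.
avg_price = 1_000            # assumed average GPU price, USD
margin = 0.30                # assumed per-unit profit margin
target = 4_000_000_000_000   # $4T

units_for_revenue = target / avg_price            # 4.0 billion units
units_for_profit = target / (avg_price * margin)  # ~13.3 billion units

print(f"{units_for_revenue:.1e} units for $4T revenue")
print(f"{units_for_profit:.1e} units for $4T profit "
      f"({units_for_profit / units_for_revenue:.1f}x more)")
```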

I am struggling to work out how these markets get so inflated, such that it pins a company’s worth to some astronomical figure (some 50x total equity, in this case) that seems wholly untethered to any material potential?

My intuition is that the absence of the rapid, generationally transformative, advances in tech and industry that were largely seen in the latter half of the 20th-century (quickly followed with smartphones and social networking), stock market investors seem content to force similar patterns onto any marginally plausible narrative that can provide the same aesthetics of growth, even if the most basic arithmetic thoroughly perforates it.

That said, I nearly went bankrupt buying a used car recently, so this is a whole lot of unqualified conjecture on my part (but not for nothing, my admittedly limited personal wealth isn’t heavily dependent on such bets).

Ologn · 2 months ago
Nvidia's trailing P/E ratio is 53 (stock hitting a new high today). Its forward P/E ratio is 38.

A year ago both its trailing and forward P/E were higher. So the stock is relatively a bargain compared to what it was a year ago.

The price implies that revenues and profits are expected to continue to grow.
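A rough sketch of what those multiples imply, using the $4T headline market cap as a round number (the P/E figures are the ones quoted above):

```python
# Implied earnings from the P/E multiples above. Market cap is rounded
# to the $4T headline figure; this is a sketch, not a valuation.
market_cap = 4e12
trailing_pe = 53
forward_pe = 38

trailing_earnings = market_cap / trailing_pe  # ~$75B
forward_earnings = market_cap / forward_pe    # ~$105B
implied_growth = forward_earnings / trailing_earnings - 1

print(f"trailing earnings ~ ${trailing_earnings / 1e9:.0f}B")
print(f"forward earnings  ~ ${forward_earnings / 1e9:.0f}B")
print(f"implied earnings growth ~ {implied_growth:.0%}")  # ~39%
```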

> My intuition is that the absence of the rapid, generationally transformative, advances in tech and industry that were largely seen in the latter half of the 20th-century (quickly followed with smartphones and social networking), stock market investors seem content to force similar patterns onto any marginally plausible narrative that can provide the same aesthetics of growth

I wouldn't disagree with this.

Ologn commented on Nvidia won, we all lost   blog.sebin-nyshkim.net/po... · Posted by u/todsacerdoti
Kon5ole · 2 months ago
TSMC can only make about as many Nvidia chips as OpenAI and the other AI guys want to buy. Nvidia releases GPUs made from basically the shaving leftovers of the OpenAI products, which makes them limited in supply and expensive.

So gamers have to pay much more and wait much longer than before, which they resent.

Some youtubers make content that profit from the resentment so they play fast and loose with the fundamental reasons in order to make gamers even more resentful. Nvidia has "crazy prices" they say.

But they're clearly not crazy. 2000 dollar gpus appear in quantities of 50+ from time to time at stores here but they sell out in minutes. Lowering the prices would be crazy.

Ologn · 2 months ago
Yes. In 2021, Nvidia was actually making more revenue from its home/consumer/gaming chips than from its data center chips. Now 90% of its revenue is from data center hardware and less than 10% from home GPUs. The home GPUs are an afterthought to them - they take up resources that could otherwise be devoted to data center products.

Also, there is some fear that 5090s could cannibalize the data center hardware in some respects - my desktop has a 3060 and I have trained locally, run LLMs locally, etc. It doesn't make business sense at this time for Nvidia to meet consumer demand.

Ologn commented on I convinced HP's board to buy Palm and watched them kill it   philmckinney.substack.com... · Posted by u/AndrewDucker
Ologn · 3 months ago
The book Androids by Chet Haase talks about how the early Android team had a lot of ex-Palm people on it.
Ologn commented on A Man Out to Prove How Dumb AI Still Is   theatlantic.com/technolog... · Posted by u/fortran77
janalsncm · 5 months ago
> best solved with what used to be called symbolic AI before it started working

Right, the current paradigm of requiring an LLM to do arbitrary digit multiplication will not work and we shouldn’t need to. If your task is “do X” and it can be reliably accomplished with “write a python program to do X” that’s good enough as far as I’m concerned. It’s preferable, in fact.

Btw Chollet has said basically as much. He calls them “stored programs” I think.

I think he is onto something. The right atomic to approach these problems is probably not the token, at least at first. Higher level abstraction should be refined to specific components, similar to the concept of diffusion.

Ologn · 5 months ago
Most human ten year olds in school can add two large numbers together. If a connectionist network is supposed to model the human brain, it should be able to do that. Maybe LLMs can do a lot of things, but if they can't do that, then they're an incomplete model of the human brain.
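For contrast, the symbolic version of the task is trivial: Python's arbitrary-precision integers carry out exactly the digit-by-digit procedure a schoolchild learns, with no size limit. (The operands here are made up for illustration.)

```python
# Exact addition of arbitrarily large integers - the carry-based
# procedure a ten year old learns - is trivial done symbolically.
# These 30-digit operands are arbitrary examples.
a = 123456789012345678901234567890
b = 987654321098765432109876543210

print(a + b)  # 1111111110111111111011111111100
```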
Ologn commented on I genuinely don't understand why some people are still bullish about LLMs   twitter.com/skdh/status/1... · Posted by u/ksec
Ologn · 5 months ago
People have different opinions about this, but I think one problem is there are different questions.

One is - Google, Facebook, OpenAI, Anthropic, Deepseek etc. have put a lot of capital expenditure into training frontier large language models, and are continuing to do so. There is a current bet that growing the size of LLMs, with more or maybe even synthetic data, plus some minor breakthroughs (nothing as big as the AlexNet deep learning breakthrough, or transformers), will have a payoff for at least the leading frontier model. Similar to Moore's law for ICs, the bet is that more data and more parameters will yield a more powerful LLM - without that much more innovation needed. So the question here is whether the capital expenditure for this bet will pay off.

Then there's the question of how useful current LLMs are, whether we expect to see breakthroughs at the level of Alexnet or transformers in the coming decades, whether non-LLM neural networks will become useful - text-to-image, image-to-text, text-to-video, video-to-text, image-to-video, text-to-audio and so on.

So there's the business-side question - whether the bet that heavy capital expenditure on training a frontier model, mostly through more data and more parameters rather than major innovation, will be worth it for the winner in the next few years - and then there's every other question around this. All the questions may seem important, but the first is the one that matters to business, and it is connected to a lot of the capital spending being done on all of this.
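The bet that more parameters and data keep paying off is usually summarized by empirical power-law scaling curves. A toy sketch of that shape (a Chinchilla-style loss model; the constants are illustrative and not a claim about any particular frontier model):

```python
# Toy power-law scaling model: loss falls as parameters N and training
# tokens D grow, with diminishing returns. Constants are illustrative.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params, n_tokens):
    """Predicted loss under the assumed power-law form."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Each row scales parameters and tokens by 10x - loss keeps dropping,
# but by less each time.
for n, d in [(1e9, 20e9), (10e9, 200e9), (100e9, 2e12)]:
    print(f"N={n:.0e} D={d:.0e} -> loss {loss(n, d):.3f}")
```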

u/Ologn

Karma: 1355 · Cake day: November 8, 2012
About
Programmer.

Currently located in the US.
