iteratethis · 9 months ago
Just out of curiosity, I wish online LLMs would show real-time power usage and actual dollar costs as you interact with them. It would be so insightful to understand to what degree the technology is subsidized and what the actual value/cost ratio is.

I've read somewhere that generating a single AI image draws as much power as a full smartphone charge.

If the suspicion is true that costs are too high to monetize, then the current scale-up phase is going to be interesting. Right now people only occasionally have a chat with AI. That's quite a different scenario from having it integrated across every stack and constantly used in the background by billions of people.

Late as they may be, I think Apple is clever to push as much as possible to the local device in the consumer space.

Saigonautica · 9 months ago
Also out of curiosity, I did some quick math regarding that claim you read somewhere.

Cellphone battery charge: I have a 5000 mAh cellphone battery. If we ignore charging losses (pretty low normally, though I'm not sure about 67W fast charging), that battery stores about 18.5 watt-hours of energy, or about 67 kilojoules.

Generating a single image at 1024x1024 resolution with Stable Diffusion on my PC takes somewhere under a minute at a maximum power draw under 500W. Let's cap that at 500 W × 60 s = 30 kilojoules.

So it seems plausible that for cellphones with smaller batteries, and/or intense image-generation settings, there could be overlap! For typical cases, I think you could get multiple (but low single-digit) AI-generated images for the power cost of a cellphone charge, maybe a bit better at scale.
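
The same back-of-envelope as a quick Python sketch, for anyone who wants to plug in their own numbers (the 3.7 V nominal cell voltage is my assumption):

    # Phone charge vs. one Stable Diffusion image on a home PC.
    battery_mah = 5000
    cell_voltage = 3.7                                    # assumed nominal Li-ion voltage
    battery_kj = battery_mah / 1000 * cell_voltage * 3.6  # Wh -> kJ, ~67 kJ

    image_watts = 500                                     # generous max draw
    image_seconds = 60                                    # generous time per image
    image_kj = image_watts * image_seconds / 1000         # 30 kJ

    print(f"Images per charge: {battery_kj / image_kj:.1f}")  # ~2.2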

So in other words, maybe "technically incorrect" but not a bad approximation to communicate power use in terms most people would understand. I've heard worse!

grandmczeb · 9 months ago
Your home setup is much less efficient than production inference in a data center. An open-source implementation of SDXL-Lightning runs at 12 images per second on a TPU v5e-8, which uses ~2 kW at full load. That's about 170 J per image, or roughly 1/400th of a phone charge.

https://cloud.google.com/blog/products/compute/accelerating-...

https://arxiv.org/pdf/2502.01671
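
Working the figures backwards (the 2 kW and 12 images/s numbers are from the links above; 67 kJ per charge is from the parent):

    tpu_watts = 2000                                   # TPU v5e-8 at full load
    images_per_second = 12                             # SDXL-Lightning throughput
    joules_per_image = tpu_watts / images_per_second   # ~167 J

    phone_charge_j = 67_000                            # from the parent comment
    print(f"{phone_charge_j / joules_per_image:.0f} images per phone charge")  # ~400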

elpocko · 9 months ago
My PC with a 3060 draws 200 W when generating an image, and it takes under 30 seconds at that resolution; in some configurations (LCM), way under 10 seconds. That's a low-end GPU. High-end GPUs can generate at interactive frame rates.

You can generate a lot of images with the energy you'd otherwise use to play a game for two hours: generating an image for 30 seconds uses about the same amount of energy as playing a game on the same GPU for 30 seconds.
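
Putting numbers on the gaming comparison (200 W and 30 s/image from above; the two-hour session is just an example):

    gpu_watts = 200
    seconds_per_image = 30
    kj_per_image = gpu_watts * seconds_per_image / 1000  # 6 kJ

    gaming_kj = gpu_watts * 2 * 3600 / 1000              # two hours of gaming: 1440 kJ
    print(f"Images per two-hour session: {gaming_kj / kj_per_image:.0f}")  # 240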

gdhkgdhkvff · 9 months ago
One point missing from this comparison is that cell phones just don't take much electricity to begin with. A very rough calculation is that it costs around 0.2 cents to fully charge a cell phone. You spend maybe around $1 PER YEAR on charging per phone. Cell phones are just surprisingly not energy intensive.
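
Spelling out that rough calculation (the ~13 Wh battery and the $0.16/kWh rate are my assumptions):

    battery_kwh = 0.013               # ~13 Wh, an iPhone-class battery (assumption)
    usd_per_kwh = 0.16                # approx. US average residential rate

    per_charge = battery_kwh * usd_per_kwh
    print(f"Per charge: {per_charge * 100:.2f} cents")           # ~0.21 cents
    print(f"Per year, charging daily: ${per_charge * 365:.2f}")  # ~$0.76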
mrob · 9 months ago
How about if you cap the power of the GPU? Modern semiconductors have non-linear performance-vs-efficiency curves. It's often possible to get big energy savings with only a small loss in performance.
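A sketch of what that experiment could look like (hypothetical: nvidia-smi -pl sets the board power limit and needs root, and generate_image() is a stand-in for your own pipeline call):

    import subprocess, time

    def time_at_cap(watts):
        # Cap the board power limit, then time one generation.
        subprocess.run(["nvidia-smi", "-pl", str(watts)], check=True)
        t0 = time.time()
        generate_image()                 # stand-in for your SD pipeline call
        elapsed = time.time() - t0
        # Upper bound on energy: assumes the GPU sat at the cap the whole time.
        print(f"{watts} W cap: {elapsed:.1f} s, <= {watts * elapsed / 1000:.1f} kJ")

    for cap in (350, 250, 150):
        time_at_cap(cap)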
bluefirebrand · 9 months ago
> Generating a single image at 1024x1024 resolution

That's not a very big image, though. Maybe if this were 25 years ago.

You should at least be generating 1920x1080; pretend you're making desktop backgrounds from 10 years ago.

facile3232 · 9 months ago
> Generating a single image at 1024x1024 resolution with Stable Diffusion on my PC takes somewhere under a minute at a maximum power draw under 500W

That's insane, holy shit. That's not even a very large image.

Apparently my estimate of how power-hungry GPUs are these days was off by an order of magnitude.

nkrisc · 9 months ago
A 1024x1024 image seems like an unrealistically small image size in this day and age. That’s closer to an icon than a useful image size for display purposes.
tzs · 9 months ago
> I've read somewhere that generating a single AI image draws as much power as a full smartphone charge.

To put that in perspective: using the 67 kJ per smartphone charge from Saigonautica's comment, you can charge a smartphone 336 times for $1 if you are paying the average US residential electricity rate of just under $0.16/kWh.

You could charge a smartphone 128 times for $1 in the state with the most expensive electricity (Hawaii), paying the average rate there of around $0.42/kWh.

Saigonautica's battery is on the large side; it's a little bigger than the battery in an iPhone 16 Pro Max. A plain iPhone 16 could be charged 470 times for $1 at average US residential electricity prices.

For most people energy used to charge a smartphone is in the "this is too small to ever care about" category.

We can do a similar calculation for AA rechargeable batteries, and the results might be surprising.

$1 of electricity at the US average residential rate is enough to recharge an AA Eneloop nearly 2,300 times. Of course there are inefficiencies in the charger and charging, but even at 75% efficiency that's good enough for more than 1,700 charges.

That really surprised me when I first learned it. I knew it wasn't going to be a lot...but 1700 charges is I think more than the number of times I'll swap out an AA battery over my entire lifetime. I hadn't expected that all my AA battery use for my whole life would be less than $1 worth of electricity.
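
A quick version of the arithmetic (the watt-hour figures are my assumptions, chosen to match the counts above):

    usd_per_kwh = 0.16                      # approx. US average residential rate

    def charges_per_dollar(wh, rate=usd_per_kwh):
        return 1 / (wh / 1000 * rate)

    print(charges_per_dollar(18.5))         # 5000 mAh phone: ~338
    print(charges_per_dollar(18.5, 0.42))   # same phone in Hawaii: ~129
    print(charges_per_dollar(13.2))         # plain iPhone 16 (assumed Wh): ~473
    print(charges_per_dollar(2.7))          # AA Eneloop (assumed Wh): ~2315
    print(charges_per_dollar(2.7) * 0.75)   # at 75% charger efficiency: ~1736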

chii · 9 months ago
> It would be so insightful to understand to which degree the technology is subsidized and what the actual value/cost ratio is.

It would be insightful for competitors too, because they could use it as part of their analysis and pricing strategies against you.

Therefore, no company would possibly allow such data to be revealed.

And in any case, if these LLM providers burn cash to provide a service to you, then you ought to take maximal advantage of it. Just like when Uber subsidized rides.

polytely · 9 months ago
I feel like if they did this, the whole AI bubble would pop.
keyringlight · 9 months ago
It's not just Apple integrating AI into the hardware. Microsoft has been part of a big push toward "AI PCs" with certain minimum capabilities (and I'm sure their partners don't mind selling new gear) and the Copilot button on keyboards, and certain Android models have the processors and memory capacity specifically for running AI.
rchaud · 9 months ago
> It would be so insightful to understand to which degree the technology is subsidized and what the actual value/cost ratio is.

For whom would this be beneficial? The design goals of these products are to get as many users as fast as possible, using it for as long as possible. "Don't make me think" is the #1 UX principle at work here. You wouldn't expect a gas pump terminal to tut-tut about your carbon emissions.

kosh2 · 9 months ago
How much energy does it cost for a human to generate an image?
card_zero · 9 months ago
You mean, how much extra energy, compared to what the human was going to do instead? It might be a negative amount. But that might be a bad thing: an artist could get fat.
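
Taking the question at face value, a rough baseline (total rather than marginal energy; both figures are assumptions):

    human_watts = 100                # rough resting metabolic rate (assumption)
    drawing_seconds = 3600           # say an hour per sketch (assumption)
    human_kj = human_watts * drawing_seconds / 1000   # 360 kJ

    # vs. ~67 kJ per phone charge and ~30 kJ per home-PC SD image (see above)
    print(f"One hour of human drawing: ~{human_kj:.0f} kJ")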
DebtDeflation · 9 months ago
Shortly after ChatGPT hit the scene, everyone said "Google invented this technology, how could they fall so far behind in commercializing it, haha they're IBM now".

Maybe they didn't fall behind at all. Maybe they just did an analysis of what it would cost to train transformer models with hundreds of billions of parameters and run inference on them, and decided there was no way to actually be profitable doing it.

amelius · 9 months ago
Not with an ad-based monetization model. But what if consumers opened their wallets?
discreteevent · 9 months ago
There is no what if. The consumers haven't opened their wallets (in time for these companies to survive).
rchaud · 9 months ago
Ad revenue dwarfs any number that consumer dollars could put up. ChatGPT has been a household name for years, Copilot bloatware is shoved into every Office subscription on earth, and it still runs deep in the red. Ads are the only way.
DebtDeflation · 9 months ago
What if they considered that and determined that at typical consumer SaaS pricing of low tens of dollars per user per month it's "still impossible to make money"? What if they went a step further and looked at typical Enterprise SaaS pricing (low hundreds of dollars per seat per month) and determined "still can't make money"?
DonHopkins · 9 months ago
I'm using the (more expensive) Gemini 2.5 Pro and it's like talking to an adult again after Claude went all RFK Jr. Brain Worm on me.

People have mentioned on Hacker News that there seem to be kind of "weather patterns" in how hard the various LLMs think, like during business hours they get stupid. But of course there is some disagreement about what "business hours" are. It's one of those "vibes".

Imagine scheduling your life around the moods of AIs.

That's the business model. If you don't want a surly and moody AI with a hangover and bad attitude, you gotta pay more!

Like isitdownrightnow.com for crowd-sourcing website availability, there should be an isitdumbrightnow.ai site!

throwup238 · 9 months ago
> As the global tech research company forecasts worldwide generative AI (GenAI) spending will reach $644 billion in 2025, up around 76 percent from 2024

I’m having a hard time squaring the number $644 billion and the phrase “extinction phase.”

I don’t believe their actual estimate of GenAI spending but if it’s even in the same ballpark as the real value, that’s not an extinction.

somenameforme · 9 months ago
That's the entire point. Bubbles are caused when expectations of future value drive spending far beyond any reasonable present valuation. Then at some point reality hits and it turns into a game of hot potato, as people don't want to be left holding the bag.

Pushing towards a trillion bucks a year for what LLMs are mostly currently used for does not seem like a sustainable system.

pjc50 · 9 months ago
Where are they going to get the trillions in revenue to pay any of that back? That's 10% of 2023 US total wages and salaries. Do people really believe it'll replace that much labour?
alexdoesstuff · 9 months ago
Reading through the source [1], they basically get to that huuuuge number by including AI-enabled devices, such as phones that have some AI functionality even if it isn't core to their value proposition. That basically reclassifies a big chunk of smartphones, TVs, and other consumer tech as GenAI spending.

Of the "real" categories, they expect: Services $27bn (+162% y/y), Software $37bn (+93% y/y), and Servers $180bn (+33% y/y), for a total of $245bn (+58% y/y).

Those aren't shabby numbers, but they're far more reasonable. Hyperscaler total capex [2] is expected to be around $330bn in 2025 (up 32% y/y), so that will most likely include a good chunk of the server spend.

[1] https://www.gartner.com/en/newsroom/press-releases/2025-03-3...

[2] https://www.marvin-labs.com/blog/deepseek-impact-of-high-qua...

rchaud · 9 months ago
The $644b number comes from Gartner, who are a 'trends' consultancy, not an accounting firm. It likely includes spending 'pledges', and doesn't account for things like a looming recession and self-inflicted trade war.
petesergeant · 9 months ago
I'm an AI bro, but I think the value of equity in OpenAI or Anthropic is likely zero. They've achieved incredible things with their models, but big tech only ever seems to be a few months behind and has the economies of scale to make inference profitable. I think both will be acquired at valuations significantly below what was invested in them.
ConSeannery · 9 months ago
Ads will eventually make their way into the responses or side bars. It will be interesting (and depressing) to see who does it first and who holds out hoping to squeeze out the ad-supported LLM providers.
isoprophlex · 9 months ago
In light of this article, it makes sense that OpenAI is taking a "lmao, we don't even pretend to care" approach to safety and intellectual property right now.

Altman loudly hyping "look, you can Ghibli-fy yourself" and stating inflammatory things like "we are the death of the graphic designer" is a desperate ploy to rapidly capture the market before the bubble bursts.

PeterStuer · 9 months ago
It took Amazon around six to seven years to see its first profitable quarter, and they still went into the red sometimes when making major investments thereafter.
jasode · 9 months ago
>It took Amazon around six to seven years to see its first profitable quarter,

A key difference from OpenAI is that Amazon was cash flow positive from very early on, before that first profitable quarter. They only needed a single funding round, an $8 million Series A, instead of repeatedly raising extra rounds from new VC investors.

The Amazon startup already had enough free cash from operations to internally fund its warehouse expansions. The "Amazon had no profits" framing was an accounting side effect of re-investment. Anybody seriously studying Amazon's financial statements in the late 1990s would have paid more attention to its cash flow [1] than to "accounting profits".

On the other hand, OpenAI doesn't have the same positive cash flow situation as early Amazon. They are truly burning more money than they take in. They have to get billions from new investors to buy GPUs and pay salaries. ($40 billion raised in latest investment round.) They are cash flow negative. The cash flow from ChatGPT subscription fees is not enough to internally fund their growth.
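
A toy illustration of the profit-vs-cash-flow distinction (all numbers invented; simplified indirect method):

    # Toy quarter, millions USD.
    net_income = -40              # "no profits": growth spending expensed now
    depreciation = 60             # non-cash charge, added back for cash flow
    working_capital_float = 100   # customers pay before suppliers get paid
    capex = 90                    # warehouses and equipment

    operating_cash_flow = net_income + depreciation + working_capital_float  # 120
    free_cash_flow = operating_cash_flow - capex                             # 30

    print(f"Accounting profit: {net_income}M (red ink)")
    print(f"Free cash flow:    {free_cash_flow}M (can self-fund growth)")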

[1] https://en.wikipedia.org/wiki/Free_cash_flow

nl · 9 months ago
Amazon launched in 1995 and became cash flow positive in 2002 [1], seven years later.

ChatGPT was released in Nov 2022. They are expecting $12B in revenue this year[2].

[1] https://adainsights.com/blog/when-did-amazon-start-making-mo...

[2] https://www.cnbc.com/2025/03/26/openai-expects-revenue-will-...

rchaud · 9 months ago
This is the textbook definition of survivorship bias.

Amazon out-priced everybody when it arrived, because it didn't charge any sales tax for years, until the laws had to be re-written to close the loophole. It didn't have the eye-watering sums poured into it that AI has had, nor did it have any significant competition internationally. Things couldn't be more different for OpenAI.

paulluuk · 9 months ago
I could be wrong, but I believe Amazon's business model was very simple: do everything as cheaply as possible, run at a loss until all competition is dead, and then raise prices once we dominate the market.

I don't think OpenAI has that option.

karmakurtisaani · 9 months ago
Also, I think Amazon invested everything into growth, for which there was a lot of potential. Seems different for the AI companies.
dukeyukey · 9 months ago
I don't think that's true; even in rural areas without other choices, Amazon is pretty cheap.
delecti · 9 months ago
Eh, it's more that they were profitable per transaction, so they just needed to scale enough to cover their fixed costs. OpenAI's fixed costs are enormous, and they're only bringing in pennies from subscriptions.
rongrobert · 9 months ago
This wasn't the business model.

The business model was that you could sell books over the internet much more cheaply than Barnes & Noble or Borders, because you weren't paying for physical locations and there was no sales tax on the transactions, since they happened over the internet.

Havoc · 9 months ago
A slow pruning here seems healthy.

The more interesting question to me is how GPU vs. TPU plays out, plus the other NPU-like approaches: SambaNova, Cerebras, Groq, etc.

internet_points · 9 months ago
If only. I think the more likely path is enshittification (ads etc. inside the LLMs).