Readit News
llm_nerd commented on Amazon EC2 M9g Instances   aws.amazon.com/ec2/instan... · Posted by u/AlexClickHouse
winrid · 4 days ago
Single thread performance is important for many workloads. It's not nonsense. Things like index builds on an i4g vs i4i could be half as slow. That's really important!

I don't know why you continue to be a fucking asshole. It's just a hosting provider. Go touch grass.

llm_nerd · 4 days ago
"I don't know why you continue to be a fucking asshole"

Your very first comment was an obnoxious "Not true at all" to the absolutely, incontestably true statement that Gravitons offer better $/perf. So maybe you need to look in the mirror and go touch grass.

llm_nerd commented on Amazon EC2 M9g Instances   aws.amazon.com/ec2/instan... · Posted by u/AlexClickHouse
winrid · 5 days ago
Groan. Absolutely not. :)

c8g passmark score: 1853
c8i passmark score: 3008

I guess the fps column isn't a good representation of single-thread score. Also, looking at the PassMark scores for the i4i vs the i4g, the i4g is about 1k and the Intel i4i is about 2k, and the more modern Graviton equivalent of the i4 is the same price, so...

https://go.runs-on.com/instances/ec2/c8g

https://go.runs-on.com/instances/ec2/c8i

https://go.runs-on.com/instances/ec2/i4g

https://go.runs-on.com/instances/ec2/i4i

Silly Amazon.

llm_nerd · 5 days ago
So confident. And exactly the whack-a-mole nonsense I predicted.

See the comment by electroly. They actually know what they're talking about.

See, the FPS score is for the whole machine. The c8g gives you 8 real cores. The c8i gives you 4 real cores plus 4 hyperthreading pseudo-cores. So between those two machines the c8g unequivocally gives you more absolute computing performance, even though the c8i's PassMark single-thread score (one core against one core) is better than the c8g's. And the c8g comes at a big discount as well.

That's...the point. The Graviton processors are cheaper per core and lower performance per core, and you make it up in bulk. You get more performance per $ if you're okay with the ARM stack and your software runs well on it, and this is basically universally true comparing Graviton instances against Intel/AMD alternatives.

You're wrong. Maybe cite some other random nonsense now?
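The core-accounting argument above can be sketched with rough arithmetic. This is a back-of-envelope model, not a benchmark: the `smt_yield` factor (how much extra throughput a hyperthread sibling adds on top of its physical core) is an assumed illustrative figure, and PassMark single-thread points are only a proxy for sustained whole-machine throughput.

```python
# Rough whole-machine throughput from the per-core PassMark single-thread
# scores quoted in the thread. smt_yield is an assumption: an SMT sibling
# thread typically adds only a fraction of a real core's throughput.
def machine_score(per_core, physical_cores, smt_siblings=0, smt_yield=0.25):
    return per_core * (physical_cores + smt_siblings * smt_yield)

c8g = machine_score(1853, physical_cores=8)                  # 8 real cores
c8i = machine_score(3008, physical_cores=4, smt_siblings=4)  # 4 cores + 4 HT

print(f"c8g aggregate ~{c8g:.0f}, c8i aggregate ~{c8i:.0f}")
```

Under these assumptions the two machines land within a couple of percent of each other, which matches the "performance is very similar" whole-machine results cited elsewhere in the thread; the roughly 17% price gap then decides the $/perf comparison.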

llm_nerd commented on DeepSeek uses banned Nvidia chips for AI model, report says   finance.yahoo.com/news/ch... · Posted by u/goodway
llm_nerd · 5 days ago
There is a sudden groundswell of reports about China using nvidia chips, always from unnamed sources, and I suspect that if you could trace them back you'd find nvidia pulling the levers.

nvidia is facing a lot of competitive threats and their moat is being filled in. Google with their Ironwood TPU. Amazon with Trainium3. Even Apple is adding tensor cores to their chips, and if Apple went to big scale it would be a legitimate player in the space as well.

We know that China has a number of upstart TPU vendors, and Huawei has built some "better than H200" solutions with a roadmap to much higher heights.

So there is suddenly a bunch of secret-source reports that no, China actually is totally reliant on nvidia. nvidia needs this to be true, or at least needs people to believe it to be true.

I mean, after all the fanfare about the H200 being allowed to be exported, nvidia shares...dropped. The market doesn't seem to be buying the China reliance bluster.

llm_nerd commented on Amazon EC2 M9g Instances   aws.amazon.com/ec2/instan... · Posted by u/AlexClickHouse
winrid · 5 days ago
Not true at all. Single thread CPU scores for Graviton2 are about half that of Intel, while only being about 20% cheaper at best.
llm_nerd · 5 days ago
Groan. Yes, absolutely true.

While I know this thread will turn into some noisy whack-a-mole bit of nonsense, an easy comparison is the c8g.2xlarge vs the c8i.2xlarge: Graviton 4 in the former, Granite Rapids in the latter. Otherwise both are compute-optimized, 8 vCPU machines with 16GB of RAM and 15Gbps networking.

Performance is very similar. Indeed, since you herald the ffmpeg result elsewhere, note that the Graviton machine beats the Intel device on it by 16%.

And the Graviton is 17% cheaper.

Like, this is a ridiculous canard to even go down. Over half of AWS' new machines are Graviton based, but per your rhetoric they're actually uncompetitive. So I guess no one is using them? Wow, silly Amazon.
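A quick arithmetic check of the claim above, using only the two percentages already quoted (the Graviton box is ~16% faster on the whole-machine ffmpeg run and ~17% cheaper); everything else follows from those figures.

```python
# Relative price/performance from the figures in the comment above.
perf_ratio = 1.16        # c8g throughput relative to c8i (ffmpeg, whole machine)
price_ratio = 1 - 0.17   # c8g hourly price relative to c8i
value_ratio = perf_ratio / price_ratio
print(f"c8g delivers ~{(value_ratio - 1) * 100:.0f}% more performance per dollar")
# → c8g delivers ~40% more performance per dollar
```

Two modest-looking gaps compound into roughly a 40% price/performance advantage, which is why single-core score comparisons miss the point.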

llm_nerd commented on Amazon EC2 M9g Instances   aws.amazon.com/ec2/instan... · Posted by u/AlexClickHouse
bhouston · 5 days ago
Is there a list of Geekbench performance metrics for the various Graviton CPUs?

I need a reference point so I can compare it to Intel/AMD and Apple's ARM cpus.

Otherwise it is buzzwords and superlatives. I need numbers so I can understand.

llm_nerd · 5 days ago
While the 5 variant isn't yet available outside of the preview, you can of course spin up a 4 and run Geekbench yourself. Plenty of people have, and you can find the results in the Geekbench database. And of course most people spin up their specific workload to see how it compares.

Core for core it pales next to Apple's superlative processors, and it falls behind AMD as well.

But...that doesn't matter. You buy cloud resources generally for $/perf, and the Gravitons are far and away ahead on that metric.

llm_nerd commented on Amazon EC2 M9g Instances   aws.amazon.com/ec2/instan... · Posted by u/AlexClickHouse
llm_nerd · 5 days ago
In Amazon's Graviton 5 PR they note that over half of all new compute capacity added to AWS over the past three years has been Graviton-based. That's an amazing stat.

It really is incredible how ARM basically commoditized processors (in a good way).

llm_nerd commented on Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?    · Posted by u/embedding-shape
nottorp · 6 days ago
Thing is, the comments that sound "AI" generated but aren't have about as much value as the ones that really are.

Tbh the comments in the topic shouldn't be completely banned. As someone else said, they have a place for example when comparing LLM output or various prompts giving different hallucinations.

But most of them are just reputation chasing by posting a summary of something that is usually below the level of HN discussion.

llm_nerd · 6 days ago
>the comments that sound "AI" generated but aren't have about as much value as the ones that really are

When "sounds AI generated" is in the eye of the beholder, this is an utterly worthless differentiation. It's actually a rather ironic comment given that I just pointed out that people are hilariously bad at determining whether something is AI generated; at this point people making such declarations are usually announcing their own ignorance, or alternately are pathetically trying to prejudice other readers.

People now simply declare opinions they disagree with as "AI", in the same way that people think people with contrary positions can't possibly be real and must be bots, NPCs, shills, and so on. It's all incredibly boring.

llm_nerd commented on Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?    · Posted by u/embedding-shape
skobes · 6 days ago
I hate these too, but I'm worried that a ban just incentivizes being more sneaky about it.
llm_nerd · 6 days ago
I think people are just presuming that others are regurgitating AI pablum regardless.

People are seeing AI / LLMs everywhere — swinging at ghosts — and declaring that everyone is a bot recycling LLM output. While the "this is what AI says..." posts are obnoxious (and a parallel to the equally boorish lmgtfy nonsense), not far behind is the endless cynical jeering of the "this sounds like AI" variety. People need to display how world-weary and jaded they are, expressing their malcontent with the rise of AI.

And yes, I used an em dash above. I've always been a heavy user of the punctuation (being a scatterbrain with lots of parenthetical asides and little ability to self-edit), but suddenly now it makes my comments bot-like and AI-suspect.

I've been downvoted before for making this obvious, painfully true observation, but HNers, and people in general, are much worse at sniffing out AI content than they think they are. Everyone has confirmation-biased themselves into thinking they've got a unique gift, when really they're no better than rolling dice.

llm_nerd commented on Trump Says U.S. Will Allow Nvidia H200 Chip Sales to China, Get 25% Cut   wsj.com/tech/nvidia-china... · Posted by u/sebastian_z
llm_nerd · 6 days ago
What a wild series of events. It's a demonstration of someone having "no cards", as Trump likes to say. Of having delusions about the hand they hold and thinking they're the dealmaker.

Earlier this year, during the beginning of this massively self-destructive trade war, Trump tried to use the existing H20 as a lever and banned its export to China. Not long after he allowed it again given that China didn't bend an iota, which was followed by China banning nvidia from their own data centres while strongly encouraging Chinese companies to not use them.

Does anyone seriously think Chinese companies are going to line up again? The US is a grossly unreliable partner (see: Canada via USMCA), and nvidia just doesn't have the monopoly they did. Seeing Huang shilling himself on Rogan just makes the whole thing look pathetic.

Everything about this clowncar cabinet has been a disaster, and forcing Bessent, Hassett, and Howard "Wig-Salesman" Lutnick to debase themselves with an endless series of ridiculous lies doesn't change this.

llm_nerd commented on Microsoft has a problem: lack of demand for its AI products   windowscentral.com/artifi... · Posted by u/mohi-kalantari
this_user · 7 days ago
Microsoft's entire business model for decades has been to shove shoddy products down people's throats. And somehow, they have figured out how to do it too, because otherwise Teams wouldn't be used by anyone.
llm_nerd · 7 days ago
Microsoft's entire business model has been tying. Countless millions are forced to use Copilot because their IT department has contracts with Microsoft, and those same contracts are why they use Office, Teams, and so on. Their developers use Visual Studio, deploy to Azure, and run it all against SQL Server. Their email comes from Exchange.

It has been an incredibly lucrative strategy. We all herald some CEO's prowess in growing revenue when they've been running the same playbook for decades now, coasting on the inertia of Windows dominance on the desktop. Every new entrant is pushed out by countless incredibly lazy IT departments that just adopt whatever Microsoft shits out.

It's actually surprising that the one and only area where this really failed was as they tried to lever tying to the mobile market. A couple of missteps along the way are the only reason every office drone isn't rocking their Lumia ExchangeLive! CoDevice.

u/llm_nerd

Karma: 3843 · Cake day: May 19, 2023
About
...