The cost is a rate, like $2 per hour, not a purchase price.
So faster CPUs get work done more quickly and may justify a higher cost per hour.
Ah, the graph is wacky, but the text makes sense; looks like a disconnect:
1. C4A Axion: $2.16 reported cost per hour, test took average of 9 seconds per run: cost approximately 0.005 dollars.
2. T2A Ampere Altra: $1.85 reported cost per hour, test took average of 17 seconds per run: cost approximately 0.009 dollars.
3. C4 Xeon Platinum EMR: $2.37 reported cost per hour, test took average of 17 seconds per run: cost approximately 0.011 dollars.
So the C4A costs a bit more per hour ($2.16 vs. $1.85), but finishes the task in roughly half the time, so perf/$ comes out around 2x in favor of the C4A.
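The per-run arithmetic above is easy to check; a quick sketch using the hourly rates and average runtimes quoted in this thread:

```python
# Cost per run = (hourly rate / 3600 s) * average runtime in seconds.
def cost_per_run(dollars_per_hour, seconds_per_run):
    return dollars_per_hour / 3600 * seconds_per_run

instances = {
    "C4A Axion":            (2.16, 9),
    "T2A Ampere Altra":     (1.85, 17),
    "C4 Xeon Platinum EMR": (2.37, 17),
}

for name, (rate, runtime) in instances.items():
    print(f"{name}: ${cost_per_run(rate, runtime):.4f} per run")
```

The C4/C4A ratio works out to about 2.07, which is where the "around 2x perf/$" figure comes from.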
> These new C4A instances are advertised as offering up to 50% better performance and up to 60% better energy efficiency than their current generation x86 instance types.
Hardware (see also, Google's TPUs and their performance vs. energy cost) is one reason why I'm fairly bullish on Google.
These are Neoverse-V2 based...same as NVIDIA Grace and AWS Graviton4. Also note they are comparing 48 cores in the C4A to 24 cores plus 24 hyperthreads in the Xeon C4 (Emerald Rapids). So single-threaded tasks will be somewhat worse than on C4, but once the number of threads significantly exceeds the C4's core count, the C4A will come out ahead.
Intel and AMD are coming out with new server processors with 128/144/192/288 cores in a chip, which can easily put them back on top again.
Hardware companies have yearly releases and work closely with their customers. None of this describes Google, and it's the reason why Nvidia is a trillion-dollar company despite Google TPUs existing prior. Basically no one outside of Google uses Google hardware. If it's a generic ARM target someone might use it because it's low effort, but it's not exactly a value add.
So all products must be available from one specific, primarily US-focused, retailer of consumer-grade PC parts? I'm not sure I understand what your criticism is here.
That the author of the post wants such parts to be available and affordable as hardware you can put under your desk, instead of something you only get to rent at Google's fickle will until they cancel it too.
They had three measures: A) Flops. Operations per second. B) Cost. Dollars per hour. C) Runtime. Seconds per task.
I expected the formula to be Flops/Cost, resulting in units of operations per dollar.
Instead it was computed as Flops / (Cost * Runtime), to get some units that don't make sense to me — operations * tasks per dollar seconds?
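To make the unit mismatch concrete, here's a small sketch with made-up numbers (not real benchmark data; `flops`, `cost`, and `runtime` are just the three measures named above). Dividing total operations by total dollars reduces to Flops/Cost up to a seconds-per-hour conversion, whereas Flops/(Cost*Runtime) leaves the runtime uncancelled:

```python
# Hypothetical measurements for one instance type (illustration only).
flops = 5.0e11      # operations per second
cost = 2.16         # dollars per hour
runtime = 9.0       # seconds per task

# Ops per dollar: total operations for the task / dollars spent on the task.
total_ops = flops * runtime                  # operations
dollars_spent = cost / 3600 * runtime        # dollars
ops_per_dollar = total_ops / dollars_spent

# The runtime cancels: ops_per_dollar == flops * 3600 / cost,
# i.e. Flops / Cost up to a unit conversion.
assert abs(ops_per_dollar * cost / (flops * 3600) - 1) < 1e-9

# The article's metric divides by runtime instead of multiplying,
# so runtime does not cancel and the units pick up a stray 1/seconds.
article_metric = flops / (cost * runtime)
```

Which is the commenter's point: if you want operations per dollar, the runtime should cancel out of the metric entirely.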
Nice upgrade for Google's customers. I'm guessing it does it at much lower wattage as well.
Why hide your light under a cloudy bushel?