Readit News
m4r1k commented on I got an Nvidia GH200 server for €7.5k on Reddit and converted it to a desktop   dnhkng.github.io/posts/ho... · Posted by u/dnhkng
m4r1k · 10 days ago
Wow! As others have said, deal of the century!! As a side note, a few years back I used to scrape eBay for Intel QS Xeons and quite a few times managed to snag incredible deals, but this is beyond anything anyone has ever achieved!
m4r1k commented on Palantir could be the most overvalued company that ever existed   247wallst.com/investing/2... · Posted by u/Anon84
m4r1k · 13 days ago
Palantir is a shit show
m4r1k commented on Micron Announces Exit from Crucial Consumer Business   investors.micron.com/news... · Posted by u/simlevesque
m4r1k · 17 days ago
I have also bought Crucial for decades. Great quality and reliability for a fair price. Anybody doing anything semi-professional will be impacted by this questionable decision.
m4r1k commented on TPUs vs. GPUs and why Google is positioned to win AI race in the long term   uncoveralpha.com/p/the-ch... · Posted by u/vegasbrianc
villgax · 23 days ago
100 times more chips for equivalent memory, sure.
m4r1k · 23 days ago
Check the specs again. Per chip, TPU 7x has 192GB of HBM3e, whereas the NVIDIA B200 has 186GB.

While the B200 wins on raw FP8 throughput (~9,000 vs. 4,614 TFLOPS), that makes sense given NVIDIA has optimized for the single-chip game for over 20 years. But the bottleneck here isn't the chip; it's the domain size.

NVIDIA's top-tier NVL72 tops out at an NVLink domain of 72 Blackwell GPUs. Meanwhile, Google is connecting 9,216 chips at 9.6 Tbps to deliver nearly 43 ExaFLOPS. NVIDIA has the ecosystem (CUDA, community, etc.), but until they can match that interconnect scale, they simply don't compete in this weight class.
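
For a rough sense of scale, here is a back-of-the-envelope comparison in Python. The per-chip figures are the ones cited in this thread, not vendor-verified specs:

    # Back-of-the-envelope domain comparison; per-chip figures are the
    # claims made in this thread, not vendor-verified specs.
    def domain_totals(chips, hbm_gb, fp8_tflops):
        """Aggregate HBM and FP8 throughput for one interconnect domain."""
        return {
            "chips": chips,
            "hbm_tb": chips * hbm_gb / 1000,           # total HBM, decimal TB
            "fp8_exaflops": chips * fp8_tflops / 1e6,  # TFLOPS -> ExaFLOPS
        }

    nvl72 = domain_totals(chips=72, hbm_gb=186, fp8_tflops=9000)
    ironwood = domain_totals(chips=9216, hbm_gb=192, fp8_tflops=4614)

    print(nvl72)     # ~13.4 TB HBM, ~0.65 ExaFLOPS per NVL72 domain
    print(ironwood)  # ~1769 TB HBM, ~42.5 ExaFLOPS per Ironwood domain

The per-chip FP8 deficit is roughly 2x, but the domain is 128x larger, which is how the aggregate lands at nearly 43 ExaFLOPS.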

m4r1k commented on TPUs vs. GPUs and why Google is positioned to win AI race in the long term   uncoveralpha.com/p/the-ch... · Posted by u/vegasbrianc
m4r1k · 23 days ago
Google's real moat isn't the TPU silicon itself (it's not about cooling, individual performance, or hyper-specialization) but rather the massive parallel scale enabled by their optical circuit switch (OCS) interconnects.

To quote The Next Platform: "An Ironwood cluster linked with Google’s absolutely unique optical circuit switch interconnect can bring to bear 9,216 Ironwood TPUs with a combined 1.77 PB of HBM memory... This makes a rackscale Nvidia system based on 144 “Blackwell” GPU chiplets with an aggregate of 20.7 TB of HBM memory look like a joke."

Nvidia may have the superior architecture at the single-chip level, but for large-scale distributed training (and inference) they currently have nothing that rivals Google's optical switching scalability.
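
As a quick sanity check on the quoted aggregates (assuming the 192 GB per-chip Ironwood figure discussed in the other thread; the per-chiplet Blackwell memory is backed out from the quote, not an official spec):

    # Sanity-checking The Next Platform's quoted aggregates. The 192 GB
    # per-chip figure is assumed from the discussion, not an official spec.
    ironwood_pb = 9216 * 192 / 1e6                    # GB -> PB (decimal)
    print(f"Ironwood domain: {ironwood_pb:.2f} PB")   # ~1.77 PB, as quoted

    per_chiplet_gb = 20.7 * 1000 / 144                # back out GB per chiplet
    print(f"Per Blackwell chiplet: {per_chiplet_gb:.0f} GB")  # ~144 GB

    ratio = ironwood_pb * 1000 / 20.7
    print(f"HBM ratio, OCS domain vs rackscale: ~{ratio:.0f}x")  # ~85x

That roughly 85x gap in per-domain memory is the scale difference being pointed at.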

m4r1k commented on I took all my projects off the cloud, saving thousands of dollars   rameerez.com/send-this-ar... · Posted by u/sebnun
m4r1k · 2 months ago
Yes, but most of the dude's websites aren't loading, and some are even without SSL in 2025: https://imgur.com/a/irWADuq
m4r1k commented on Rapid Brightening of 3I/Atlas Ahead of Perihelion   arxiv.org/abs/2510.25035... · Posted by u/bikenaga
ceejayoz · 2 months ago
Don't worry, Avi Loeb is on it. https://avi-loeb.medium.com/3i-atlas-rapidly-brightens-and-g...

"Does it employ a power source that is hotter than the Sun?"

Sigh.

m4r1k · 2 months ago
Dr. Avi has a pretty clear point, and he goes into detail on the JRE episode released just yesterday. https://youtu.be/EaAun27gftk

What stands out most is the sheer number of closed-minded people in academia. Avi is not afraid to suggest what it might be, even saying, “if it turns out to be a rock, so be it”.
