A16Z is consistently the most embarrassing VC firm at any given point in time. I guess optimistically they might be doing “outrage marketing” but it feels more like one of those places where the CEO is just an idiot and tells his employees to jump on every trend.
The funny part is that they still make money. It seems like once you’ve got the connections, being a VC is a very easy job these days.
But is gassing up founders something they want? Idk, maybe. But just remember these guys' crypto play, and it feels like they'll just yes-man you off a cliff if you're a founder...
Sequoia is also increasingly embarrassing. A shame because it wasn't but 10 years ago that these firms seemed like they were leading the charge of world-changing innovation, etc...
It's been such a mind-boggling decline in intellect, combined with really odd and intense conspiratorial behavior around crypto, that I dug into it a bit a few months ago.
My weak, uncited understanding from then is that they're poorly positioned: in our set they're still the guys who write you a big check for software, but in the VC set they're a joke. They misunderstood carpet-bombing investment as something that scales and went all in on way too many crypto firms. Now they've embarrassed themselves with a ton of assets that need to get marked down; the firm is clearly behind the other bigs, but there's no forcing function to do the markdowns.
So we get primal screams about politics and LLM-generated articles about how a $9K video card is the perfect blend of price and performance.
There are other comments effusively praising them for their unique technical expertise. I maintain a llama.cpp client on every platform you can think of, and nothing in this article makes any sense. If you're training, you wouldn't do it on only four $9K GPUs that you own. If you're inferencing, you're not getting much more out of this than you would a ~$2K Framework desktop.
> If you're inferencing, you're not getting much more out of this than you would a ~$2K Framework desktop.
I was with you up till here. Come on! CPU inferencing is not it; even Macs struggle with bigger models and longer contexts (especially visible when agentic stuff gets past 32k tokens).
The PRO 6000 is the first GPU from their "workstation" series that actually makes sense to own.
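For a rough sense of the gap being argued here: single-stream decode is mostly memory-bandwidth-bound, so a back-of-the-envelope tokens/s estimate falls out of bandwidth divided by bytes read per token. The bandwidth figures and the 70B / ~4-bit example below are assumptions for illustration, not benchmarks of either machine.

```python
# Back-of-the-envelope decode throughput: single-stream generation is roughly
# memory-bandwidth-bound, so tokens/s ~= usable bandwidth / bytes read per token.
# Bandwidth figures are assumed spec-sheet numbers, not measurements.

SYSTEMS_GBPS = {
    "RTX PRO 6000 (GDDR7)": 1792,                    # ~1.8 TB/s per card (assumed)
    "Framework Desktop (Strix Halo, LPDDR5X)": 256,  # ~256 GB/s shared memory (assumed)
}

def decode_tokens_per_s(bandwidth_gbps: float, active_params_b: float, bytes_per_param: float) -> float:
    """Upper bound if every active parameter is read once per generated token."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gbps * 1e9 / bytes_per_token

# Example: a 70B dense model at ~4.5 bits/param (~0.56 bytes/param).
for name, bw_gbps in SYSTEMS_GBPS.items():
    tps = decode_tokens_per_s(bw_gbps, active_params_b=70, bytes_per_param=0.56)
    print(f"{name}: ~{tps:.0f} tokens/s (ideal; ignores KV-cache reads and overhead)")
```

By that crude measure the PRO 6000's GDDR7 is roughly 7x the Framework Desktop's shared LPDDR5X, which is the whole disagreement in one ratio; real numbers shift with quantization, batching, and KV-cache traffic.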
A rough price-out using Newegg/Microcenter pricing (itemized below), substituting a few items where they didn't specify a specific item/brand. I didn't bother trying to figure out what case they used.
This could have been much more expensive. They built it from off-the-shelf parts anyone could get (except for the cost) and listed them, which I appreciate. Microcenter/Newegg could put together a bundle as a joke and they'd likely get a few orders.
If you actually want a good multi-GPU system, just get an NVIDIA-designed system. There's no good reason to try to design and build one yourself when you will be relying almost entirely on NVIDIA cards and their multi-GPU communication.
In less than a year, when A16z is finished with the few pointless experiments they want to run on this and it is relegated to a closet and forgotten, some poor founder is going to see it appear as part of their term sheet: "Oh, yes, $75,000 of our $500,000 commit is compensated through this state-of-the-art AI Workstation."
If you can’t already just buy a Lenovo or Dell workstation with this configuration, I’m sure you can just buy 4x GPUs and plug them into a base system that will support them.
Who is buying hardware this expensive from a business that probably doesn't really know how to do (or isn't set up to do) proper manufacturing tests?
How much heat does it generate, and how loud is it at full tilt? They keep comparing it to a modern under-desk computer, but I'm not sure you'd want that thing in the same room while you're using it.
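On the heat question, a crude estimate is easy to run. The wattages below are assumptions, not the build's measured draw: roughly 300 W per workstation card (a 600 W-class card would about double that line), ~350 W for a Threadripper PRO, plus a guess for everything else.

```python
# Crude full-tilt heat estimate. All wattages are assumptions, not measurements.
ASSUMED_WATTS = {
    "4x GPU @ ~300 W": 4 * 300,
    "CPU (~350 W Threadripper PRO)": 350,
    "RAM/SSDs/fans/conversion losses (guess)": 150,
}

total_w = sum(ASSUMED_WATTS.values())
print(f"Sustained draw: ~{total_w} W (~{total_w * 3.412:.0f} BTU/hr)")  # 1 W ~= 3.412 BTU/hr
```

Somewhere around 1.7 kW sustained is space-heater territory dumped into whichever room it lives in, before you even get to fan noise.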
> If you're inferencing, you're not getting much more out of this than you would a ~$2K Framework desktop.

Well, you're getting the ability to maintain a context bigger than 8K or so, for one thing.
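To put a number on the context point: the KV cache grows linearly with sequence length, on top of the weights. A minimal sketch, assuming a hypothetical 70B-class dense model (80 layers, 8 KV heads via GQA, head_dim 128) with an fp16 cache; real architectures and quantized caches will differ.

```python
# KV-cache footprint for one sequence: keys + values, across all layers, fp16.
def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 seq_len: int, bytes_per_elem: int = 2) -> float:
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem  # K and V
    return per_token * seq_len / 2**30

for ctx in (8_192, 32_768, 131_072):
    print(f"{ctx:>7} tokens: ~{kv_cache_gib(80, 8, 128, ctx):.1f} GiB of KV cache")
```

Under those assumptions that's roughly 2.5 GiB at 8k tokens, 10 GiB at 32k, and 40 GiB at 128k. A 96GB card has room for the weights plus a long context; a 24-32GB consumer card doesn't, which is roughly the "bigger than 8K" point.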
Motherboard: https://www.newegg.com/gigabyte-mh53-g40-amd-ryzen-threadrip... - $895
CPU: https://www.microcenter.com/product/674313/amd-ryzen-threadr... - $3,500
Cooler: https://www.newegg.com/p/3C6-013W-002G6 - $585
RAM: https://www.newegg.com/a-tech-256gb/p/1X5-006W-00702 - $1,600
SSDs: https://www.newegg.com/crucial-2tb-t700-nvme/p/N82E168201563... - $223 x 4 = $892
GPUs: https://www.newegg.com/p/N82E16888892012 - $8,295 x 4 = $33,180
Case: https://www.newegg.com/fractal-design-atx-full-tower-north-s... - $195
Power supply: https://www.newegg.com/thermaltake-toughpower-gf3-series-ps-... - $314

Grand total: ~$41,000
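Quick sanity check on that total, using the substituted prices listed above:

```python
# Sum of the substituted parts above (prices as quoted).
parts = {
    "Motherboard": 895,
    "CPU": 3500,
    "Cooler": 585,
    "RAM": 1600,
    "SSDs (4 x $223)": 4 * 223,
    "GPUs (4 x $8,295)": 4 * 8295,
    "Case": 195,
    "Power supply": 314,
}
print(f"Total: ${sum(parts.values()):,}")  # -> $41,161, i.e. roughly $41,000
```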
Of course I don't personally have any use for this, but it's good to have an idea of what it takes to run the best open-weight models in a secure/controlled environment. To get started, a single 96GB GPU system is only $16,115. For perspective, I spent about $10k (in today's dollars) on a Toshiba Portege 320CT laptop with as much memory and accessories as I could get in 1998.
I’m glad they did. It’s weird and different.