This looks interesting (although I'm not in the target market, too small)...
But if I were looking at this, judging from the quality of people they've amassed in their engineering team, is there any chance they won't be acquired in 6 months?
To anyone looking to take a bet on this, what is the answer to "what's your plan for when your stellar team gets acquired?" And what answer will satisfy that buyer?
Update: Adding another question, does this "environment" (where any really great product with great talent in it can be acquired very quickly) have a chilling effect on purchases for products like this?
Hopefully some Oxide people can answer :-)
Hi! So, at every step -- from conception to funding to building the team and now building the product -- we have done so to build a big, successful public company. Not only do we (the founders) share that conviction, but it is shared by our investors and employees as well. For better or for ill, we are -- as we memorably declared to one investor -- ride or die.
Also, if it's of any solace, I really don't think any of the existing players would be terribly interested in buying a company that has so thoroughly and unequivocally rejected so many of their accrued decisions! ;) I'm pretty sure their due diligence would reveal that we have taken a first principles approach here that is anathema to the iterative one they have taken for decades -- and indeed, these companies have shown time and time again that they don't want to risk their existing product lines to a fresh approach, no matter how badly customers want it.
I hope you drop a new episode soon.
Congrats on the announcement, here's hoping you're right! This looks too interesting to be swallowed by Oracle or HPe.
https://a16z.com/2021/05/27/cost-of-cloud-paradox-market-cap...
I read through your "Compensation as a Reflection of Values" article [1] and just wanted to say that I love it. It reflects and relates so much to my own values towards work, as a life philosophy, that I felt refreshed knowing others not only think this way but also have the power to implement such a culture. Thanks for trying that, I hope it becomes something more common to workers in general.
[1] https://oxide.computer/blog/compensation-as-a-reflection-of-...
Your approach to pay is really refreshing and attractive as an engineer, and also seems like the exact type of thing most VCs or larger tech firms would really hate. That alone feels like evidence of your conviction.
I was literally remarking to a workmate "this looks like Sun 2.0", and then I saw who's on the team :). Congrats, I'll be keeping an eye out if you ever start shipping to Australia.
Not sure that 'with the software baked in' is a good phrase to use. Sounds inflexible. Perhaps a different phrasing would help?
I agree... what I think they meant to say is something along the lines of: the software defaults are already optimized to take full advantage of the hardware's abilities, so work is completed faster. The 'with the software baked in' should be changed to reflect the value proposition that Oxide is alluding to.
Going by that logic, you should never take a chance on a bad company because it's bad, and never on a good company because it's too good and might get acquired. So should you just never rely on a small company for anything?
That's the question I was genuinely asking: do longer-term-minded buyers think this way? Our company is too small and just uses AWS; we're not prospective buyers. But I'm trying to understand the mindset of a CapEx-style buyer whose timelines span multiple years.
This team is, by all measures, going to hit it out of the park. There's just a solid amount of talent, experience and insight all-round.
And to be clear, I am not at all disparaging teams that get acquired – that would be silly. I'm just saying that we are in an environment these days where very few of these kinds of companies get a chance to grow before being acquired and WE are the ones that lose even though the people working at the company rightfully earn a nice payout.
I have the same "fear" about Tailscale, a company whose product we love and have started using, and are about to purchase.
But the fact that a member of the founding team themselves answered my message above in plain English (not surprising) is honestly refreshing.
I don't understand this concern at all.
Private companies can't just get bought out. They have to agree to be acquired. There is not some roaming force of Big Corp M&A people who forcefully acquihire companies.
But second, I'd love to understand the compute vs storage tradeoff chosen here. Looking at the (pretty!) picture [1], I was shocked to see "Wow, it's mostly storage?". Is that from going all flash?
Heading to https://oxide.computer/product for more details, it lists:
- 2048 cores
- 30 TB of memory
- 1024 TB of flash (1 PiB)
Given how much of the rack is storage, I'm not sure which Milan was chosen (and so whether that's 2048 threads or 4096 [edit: real cores, 4096 threads]), but it seems like visually 4U is compute? [edit: nope] Is that a mistake on my part? Dual-socket Milan at 128 threads per socket is 256 threads per server, so you need at least 8 servers to hit 2048 "somethings" -- or do the storage nodes also have Milans [would make sense] and their compute is included [also fine!], and is that similarly how you get a funky 30 TiB of memory?
[Top-level edit from below: the green stuff are the nodes, including the compute. The 4U near the middle is the fiber]
P.S.: the "NETWORK SPEED 100 GB/S" (uppercased by the CSS) loses what is presumably 100 Gbps; the value in the HTML is "100 gb/s", which is also unclear.
[1] https://oxide.computer/_next/image?url=%2Fimages%2Frenders%2...
eta: I also suspect the 30 TB total just means they're leaving 64 GB of RAM for the hypervisor OS on each node.
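If it helps to see the arithmetic in one place, here's a quick sketch; the per-socket core count, per-sled RAM, and hypervisor reservation are all my assumptions, not anything from the page:

    // Back-of-the-envelope check of the /product figures. Per-socket
    // counts assume the top Milan SKU; per-sled RAM and the hypervisor
    // reservation are guesses, not Oxide specs.
    fn main() {
        let sleds = 16u64;
        let cores = sleds * 2 * 64; // dual-socket, 64 cores per socket
        println!("{cores} cores, {} threads", cores * 2); // 2048 cores, 4096 threads

        // 2 TiB per sled minus a hypervisor reservation: 128 GiB per sled
        // lands exactly on 30 TiB (the eta above guessed 64 GiB, which
        // would leave ~31 TiB).
        let guest_tib = sleds * (2048 - 128) / 1024;
        println!("{guest_tib} TiB of guest memory"); // 30

        // And the P.S.: 100 Gb/s is 12.5 GB/s, a factor-of-8 difference
        // that the uppercased "GB/S" obscures.
        println!("100 Gb/s = {} GB/s", 100.0 / 8.0); // 12.5
    }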
Leaving that RAM for ZFS L2ARC, perhaps? I don't think they would use Illumos as the hypervisor OS without also using OpenZFS with it. They also need some for management, the control UI, a DB for metrics, and more.
Btw., if I count correctly, they have 20 SSD slots per node (if a node is full width) and 16 nodes. They would need just over 3 TB per drive to reach 1 PB of "raw" capacity, with the obvious redundancy overhead of ~20% on top of that.
It is also quite possible they don't use ZFS at all and use e.g. Ceph or something like it, but I don't think that is the case, because that wouldn't be Cantrillian. :-) E.g. using MinIO, they could provide something S3-like on top of a cluster of ZFS storage nodes too, but they most likely get better latency with local ZFS than with a distributed filesystem. Financial institutions especially seem to be part of the target here, and there, latency can be king.
Guessing they aren't counting threads (they say "cores"), so 64 cores per socket, 128 cores per server, 16 servers => 2048 cores.
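For what it's worth, a quick sanity check of that drive math under the same assumptions (slot count read off the renders, ~20% redundancy overhead):

    // Sanity check on the capacity guess above. The 20-slots-per-sled
    // figure is read off the renders, not an Oxide spec.
    fn main() {
        let drives = 16.0 * 20.0; // 16 sleds x 20 slots = 320 drives
        // Per-drive size needed for 1 PB raw:
        println!("{:.2} TB per drive", 1000.0 / drives); // ~3.13
        // Usable after the assumed ~20% redundancy overhead:
        println!("{:.0} TB usable", 1000.0 * 0.8); // 800
    }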
Duh! I got tricked by the things near the PDU, thinking "oh, these must be the pure-compute nodes".
So maybe that's the better question: what are the 4U worth of stuff surrounding the power? More networking stuff? Management stuff? (There was some swivel to the back of the rack / with networking, but I can't find it now)
Edit: Ahh! The rotating view is on /product and so that ~4U is the fiber. (Hat tip to Jon Olson, too)
Power footprint also confirms that the compute density is pretty low.
We built a few racks of Supermicro AMD servers (4x compute nodes in 2U), and we load tested them to 23 kVA peak usage (about half full with that type of node only; our DC would let us go further).
We're also over 1 PB of disks (unclear how much of this is redundancy), also in NVMe (15.36 TB x 24 in 2U is a lot of storage...).
Other than that, not a bad concept; not sure what premium they will charge or what will be comparable on price.
- There's a bunch of RJ45 up top that I don't quite understand :)
- A bunch of storage sleds
- A compute sled, 100G QSFP switch, compute sled sandwich
- Power distribution (rectifiers, I'd think, unless it's AC to the trays?)
- Another CSC sandwich
- More storage.
I assume in reality we'd have many more cables making things less pretty, given the number of front-facing QSFPs on those ToRs.
Out-of-data-plane HW management, probably.
They basically reinvented mainframes. Seems it has a lot in common with the Z series.
Scalable locked-in hardware, virtualization, reliability, engineered for hardware swaps and upgrades.
A proprietary operating system (?) from what someone said (an offshoot of Solaris?). By that I mean that most of it, or all of it, might be open-sourced forks, but it will be an OS only meant to run on their systems.
(It would be fun to get it working at home, on a couple of PCs or a bunch of Pis.)
They lack specialized processors to offload some workloads to. Perhaps, in modern terms, shelves of GPUs, or a shelf of fast FPGAs or DSPs. The possibilities are huge. I didn't find any mention of that in what I read.
They also lack the gigantic legacy compatibility effort, which is a good thing.
Their approach to reliability isn't quite on par with mainframes, AIUI. At least, not yet. And the programming model is also quite different - a mainframe can seamlessly scale from lots of tiny VM workloads (what Oxide seems to be going for) to large vertically-scaled shared-everything SSI, and anything in between.
Ignoring hardware reliability, thanks to the integration, their solution should be more reliable than whatever byzantine solutions are currently used in their target market. I've worked in a shop (a well-known name that I won't mention) that had a mix of "chat ops" and Perl scripts integrated with JIRA where you could request a Linux VM through a JIRA ticket and get it automatically provisioned, I assume from some big chassis running VMWare, and then use git+Puppet to configure it. It works, but it's a lot of software from different sources and there is always one thing or the other failing. And the security of all that stuff is probably patchy, regardless of audits.
That being said, this solution is the mother of all lock-ins...
I could see it used for the non-critical part of a company's infrastructure. I would not run production stuff on it, but it could work for development systems, test boxes, etc. Basically give developers access and let them create and destroy as many VMs as they need, whenever they need.
Yeah, I noticed that too. The green wireframe-looking stuff is actually text in spans/divs next to, or overlaid on, pictures. The little "nodes" are this character, for example: ⎕. The effect is pretty unique.
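If you want a feel for the trick without the CSS, here's a toy rendering of the same idea; purely illustrative, not their actual markup:

    // A miniature of the effect: "sleds" drawn as rows of the ⎕
    // character, the same glyph the page uses in its styled spans.
    fn main() {
        let row = "⎕ ".repeat(7).trim_end().to_string();
        let w = row.chars().count() + 2; // border width in chars, not bytes
        println!("+{}+", "-".repeat(w));
        for _ in 0..4 {
            println!("| {row} |");
        }
        println!("+{}+", "-".repeat(w));
    }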
It's all fun stories from people doing amazing things with computer hardware and low-level software. Like Ring-Sub-Zero and DRAM-driver-level software.
> Our firmware is open source. We will be transparent about bug fixes. No longer will you be gaslit by vendors about bugs being fixed but not see results or proof.
There are lots of reasons to be enthusiastic about Oxide but for me, this one takes the cake. I hope they are successful, and I hope this attitude spreads far and wide.
- dedicated to virtualization, done their way
- rather inflexible in hardware specs
- vendor-locked at the rack - if you have hardware from someone else, it can't live in the same cabinet
I guess if you just want a pretty data center in a box and look like what they consider a 'normal' enterprise to be, it might appeal. But I'm not sure how many people asked for Apple-style hardware in the DC.
Why is it important what kind of virtualization? It works, and since it is built for this hardware, it will likely be more reliable than anything you're putting together yourself.
The specs are damn good. When it is all top-of-the-line, inflexibility is kind of a moot point. Where else are you going to go?
> But I'm not sure how many people asked for Apple-style hardware in the DC.
Well-integrated, performant, and reliable hardware that runs VMs you can put anything on is pretty much all anyone running their own hardware is looking for.
Honestly I am surprised how many here completely misunderstand what their value proposition is.
> Why is it important what kind of virtualization?
Because if I ran this, I would have to manage it. Given that I have lots of virtualization to manage already, I would want it to use the same tooling, for rather obvious reasons.
> is pretty much all anyone running their own hardware is looking for.
I don't think you talk to many people who do this, but as someone who manages 8 figures' worth of hardware, I can tell you that is absolutely not true.
> The specs are damn good. When it is all top-of-the-line, inflexibility is kind of a moot point. Where else are you going to go?
To some hardware that actually fits my use case and is manageable in an existing environment? Oh wait - I already have that. I mean, seriously - do you think they're the only shop selling nice machines?
The value-add is all wrong, unless you are a greenfield deployment willing to bet it all on this particular single vendor, and your needs match their offering.
> - rather inflexible in hardware specs
> - vendor-locked at the rack - if you have hardware from someone else, it can't live in the same cabinet
This describes legacy IBM platforms quite well. If they can leverage hyperscaling tech to be better and cheaper than what IBM is currently offering, that's enough to make it worthwhile.
This is a selling point - if it's actually better (which, why not? most of the existing virtualization management solutions either suck or are hugely expensive).
If it's not better, big deal? I'm assuming you could just throw Linux on these things and run on the metal or use something different, right? Given how much bcantrill (and other Oxide team members) have discussed loving open hardware, I seriously doubt they would intentionally try to lock down their own product!
> vendor-locked at the rack - if you have hardware from someone else, it can't live in the same cabinet
This is aimed at players so big that they want to buy at the rack level and have no desire to ever touch or carve up anything. It's a niche market, but for them this is actually a plus.
"But I'm not sure how many people asked for Apple-style hardware in the DC."
It's probably selling to the "Amazon-style hardware in your DC" market, which I think should be fairly ripe. Building your own private cloud from parts defeats a lot of the purpose... avoiding your own plumbing.
If "Apple-style" means lower skilled-labor cost for maintenance - absolutely worth it.
A smart company would stay away from this kind of strong lock-in.
As I understand it, Oxide is going to have deep software integration into their hardware. So the expectation isn't that the servers in this rack will be running Windows or a generic Linux distribution. In case anyone from Oxide is here, is my understanding correct? And if so, will there be a way to run a smaller version of an Oxide system, say for testing or development, without purchasing an entire rack at a time?
Anyway, glad to finally get a glimpse of what Oxide has to offer. Looking forward to seeing a lot more.
My understanding is you will use an API to provision virtual machines on top of the Oxide hypervisor/software stack, which is bhyve running on Illumos. So you can still just run your favorite Linux distro, Windows, or a BSD if you want [1].
[1]: https://soundcloud.com/user-760920229/why-your-servers-suck-...
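Assuming that's roughly right, a provisioning call could look something like the sketch below. To be clear, the endpoint, payload fields, and token variable are all hypothetical stand-ins, not Oxide's actual API:

    // Purely illustrative: nothing below is a real Oxide endpoint. This
    // is just the shape an API-driven flow could take, with reqwest
    // (blocking) and serde_json as the only dependencies.
    use serde_json::json;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let client = reqwest::blocking::Client::new();

        let resp = client
            .post("https://rack.example.internal/v1/instances") // hypothetical control plane
            .bearer_auth(std::env::var("OXIDE_TOKEN")?) // hypothetical auth token
            .json(&json!({
                "name": "dev-box-01",
                "vcpus": 4,
                "memory_gib": 16,
                "disk_gib": 100,
                "image": "your-favorite-distro" // guest OS is the user's choice
            }))
            .send()?
            .error_for_status()?;

        println!("created: {}", resp.text()?);
        Ok(())
    }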
Agreed, I would love to hear more about the management plane. I'm glad it's API-driven, but I still have some questions about things like which hypervisor they are using.
If it's a custom software stack, might be nice to get a miniature dev-kit!
They will use Illumos with bhyve; @bcantrill said it in a podcast just a few months ago. I have linked it somewhere in my comments (look at my profile).