As with a lot of things, it isn't the initial outlay, it's the maintenance costs. Terrestrial datacenters have parts fail and get replaced all the time. The mass analysis given here -- which appears quite good, at first glance -- doesn't include any mass, energy, or thermal system numbers for the infrastructure you would need in order to replace failed components.
As a first cut, this would require:
- an autonomous rendezvous and docking system
- a fully railed robotic system, e.g. some sort of robotic manipulator that can move along rails and reach every card in every server in the system, which usually means a system of relatively stiff rails running throughout the interior of the plant
- CPU, power, comms, and cooling to support the above
- importantly, the ability of the robotic servicing system to replace itself. In other words, it would need to be at least two-fault tolerant -- which usually means dual-wound motors, redundant gears, redundant harness, redundant power, comms, and compute. Alternatively, two or more independent robotic systems that are capable of not only replacing cards but also of replacing each other.
- regular launches containing replacement hardware
- ongoing ground support staff to deal with failures
The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
I've had actual, real-life deployments in datacentres where we just left dead hardware in the racks until we needed the space, and we rarely did. Typically we'd visit a couple of times a year, because it was cheap to do so, but it'd have been totally viable to let failures accumulate over a much longer time horizon.
Failure rates tend to follow a bathtub curve, so if you burn-in the hardware before launch, you'd expect low failure rates for a long period and it's quite likely it'd be cheaper to not replace components and just ensure enough redundancy for key systems (power, cooling, networking) that you could just shut down and disable any dead servers, and then replace the whole unit when enough parts have failed.
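For a rough sense of scale, here is a minimal back-of-the-envelope sketch of that over-provisioning approach; the failure rates and mission length below are illustrative assumptions, not numbers from this thread.

```python
# Back-of-the-envelope over-provisioning estimate.
# Assumptions (illustrative only): a constant post-burn-in annual failure
# rate, independent failures, and no in-orbit replacement.

def surviving_fraction(annual_failure_rate: float, years: float) -> float:
    """Expected fraction of servers still alive after `years`."""
    return (1.0 - annual_failure_rate) ** years

def overprovision_factor(annual_failure_rate: float, years: float) -> float:
    """Servers to launch per server of capacity still needed at end of life."""
    return 1.0 / surviving_fraction(annual_failure_rate, years)

if __name__ == "__main__":
    for rate in (0.005, 0.02, 0.05):  # 0.5%, 2%, 5% per year (assumed)
        factor = overprovision_factor(rate, years=6)
        print(f"{rate:.1%}/yr over 6 years -> launch {factor:.2f}x the needed capacity")
```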
Exactly what I was thinking when the OP comment brought up "regular launches containing replacement hardware". This is easily solvable by "treating servers as cattle and not pets": over-provision servers and then simply replace faulty ones around once per year.
Side note: Thanks for sharing the "bathtub curve" -- TIL, and I'm surprised I hadn't heard of it before, especially as it's related to reliability engineering (searching HN via Algolia, no post about the bathtub curve has crossed 9 points).
The analysis has zero redundancy for either servers or support systems.
Redundancy is a small issue on Earth, but completely changes the calculations for space because you need more of everything, which makes the already-unfavourable space and mass requirements even less plausible.
Without backup cooling and power one small failure could take the entire facility offline.
And active cooling - which is a given at these power densities - requires complex pumps and plumbing which have to survive a launch.
The whole idea is bonkers.
IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.
I have no idea if that's any more economic, but at least it solves the most obvious redundancy and deployment issues.
serious q: how much extra failure rate would you expect from the physical transition to space?
on one hand, I imagine you'd rack things up so the whole rack/etc moves as one into space, OTOH there's still movement and things "shaking loose" plus the vibration, acceleration of the flight and loss of gravity...
The original article even addresses this directly. Plus hardware turns over fast enough that you'll simply be replacing modules with a smattering of dead servers with entirely new generations anyway.
It would be interesting to see if the failure rate across time holds true after a rocket launch and time spent in space. My guess is that it wouldn’t, but that’s just a guess.
I'd naively assume that the stress of launch (vibration, G-forces) would trigger failures in hardware that had been working on the ground. So I'd expect to see a large-ish number of failures on initial bringup in space.
Appreciate the insights, but I think failing hardware is the least of their problems. In that underwater pod trial, MS saw lower failure rates than expected (nitrogen atmosphere could be a key factor there).
> The company only lost six of the 855 submerged servers versus the eight servers that needed replacement (from the total of 135) on the parallel experiment Microsoft ran on land. It equates to a 0.7% loss in the sea versus 5.9% on land.
6/855 servers over 6 years is nothing. You'd simply re-launch the whole thing in 6 years (with advances in hardware anyways) and you'd call it a day. Just route around the bad servers. Add a bit more redundancy in your scheme. Plan for 10% to fail.
That being said, it's a completely bonkers proposal until they figure out the big problems, like cooling, power, and so on.
Indeed, MS had it easier with a huge, readily available cooling reservoir and a layer of water that additionally protects (a little) against cosmic rays, plus the whole thing had to be heavy enough to sink. An orbital datacenter would be in the opposite situation: all cooling is radiative, many more high-energy particles, and the whole thing should be as light as possible.
> In that underwater pod trial, MS saw lower failure rates than expected
Underwater pods are the polar opposite of space in terms of failure risks. They don't require a rocket launch to get there, and they further insulate the servers from radiation compared to operating on the surface of the Earth, rather than increasing exposure.
(Also, much easier to cool.)
The biggest difference is radiation. Even in LEO, you will get radiation-caused Single Events that will affect the hardware. That could be a small error or a destructive error, depending on what gets hit.
>The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
And once you remove all the moving parts, you just fill the whole thing with oil rather than air and let heat transfer more smoothly to the radiators.
I used to build and operate data center infrastructure. There is very limited reason to do anything more than a warranty replacement on a GPU. With a high quality hardware vendor that properly engineers the physical machine, failure rates can be contained to less than .5% per year. Particularly if the network has redundancy to avoid critical mass failures.
In this case, I see no reason to perform any replacements of any kind. Proper networked serial port and power controls would allow maintenance for firmware/software issues.
On Earth we have skeleton crews maintain large datacenters. If the cost of mass to orbit is 100x cheaper, it’s not that absurd to have an on-call rotation of humans to maintain the space datacenter and install parts shipped on space FedEx or whatever we have in the future.
If you want to have people you need to add in a whole lot of life support and additional safety to keep people alive. Robots are easier, since they don't die so easily. If you can get them to work at all, that is.
That isn't going to last for much longer with the way power density projections are looking.
Consider that, for quite some time now, we've been at the point where layers of monitoring & lockout systems are required to ensure no humans get caught in hot spots, which can surpass 100C.
This sort of work is ideal for robots. We don't do it much on Earth because you can pay a tech $20/hr to swap hardware modules, not because it's hard for robots to do.
It's all contingent on a factor of 100-1000x reduction in launch costs, and a lot of the objections to the idea don't really engage with that concept. That's a cost comparable to air travel (both air freight and passenger travel).
(Especially irritating is the continued assertion that thermal radiation is really hard, and not like something that every satellite already seems to deal with just fine, with a radiator surface much smaller than the solar array.)
I suspect they'd stop at automatic rendezvous & docking. Use some sort of cradle system that holds heat fins, power, etc that boxes of racks would slot into. Once they fail just pop em out and let em burn up. Someone else will figure out the landing bit
I won't say it's a good idea, but it's a fun way to get rid of e-waste (I envision this as a sort of old persons' home for parted-out supercomputers)
Thanks for the thorough comment—yes, the heat pipes etc haven’t been accounted for. Might be a future addition but the idea was to look at some key large parts and see where that takes us in terms of launch. The pipes would definitely skew the business case further. Similarly, the analysis is missing trusses.
Don’t even get me started on the costs of maintenance. I am sweating bricks just thinking of the mission architecture for assembly and how the robotic system might actually look. Unless there’s a single 4 km long deployable array (of what width?), which would be ridiculous to imagine.
Don’t you need to look at different failure scenarios or patterns in orbit due to exposure to cosmic rays as well?
It just seems funny, I recall when servers started getting more energy dense it was a revelation to many computer folks that safe operating temps in a datacenter should be quite high.
I’d imagine operating in space has lots of revelations in store. It’s a fascinating idea with big potential impact… but I wouldn’t expect this investment to pay out!
What if we just integrate the hardware so it fails softly?
That is, as hardware fails, the system loses capacity.
That seems easier than replacing things on orbit, especially if Starship becomes the cheapest way to launch to orbit, because Starship launches huge payloads, not a few rack-mounted servers.
I worked in aerospace for a couple of years in the beginning of my career. While my area of expertise was the mechanical design I shared my office with the guy who did the thermal design and I learned two things:
1. Satellites are mostly run at room temperature. It doesn't have to be that way but it simplifies a lot of things.
2. Every satellite is a delicately balanced system where heat generation and actively radiating surfaces need to be in harmony during the whole mission.
Preventing the vehicle from getting too hot is usually a much bigger problem than preventing it from getting too cold. This might be surprising because laypeople usually associate space with cold. In reality you can always heat if you have energy but cooling is hard if all you have is radiation and you are operating at a fixed and relatively low temperature level.
The bottom line is that running a datacenter in space makes not much sense from a thermal standpoint and there must be other compelling reasons for a decision to do so.
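For a sense of the scale involved, here is a minimal radiator-sizing sketch using the Stefan-Boltzmann law; the power levels, radiating temperature, emissivity, and sink temperature are illustrative assumptions, not figures from the article.

```python
# Idealized radiator sizing: P = eps * sigma * A * (T_rad^4 - T_sink^4).
# Assumptions (illustrative): two-sided flat-plate radiator, no solar or
# albedo load on the radiator, uniform radiating temperature.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiator_area(power_w: float, t_rad_k: float = 300.0,
                  emissivity: float = 0.9, t_sink_k: float = 4.0,
                  sides: int = 2) -> float:
    """Radiating area (m^2) needed to reject power_w watts."""
    flux = emissivity * SIGMA * (t_rad_k**4 - t_sink_k**4)  # W/m^2 per side
    return power_w / (flux * sides)

if __name__ == "__main__":
    for mw in (1, 10, 40):  # assumed datacenter power levels, MW
        area = radiator_area(mw * 1e6)
        print(f"{mw} MW rejected at ~300 K -> roughly {area:,.0f} m^2 of two-sided radiator")
```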
Sure it is doable. My point is that at room temperature convection is so much more efficient a heat transfer mechanism that I wonder why someone would even think about doing without it.
The caveat to this is that it also depends on where in the satellite's lifecycle you are. For example, right after launch you might just have your survival heaters on, which will keep you generally within an industrial range (e.g. >-40C), and you might not reach higher temps until you hit nominal operations. But a lot of the hardware specs for temperature are often closer to standard "industrial" specs rather than special mil or NASA specs.
Lay people associate space with cold because nearly every scifi movie has people freezing over in seconds when exposed to the vacuum of space (insert Picard face-palm gif).
Even The Expanse, even them! Although they are otherwise so realistic that I have to say I started doubting myself a bit. I wonder what really would happen, and how fast...
People even complained that Leia did not freeze over (instead of complaining about her sudden use of the force where previously she did not show any such talents.)
Well empty space has a temperature of roughly -270c...so that's pretty cold.
But I think what people/movies don't understand is that there's almost no conductive thermal transfer going on, because there's not much matter to do it. It's all radiation, which is why heat is a much bigger problem, because you can only radiate heat away, you can't conduct it. And whatever you use to radiate heat away can also potentially receive radiation from things like the Sun, making your craft even hotter.
Wouldn't a body essentially freeze-dry when exposed to vacuum, since it's wet? I.e. the temperature of space is still irrelevant and the cooling comes from vaporization.
I'm reading through the Expanse books and a passage that I read after seeing your comment jumped out at me. It's a quote from Havelock:
"As a boy living planetside, he had always thought of space as cold. And while that was technically true, mostly it was a vacuum. And so a ship, mostly, was a thermos. The heat from their bodies and systems would bleed into the void off over years and decades of it had the chance."
It’s not that dumb- if a human gets exposed to space the water in their exposed tissues will boil off, leading to evaporative cooling. In a vacuum, evaporative cooling can get you ~arbitrarily cold, as long as you’re giving up enough fluids. I don’t know whether you freeze over or dry out first, but I’m sure someone at NASA has done the math.
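Out of curiosity, here is a crude energy-balance sketch of that question; it treats the body as pure water, ignores radiation and metabolic heat, and uses textbook latent-heat constants, so it is only a rough illustration.

```python
# Crude energy balance: the latent heat carried away by boiled-off water must
# remove the sensible heat (37 C -> 0 C) plus the heat of fusion of what's left.
# Assumptions: body modeled as pure water, no radiation, no metabolic heating.

C_WATER = 4186.0  # J/(kg*K), specific heat of liquid water
L_VAP = 2.4e6     # J/kg, latent heat of vaporization (approx., near body temp)
L_FUS = 3.34e5    # J/kg, latent heat of fusion

def boiled_off_fraction(delta_t_c: float = 37.0) -> float:
    """Fraction of the mass that must evaporate for the remainder to freeze."""
    # Solve m_evap * L_VAP = (m - m_evap) * (C_WATER * delta_t + L_FUS) for m_evap/m.
    q_per_kg_frozen = C_WATER * delta_t_c + L_FUS
    return q_per_kg_frozen / (L_VAP + q_per_kg_frozen)

if __name__ == "__main__":
    print(f"~{boiled_off_fraction():.0%} of the mass boils off before the rest freezes")
```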
Why do they want to put a data center in space in the first place?
Free cooling?
Doesn't make much sense to me. As the article points out, the radiators need to be massive.
Access to solar energy?
Solar is more efficient in space, I'll give them that, but does that really outweigh the whole hassle to put the panels in space in the first place?
Physical isolation and security?
Against manipulation maybe, but not against denial of service. A willfully damaged satellite is something I expect to see in the news in the foreseeable future.
Low latency comms?
Latency is limited by distance and the speed of light. Everyone with a satellite internet connection knows that low latency is not a particular strength of it.
Marketing and PR?
That, probably.
EDIT:
Thought of another one:
Environmental impact?
No land use, no thermal stress for rivers on one hand but the huge overhead of a space launch on the other.
I've talked to the founder of Starcloud about this, there is just going to be a lot of data generative stuff in space in the future, and further and further out into space. He thinks now is the right time to learn how to compute up there because people will want to process, and maybe orchestrate processing between many devices, in space. He's fully aware of all of the objections in this hn comments section, he just doesn't believe they are insurmountable and he believes interoperable compute hubs in space will be required over the next 20/30 years. He's in his mid 20s, so it seems like a reasonable mission to be on to me.
Seems far more likely that the "data generative stuff" will get smaller and cheaper to run (like cell phones with on-device models) much faster than "run a giant supercomputer in orbit" will become easy.
Earth is the closest spot for most of space. It makes most sense for satellites to send data back to Earth. They would have to find a use with lots of compute where latency really matters.
For farther out, compute on ships, stations, or bases makes sense, but that is different than free-floating satellites. They already have power, cooling, and maintenance.
It is like saying there should be compute in the air for all the airplanes flying around.
My initial thought was: ambiguous regulatory environment.
Not being physically located in the US, the EU, or any other sovereign territory, they could plausibly claim exemption from pretty much any national regulations.
Space is terrible for that. There's only a handful of countries with launch vehicles and/or launch sites. You obviously need to be in their good graces for the launch to be approved.
If you want permissive regulatory environment, just spend the money buying a Mercedes for some politician in a corrupt country, you'll get a lot further...
The US government does questionable things to people in places like Guantanamo Bay because the constitution gives those people rights if they set foot on US soil. Data doesn't have rights, and governments have the capability to waive their own laws for things like national security.
Corporations operating in space are bound to the laws of the country the spacecraft belongs to, so there's no difference between a data harbor in Whogivesastan vs. a data harbor on a spacecraft operated by Whogivesastan.
> A lot of waste heat is generated running TDCs, which contributes to climate change—so migrating to space would alleviate the toll on Earth’s thermal budget. This seems like a compelling environmental argument. TDCs already consume about 1-1.5% of global electricity and it’s safe to assume that this will only grow in the pursuit of AGI.
The comparison here is between solar powered TDCs in Space vs TDCs on Earth.
- A TDC in space contributes to global warming due to mining+manufacturing emissions and spaceflight emissions.
- A comparable TDC on Earth would be solar+battery run. You will likely need a larger solar panel array than in space. Note a solar panel in operation does not really contribute to global warming. So the question is whether the additional Earth solar panel+battery manufacturing emissions are greater than launching the smaller array + TDC into space.
I would guess launching into space has much higher emissions.
Low Earth orbits are in the dark about 49% of time, but suffer no seasonal variability. Low Earth orbit is also very hot, and regular solar panels become less efficient the hotter they get.
The only sensible way to count pollution from solar+battery power manufacturing & disposal is do it on a per kWh basis.
Speed of light is actually quite an advantage, in theory at least. Speed of light in optical fiber is quite a bit slower (takes 50% longer) than in vacuum.
Not really. Fiber is more like 2/3 of free-space propagation speed, and that puts the break-even point of a direct fiber connection vs LEO up- and downlink at a geodesic distance of about 12000 km. So, for most data centers you want to reach, fiber is the better option.
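A minimal sketch of the propagation-delay side of that comparison (it ignores routing, ground-station geometry, inter-satellite hops, and queuing; the 2/3 factor is the usual rule of thumb for fiber):

```python
# Fiber carries light at roughly 2/3 c, so a space path can be up to ~1.5x
# longer than the terrestrial fiber route before it loses on propagation
# delay alone.

C = 299_792_458.0        # m/s, speed of light in vacuum
V_FIBER = 2.0 / 3.0 * C  # assumed group velocity in optical fiber

def one_way_ms(distance_km: float, velocity_m_s: float) -> float:
    return distance_km * 1e3 / velocity_m_s * 1e3

if __name__ == "__main__":
    for km in (1_000, 6_000, 12_000):
        fiber = one_way_ms(km, V_FIBER)
        vacuum = one_way_ms(km, C)
        print(f"{km:>6} km: fiber {fiber:5.1f} ms, vacuum {vacuum:5.1f} ms, "
              f"margin for up/downlink and detours {fiber - vacuum:4.1f} ms")
```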
There was a video from Scott Manley about this several years back. And he was very skeptical that it's even feasible for SpinLaunch to place something useful into orbit. And they haven't yet.
1. YOLO. Yeet big data into orbit!
2. People will pay big bucks to keep their data all the way up there!
3. Profit!
It could make sense if the entire DC was designed as a completely modular system. Think ISS without the humans. Every module needs to have a guaranteed lifetime, and then needs to be safely yet destructively deorbited after its replacement (shiny new module) docks and mirrors the data.
> 2. People will pay big bucks to keep their data all the way up there!
Just to make me understand the business plan better: why would people or companies be willing to pay much more to have their data stored or their computations done in space?
The only reason that I can imagine is that the satellite which contains the data center also has a lot of sensors mounted (think military spying devices), and either for security, capacity or latency reasons you prefer the sensor data to be processed in space instead of transferring it down to Earth, processing it there, and sending the results back to space.
In other words: the business model is getting big money defense contracts (somewhat ignoring whether the idea really makes military sense or not).
Except space hasn't been "more secure" for nation states against other nation states in decades. US, Russia, and China all have various capabilities to destroy or steal or manipulate or tamper with satellites. It mostly doesn't happen right now because nobody is at full blown war. Shooting down satellites was expected to be a part of any superpower war since the 80s. Those weapons will be plenty effective against even massive installations in space.
Meanwhile, space gets you zero protection from the infosec threats that plague national security installations.
Yes cooling is difficult. Half the "solar panels" on the ISS aren't solar panels but heat radiation panels. That's the only way you can get rid of it and it's very inefficient so you need a huge surface.
This isn’t true. The radiators on ISS are MUCH smaller than the solar panels. I know it’s every single armchair engineer’s idea that heat rejection is this impossible problem in space, but your own example of ISS proves this is untrue. Radiators are no more of a problem than solar panels.
Seems oddly paradoxical. ISS interior at some roughly livable temperature. Exterior is ... freakin' space! Temperature gradient seems as if it should take care of it ...
... and then you realize that because it is space, there's almost nothing out there to absorb the heat ...
If you read the Starcloud whitepaper[1], it claims that massive batteries aren't needed because the satellites would be placed in a dawn-dusk sun-synchronous orbit. Except for occasional lunar eclipses, the solar panels would be in constant sunlight.
The whitepaper also says that they're targeting use cases that don't require low latency or high availability. In short: AI model training and other big offline tasks.
For maintenance, they plan to have a modular architecture that allows upgrading and/or replacing failed/obsolete servers. If launch costs are low enough to allow for launching a datacenter into space, they'll be low enough to allow for launching replacement modules.
All satellites launched from the US are required to have a decommissioning plan and a debris assessment report. In other words: the government must be satisfied that they won't create orbital debris or create a hazard on the ground. Since these satellites would be very large, they'll almost certainly need thrusters that allow them to avoid potential collisions and deorbit in a controlled manner.
Whether or not their business is viable depends on the future cost of launches and the future cost of batteries. If batteries get really cheap, it will be economically feasible to have an off-the-grid datacenter on the ground. There's not much point in launching a datacenter into space if you can power it on the ground 24/7 with solar + batteries. If cost to orbit per kg plummets and the price of batteries remains high, they'll have a chance. If not, they're sunk.
I think they'll most likely fail, but their business could be very lucrative if they succeed. I wouldn't invest, but I can see why some people would.
[1] https://starcloudinc.github.io/wp.pdf
> For maintenance, they plan to have a modular architecture that allows upgrading and/or replacing failed/obsolete servers. If launch costs are low enough to allow for launching a datacenter into space, they'll be low enough to allow for launching replacement modules.
This is hiding so, so much complexity behind a simple hand-wavy "modular". I have trained large models on thousands of GPUs; hardware failures happen all the time. Latest example: an InfiniBand interface flapping which ultimately had to be physically replaced.
What do you do if your DC is in space? Do you just jettison the entire multi-million-dollar DGX pod that contains the faulty $300 interface before sending a new one? Do you have an army of astronauts + Dragons to do this manually? Do we hope we'll have achieved superintelligence by then and have robots that can do this for us?
Waving the “Modular” magic key word doesn’t really cut it for me.
International space law (starting with the Outer Space Treaty of 1967) says that nations are responsible for all spacecraft they launch, no matter whether the government or a non-governmental group launches them. So a server farm launched by a Danish company is governed by Danish law just the same as if they were on the ground- and exposed to the same ability to put someone into jail if they don't comply with a legal warrant etc.
This is true even if your company moves the actual launching to, say, a platform in international waters- you (either a corporation or an individual) are still regulated by your home country, and that country is responsible for your actions and has full enforcement rights over you. There is no area beyond legal control, space is not a magic "free from the government" area.
The 'Principality of Sealand', anywhere else on the high seas or Antarctica have their issues with practicality too, but considerably less likelihood of background radiation flipping bits...
:-) I appreciate your snark and the ad campaign reference.
But if international waters isn't enough (and much cheaper) then I don't think space will either. Man's imagination for legal control knows no bounds.
You wait (maybe not, it's a long wait...), if humankind ever does get out to the stars, the legal claims of the major nations on the universe will have preceded them.
Unless the company blasts its HQ and all its employees into space, no, they are very much subject to the jurisdiction of the countries they operate in. The physical location of the data center is irrelevant.
[Mild spoilers for _Critical Mass_ by Daniel Suarez below]
> Servers outside any legal jurisdiction
Others have weighed in on the accuracy of this, with a couple pointing out that the people are still on the ground. There's a thread in _Critical Mass_ by Daniel Suarez that winds up dealing with this issue in a complex set of overlapping ways.
Pretty good stuff, I don't think the book will be as good as the prior book in the series. (I'm only about halfway through.)
I know there's the fantasy of orbital CSAM storage able to beam obscenity to any point on the ground with zero accountability, but that is not going to survive real world politics.
The best argument I've heard for data-centres-in-space startups is that it's an excuse to do engineering work on components other space companies might want to buy (radiators, shielding, rad-hardened chips, data transfer, space batteries) which are too unsexy to attract the same level of FOMO investment...
Yes, and also just because a space data center isn’t useful today doesn’t mean it won’t be required tomorrow. When all the computing is between the ground and some nearby satellites, of course the tradeoffs won’t be worth it.
But what about when we’re making multi-year journeys to Mars and we need a relay network of “space data centers” talking to each other, caching content, etc?
We may as well get ahead of the problems we’ll face and solve them in a low-stakes environment now, rather than waiting to discover some novel failure scenario when we’re nearing Mars…
You need fewer batteries in orbit than on the ground since you're only in shade for at most like 40 minutes. And it's all far more predictable.
Cooling isn't actually any more difficult than on Earth. You use large radiators and radiate to deep space. The radiators are much smaller than the solar arrays. "Oh but thermos bottles--" thermos bottles use a very low emissivity coating. Space radiators use a high emissivity coating. Literally every satellite manages to deal with heat rejection just fine, and with radiators (if needed) much smaller than the solar arrays.
Latency is potentially an issue if in a high orbit, but in LEO can be very small.
Equipment upgrades and maintenance is impossible? Literally, what is ISS, where this is done all the time?
Radiation shielding isn't free, but it's not necessarily that expensive either.
Orbital maintenance is not a serious problem with low cost launch.
The upside is effectively unlimited energy. No other place can give you terawatts of power. At that scale, this can be cheaper than terrestrially.
> The radiators are much smaller than the solar arrays.
Modern solar panels are way more efficient than the ancient ones in ISS, at least 10x. The cooling radiators are smaller than solar panels because they are stacked and therefore effectively 5x efficient.
Unless there are at least 2x performance improvements on the cooling system, the cooling system would have to be larger than solar panels in a modern deployment.
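Since this disagreement is really about assumptions, here is a minimal sketch comparing the two areas; the cell efficiency, radiator temperature, and emissivity are illustrative parameters, not data for any real satellite.

```python
# Compare the collecting area needed to generate P watts of electricity with
# the radiating area needed to reject the same P as waste heat.
# All parameters are illustrative assumptions.

SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W/(m^2*K^4)
SOLAR_FLUX = 1361.0  # W/m^2 at 1 AU

def solar_array_area(power_w: float, cell_efficiency: float = 0.30) -> float:
    return power_w / (SOLAR_FLUX * cell_efficiency)

def radiator_area(power_w: float, t_rad_k: float = 300.0,
                  emissivity: float = 0.9, sides: int = 2) -> float:
    return power_w / (emissivity * SIGMA * t_rad_k**4 * sides)

if __name__ == "__main__":
    p = 1e6  # 1 MW of electrical/thermal power, assumed
    a_sun = solar_array_area(p)
    for t in (275.0, 300.0, 350.0):
        a_rad = radiator_area(p, t_rad_k=t)
        print(f"radiator at {t:.0f} K: {a_rad:,.0f} m^2 vs "
              f"{a_sun:,.0f} m^2 of 30%-efficient array (ratio {a_rad / a_sun:.2f})")
```

With these particular numbers the radiator comes out smaller than the array, but the ratio swings a lot with radiator temperature and cell efficiency, which is exactly where the two comments above differ.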
Re: reliable energy. Even in low earth orbit, isn't sunlight plentiful? My layman's guess says it's in direct sun 80-95% of the time, with deterministic shade.
It's super reliable, provided you've got the stored energy for the reliable periods of downtime (or a sun synchronous orbit). Energy storage is a solved problem, but you need rather a lot of it for a datacentre and that's all mass which is very expensive to launch and to replace at the end of its usable lifetime. Same goes for most of the other problems brought up
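To put a number on "rather a lot of it", here is a minimal sketch of the battery mass needed to ride through one eclipse; the power levels, eclipse length, and specific energy are assumed values for illustration only.

```python
# Battery mass to ride through a single eclipse, ignoring depth-of-discharge
# limits, pack thermal control, and degradation margins.
# Power, eclipse duration, and specific energy are illustrative assumptions.

def eclipse_battery_mass_kg(power_w: float, eclipse_min: float = 36.0,
                            specific_energy_wh_per_kg: float = 200.0) -> float:
    energy_wh = power_w * eclipse_min / 60.0
    return energy_wh / specific_energy_wh_per_kg

if __name__ == "__main__":
    for mw in (1, 10, 40):  # assumed datacenter power levels, MW
        mass_t = eclipse_battery_mass_kg(mw * 1e6) / 1000.0
        print(f"{mw} MW through a 36-min eclipse -> ~{mass_t:,.0f} t of 200 Wh/kg cells")
```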
We’re probably thinking of it the wrong way. Instead of a single datacenter it’s more likely we build constellations and then change the way we write software.
There will probably be a lot more edge computing in the future. 20 years ago engineers scoffed at the idea of deploying code into a dozen regions (If you didn’t have a massive datacenter footprint) but now startups do it casually like it’s no big deal. Space infrastructure will probably have some parallels.
That sounds like the Guoxing Aerospace / ADA Space “Three-Body Computing Constellation”, currently at 12 satellites (out of a planned 2,800).
The Chinese project involves a larger number of less powerful inference-only nodes for edge computing, compared to Starcloud's training-capable hyperscale data centers.
[1] Andrew Jones. "China launches first of 2,800 satellites for AI space computing constellation". Spacenews, May 14, 2025. https://spacenews.com/china-launches-first-of-2800-satellite...
[2] Ling Xin. "China launches satellites to start building the world’s first supercomputer in orbit". South China Morning Post, May 15, 2025. https://www.scmp.com/news/china/science/article/3310506/chin...
[3] Ben Turner. "China is building a constellation of AI supercomputers in space — and just launched the first pieces". June 2, 2025. https://www.livescience.com/technology/computing/china-is-bu...
> 20 years ago engineers scoffed at the idea of deploying code into a dozen regions (If you didn’t have a massive datacenter footprint) but now startups do it casually like it’s no big deal.
Are there many startups actually taking real advantage of edge computing? Smaller B2B places don't really need it, larger ones can just spin up per-region clusters.... and then for 2C stuff you're mainly looking at static asset stuff which is just CDNs?
Who's out there using edge computing to good effect?
You're making lots of assumptions. They can put like 1000 Raspberry Pis up there, which don't need all that much cooling and have relatively small energy requirements.
For your other concerns, the risks are worth it for customers because of the main reward: No laws or governments in space! Technically, the datacenter company could be found liable but not for traffic, only for take-down refusals. Physical security is the most important security. For a lot of potential clients, simply making sure human access to the device is difficult is worth data-loss, latency and reliability issues.
If you are out of the magnetosphere, wouldn't your data be subject to way more cosmic ray interference, to the point that its actually a consideration?
In LEO, there is a lot of testing and mitigation you can do with your design to help reduce the chance and impact of radiation single events. For example, redundancy for key components, ECC for RAM, supervisor hardware, RAID or other storage tooling, etc.
Geostationary orbiters operate there during the day but this concept would position systems 100x closer to earth and well inside the protective envelope.
Yes. And most server hardware already has at least ECC RAM. You may still want some light radiation shielding to prevent the worst, maybe some heavier shielding for solar flares. But beyond that, simple error correction can be baked into the software - ECC the bootloader and filesystem and you are mostly good to go
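As a toy illustration of software-level mitigation (not how any particular flight system actually does it), here is a naive triple-redundant store with bitwise majority voting; real systems lean on hardware ECC, scrubbing, and watchdogs instead.

```python
# Toy mitigation for bit flips: keep three copies of a value and reconstruct
# by bitwise majority vote on read. Purely illustrative.

def write_tmr(value: int) -> tuple[int, int, int]:
    return (value, value, value)

def read_tmr(copies: tuple[int, int, int], width: int = 64) -> int:
    a, b, c = copies
    # A bit is 1 in the result if it is 1 in at least two of the three copies.
    return ((a & b) | (a & c) | (b & c)) & ((1 << width) - 1)

if __name__ == "__main__":
    stored = write_tmr(0xDEADBEEF)
    # Simulate a single-event upset flipping one bit in one copy.
    corrupted = (stored[0] ^ (1 << 7), stored[1], stored[2])
    assert read_tmr(corrupted) == 0xDEADBEEF
    print("single bit flip corrected:", hex(read_tmr(corrupted)))
```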
There is 0 reason to put a data center in space. For every single reason beyond "investor vibes" you can accomplish the same thing on earth for a significantly lower cost.
They would just keep the failed drives in the chassis. Maybe swap out the entire chassis if enough drives died.
My feeling is that, a bit like Starlink, you would just deprecate failed hardware rather than bother with all the moving parts to replace faulty RAM.
Does mean your comms and OOB tools need to be better than the average American colo provider's, but I would hope that would be a given.
Traditionally in European papers it used to be 18°C, so if Einstein and Schrödinger talk about room temperature it is that.
I've heard in chemistry and stamp collecting they use 25°C but that is heresy.
If you run afoul of US (or EU) regulators, they will never say, "well, it's in space, out of our jurisdiction!".
They will make your life hell on Earth.
At https://news.ycombinator.com/item?id=44397026 I speculate that in particular militaries might be interested.
In space there’s no ambient environment to speak of, so you’re limited to radiative cooling, which is massively inferior to refrigeration.
There's also no 24/7 solar in low Earth orbit, which is where you want to be for latency and serviceability.
Reliable energy? Possible, but difficult -- need plenty of batteries
Cooling? Very difficult. Where does the heat transfer to?
Latency? Highly variable.
Equipment upgrades and maintenance? Impossible.
Radiation shielding? Not free.
Decommissioning? Potentially dangerous!
Orbital maintenance? Gotta install engines on your datacenter and keep them fueled.
There's no upside, it's only downsides as far as I can tell.
Any purported advantages have to contend with the fact that sending the modules up costs millions of dollars. Tens to hundreds of millions.
I can see it now - orbiting crypto mines power on at dawn, die off at dusk, Oscar-7 style
It's outside of any jurisdiction, this is a dream come true for a libertarian oligarch.