One of the selling points they mention is that they won't need to use any fresh water for cooling.
My understanding was that the water demands of data centers on Earth were an overblown issue, minuscule when compared to other uses of fresh water such as watering one acre of farmland.
Not to mention, "used" water is just "warm" water that can then be used again for other purposes.
So are they perpetuating a myth here? Or is water use a bigger issue than I thought?
Well, for one thing you can't eat GPUs, so I'm ok with farmland taking up more water.
Also, the "warm" water has already destroyed ecosystems because the data centers are just dumping it. It's a completely solvable issue if we had any common sense regulations.
It's not a real issue, but it's truthy enough to generate real opposition to datacenter buildout and catalyze AI hate. So definitionally avoiding it from the get-go might end up being worth it.
It really depends where they get the water. If they're pumping an aquifer dry and doing evaporative cooling, they could be boiling away an entire area's water source. If they could figure out how to use salt water, it'd be ideal.
More seriously, space is pretty cold, and will consume large amounts of radiated heat. The problem, of course, is that the amount you can radiate thermally at, say, 150°C is pretty limited. According to the Stefan-Boltzmann equation, it's about 1800 W/m² for a perfect black body. For 5 GW, that would take a square radiator 1.7 km wide, always concealed from sunlight. Realistically, much larger, as the temperature drops as the coolant flows along.
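A quick sketch of that arithmetic (ideal black body; the 150°C and 5 GW figures are taken from the comment above):

```python
# Back-of-the-envelope: radiator area for 5 GW of waste heat at 150 C.
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/(m^2 K^4)
T = 150 + 273.15          # radiator temperature, K

flux = SIGMA * T**4       # radiated flux of a perfect black body, W/m^2
area = 5e9 / flux         # area needed for 5 GW, m^2
side = area**0.5          # side of a square radiator, m

print(f"flux ~{flux:.0f} W/m^2, square side ~{side/1000:.1f} km")  # ~1815 W/m^2, ~1.7 km
```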
I can't wait for the inevitable epic humanity saving mission, where the AI datacenter gets stuck in a murder loop, and we have to send up the best and brightest in a spaceship to unplug the power cable and plug it in again.
I'm inclined to think you're right, but I can't figure out one thing - the command module (apparently) in Apollo 13 got down to 38F without active heating. That's much colder than standard data centre rack temps.
In the example of a data centre, there would be considerably more heat generation than from 3 astronauts, but I would like to understand more. 38F is cold, so heat is clearly lost faster than we might think.
The Apollo passive radiators can dissipate ~2500 Watts into space. With most systems shut down, only ~500 Watts was coming from the remaining systems and the astronauts bodies.
Cool, thank you. So I read this as fundamentally, the heat they dissipated far exceeded the heat they produced. Do you mind opining on what similar figures would be with modest passive radiators and a typical data centre rack heat output?
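For scale, a hedged sketch with assumed figures (an ideal black-body radiator at 300 K, a hypothetical ~30 kW rack; real radiators and racks will differ):

```python
# Ideal radiator area: Apollo's heat load vs one assumed data-centre rack.
SIGMA = 5.67e-8                        # Stefan-Boltzmann constant, W/(m^2 K^4)
T_RAD, T_SKY = 300.0, 3.0              # assumed radiator temp and deep-sky background, K

flux = SIGMA * (T_RAD**4 - T_SKY**4)   # net flux, ~459 W/m^2
for label, watts in [("Apollo (~2.5 kW)", 2_500), ("one rack (~30 kW)", 30_000)]:
    print(f"{label}: ~{watts/flux:.0f} m^2 of ideal radiator")  # ~5 m^2 vs ~65 m^2
```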
Apart from getting 16 sq. km of solar arrays and radiators into orbit - and without jumping to conclusions about whether this is a borderline scam - I can imagine 2 obvious showstoppers:
1) Space debris. This proposal is several orders of magnitude larger than the biggest things in near-Earth orbits, and thus equally many orders of magnitude more likely to be hit by, and to create, space debris.
2) Heat transport - this isn't my home turf, but I can't imagine building something lightweight enough to be launched, yet also capable of transferring enough heat away from the 5 GW core, without it melting/breaking
It's been a while since I read their whitepaper, but I don't recall either of those points being addressed.
LEO is the last place you should worry about space debris.
Space is just unfathomably large. If you aren’t in the same orbital plane, you’re just not going to have a problem. And if you did, Kessler syndrome in LEO is a non problem.
Could be an issue for specific orbital planes in stable orbits, but even there, it’s overblown.
We've officially lost the plot; we will now ship our AI data centers to ~space~ ... This will not work with modern technology.
The sun will be eclipsed by Earth many times per day, requiring you to either shift all workloads or add substantial UPS weight. The radiator grid you need to cool 125 kW is something like 16x the size of the entire data center.
I watched this video last week that went into 3 different scenarios, it's a good watch.
*Table 1. Cost comparison of a single 40 MW cluster operated for 10 years in space vs on land.*

| Cost Item | Terrestrial | Space |
|:------------------------------|:--------------------------------|:----------------|
| Energy (10 years) | $140m @ $0.04 per kWh | $2m cost of solar array |
| Launch | None | $5m (single launch of compute module, solar & radiators) |
| Cooling (chiller energy cost) | $7m @ 5% of overall power usage | More efficient cooling architecture taking advantage of higher ΔT in space |
| Water usage | 1.7m tons @ 0.5 L/kWh | Not required |
| Enclosure (Sat. Bus/Building) | Approximately equivalent cost | Approximately equivalent cost |
| Backup power supply | $20m | Not required |
| All other DC hardware | Approximately equivalent cost | Approximately equivalent cost |
| Radiation shielding | Not required | $1.2m @ 1 kg of shielding per kW of compute and $30/kg launch cost |
| Cost Balance | $167m | $8.2m |
It is, unless you take Musk's hype about Starship as fact. With rockets that are actually available today, the best price is about $1500/kg to LEO, so either they're presuming the whole setup weighs in at 3-4 tons (which is less than the shielding alone), or that they can get it launched for a couple of orders of magnitude less than what's on the market now (and they do say they assume $30/kg).
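The sensitivity to that launch-price assumption is easy to make concrete (my arithmetic, using only the table's own figures):

```python
# Launch cost of the shielding alone: 1 kg per kW of compute x 40 MW = 40 tonnes.
shielding_kg = 1.0 * 40_000

for usd_per_kg in (30, 1_500):   # their assumed $30/kg vs roughly today's best price
    cost_m = shielding_kg * usd_per_kg / 1e6
    print(f"${usd_per_kg}/kg -> ${cost_m:.1f}m")  # $1.2m (matches the table) vs $60.0m
```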
Actual engineering question: how large can you scale a cooling system in space, say radially from a central point? Surely at some point it just doesn't work anymore, or you spend more energy moving heat out to where you can radiate it away than you can actually radiate.
I believe there is math for this very question. A similar principle applies to heatsinks: you can't just keep increasing the heatsink on a CPU, because the outer edge of a large heatsink won't go above ambient, and any heatsink bigger than that is wasting material.
I would guess a system where coolant is pumped, with the added heat of the pumping itself, has a similar problem. This is probably further exacerbated by the fact that you can't do clever things to increase surface area - your radiating surfaces must all "see" the black of space in order to function.
So many questions, like how you would protect against bit flips and damage to circuits.
"10x lower energy costs and reduce the need for energy consumption on Earth." I am not sure if we need a rocket scientist to calculate the energy costs of manufacturing and sending a rocket to outer space versus putting that fuel into a generator and just letting it run.
What happens when the servers need to be retired due to some unpatchable bug?
Yeah, radiation is the enemy of integrated circuits, and cosmic radiation is more damaging the smaller the features get.
You pretty much have to have multiple redundancy and special space-rated HW, which I wouldn't be surprised is stuck on super-old process nodes to mitigate this exact issue.
Tbf, leaving aside the claims about datacentres in space, working with Nvidia on radiation hardening its latest generation chips would be a good project...
So many questions to be asked, I don't know where to start. What's the upside of bunching up all the servers into a single megastructure rather than separate satellites?
The rate of radiative cooling scales proportionally to (T^4-Tenv^4) which approximates to just T^4 in space (Tenv = 3K). The hotter they can run it, the smaller heatsinks they need; for every doubling of temperature, the heatsink area can be reduced by a factor of 16. Also, it might be possible to boost the output temperature, e.g. with a chemical heat pump for even smaller heat sinks.
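That T^4 scaling is worth seeing numerically (a sketch; the 1 MW waste-heat figure is illustrative, emissivity assumed ideal):

```python
# Radiator area needed vs radiator temperature; flux ~ T^4 - T_env^4, T_env ~ 3 K.
SIGMA, T_ENV = 5.67e-8, 3.0

def radiator_area(power_w: float, t_rad_k: float) -> float:
    """Ideal black-body radiator area for a given waste-heat load."""
    return power_w / (SIGMA * (t_rad_k**4 - T_ENV**4))

P = 1e6  # illustrative 1 MW of waste heat
for t in (300, 400, 600):
    print(f"T = {t} K -> {radiator_area(P, t):8,.0f} m^2")
# 300 K -> ~2,177 m^2; 600 K -> ~136 m^2: doubling T cuts the area ~16x.
```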
How is a multiple square-kilometer radiator not just an inevitable Kessler syndrome disaster?
Edit: Some back of the envelope calculation suggests that the total cross-sectional area of all man-made orbiting satellites is around 55,000 m^2. Just one 4 km x 4 km = 16,000,000 m^2 starcloud would represent an increase by a factor of about 300. That's insane.
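Reproducing that edit's arithmetic (both inputs are rough estimates from the comment, not measured values):

```python
# Cross-sectional area: one Starcloud array vs all existing satellites (estimates).
existing_m2 = 55_000            # rough estimate for all current satellites combined
starcloud_m2 = 4_000 * 4_000    # one 4 km x 4 km array = 16,000,000 m^2

print(f"~{starcloud_m2 / existing_m2:.0f}x")  # ~290x
```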
Not sure what the slippery slope is here. The linked page imagines a 4 km x 4 km radiator/solar array. The cross-sectional area of the array is going to be directly proportional to the probability of impacting high-velocity space debris, and in such an event the amount of debris generated could also scale with the area of the array. This seems bad.
See my edit. Just one starcloud would represent an increase in a risk factor of over 300 c.f. status quo. Then multiply that by the number of starclouds you think would be deployed.
Even with a numerator-only view, I suspect it's not fair to characterize the "risk factor" as going up 300x. There's a lot more nuance about orbits in space.
Tell me the nuance then. If people have concerns about Kessler syndrome at the Starlink scale, then why wouldn't something literally 1000x bigger be even more concerning?
I know this in the same way that, even though I don't know the exact credence to assign to particular bad effects from global warming, I can confidently say that increasing CO2 emissions by a factor of 1000 would be a bad thing. This is not because I have done a simulation, but because I assume that while concerned experts might be wrong in the details, they are probably not wrong by a gap of 3 orders of magnitude.
I already did. Your reply/edit merely repeated your prior observation.
Getting back to the point:
You literally claimed that one of these would "inevitabl[y]" trigger a Kessler effect with no proof.
> something literally 1000x bigger be even more concerning.
Again, this isn't convincing if you don't have the denominator/context. Think about it: you still can't answer how many of these are needed to trigger the Kessler effect.
BTW, "increase by a FACTOR of about 300" != "increase in a RISK FACTOR of over 300"
The solar array is 4 km by 4 km. The whole Earth, with its 6400 km radius, only gets hit by a Prius-sized asteroid once per year. So the risks are much lower. I guess the array may be hit by many micro-asteroids though, but it should be possible to engineer some level of tolerance for that.
You'll never be able to do maintenance on or upgrade these things. The up-front cost seems extremely high given the risk of hardware failure or obsolescence at data center scales.
I could see gov imaging satellites with direct encrypted laser communications to GPUs in orbit being attractive: image processing, movement pattern analysis, multi-spectral, and, as they mentioned, radar.
I thought that refrigerating things in space used a lot of energy, because heat cannot dissipate into the void of space.
Moreover, why are the energy costs only 10x lower when in space you have unlimited access to sun power? Is it the cost of building the energy production infrastructure?
Lots of things limit the benefit of putting PV in space. UV damages the semiconductors faster, ditto micrometeoroids, and it's just plain expensive to put stuff up there in the first place…
It's not a slam-dunk "no", we are seeing developments on all metrics. It's just that right now, I wouldn't be surprised if the claim of x10 improvement was anywhere from correct to x100 over-optimistic.
Replacing faulty nodes or equipment in space seems totally reasonable... It's not like getting faulty drives replaced in my datacenter racks doesn't already take weeks/months...
I have had people point out that building a Dyson sphere is pretty much a dumb idea, and that there's no conceivable reason why we would build one even if we could.
I want that type of money for playing out something which can be pre-calculated and is just not a smart idea at the moment at all.
I don't get it. I really don't.
You can calculate the minimum cost, you can calculate heat and maintenance, and probably also the expected failure rate for the hardware.
But even if the failure rate is something you need to figure out, that would probably be some R&D thing which you would test and verify in a very small and cheap setup.
Same stupid shit as the mirror in space which will beam sunlight back to some PV panels on earth.
Cool stuff in a non-capitalistic system, but otherwise it just shows that plenty of people have too much money to invest in weird things without understanding them at all.
“The only cost on the environment will be on the launch, then there will be 10x carbon-dioxide savings over the life of the data center”
And how long is that life exactly? There is zero chance this is a net positive for carbon emissions, much less a remotely economical way to build or operate datacenters.
Water consumption of a data center is not a real thing. You don't just consume the water; you need it to move heat, and you don't need to remove that heat by vaporization.
You could easily put this heat to use if you actually wanted to, heating nearby houses or feeding chemical processes.
It's a legal issue.
And it's very resource-heavy to put anything in space...
Would it be more cost-effective and more sustainable to invest heavily in graphene semiconductors than in space-based datacenters? Is that a false dilemma?
Aren't there advantages to fabricating graphene oxide (GO) and carbon nanotubes (CNTs) in microgravity?
Energy went into mining, extracting, refining, and transporting all the raw materials needed to make these chips.
This is typical tech-industry greenwashing, as the industry fails to accept its destructive influence on the planet.
We need practical solutions that reduce consumption and waste and actually address the issues. We don't always need more; we need to find a way to use less.
It will soon come back from over the other horizon. :)
Of all the things insane about this proposal, I'm not very bothered about this one. It could be high availability and distributed by default. Like having redundant datacenters with eventual consistency on all continents. Except the continents are spinning really fast above you...
The animation is wild... 5GW concentrated up there at the top of a field of solar panels - it's not a Starcloud, it's an electric Starfurnace.
Why can't it be geostationary? Laser communication can get you gigabit speeds today. That would take a month to transmit GPT-5's estimated 280TB training corpus, which is acceptable. Latency does not matter.
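A quick check of that "a month" figure (straightforward arithmetic on the numbers in the comment):

```python
# Transferring an estimated 280 TB corpus over a 1 Gbit/s optical link.
corpus_bits = 280e12 * 8   # 280 TB in bits
link_bps = 1e9             # 1 Gbit/s

days = corpus_bits / link_bps / 86_400
print(f"~{days:.0f} days")  # ~26 days, i.e. roughly a month
```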
With geostationary orbit you won't ever get less than 200ms round-trip latency from the ground (at the speed of light).
Fine for some applications, but a massive regression from modern fiber infrastructure and definitely not suitable for everything (just think how slow the modern web is even with 15ms connections to datacenters). There's a reason why Starlink & co are trying to set up communication satellites closer to the ground.
Round-trip to GEO will add 238.7 milliseconds to whatever other infra you have over the last 200 km vertically* and whatever along the ground. It's probably fine for some things, but not for everything.
* while there could, in principle, be no extra infra in the last 200 km vertically, that means someone on the ground is talking directly to GEO. As per similar discussion about big PV space stations beaming power to the ground, your minimum ground spot size for a transmitter this big and this far away is still tens of km, which limits the other parts of your overall system design.
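The 238.7 ms figure is just the two-way light travel time to geostationary altitude:

```python
# Minimum round-trip time to GEO, ignoring all ground-side infrastructure.
GEO_ALTITUDE_KM = 35_786
C_KM_PER_S = 299_792.458   # speed of light in vacuum

rtt_ms = 2 * GEO_ALTITUDE_KM / C_KM_PER_S * 1_000
print(f"{rtt_ms:.1f} ms")  # ~238.7 ms
```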
It's going to be fun constantly repairing all those solar arrays. We'll be destroying our planet with the rocket launches alone. But hey! The more ridiculous the idea, the greater the chance that Trump and his conspiracy-laden circle will embrace it. It works in science fiction movies and novels, why not in reality, duh. /s
Last time these folks were mentioned on HN, there was a lot of skepticism that this is really possible to do. The issue is cooling: in space, you can't rely on convection or conduction to do passive cooling, so you can only radiate away heat. However, the radiator would need to be several kilometers big to provide enough cooling, and obviously launching such a large object into space would therefore eat up any cost savings from the "free" solar power.
More discussion: https://news.ycombinator.com/item?id=43977188
Maybe this is reductive, but there are times when I'm concerned that the only thing keeping me from getting gobs and gobs of startup funds is the fact that I understand basic principles of engineering in space.
I could be wrong and this will be a slam dunk. To me, however, the costs/complexity (cooling, SRP perturbation, stationkeeping, rendezvous, etc.) far outweigh the benefits of the Cheap as Free (tm) solar power.
Assuming these people don't understand that their ideas are unworkable is a mistake. Don't believe for a second they are stupid or ignorant.
The difference between a criminal and a law-abiding citizen isn't that the citizen knows that crimes are wrong, it's that the citizen cares that crimes are wrong and the criminal doesn't.
>they are stupid or ignorant.
Nope, probably the more apt description is 'in denial'.
Reading the paper they wrote on this from their GitHub site, it does take into account the thermal management aspects quite considerably.
https://starcloudinc.github.io/wp.pdf
Your thinking seems more risk-averse, which is similar to my own. However, that doesn't mean these types of things can't happen without the business drivers, if enough attention is given to them. Costs are often high because we're comparing one thing which has significant efficiencies built into its supply chain against something that doesn't, which by its nature drives up the cost. Perhaps Nvidia have money to burn on trying something.
You've also got the problem of cosmic radiation flipping bits. Your fault-tolerant architecture probably mitigates this with redundancy, with the extra servers again eating into the purported advantages of the extra solar power. Dealing with the PITA of single-event upsets is something developers of edge data-processing software in space put up with to avoid the latency issues that data clouds in space would introduce.
In all seriousness, if AI models can handle quantization, they can handle some flipped bits from time to time! There are probably some fascinating papers to be written around how to choose which layers in an LLM architecture could benefit more than others from redundant computation in a high-radiation environment.
Brilliant, to turn up the model temperature we just hinge open the shielding. I call dibs on the patent!
Ok, has anyone patented chips with a radioactive source glued to them? For "true" randomness.
If not, I want dibs on it.
https://en.wikipedia.org/wiki/Hardware_random_number_generat...
> and even the nuclear decay (due to practical considerations the latter, as well as the atmospheric noise, is not viable except for fairly restricted applications or online distribution services)
yeah, I think the space weather experts would have fun statistically analysing the single-event-upset RNG :)
I wonder if "normal" RDIMM ECC would be enough to mitigate most of those radiation bit-flipping issues. If so, it wouldn't really be a point of difference from earth-based servers, since most enterprise servers use RDIMM ECC too.
You'll get bitflips elsewhere besides just in RAM. A bitflip in L1 or L3 cache will be propagated to your DIMM and no one will be the wiser.
I thought server CPUs already handled this? E.g. for Epyc https://moorinsightsstrategy.com/wp-content/uploads/2017/05/...
> Because caches hold the most recent and most relevant data to the current processing, it is critical that this data be accurate. To enable this, AMD has designed EPYC with multiple tiers of cache protection. The level 1 data cache includes SEC-DED ECC, which can detect two-bit errors and correct single-bit errors. Through parity and retry, L1 data cache tag errors and L1 instruction cache errors are automatically corrected. The L2 and L3 caches are extended even further with the ability to correct double errors and detect triple errors.
Sun Microsystems famously had this problem with their servers using the UltraSPARC II chips, with cache SRAM that didn’t have ECC. Later versions of their processors had ECC added.
Those do ECC already
What about the registers?
My initial thought was "cooling is going to be a fun challenge, in addition to data transfer, latency, hardware maintenance and all that other fun stuff". It truly feels like one of those, you-have-too-much-money moments.
By my back of the envelope calculations, the radiators would be comparable to the solar arrays, probably somewhat smaller and not massively bigger at least.
Care to share them?
Extremely rough one significant digit analysis from first principles, containing a lot of assumptions:
For solar panels:
Assuming an area of 1000 square meters (roughly a 30 m x 30 m square), solar irradiance of 1 kW/m^2, and efficiency of 0.2, the resulting power is 200 kW.
For radiators:
With the Stefan-Boltzmann constant rounded to 6E-8, a radiator temperature of 300 K, and emissivity of one, we get total radiator power 1000 x 6E-8 x 300^4 = 486 kW.
The radiator number is bigger so the radiator could be smaller than the solar panels and could still radiate away all the heat. With caveats.
Temperature difference across the radiator is the biggest open question, and the design is very sensitive to it. Say your chips run at 70 C (~340 K): what cold-side temperature do you need to cool down to? What solar and Earth flux hits the radiator? It depends on geometry, and so on.
And then in reality part of the radiator is cooler and radiates far less, so most of the energy is radiated from the hot part. How low does the cold-end temperature need to be in order not to fry your chips? I guess you could run at very high flow rates and small temperature deltas to minimize radiator size, but then the rest of the system becomes heavier.
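The same one-significant-digit analysis, as a runnable sketch (identical assumptions to the comment above):

```python
# Solar input vs radiator output for 1000 m^2 of each, one significant digit.
AREA = 1000.0                              # m^2

solar_kw = AREA * 1000 * 0.2 / 1e3         # 1 kW/m^2 irradiance, 20% efficiency
radiated_kw = AREA * 6e-8 * 300**4 / 1e3   # rounded sigma, 300 K, emissivity 1

print(f"solar {solar_kw:.0f} kW vs radiated {radiated_kw:.0f} kW")  # 200 vs 486
```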
There's a very clever scheme I remember reading about a while ago where you dump the heat into an oil that you then spray in a fine mist towards a collector. You get a colossal surface area that way, in a very confined volume, with not much more mass than the coolant fluid you already need; and it's relatively easy to homogenise the temperature across the radiating particles. I seem to recall that it got as far as DuPont coming up with a specific coolant mix for the job; the rest of the system is a relatively well-understood (if precise) nozzle/collector design, so you don't end up squirting your coolant off somewhere you can't catch it.
In space this wouldn't really work since there's no conduction or convection.
Think of a big ball of droplet mist. From the point of view of a droplet in the center, it receives heat radiation from all the droplets around it, and it can only radiate heat to whatever black sky it sees - which might be none, if its "sky" is entirely filled by other hot droplets. So it doesn't cool at all.
No trick can push the total radiated power beyond what is proportional to the macroscopic surface area.
Are there any liquids with a low enough vapour pressure for this sort of thing?
Question: for a larger system, can a heat pump be used to increase the temperature of the radiator without making the rest of the system hotter? Thus radiating more heat from fewer panels?
Your radiator is already at around 300 K, so the pump's efficiency needs to be high enough. A 50 K boost gives (350/300)^4 ≈ 85% more radiated flux per unit area, but at COP = 5 the pump's own work also puts out 20% more heat to reject...
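A sketch of that tradeoff under idealized assumptions (a single uniform radiator temperature, negligible sky background, absorbed sunlight ignored, and the COP figure assumed):

```python
# Heat pump raising the radiator from 300 K to 350 K at an assumed COP of 5.
t_cold, t_hot, cop = 300.0, 350.0, 5.0

flux_gain = (t_hot / t_cold) ** 4   # ~1.85x more W/m^2 at the hotter temperature
heat_growth = 1 + 1 / cop           # pump work adds 20% more heat to reject

print(f"radiator area factor: {heat_growth / flux_gain:.2f}")  # ~0.65, i.e. smaller
```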
In addition to the math, you can also look at existing examples, like how large the ISS radiators need to be relative to its solar panels. Like this project, it is essentially a closed system where all power generated by the solar panels will eventually be converted to heat that needs to be dissipated.
I'm skeptical that it makes any economic sense to put a datacenter in orbit, but the focus on the radiators in the last discussion was odd - if you can make the power generation work, you can make the heat dissipation work.
Their white paper touches on the issue, though it seems slightly hand-wavy, without much quantification. They could potentially take advantage of the temperature gradient against deep space and exploit the Seebeck effect while dissipating heat.
Deep space? So they want to be outside geostationary orbit?
In LEO, pointing at deep-space just means away from Sun/Earth. You don't have to be in deep-space to use it for radiating heat away.
p3 of their white paper https://starcloudinc.github.io/wp.pdf akshually...
Even beyond cooling, just getting all the hardware up there is extremely costly, and for what benefit over ground based DCs? The cooling is the ongoing problem but the cost of lifting it there obliterates all the other problems, IMO.
SpaceX thinks it will reduce the cost by 90% with Starship, so they are probably calculating off that.
On the linked page there are animations using Starship.
And who is The Law, in space? What's to prevent, e.g., Amazon Kuiper or Musk's Starlink from crashing one of their vehicles into the array when they want to take over their market?
My understanding is that the normal rule here is that the launching state has jurisdiction over (and international legal responsibility for) what is done by a spacecraft, but I’d bet that if private parties crashing their spacecraft into those of other private parties with widespread, economically significant use became a thing, a whole lot of countries in which one or more of the companies have assets or interests would discover jurisdiction in underused provisions of their domestic law rather quickly, no matter where either of the craft involved were launched.
If the Mass Effect games have taught me anything, it's that heat dissipation in outer space is hard.
They taught me that Sir Isaac Newton is the deadliest son-of-a-bitch in space, which is probably something else these space data centers will struggle against. It'll be interesting to see how much shielding they have against impacts; there was a Soyuz that had a coolant leak blamed on a micrometeorite strike.
The Github paper seems to indicate they have considered the thermal aspects fairly heavily and mention that "conduction and convection to the environment are not available in space".
Not sure if I follow, really. Cooling from its own generated heat? Are we even sure the system would get that hot in the first place? Temperatures out there can plunge to -200 degrees. If needed, they'd cool it just like they keep the James Webb Telescope cool.
The Webb telescope is a _wildly_ different apparatus, designed from the ground up to run as cool as possible, and with an effectively unlimited budget. It lives in the shadow of the Earth behind multiple layers of shielding. These "data centers" need to live in direct sunlight and operate as cheaply as possible _at scale._ Very little of Webb's tech is applicable.
Keeping things cool in space is very hard. On earth we usually transfer heat from one medium to another (water to water, water to air, etc.). In space that's not possible because even though the matter in space is quite cold, there is very little. Therefore the only real way to get rid of heat in space is to radiate it away (think infrared light bulb). The James Webb Telescope does the same thing.
There are two real challenges in running a data center: how to get power in (reliably), and how to get heat out.
Any data center that isn't generating massive heat is a waste of our time.
And no, JWST is not doing industrial scale cooling.
Thank you for the responses. I understand the issue a bit more now.
This is a big thing never shown in sci-fi. For example, those huge torch ships in The Expanse would need gigantic radiators. Even if the drive were upwards of 90% efficient the waste heat would melt the engine and the rest of the ship.
Even the ISS has sizable radiators. The Shuttle had deployable radiators in the form of the bay doors if my memory serves me correctly.
Oddly enough the otherwise dumb Avatar films are among the only ones to show starships with something approaching proper radiators.
There’s no air resistance in space so radiators don’t impact your flight characteristics.
The Mass Effect video games talk about cooling ships, with the warships glowing red from heat if they go too fast
I enjoyed seeing it described in those games :)
I'm pretty sure it was that series that also described https://en.wikipedia.org/wiki/Liquid_droplet_radiator , with the side effects of different ships having very distinct heat patterns because of their radiator patterns. And that if a ship ever had to make a turn while they were active, big glowing arcs of slowly-cooling droplets would be flung out into space and leave a kind of heat plume.
> Oddly enough the otherwise dumb Avatar films are among the only ones to show starships with something approaching proper radiators.
I imagine it's the same reason James Cameron is a world expert on submersibles - the guy picks individual topics in his movies to really get right.
Neal Stephenson's _Seveneves_ covers these dynamics in detail :)
The book Saturn Run has an interesting design utilized for a spaceship.
Your memory serves well with respect to the Shuttle. Astronaut Mike Mullane, from his autobiography Riding Rockets:
> Next [after loading the computers with on-orbit software] we opened the payload bay doors. The inside of those doors contained radiators used to dump the heat generated by our electronics into space. If they failed to open, we’d have only a couple hours to get Discovery back on Earth before she fried her brains. But both doors swung open as planned, another milestone passed.
Alternatively, assuming they are aware of the cost, what does this say about what they are implying the cost of electricity is going to be?
Hey, at least it's not going to end up with a bunch of actual people getting treatment based on invalid blood test results.
Their website pitches it as 16 square km
Makes me wonder about building a 16 square km datacenter on earth. I wonder if building that way, with a lower "data density", would allow for more passive cooling, and you'd have the large solar field.
Wonder if that would be less impactful than however many rockets they'll need to send up, plus you could, ya know, ~drive~ bike to a failed machine.
It says "Starcloud plans to build a 5-gigawatt orbital data center with super-large solar and cooling panels approximately 4 kilometers in width and length."
So, it's the solar/cooling panels that make up that space, not the data centre per se.
I know. I'm saying what if you build lower density data centers that could be more passively cooled. Apparently being in space is no issue for latency, so I can't see why building it on earth in a remote-ish area would matter.
I can think of some parts of earth where passive cooling isn't a major problem, and some of them even have power sources...
Should we be adding massive sources of heat (datacenters) to regions that can easily passively cool them? It sounds like that would be somewhere around the Arctic, which is already seeing record high temperatures both in winter and summer. Maybe if we manage to radiate all the heat directly back into space by mimicking snow…?
Wouldn't a 16km² gigantic solar roof on Earth already cover the energy needs that they're pitching will be saved with this space data center?
No. It would need to be larger, probably by a factor of 3 or 4, for a couple of reasons (see the rough arithmetic after the list):
1) The atmosphere attenuates sunlight (even when it's not cloudy)
2) The solar array in orbit can pivot to face the sun all the time.
3) While most orbits will go into earth's shadow some of the time, on average they'll be in sunlight more of the time than a typical point on the surface.
see https://en.wikipedia.org/wiki/Solar_irradiance
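A crude version of that comparison (the sunlit fraction and capacity factor here are my assumptions, not figures from the thread):

```python
# Average useful W/m^2 of panel: orbit vs a good desert site (assumed factors).
orbit = 1361 * 0.65    # solar constant x typical LEO sunlit fraction (assumed)
ground = 1000 * 0.25   # ~1 kW/m^2 peak x ~25% desert capacity factor (assumed)

print(f"~{orbit / ground:.1f}x")  # ~3.5x, consistent with the factor of 3-4 above
```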
There's already about 0.4 square km of solar panels across the Starlink constellation. (~4,000 v2 satellites at ~100 meter^2 each).
This project seems 40x larger than all of Starlink's constellation combined. So quite huge.
I learned something interesting here, thanks. I've never really thought about it so I'd always assumed space = cold so that would be fine.
Space is cold. There are just very few cold molecules to take over the energy from your hot molecules.
Here on earth we are surrounded by many molecules that are not so cold, but colder than us, and together they can take a lot of our excess heat energy away.
Space is not cold. Space is empty. It has no real value for temperature.
Stuff in space does.
> Space is empty.
This prompted my curiosity. None of the following contradicts the thrust of your message, but I thought the nuance is interesting to share.
Interstellar space isn't a vacuum. Space is mostly empty compared to Earthly standards, but it still contains gas (mostly hydrogen and helium), dust, radiation, magnetic fields, and quantum activity.
The emptiest regions are incredibly sparse, but not completely empty. Even in a perfect vacuum, quantum mechanics predicts that particle-antiparticle pairs constantly pop in and out of existence, so empty space can be said to be buzzing with tiny fluctuations.
> Space is not cold. It has no real value for temperature. Stuff in space does.
The cosmic microwave background radiation, the left-over energy from the Big Bang, sets a baseline temperature of about 2.7K (-270°C), just above absolute zero.
Temperature depends on particle collisions, and since space isn't a vacuum, just incredibly sparse, one can talk about the temperature of space, but you're right that what is typically more relevant is the temperature of "specific" objects.
Yeah, it's totally obvious now that it's been pointed out, I'd just never thought it through properly.
Interesting. What if we put datacenters on the ocean floor with nuclear power, like the Army Janus program?
TFA> “In space, you get almost unlimited, low-cost renewable energy,”
Why is this exclusive to space? If you're powering datacenters on solar, one would think covering the Sahara or another large desert in datacenters would be easier than launching them into space. Renewable energy is just as plentiful and free there, you can connect it to the rest of the world with multiple TB/s of fiber links, and the construction/maintenance costs would be a few orders of magnitude less.
But then you wouldn't be able to launch to space. It would also seem like a very mundane project, wouldn't it?
Given the water needs of data centers and the ongoing and upcoming water scarcity, I imagine the problem of heat dissipation seems easier to solve, long term, in space.
We can and do build data centres that don't use evaporative cooling, evaporation is just often the cheapest option in places with large natural water sources.
Wut?
They state that in 10 years all data centers will be in outer space. I state that in 10 years we will look back and think this was a ridiculous idea. The meta and maintenance costs, the pollution of sending them to space, the space pollution itself, the outer-space radiation, the extra redundant error correction needed, and much more all speak against this.

Why not throw that trillion dollars into optical-computing chip research? Why not create better sustainable methods here on earth? We could run a single data center down here, or pay a million times more to do this in space. The argument that we are polluting Earth down here is very weak. Yes, we do, but why on earth do we then not invest more in research for solving those problems?

There are startups out there that will one day solve these issues. And then space data centers will be something for the Star Trek age, which humanity will probably never achieve.
I think the bigger thing about a space-based data center is that it's not on anyone's land, and not easy to inspect or capture.
Solar energy available around the clock allows it to be self-sufficient for a long time.
I suppose there will be some demand for high-security, high-price setups like that.
Either the satellite is geostationary and doesn't have 24/7 sun exposure as an energy source.
Or they are not geostationary, but that also means the datacenter will connect to a different earth base station as it moves, so the data access route would change and latency would increase, which would be unacceptable for a lot of use cases.
You would then need to replicate and synchronise customer data across the different space data centres to make it possible to access said data in constant and low-latency time.
> Either the satellite is geostationary and doesn't have 24/7 sun exposure as an energy source.
Due to the Earth's axial tilt [1], geostationary orbits generally have 24-hour sun exposure, except during the eclipse seasons around the equinoxes [2], when a satellite can spend up to about 70 minutes a day in Earth's shadow.
[1] https://en.wikipedia.org/wiki/Axial_tilt
[2] https://www.nesdis.noaa.gov/our-satellites/currently-flying/...
You can even see this in action via NOAA's CCOR-1: https://www.swpc.noaa.gov/news/earth-makes-appearances-goes-...
> that it's not on anyone's land
Oh you can bet that, if we assume this happens in 10 years, various countries will absolutely do a "land grab" up high. There is no escaping it.
Space is no one's land by a number of active international treaties, and it is also very large and empty, so enforcing boundaries is hard, except by actively killing spacecraft up high. There is no viable "space defense" comparable to atmospheric air defense; were it not so, spy satellites wouldn't exist.
But read/write access to the datacentre is on someone's land, and spacefaring powers without access to that can still interfere with its effective operation...
The access is the customer's concern, much like starlink.
The customer is going to be extremely concerned when it turns out physically locating datacentres in space doesn't actually render the data inaccessible or uncensorable...
To render your data inaccessible, use /dev/null. For practical purposes, some access is required.
Censoring data in a datacenter in space requires either administrative access or physical access. The latter is complicated in space; the former depends on how much you trust the operator, and on your security posture.
Since the admins aren't in space, actors that want to use administrative privileges to interfere with your data have no less access to it than if the datacentre was located on the ground.
The difference between the US government censoring a datacentre in orbit and one in California is a matter of cost rather than practicality. And it's actually easier for other spacefaring powers to interfere with the orbital one in a deniable manner, if it's that important to them, than with the datacentre in California.
This depends on your threat model. If your model is mostly legal threats from less-than-nation-state actors, being formally outside any terrestrial jurisdiction may help. If you're trying to protect yourself from a big threat that won't mind raiding (or bombing) your DC without a court order, quite possibly locating it in space is not the best idea.
Once it’s easy enough to launch the hundreds of launches it would take to build one of these, it will also be trivial to launch a drone that can physically attach and attack them. This is the opposite of a secure facility.
Makes you think. Could a rich enough rogue operator attack such a data centre to, say, cause a stock crash, and then profit more from that than the mission cost them?
Whats going to be delivered first - Tesla FSD or a space based data center?
Right. Also, wouldn't space debris eventually hitting the huge solar panel system be an issue?
> “In 10 years, nearly all new data centers will be being built in outer space,” Johnston predicts.
Can I bet on the contrary odds? Could throw down my whole retirement with confidence
Yeah, who throws out this sort of timeframe in earnest? We haven't built anything big in space since the ISS (which is in LEO, mind you, not "outer space"), and we're supposed to be building full data centers up there within a decade? Give me a break; that's an Elon-level prediction.
https://longbets.org/
I read it as something an ambitious founder would say, not to be taken literally.
Think: "AI will replace all software developers in 6 months"
This used to be called fraud, now it’s cutesy lying?
I think now it's called 'the pitch deck'
Yep. It is now legally called puffery if you commit massive fraud. Truly we live in the best of all possible timelines.
"Naughtiness," to use the technical term (https://paulgraham.com/founders.html).
> Sam Altman of Loopt is one of the most successful alumni, so we asked him what question we could put on the Y Combinator application that would help us discover more people like him. He said to ask about a time when they'd hacked something to their advantage—hacked in the sense of beating the system, not breaking into computers. It has become one of the questions we pay most attention to when judging applications.
This doesn’t seem like naughtiness. Seems like incoherence
It being an unmeasurable claim is why they get away with it.
Musk has been doing it for more than a decade now and didn't really face any real problems doing it...
Didn’t face any problems doing it… you mean when he was charged by the SEC for lying on Twitter? Or do you mean when he was forced to buy Twitter to avoid another case against him?
One of the selling points they mention is that they won't need to use any fresh water for cooling.
My understanding was that water-demands on Earth were an overblown issue and minuscule when compared to other uses of fresh water such as watering one acre of farmland.
Not to mention, "used" water is just "warm" water that can then be used again for other purposes.
So are they perpetuating a myth here? Or is water use a bigger issue than I thought?
Minor correction: the water is evaporated. It remains in the water cycle but is removed from the water source for any downstream users.
Well, for one thing you can't eat GPUs, so I'm ok with farmland taking up more water.
Also, the "warm" water has already destroyed ecosystems because the data centers are just dumping it. It's a completely solvable issue if we had any common sense regulations.
It's not a real issue, but it's truthy enough to generate real opposition to datacenter buildout and catalyze AI hate. So definitionally avoiding it from the get-go might end up being worth it.
It really depends where they get the water. If they're pumping an aquifer dry to do evaporative cooling, they could be boiling away an entire area's water source. If they could figure out how to use salt water, it'd be ideal.
Just run your closed loop cooling through a heat exchanger in sea water. They probably do something like this already.
I think they prefer to use evaporative cooling though.
Yes, pouring more heat into the already warming oceans is surely a safe plan.
The added heat is so minuscule that it doesn't have any effect on the temperature of the ocean.
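A back-of-the-envelope sketch for anyone who wants numbers (round figures assumed: the 5 GW from the article, standard reference values for ocean mass and seawater heat capacity):

    # Temperature rise from dumping 5 GW of waste heat into the ocean for a year.
    P = 5e9                    # W, the article's 5 GW figure
    E = P * 3.156e7            # J per year, ~1.6e17 J
    ocean_mass = 1.4e21        # kg, approximate mass of Earth's oceans
    c_seawater = 3990          # J/(kg*K), specific heat of seawater
    print(f"{E / (ocean_mass * c_seawater):.1e} K/year")  # ~2.8e-08 K/year

Tens of nanokelvin per year: truly negligible next to radiative forcing.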
>Starcloud’s space-based data centers can use the vacuum of deep space as an infinite heat sink.
The famously heat conductive vacuum...
Someone fedex a vacuum flask full of hot coffee to nvidia HQ with an explanatory note.
More seriously, space is pretty cold and will absorb large amounts of radiated heat. The problem, of course, is that the amount you can radiate thermally at, say, 150°C is pretty limited. According to the Stefan–Boltzmann law, it's about 1,800 W per square meter for a perfect black body. For 5 GW, that would take a square radiator 1.7 km wide, always concealed from sunlight. Realistically, much larger, as the temperature drops while the coolant flows along.
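For anyone who wants to check the arithmetic, here's a minimal Python sketch of that estimate (assuming an ideal one-sided black-body radiator facing 0 K, with no solar or Earth heat load):

    import math
    SIGMA = 5.670e-8           # Stefan-Boltzmann constant, W/(m^2 K^4)
    T = 150 + 273.15           # radiator temperature, K
    flux = SIGMA * T**4        # ~1818 W/m^2 for a perfect black body
    area = 5e9 / flux          # m^2 needed to reject 5 GW
    print(f"{flux:.0f} W/m^2, square side {math.sqrt(area)/1000:.2f} km")  # ~1.66 km

Real emissivity below 1, view factors, and sunlight all make this bigger, but the kilometre scale is robust.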
Altman: has stake in nuclear power and AI companies
Also Altman: Let's build gigawatts of nuclear for AI
Musk: has stake in space and AI companies
Also Musk: Let's build AI datacenters in space
Starcloud is the concept I'll show my friends to demonstrate that datacenters have hit peak hype.
Can't wait for an alien to NIMBY one of these.
This. We really have hit Internet bubble hype levels, this could be a Silicon Valley episode. "My pitch deck is one slide. 'It's AI... IN SPACE!'"
The banker slide with checkboxes will be compelling:
[x] no permitting, cultural, wildlife
[x] no local opposition.
[x] site control
Unfortunately, this is balance sheet financing so big boys only.
I can't wait for the inevitable epic humanity saving mission, where the AI datacenter gets stuck in a murder loop, and we have to send up the best and brightest in a spaceship to unplug the power cable and plug it in again.
The year is 2054...
There are no offline, terrestrial porn backups.
Humanity has lost connection with the only remaining porn footage on a space datacenter called "Old Family Cooking Recipes."
A rag tag team, led by Ellen Degeneres (who still can't get hired), must perform a daring mission to press the power On/Off switch.
No need to wait for nimby. Plenty of people unhappy about rocket launches.
Add the fact it's going to host AI and crypto and some will even call it fascism and try to move out (not sure where tho).
This isn't a musk idea though, is it?
Not sure, but the Star-something naming and the video prominently featuring Starships suggests he'll be involved.
/uj Apparently, this has already been discussed a few times on HN, last year. The bubble is still expanding!
Shameful to see this on Nvidia's site. They have real engineers and business prowess. This is really shaking my assumptions about the company.
Why is this shameful?
Because it's crank science harvesting money from people who don't understand physics.
Heat is almost impossible to dissipate in space because there's negligible matter to take the heat away.
I'm inclined to think you're right, but I can't figure out one thing - the command module (apparently) in Apollo 13 got down to 38F without active heating. That's much colder than standard data centre rack temps.
In the case of a data centre, there would be considerably more heat generation than from 3 astronauts, but I would like to understand more. 38°F is cold, so heat is clearly not lost as slowly as we might think.
The Apollo passive radiators can dissipate ~2500 Watts into space. With most systems shut down, only ~500 Watts was coming from the remaining systems and the astronauts bodies.
Cool, thank you. So I read this as fundamentally, the heat they dissipated far exceeded the heat they produced. Do you mind opining on what similar figures would be with modest passive radiators and a typical data centre rack heat output?
No idea what the passive radiators might look like (50x the size of Apollo?), but an Nvidia GB300 NVL72 uses 120,000 watts.
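Applying the same black-body math as in the Stefan–Boltzmann comment above to a single rack (a sketch; the 150°C radiator temperature is an assumption carried over from that comment):

    SIGMA = 5.670e-8
    T = 150 + 273.15
    flux = SIGMA * T**4                    # ~1818 W/m^2, ideal black body
    rack_w, apollo_w = 120_000, 2_500      # W: one NVL72 rack vs Apollo's radiators
    print(f"{rack_w / flux:.0f} m^2 of radiator per rack")   # ~66 m^2
    print(f"{rack_w / apollo_w:.0f}x Apollo's heat output")  # ~48x

So the "50x the size of Apollo" guess is about right in power terms.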
The "scam" part isn't the radiators, it's the launch price. Their whole cost estimate depends on a different company reducing prices at least 50x.
And yet heat is dissipated in space on a regular basis. It's physically possible with a huge upside. This is how progress happens.
Apart from getting 16 sq. km of solar arrays and radiators into orbit - and without jumping to conclusions about whether this is a borderline scam - I can imagine 2 obvious showstoppers:
1) Space debris. This proposal is several orders of magnitude larger than the biggest things in near-Earth orbits, and thus equally many orders of magnitude more likely to be hit by, and to create, space debris.
2) Heat transport - this isn't my home turf, but I can't imagine building something lightweight enough to be launched, yet also capable of transferring enough heat away from the 5 GW core, without it melting/breaking
It's been a while since I read their whitepaper, but I don't recall either of those points being addressed.
LEO is the last place you should worry about space debris.
Space is just unfathomably large. If you aren’t in the same orbital plane, you’re just not going to have a problem. And if you did, Kessler syndrome in LEO is a non problem.
Could be an issue for specific orbital planes in stable orbits, but even there, it’s overblown.
We've officially lost the plot, we will now ship our AI data centers to ~space~ ... This will not work with modern technology.
The sun will be eclipsed by the Earth many times per day, requiring you either to shift all workloads or to add substantial UPS weight. The radiator grid you'd need to cool 125 kW is something like 16x the size of the entire data center.
I watched this video last week that went into 3 different scenarios, it's a good watch.
https://www.youtube.com/watch?v=JAcR7kqOb3o
> The sun will be eclipsed by earth many times per day
Depends on the orbit.
https://en.wikipedia.org/wiki/Beta_angle
So what you're saying is we should put them on the moon?
Yeah, I just wanted to link to the same video.
By the way, the same channel also has a sobering video on commercial space stations. https://youtube.com/watch?v=2G60Y3ydtqY
Their numbers strike me as very optimistic:
> $5m (single launch of compute module, solar & radiators)
Source: page 4 of their whitepaper, https://starcloudinc.github.io/wp.pdf
This seems absurdly low to me.
It is, unless you take Musk's hype about Starship as fact. With rockets that are actually available, the best price is about $1,500/kg to LEO, so either they're presuming the whole setup weighs in at 3-4 tons (which is less than the shielding alone), or they're presuming they can get it launched for orders of magnitude less than current market prices (and indeed they say they assume $30/kg).
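A quick sensitivity check on that, using the prices quoted in this thread ($1,500/kg as today's approximate best, $30/kg as Starcloud's assumption):

    budget = 5_000_000                       # USD, the whitepaper's per-launch figure
    for usd_per_kg in (1500, 150, 30):
        print(f"${usd_per_kg}/kg -> {budget / usd_per_kg / 1000:,.0f} t per launch")
    # $1500/kg ->   3 t (less than the shielding alone, per the parent)
    # $30/kg   -> 167 t (requires a ~50x drop in launch prices)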
Actual engineering question: how large can you scale a cooling system in space? I mean scaling radially outward from a central point. Surely at some point it just doesn't work anymore, or you spend more energy moving heat to where it can be radiated away than you can actually radiate.
I believe there is math for this very question. A similar principle applies to heatsinks: you can't just keep increasing the size of the heatsink on a CPU, because the outer edge of a very large heatsink won't rise above ambient temperature, so any heatsink bigger than that is wasting material.
I would guess a system where coolant is pumped (and adds its own heat) has a similar problem. It's probably further exacerbated by the fact that you can't do clever things to increase surface area: your radiating surfaces must all "see" the black of space in order to function.
So many questions: how would you protect against bit flips and damage to circuits? As for "10x lower energy costs and reduce the need for energy consumption on Earth", I'm not sure we need a rocket scientist to compare the energy costs of manufacturing and launching a rocket against simply putting that fuel into a generator and letting it run. And what happens when the servers need to be retired due to some unpatchable bug?
> "the energy costs of manufacturing and sending a rocket to outer space versus putting that fuel into a generator"
I believe it's on the order of a 100x return (for a low-orbit space photovoltaic panel that's almost always facing direct sun).
Yeah, radiation is the enemy of integrated circuits, cosmic radiation is more damaging the smaller the features get.
You pretty much have to have multiple redundancy and special space-rated HW, which I wouldn't be surprised is stuck at super old process nodes to mitigate this exact same issue.
Tbf, leaving aside the claims about datacentres in space, working with Nvidia on radiation hardening its latest generation chips would be a good project...
> “In space, you get almost unlimited, low-cost renewable energy,”
Wouldn't you know, you COULD get the same energy here too.
So many questions to be asked, I don't know where to start. What's the upside of bunching up all the servers into a single megastructure rather than separate satellites?
The rate of radiative cooling scales proportionally to (T^4-Tenv^4) which approximates to just T^4 in space (Tenv = 3K). The hotter they can run it, the smaller heatsinks they need; for every doubling of temperature, the heatsink area can be reduced by a factor of 16. Also, it might be possible to boost the output temperature, e.g. with a chemical heat pump for even smaller heat sinks.
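A sketch of how strongly that T^4 term bites (idealized black body, power fixed at the proposed 5 GW):

    SIGMA, P = 5.670e-8, 5e9
    for T in (300, 400, 600):                     # radiator temperature, K
        print(f"T={T} K -> {P / (SIGMA * T**4) / 1e6:.2f} km^2")
    # 300 K -> 10.89 km^2; 600 K -> 0.68 km^2: doubling T cuts the area 16x

Of course the electronics set a ceiling on coolant temperature, which is why boosting the radiator temperature with a heat pump is attractive.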
How is a multiple square-kilometer radiator not just an inevitable Kessler syndrome disaster?
Edit: some back-of-the-envelope calculation suggests that the total cross-sectional area of all man-made orbiting satellites is around 55,000 m^2. Just one 4 km x 4 km = 16,000,000 m^2 starcloud would represent an increase by a factor of about 300. That's insane.
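Reproducing that estimate (the 55,000 m^2 figure is the comment's own back-of-envelope number, not a measured value):

    existing = 55_000            # m^2, estimated cross-section of all satellites
    starcloud = 4_000 * 4_000    # m^2, one 4 km x 4 km array
    print(f"{starcloud / existing:.0f}x")   # ~291x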
Sounds like a "slippery slope" fallacy without further explanation.
Not sure what the slippery slope is here. The linked page imagines a 4km x 4km radiator/solar array. The cross-sectional area of the array is going to be directly proportional to the probability of impacting high velocity space debris. In such an event the amount of debris that would be generated could also scale with the area of the array. This seems bad
> This seems bad
e.g., cyanide seems bad, but it won't kill you if the relative volumes are small.
tl;dr: You haven't characterized the denominator.
See my edit. Just one starcloud would represent an increase in a risk factor of over 300 compared with the status quo. Then multiply that by the number of starclouds you think would be deployed.
You still keep playing with the numerator.
> increase in a risk factor of over 300
Even with a numerator-only view, I suspect it's not fair to characterize the "risk factor" as going up 300x. There's a lot more nuance about orbits in space.
Tell me the nuance, then. If people have concerns about Kessler syndrome at the Starlink scale, then why wouldn't something literally 1000x bigger be even more concerning?
I know this in the same way that, even though I don't know the exact credence to assign to particular bad effects of global warming, I can confidently say that increasing CO2 emissions by a factor of 1000 would be a bad thing. This is not because I have run a simulation, but because my beliefs rest on the assumption that while concerned experts might be wrong in the details, they are probably not wrong by a gap of 3 orders of magnitude.
I already did. Your reply/edit merely repeated your prior observation.
Getting back to the point:
You literally claimed that one of these would "inevitabl[y]" trigger a Kessler effect with no proof.
> something literally 1000x bigger be even more concerning.
Again, this isn't convincing if you don't have the denominator/context. Think about it: you still can't answer how many of these are needed to trigger the Kessler effect.
BTW, "increase by a FACTOR of about 300" != "increase in a RISK FACTOR of over 300"
I'm by no means close to, or educated enough in, astrophysics or anything to do with space. Hence I have a very "commoner" question:
- Asteroids? Debris? Is there even any risk of anything significantly big being damaged by something flying by?
"About once a year, an automobile-sized asteroid hits Earth’s atmosphere, creates an impressive fireball, and burns up before reaching the surface."
I assume a good old Prius might have opinions about such a construction if it flies through it.
But I guess "space is big", risks are low?
https://www.nasa.gov/solar-system/asteroids/asteroid-fast-fa...
The solar array is 4 km by 4 km. The whole Earth, with its 6400 km radius, only gets hit by a Prius-sized asteroid once per year. So the risks are much lower. I guess the array may be hit by many micro-asteroids though, but it should be possible to engineer some level of tolerance for that.
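Scaling that once-a-year figure by cross-sectional area (a rough sketch using the comment's 6,400 km radius; it ignores orbital mechanics and gravitational focusing):

    import math
    earth = math.pi * 6_400_000**2     # m^2, Earth's cross-section, ~1.3e14
    array = 4_000 * 4_000              # m^2, the 4 km x 4 km array
    print(f"one hit per ~{earth / array:.0e} years")  # ~8e+06 years

So car-sized impactors are a rounding error; micrometeoroids are the real wear item.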
You'll never be able to do maintenance on or upgrade these things. The up-front cost seems extremely high given the risk of hardware failure or obsolescence at data-center scale.
> Starcloud plans to build a 5-gigawatt orbital data center with super-large solar and cooling panels approximately 4 kilometers in width and length.
That is...very, very large.
For scale, that's twice as large (energy-wise) as AWS's us-east-1, which itself is a group of ~170 data centers.
Oh, cool... I have only one question (which is not cool at all): how are they going to exhaust 5 GW of waste heat?
This is a super basic question: how do they prevent the panels from being hit by space debris, rocks, floaties, etc.?
Is it just that within its orbit there are next to no objects it could collide with, even small ones?
I have yet to meet a hardware engineer who thinks this is a good idea. I'm REALLY struggling to see benefits.
Pumping your stock.
I've barely started reading the post, but
> “In space, you get almost unlimited, low-cost renewable energy”
Low cost???????? Sending a solar array into space would probably rank among the most expensive forms of energy production.
> Starcloud’s space-based data centers can use the vacuum of deep space as an infinite heat sink.
Well, good luck getting the heat out first. I hope you planned for some big radiators to go along your 5GW solar array.
Yeah especially considering the next line is:
> Constant exposure to the sun in orbit also means nearly infinite solar power
Is it an infinite heat sink or an infinite power source?!
I could see government imaging satellites with direct encrypted laser communications to GPUs in orbit being attractive: image processing, movement pattern analysis, multi-spectral, and, as they mentioned, radar.
If this math added up, wouldn't solar panels and radiators on earth solve the same problem?
Solar panels on Earth only get sunlight half the day. The idea is still dumb, but not for that reason.
So when a storage device or a GPU burns out, how do you replace it?
Fly an astronaut to space..?
It’s more of a case of building the solution first and then looking for the problem, because why the heck not.
I thought that refrigerating things in space required a lot of energy, because heat cannot dissipate into the void of space.
Moreover, why are the energy costs 10x lower when in space you have unlimited access to solar power? Is it the cost of building the energy production infrastructure?
Lots of things limit the benefit of putting PV in space. UV damages the semiconductors faster, ditto micrometeoroids, it's just plain expensive to put stuff up there in the first place…
It's not a slam-dunk "no", we are seeing developments on all metrics. It's just that right now, I wouldn't be surprised if the claim of x10 improvement was anywhere from correct to x100 over-optimistic.
The whole thing is bogus, you could plop the hardware in the middle of a desert and have everything perform way better for cheaper.
I'm surprised nvidia put their name on this.
And what about solar storms...
Replacing faulty nodes or equipment in space seems totally reasonable... It's not like getting faulty drives replaced in my datacenter racks doesn't already take weeks/months...
Even if they somehow solved the cooling problem others have mentioned:
What happens when this data center becomes obsolete? We've just got a 4 km wide piece of junk floating above Earth then?
I have had people point out that building a Dyson sphere is pretty much a dumb idea, and that there's no conceivable reason why we would build one even if we could.
Now we have one - venture capital.
This cannot possibly end well (flipped bits, maintenance, cooling).
If they fulfill their promise within 10 years I'll change careers to kiwi farming. I promise.
Basic datacenter technicians will be the new astronauts, swapping burnt CPUs and failed hard drives in space.
If they send billions of dollars of GPU cards into space, how are they going to secure it, physically?
That's a ridiculous number of solar panels to send up. I don't really think this is going to work or be viable.
Would this only work if there are solar arrays always catching the sun while the GPUs are never in the sun?
Strange that their rendering is not exactly Starship, but it's clearly Starship.
"plans to build"
So far, it’s just a dream that convinced some investors to part with their money.
The prelude of a Dyson Sphere!
https://www.youtube.com/watch?v=fLzEX1TPBFM
It really feels like I'm living in the future, lately.
I want that kind of money for playing out something whose costs can be calculated in advance and which is just not a smart idea at the moment at all.
I don't get it. I really don't.
You can calculate the minimum cost, you can calculate heat and maintenance, and probably also the expected failure rate for the hardware.
But even if the failure rate is something you need to figure out, that would probably be an R&D question you could test and verify with a very small and cheap setup.
Same stupid shit as the mirror in space that will beam sunlight back down to PV panels on earth.
Cool stuff in a non-capitalistic system, but otherwise it just shows that plenty of people have too much money to invest in weird things without understanding them at all.
Wow, this is embarrassing. Hard to read.
“The only cost on the environment will be on the launch, then there will be 10x carbon-dioxide savings over the life of the data center”
And how long is that life exactly? There is zero chance this is a net positive for carbon emissions, much less a remotely economical way to build or operate datacenters.
What a scam
Crazy how tiny the servers are compared to the panels.
Btw, this is dishonest regarding sustainability.
Water consumption of a data center is not a real thing. You don't just consume water: you need it to move heat, and you don't have to remove that heat by evaporation.
You could easily use this heat if you actually wanted to, e.g. for heating nearby houses or for chemical processes.
It's a legal issue.
And it's very resource-heavy to put anything into space...
Uhm, isn't radiation a problem outside of the atmosphere? How fast are the data transfers going to be? so many questions...
We have officially "jumped the shark." If this had been posted on April 1st I would have laughed at this and said "great joke guys."
Cooling will be a real bitch.
Shielding also.
And latency.
And don’t get me started on the remote hands fees. You thought DigitalRealty was bad!
And power, maintenance, cost...
I still like Keith Lofstrom's Server Sky concept.
http://server-sky.com/ServerSky
Would it be more cost effective and more sustainable to heavily invest in graphene semiconductors than space-based datacenters? Is that a false dilemma?
Aren't there advantages to fabricating GO Graphene Oxide and CNT Carbon Nanotubes in microgravity?
“The only energy is the launch”, that’s false.
Energy went into mining, extracting, refining, transporting all the raw materials needed to make these chips.
This is typical tech industry greenwashing, as the industry fails to accept its destructive influence on the planet.
We need practical solutions that help reduce consumption and waste and actually address the issues. We don’t always need more we need to find a way to use less.
Engineer call-out for a disk swap
Where we're going... we won't need disks! Memstores all the way
They really don't want to let this bubble pop, do they?
This is a con artist smelling “idiots with too much money and nowhere else to spend it.”
They’re the same sort as the cold fusion people coming out of the woodwork with “investment opportunities” during the peak of ZIRP.
All these calculations (of feasibility and maintenance challenges) are fascinating, but really just silly.
Of course we are going to use AI and robots, like AI robots, and stuff. It's going to be fully self-operating and the future!
But more seriously, thanks for great learning experiences about space to HN commenters.
This is absolute nonsense.
The first thing to consider is that this thing won’t be stationary!
Geosynchronous orbit is much more expensive to reach per kg launched, even for Starship… when it starts working properly.
Lower orbits… aren’t stationary. Who wants a data centre that’s “over the horizon” from the owning country most of the time!?
If you think AWS egress costs are bad? Just add some zeroes! No, more zeroes than that…
It will soon come back from over the other horizon. :)
Of all the things insane about this proposal, I'm not very bothered about this one. It could be high availability and distributed by default. Like having redundant datacenters with eventual consistency on all continents. Except the continents are spinning really fast above you...
The animation is wild... 5GW concentrated up there at the top of a field of solar panels - it's not a Starcloud, it's an electric Starfurnace.
Why can't it be geostationary? Laser communication can get you gigabit speeds today. That would take a month to transmit GPT-5's estimated 280TB training corpus, which is acceptable. Latency does not matter.
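The month figure checks out (assuming a dedicated 1 Gbit/s link the whole way):

    bits = 280e12 * 8          # the estimated 280 TB corpus, in bits
    print(f"{bits / 1e9 / 86_400:.0f} days")   # ~26 days at 1 Gbit/s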
With geostationary orbit you won't ever get less than about 240 ms round-trip latency from the ground (at the speed of light).
Fine for some applications, but a massive regression from modern fiber infrastructure and definitely not suitable for everything (just think how slow the modern web is even with 15ms connections to datacenters). There's a reason why Starlink & co are trying to set up communication satellites closer to the ground.
Nothing stopping the satellite data center from communicating back to homebase via Starlink network right?
Would probably need to negotiate for a huge amount of dedicated priority bandwidth, but latency shouldn't actually be that bad.
Round-trip to GEO will add 238.7 milliseconds to whatever other infra you have over the last 200 km vertically* and whatever along the ground. It's probably fine for some things, but not for everything.
* while there could, in principle, be no extra infra in the last 200 km vertically, that means someone on the ground is talking directly to GEO. As per similar discussion about big PV space stations beaming power to the ground, your minimum ground spot size for a transmitter this big and this far away is still tens of km, which limits the other parts of your overall system design.
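For reference, the physics floor on that number (satellite at nadir, straight up and down at c, no ground-side infra):

    GEO_ALT = 35_786e3                  # m, geostationary altitude
    C = 299_792_458                     # m/s, speed of light
    print(f"{2 * GEO_ALT / C * 1e3:.1f} ms")   # 238.7 ms round trip

Any slant path or terrestrial routing only adds to this.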
Pipe dream. This isn't going to happen before the AI bubble pops, and when it does, there won't be a drive for it.
It's going to be fun constantly repairing all those solar arrays. We'll be destroying our planet with the rocket launches alone. But hey! The more ridiculous the idea, the greater the chance that Trump and his conspiracy-laden circle will embrace it. It works in science fiction movies and novels, why not in reality, duh. /s
Everyone who puts up a persistent bright dot in the night sky should compensate everyone who has to see it with 1 cent for the sensory pollution.