SpaceX proposes deploying millions of solar-powered data center satellites in orbit, igniting a dual debate over AI energy consumption and orbital safety.
Table of Contents
Earth's power grids sound the alarm, space becomes the final destination
A space cloud assembled from laser optical networks
The negotiation anchor behind one million satellites
How far off are space computing centers? Five hurdles before landing
According to a recent PCMag report, SpaceX, the company founded by Elon Musk, filed an application with the U.S. Federal Communications Commission (FCC) on the 30th, proposing to deploy up to one million solar-powered data center satellites and move the core of AI computing off the ground and into low Earth orbit.
Earth's power grids sound the alarm, space becomes the final destination
Training and inference for AI models consume enormous amounts of electricity and cooling water, yet land constraints, power quotas, and water-use controls are forcing ground data centers to slow their expansion.
According to analysis by the World Economic Forum, electricity for space data centers could cost as little as $0.005 per kWh, roughly one-fifteenth of the average wholesale price on the ground. The vacuum environment also eliminates the need for cooling water entirely, a major relief for traditional 40 MW facilities that consume hundreds of thousands of tons of water.
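A quick sanity check of what those quoted figures imply. The facility size comes from the text; the assumption of round-the-clock full utilization is mine, so treat the result as illustrative only.

```python
# Back-of-envelope check of the WEF figures quoted above (illustrative only;
# 100% utilization is an assumption, not a SpaceX or WEF claim).
HOURS_PER_YEAR = 24 * 365             # 8760
facility_mw = 40                      # "traditional 40 MW facility" from the text
space_price = 0.005                   # $/kWh, per the WEF estimate
ground_price = space_price * 15       # "about one-fifteenth" of ground wholesale

annual_kwh = facility_mw * 1_000 * HOURS_PER_YEAR
ground_cost = annual_kwh * ground_price
space_cost = annual_kwh * space_price

print(f"annual energy: {annual_kwh / 1e6:.1f} GWh")
print(f"ground: ${ground_cost / 1e6:.2f}M per year, space: ${space_cost / 1e6:.2f}M per year")
```

Under these assumptions a single 40 MW site would save on the order of $25M per year in electricity alone, which is the kind of margin the filing is pointing at.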
In its filing, SpaceX emphasized:
This is the first step toward becoming a spacefaring civilization: not just solving today's bottlenecks, but fully harnessing the energy of the sun.
In keeping with Musk's habit of framing extreme goals, the statement ties the energy dividend to a narrative of civilizational advancement, steering investors toward long-term marginal-cost advantages.
A space cloud assembled from laser optical networks
The technology is not pure imagination. Starlink already has more than 9,600 satellites in orbit and has flight-proven optical inter-satellite link (OISL) laser communications. According to Time magazine, future Starlink nodes could exchange data and perform computation directly in orbit, sending only summarized results or backups back to Earth and greatly reducing dependence on fiber-optic backhaul.
Currently, Google’s Project Suncatcher and Blue Origin’s TeraWave are testing along similar paths, but the scale of SpaceX’s application has raised the entry barrier by an entire order of magnitude.
The negotiation anchor behind one million satellites
Critics question whether one million satellites is an exaggeration, but Engadget notes that in 2022 SpaceX applied to launch 30,000 Starlink satellites and the FCC ultimately approved only 7,500.
Opening with a seven-figure number is likely an anchoring tactic: set the negotiation's starting point at an extreme so that even after cuts, the total still lands in the hundreds of thousands. Bloomberg pointed out that the Trump administration was inclined to relax reviews of large infrastructure projects, which could improve the odds of approval, but the number actually granted still depends on subsequent hearings and negotiations.
There are about 15,000 active satellites worldwide. If even 10% of the application were approved, orbit would instantly gain 100,000 data nodes, raising the risk of collisions and debris. Astronomers and environmental groups worry that once Kessler syndrome is triggered, chain collisions could render low Earth orbit unusable.
The FCC will need to balance "supporting AI infrastructure innovation" against "preventing space traffic chaos." The hearings will focus on how deorbit procedures are set, how active collision-avoidance protocols are implemented, and whether debris-removal mechanisms are in place.
How far off are space computing centers? Five hurdles before landing
Despite Musk’s visionary ambitions, there are several unavoidable engineering and economic challenges between application and realization.
First, the tension between launch costs and deployment scale. Even though Falcon 9 has brought the cost per kilogram to orbit down to about $2,700, and Starship aims lower still, a satellite node with real computing capability (servers, solar panels, cooling systems, and communication modules) weighs far more than a typical communications satellite. Deploying hundreds of thousands of such units would require an astronomical number of launches and a matching total cost.
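To make "astronomical" concrete, here is a rough scaling of launch costs alone. The $2,700/kg figure is from the text; the per-satellite mass and the fleet size are placeholder assumptions, not numbers from the application.

```python
# Rough launch-cost scaling for the scenario above. The satellite mass and
# fleet size are illustrative assumptions, not figures from SpaceX's filing.
cost_per_kg = 2_700          # Falcon 9 $/kg to orbit, from the text
sat_mass_kg = 5_000          # assumed mass of one compute-capable node
fleet_size = 100_000         # assumed fleet (10% of the application)

launch_cost_per_sat = cost_per_kg * sat_mass_kg
total_launch_cost = launch_cost_per_sat * fleet_size
print(f"per satellite: ${launch_cost_per_sat / 1e6:.1f}M")
print(f"fleet total:   ${total_launch_cost / 1e12:.2f}T")
```

Even before hardware, ground stations, and operations, the launch bill under these assumptions runs into the trillions, which is why Starship's promised cost reduction is load-bearing for the whole plan.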
Second, the computing bottleneck of space-grade hardware. The GPUs and high-bandwidth memory used in ground data centers are not designed for the space environment. Cosmic radiation causes single-event upsets that corrupt computation, and extreme temperature swings (up to 120°C on the sun-facing side and down to -150°C in shadow) severely challenge chip stability. Today's radiation-hardened space-grade chips lag commercial consumer chips by roughly two to three generations.
Running large-model inference in orbit therefore remains fundamentally constrained by this hardware gap.
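A sense of scale for the radiation problem. The per-bit upset rate below is a placeholder for illustration, not a measured figure for any specific part; real rates vary with orbit, shielding, and process node.

```python
# Illustrative single-event-upset arithmetic. The per-bit upset rate is an
# assumed placeholder, not a datasheet value for any real device.
hbm_bytes = 80 * 10**9            # one 80 GB HBM stack, as on a modern GPU
bits = hbm_bytes * 8
upset_rate = 1e-10                # assumed upsets per bit per day in LEO

expected_upsets_per_day = bits * upset_rate
print(f"{expected_upsets_per_day:.0f} expected bit flips per day without ECC")
```

The point is qualitative: with hundreds of billions of bits exposed, even a tiny per-bit rate yields daily flips, so error correction and checkpointing stop being optional.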
Third, cooling is not as simple as it sounds. A vacuum does eliminate the need for cooling water, but it also means there is no convection; heat can leave only by radiation. Radiative efficiency depends on surface area and temperature, so satellites need large radiators, which add mass and volume and conflict with limited launch capacity.
The International Space Station's cooling system weighs several tons, a case in point.
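The radiator problem follows directly from the Stefan-Boltzmann law, P = εσAT⁴. The waste-heat level, emissivity, and radiator temperature below are assumed values chosen to illustrate the sizing, not a design spec.

```python
# Radiator sizing from the Stefan-Boltzmann law: P = eps * sigma * A * T^4.
# Power level, emissivity, and radiator temperature are assumed values.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W / (m^2 K^4)
power_w = 100_000       # 100 kW of waste heat, an assumed node power
emissivity = 0.9        # typical for radiator coatings
temp_k = 300            # assumed radiator surface temperature

flux = emissivity * SIGMA * temp_k**4      # W radiated per m^2 of surface
area = power_w / flux                      # m^2 of radiator needed
print(f"radiated flux: {flux:.0f} W/m^2, radiator area: {area:.0f} m^2")
```

Roughly 240 m² of radiator for 100 kW at room temperature (ignoring sunlight absorbed, which makes it worse) shows why thermal hardware, not compute, can dominate the mass budget.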
Fourth, the physical limits of latency and bandwidth. One-way latency from low Earth orbit is about 4 to 20 milliseconds, which sounds acceptable, but the bandwidth of inter-satellite laser links is far below that of ground fiber. A single submarine cable can carry tens of Tbps, while current OISL links remain in the Gbps range.
For distributed training that must synchronize large numbers of parameters, this bandwidth gap could be fatal; space computing is better suited to latency-tolerant batch inference than to real-time training.
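Why the bandwidth gap bites training specifically: each synchronization step must move a gradient set comparable in size to the model itself. The model size and link speeds below are illustrative assumptions in line with the Gbps-vs-Tbps ranges quoted above.

```python
# Time to move one full set of 16-bit gradients for a 70B-parameter model.
# Model size and link speeds are illustrative assumptions.
params = 70e9
bytes_per_param = 2                 # fp16/bf16 gradients
payload_bits = params * bytes_per_param * 8

oisl_bps = 10e9                     # assumed 10 Gbps inter-satellite laser link
fiber_bps = 10e12                   # assumed 10 Tbps ground fiber path

print(f"over OISL:  {payload_bits / oisl_bps:.0f} s per exchange")
print(f"over fiber: {payload_bits / fiber_bps:.3f} s per exchange")
```

Nearly two minutes per gradient exchange over a Gbps link versus a tenth of a second over fiber is the gap between a workable training cluster and an idle one; batch inference, which ships only prompts and results, avoids it.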
Fifth, maintenance and upgrades are hard. A ground data center can swap disks, upgrade GPUs, and repair faulty nodes at any time; a satellite, once in orbit, is essentially beyond hardware repair. When its chips are outclassed by the next generation, or its parts degrade under radiation, the only "upgrade" is to launch new satellites and retire the old ones, which circles back to launch costs and orbital congestion.
Of course, these difficulties do not mean space computing centers are forever impossible, but they define a clear boundary of reality: in the short term, space is better suited as a supplement to ground data centers, handling workloads that are insensitive to latency and sensitive to energy costs, rather than a complete replacement. Musk’s gamble is that as marginal resource costs on Earth continue to rise, cloud customers willing to offload workloads to orbit will only increase.
Several months remain before the FCC's final decision, but the application has already moved "sending data centers to space" from science fiction onto the policy agenda. The ceiling of cloud computing's future may lie not under any roof on the ground, but at the unseen edge of the sky.
SpaceX applies to the US FCC to launch millions of satellites to create solar-powered data centers, Elon Musk's space AI gamble