Every Earth-observation, signals-intelligence, and weather constellation accumulates vastly more sensor data than it can downlink. The bottleneck is not compute or storage; it is the radio link. Federated learning inverts the classical approach: instead of shipping petabytes of raw imagery or RF captures to a ground data centre, each satellite trains a local model increment on board and transmits only gradient updates, kilobytes rather than gigabytes. Aggregation happens either at a designated orbital relay node or, in the most sovereign-friendly architecture, at a nationally controlled ground segment that never hands custody of raw data to a foreign cloud provider.
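The control flow is nothing exotic; it is federated averaging. Here is a minimal sketch, assuming a toy linear model and a hypothetical local_update() step standing in for real flight software:

```python
# Minimal federated-averaging (FedAvg) sketch on a toy linear model.
# local_update() and the four-satellite setup are illustrative
# assumptions, not a real flight-software interface.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One on-board training increment: a few SGD epochs on data that
    never leaves the spacecraft, returning only the weight delta."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w - weights                     # the delta is all that is downlinked

# Toy constellation: four satellites, each holding private local captures.
d = 8
true_w = rng.normal(size=d)
sats = []
for _ in range(4):
    X = rng.normal(size=(100, d))
    sats.append((X, X @ true_w + 0.01 * rng.normal(size=100)))

# Aggregation at the nationally controlled ground segment: average the
# deltas, fold them into the global model, repeat every contact window.
w_global = np.zeros(d)
for _ in range(20):
    deltas = [local_update(w_global, X, y) for X, y in sats]
    w_global += np.mean(deltas, axis=0)

print("residual error:", np.linalg.norm(w_global - true_w))
```

The raw captures in X and y never cross the radio link; only the delta vector does.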
The national security implication is acute. A state that relies on a commercial constellation operator for AI model training is, in practice, handing that operator's jurisdiction visibility over what the model is learning to detect: enemy ship classes, missile plume signatures, illegal deforestation patterns, refugee movements. Federated learning severs that link. Raw gradients are not automatically safe, since gradient-inversion attacks can reconstruct training samples from unprotected updates, but with per-update clipping and calibrated noise the residual leakage is bounded by a formal differential-privacy guarantee. Under that protection, the aggregation point can be operated by an ally, a neutral commercial prime, or even a rival, without exposing the underlying intelligence collection.
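That guarantee has teeth only if updates are sanitised before transmission. A minimal sketch of the standard recipe, per-update clipping plus calibrated Gaussian noise in the style of DP-SGD and DP-FedAvg; clip_norm and noise_multiplier below are illustrative stand-ins for parameters a real mission would derive from its (epsilon, delta) privacy budget:

```python
# DP-style update sanitisation sketch: clip, then add Gaussian noise.
# clip_norm and noise_multiplier are illustrative assumptions, not
# mission-qualified privacy parameters.
import numpy as np

rng = np.random.default_rng(1)

def privatise(delta, clip_norm=1.0, noise_multiplier=1.1):
    """Clip one satellite's update to a fixed L2 norm, then mask it with
    Gaussian noise scaled to that bound, so the update is provably lossy
    to invert and no single capture dominates the aggregate."""
    norm = np.linalg.norm(delta)
    clipped = delta * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=delta.shape)
    return clipped + noise

# The aggregator sees only the noisy mean; signal is recovered by
# averaging over many satellites and rounds, while each individual
# update stays masked.
deltas = [rng.normal(size=8) for _ in range(4)]
noisy_mean = np.mean([privatise(d) for d in deltas], axis=0)
print(noisy_mean)
```

Secure aggregation can be layered on top, so the operator of the aggregation point never sees individual noisy updates at all, only their sum.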
The architecture remains speculative, but it looks tractable within a five-to-eight-year horizon. Inter-satellite links carrying compressed gradient tensors at 10–100 Mbps are already being demonstrated by commercial LEO broadband primes. The missing piece is radiation-hardened, energy-efficient AI accelerators with enough TOPS to complete a meaningful training epoch during a 15-minute orbital pass. Once that compute floor is reached, a federated constellation becomes a continuously self-improving sensor network, one that gets smarter with every orbit without exposing a single raw frame to an adversary's subpoena or export-control regime.
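The link budget, at least, is easy to sanity-check. A back-of-envelope sketch, assuming (illustratively) a 5 M-parameter on-board model, 8-bit quantisation with 10x sparsification, and the low end of the 10–100 Mbps crosslink range:

```python
# Back-of-envelope link budget for one gradient update; every figure
# here (model size, compression ratio, link rate) is an illustrative
# assumption, not a measured system parameter.
params = 5e6                    # 5 M-parameter on-board model
raw_bytes = params * 4          # fp32 gradient tensor: ~20 MB
tx_bytes = raw_bytes / 4 / 10   # 8-bit quantisation + 10x sparsification: ~500 kB
link_bps = 10e6                 # low end of a 10-100 Mbps inter-satellite link
tx_seconds = tx_bytes * 8 / link_bps

print(f"{tx_bytes / 1e3:.0f} kB per update, {tx_seconds:.1f} s on the link")
# ~500 kB and ~0.4 s: negligible against a 15-minute (900 s) pass,
# versus hours or days to move the raw captures behind that same update.
```

The transmission cost of an update is a rounding error; it is the on-board training compute, not the radio link, that sets the pace.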