Applications
- 16.3.1 Orbital Inference Nodes — Dedicated compute satellites that run AI inference workloads in orbit, eliminating the latency and interception risk incurred by downlinking raw data to the ground for processing.
- 16.3.2 Federated Learning Across Constellations — Training shared AI models across multiple satellite constellations without raw data ever leaving the spacecraft or crossing a foreign ground station.
- 16.3.3 Onboard Earth-Observation AI — Running trained inference models directly on imaging satellites so that actionable intelligence is derived in orbit and only compressed results are downlinked.
- 16.3.4 Orbital Data Centres — Purpose-built satellites carrying server racks, storage arrays and high-speed inter-satellite links to perform data processing and storage outside terrestrial jurisdiction.
- 16.3.5 Sovereign AI Compute in Orbit — Deploying nationally owned, radiation-hardened AI accelerator hardware in orbit to run sensitive inference and training workloads beyond the jurisdictional reach of foreign cloud providers.
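The federated approach in 16.3.2 can be illustrated with a minimal federated-averaging (FedAvg) sketch. Each "satellite" below fits a tiny one-parameter model on its own local observations and exchanges only model weights; the raw data never leaves the spacecraft. The datasets, learning rate, and round count are illustrative assumptions, not parameters from any real constellation.

```python
# Minimal FedAvg sketch: satellites share model weights, never raw data.

def local_step(w, data, lr=0.1):
    """One gradient-descent step of least-squares y = w * x on local data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fedavg_round(w_global, satellites):
    """Each satellite refines the global weight locally; only the updated
    weights are exchanged and averaged into the next global model."""
    local_weights = [local_step(w_global, data) for data in satellites]
    return sum(local_weights) / len(local_weights)

# Per-satellite datasets (hypothetical), all drawn from the same y = 3x.
sat_data = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(0.5, 1.5), (4.0, 12.0)],
    [(3.0, 9.0), (1.5, 4.5)],
]

w = 0.0
for _ in range(200):
    w = fedavg_round(w, sat_data)
print(round(w, 3))  # converges to 3.0
```

In a real constellation the averaging step would run on a designated satellite or ground coordinator, and the exchanged updates could additionally be secured with secure aggregation or differential privacy; this sketch shows only the core data-stays-onboard property.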
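The onboard-triage idea in 16.3.3 can be sketched in a few lines: rather than downlinking a full image tile, the satellite runs a detector in orbit and transmits only a compact detection summary. The thresholding "detector", tile contents, and JSON message format here are illustrative assumptions, not a real flight payload.

```python
# Toy onboard Earth-observation triage: detect in orbit, downlink a summary.
import json

def detect_bright_spots(tile, threshold=200):
    """Toy 'inference': return coordinates of pixels above the threshold."""
    return [(r, c)
            for r, row in enumerate(tile)
            for c, v in enumerate(row)
            if v >= threshold]

# A 32x32 8-bit tile (hypothetical): mostly dark sea, two bright targets.
tile = [[10] * 32 for _ in range(32)]
tile[5][7] = 250
tile[20][13] = 230

raw_bytes = len(tile) * len(tile[0])   # cost of downlinking the raw tile
summary = json.dumps({"detections": detect_bright_spots(tile)})
print(raw_bytes, len(summary))         # raw tile size vs. summary size
```

Even in this toy case the summary is a small fraction of the raw tile, which is the bandwidth argument behind deriving intelligence in orbit and downlinking only compressed results.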