The space industry is one of the fastest-growing markets today. To grasp the scale of this transformation: the so-called “new space economy” is projected to grow at a rate exceeding 8% per year over the coming years, potentially surpassing even the automotive sector (currently the world’s largest industry) by 2030.
To sustain this rapid growth, a significant boost in computational power and automation in space applications is essential. And this is where artificial intelligence comes into play.
Integrating AI into space technologies is both a fascinating challenge and a crucial turning point. It will, for instance, enhance our ability to explore other planets, automate complex operations such as docking maneuvers, and accelerate the analysis of environmental data, which is vital for understanding the evolution of our own planet. Currently, one of the biggest hurdles is communication latency: sending a signal from Earth to Mars can take anywhere from 6 to 20 minutes, depending on their relative positions. This means that remotely controlling a robot like Perseverance may require up to 40 minutes for a single command to be sent and its execution confirmed.
Moreover, instruments like space telescopes (such as the James Webb) and planetary probes currently transmit complete image bitmaps back to Earth for processing and analysis, a process that is highly inefficient in terms of energy consumption and bandwidth. Equipping satellites with greater onboard computational capacity and intelligent algorithms could solve these issues, drastically cutting both costs and processing times.
The invisible threat: cosmic radiation
Bringing artificial intelligence into space is far from straightforward. First and foremost, the extraterrestrial environment is extremely hostile. Without the protective shield of Earth’s atmosphere and magnetic field, electronics (and, of course, humans) are exposed to vast amounts of radiation, chiefly cosmic rays. To put it in perspective: a round trip to Mars would damage or destroy nearly half of the cells in the human body through radiation exposure alone. Unfortunately, electronic components are even more fragile than biological tissue, and cosmic rays can cause computational errors.
The most concerning part? Radiation-induced faults often leave no trace. A neural network could miscalculate without realizing it, leading to potentially catastrophic decisions: identifying obstacles that don’t exist, failing to detect real hazards, or botching a docking operation. Completely shielding a spacecraft from radiation would require replicating Earth’s atmosphere around it, an unfeasible solution. On the other hand, building radiation-hardened hardware is extremely costly and not easily scalable.
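To see why such faults are so insidious, consider a single bit flip in a 32-bit floating-point weight. The short Python sketch below (the function name is ours, for illustration) shows how flipping one high-order exponent bit silently turns an ordinary value into an astronomically large one, with no error raised anywhere:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of a 32-bit float, mimicking a radiation-induced upset."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", x))
    as_int ^= 1 << bit                      # the "cosmic ray" strikes here
    (corrupted,) = struct.unpack("<f", struct.pack("<I", as_int))
    return corrupted

weight = 0.5
corrupted = flip_bit(weight, 30)  # flip a high exponent bit
# corrupted is now on the order of 1e38, yet the program runs on normally
```

A network whose weight or activation is corrupted this way still produces an output; it is simply the wrong output, which is exactly the silent-failure mode described above.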
The challenge: making AI space-resilient
To overcome these limitations, at the Department of Industrial Engineering we are collaborating with space agencies and industry leaders to understand how radiation affects neural network predictions. Our goal is clear: to adapt algorithms so that, in the event of an error, the system does not perform dangerous actions but can autonomously manage the effects of radiation.
A critical part of our work involves simulating cosmic radiation in laboratory environments. We use particle accelerators (such as ChipIR in the UK, RADEF in Finland, UCL in Belgium, and TIFPA in Trento) that bombard electronic devices with protons, heavy ions, and neutrons, exposing them in just a few seconds to radiation levels representative of the space environment. These tests provide statistically accurate data on how applications might fail due to radiation exposure.
We evaluate a variety of components, including GPUs, TPUs, FPGAs, and dedicated accelerators, running neural networks in real time during irradiation. We then identify the specific computational changes caused by radiation and implement dedicated code blocks to prevent these errors from propagating. Additionally, we retrain the neural networks to teach them how to recognize and correct these abnormal behaviors.
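To give a flavor of the kind of protective code blocks we mean, here is a deliberately simplified Python sketch (names and thresholds are hypothetical, and our actual implementations differ): redundant inference with majority voting to mask a corrupted run, plus a range check that saturates implausible activation values before they propagate:

```python
from collections import Counter

def majority_vote(predictions):
    """Compare the outputs of redundant inference runs.

    Returns the most common prediction and whether a strict majority
    agreed; disagreement flags a possible radiation-induced fault.
    """
    label, count = Counter(predictions).most_common(1)[0]
    return label, count > len(predictions) // 2

def clamp_activations(values, bound):
    """Range check: saturate activations outside the expected range
    so a single corrupted value cannot propagate unchecked."""
    return [max(-bound, min(bound, v)) for v in values]

# Three redundant runs: one was corrupted, the vote masks it.
label, agreed = majority_vote(["rock", "rock", "crater"])

# A radiation-induced outlier is saturated back into range.
safe = clamp_activations([0.3, -0.1, 4.2e12], bound=10.0)
```

Voting trades extra computation for fault masking, while range checks are nearly free; techniques in this spirit are what keep the overhead low in practice.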
The results so far are extremely promising: we’ve managed to correct up to 90% of errors with less than 2% performance overhead, and have increased onboard satellite computational capacity by a factor of one million.
Looking ahead
Our ultimate aim is to enable the safe and widespread adoption of artificial intelligence in space. Next steps include integrating intelligent robotic systems on probes and satellites, not only to ensure system reliability but also to develop an autonomous navigation framework adaptable to the harsh environments of other planets.
Space will no longer be just a destination: it will become an operational environment, where AI is not a luxury, but a necessity.
Figure 1: Example of radiation-induced errors
Figure 2: Experimental setup in particle accelerators
Figure 3: During an experiment (credits: ChipIR – STFC)