Information magazine of the Department of Industrial Engineering

Università di Trento

Can we trust artificial intelligence behind the wheel?

Road accidents are the third leading cause of death globally. According to periodic reports from Istat, the primary causes include distraction, failure to yield, and excessive speed. Although the situation has improved over the last 30 years thanks to seatbelt mandates and increasingly advanced vehicle technology, there is still room for improvement. Autonomous vehicles may be the answer, with estimates suggesting that their widespread use could reduce accidents from the current 20 per hour to less than 2 per day across Europe. Reliable autonomous vehicles, then, could save millions of lives.

Artificial Intelligence: The Future of Driving

The automotive industry is undoubtedly moving towards autonomous driving, heavily supported by advancements in artificial intelligence. AI has reached a level where it can:

  • “Learn” driving behavior;
  • Identify objects surrounding the vehicle;
  • Determine actions to avoid accidents.

However, we may not yet be ready to place our vehicles, and our lives, in the hands of an AI system. In Europe, fully autonomous vehicles are not yet permitted on public roads, and reports of accidents involving autonomous vehicles, especially from the United States, regularly undermine the public's willingness to trust AI on the road. At our Department, researcher Paolo Rech is studying ways to make autonomous driving systems safer and more reliable.

The Issue of Errors

We tend to believe that computers never make mistakes, but unfortunately this is not always true. Probabilistic algorithms and neural networks give correct answers in more than 90% of cases, not in all of them. And even if the algorithms were perfect, the hardware can still fail because of external interference, wear and aging, temperature fluctuations, or the impact of cosmic particles such as neutrons. These events can alter values stored in memory or corrupt the result of a computation.

History has shown us these risks. In the 2003 Belgian election, Maria Vindevoghel received 4,096 extra votes due to an error caused by a neutron. In 2008, a Qantas airplane suddenly descended because the onboard computer mistakenly believed it was flying 1,024 meters higher than its cruising altitude. In 2007, a neutron error in the cruise control of Toyota vehicles caused uncontrolled acceleration, leading to fatal accidents.
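
It is no coincidence that both figures are exact powers of two: 4,096 = 2¹² and 1,024 = 2¹⁰, which is precisely the signature of a single flipped bit in a binary register. The short Python sketch below is our own illustration (the starting values are invented) of how one upset bit shifts a stored number by a power of two:

```python
# Illustrative sketch only: how a single flipped bit changes a stored integer.
# The starting values are invented to mirror the incidents described above.

def flip_bit(value: int, bit: int) -> int:
    """Return `value` with one bit inverted, as a single-event upset would."""
    return value ^ (1 << bit)

votes = 17                                     # a plausible true vote count
print(flip_bit(votes, 12) - votes)             # +4096 extra votes (bit 12 = 2**12)

altitude_m = 11000                             # a plausible cruising altitude in metres
print(flip_bit(altitude_m, 10) - altitude_m)   # reading jumps by +1024 m (bit 10 = 2**10)
```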

Research at the DII: From Theory…

If autonomous vehicles are to become widespread, we cannot afford computer errors. This is why we are studying how cosmic particles interfere with autonomous driving systems. Experiments with particle accelerators, together with studies of computing architectures and algorithms, are helping us understand how a single particle can make a neural network perceive an object that is not there, causing the vehicle to brake abruptly, or fail to detect a pedestrian or another vehicle.
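
A complementary, purely software-based way to study this effect is fault injection: deliberately corrupting a value inside the network and observing how the output changes. The sketch below is a hypothetical illustration of the idea, not the actual DII experimental setup; the toy two-output "detector", the NumPy arrays, and the choice of which bit to flip are our own assumptions.

```python
# Hypothetical fault-injection sketch: flip one bit of a network weight and
# check whether the (toy) detector's decision changes.
import struct
import numpy as np

def flip_float_bit(x: float, bit: int) -> float:
    """Invert one bit of the 32-bit (float32) representation of x."""
    as_int = struct.unpack("<I", struct.pack("<f", float(x)))[0]
    return struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))[0]

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 2)).astype(np.float32)    # toy "detector" with 2 output scores
features = rng.normal(size=8).astype(np.float32)        # features from one (imaginary) camera frame

clean = features @ weights                              # scores without any fault
corrupted = weights.copy()
corrupted[3, 1] = flip_float_bit(corrupted[3, 1], 30)   # upset in a high-order exponent bit
faulty = features @ corrupted                           # scores with the corrupted weight

print("clean scores:   ", clean)
print("faulty scores:  ", faulty)
print("decision changed:", clean.argmax() != faulty.argmax())
```

In beam experiments the corruption comes from real particles striking the chip; a software version like this simply lets researchers explore, at scale, which bits and which parts of the network matter most when they are hit.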

…to Practice: The Next Steps

Our goal is to design software and hardware solutions that prevent such adverse events. To this end, as illustrated in the sketch after this list, we:

  • Add control layers within the neural network;
  • Double-check the most critical operations to ensure accuracy;
  • Verify that calculations do not yield unreasonable results.
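
As a simplified illustration of the last two points, the Python sketch below runs a critical computation twice and rejects results outside a physically plausible range. The braking-distance function and the thresholds are invented for the example; they are not the actual DII code.

```python
# Simplified sketch of software-side checks: duplicated execution plus a range check.
# The braking-distance computation and the thresholds are invented for illustration.

def braking_distance_m(speed_mps: float, deceleration_mps2: float = 8.0) -> float:
    """Critical computation: stopping distance (in metres) at a given speed."""
    return speed_mps ** 2 / (2.0 * deceleration_mps2)

def checked_braking_distance(speed_mps: float) -> float:
    first = braking_distance_m(speed_mps)
    second = braking_distance_m(speed_mps)   # the same critical operation, executed twice
    if first != second:
        raise RuntimeError("duplicated computations disagree: possible hardware fault")
    if not (0.0 <= first <= 500.0):          # plausibility check on the result
        raise RuntimeError("result outside the physically reasonable range")
    return first

print(checked_braking_distance(30.0))        # about 56 m at 30 m/s (108 km/h)
```

In a real vehicle the duplicated computation would run on separate hardware units or at different moments in time, so that a single transient fault cannot corrupt both copies.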

“Currently, we can identify and correct up to 85% of errors in object recognition systems,” explains Paolo Rech. “However, we are working on training neural networks to make AI capable of self-correcting when it makes mistakes. The hope—and the challenge—in the coming years is to develop neural networks that can guide vehicles safely even if values are corrupted due to cosmic particle impacts.”

Research by:

Paolo Rech
DII, Research area: Information Processing Systems

Want to stay updated?

Subscribe to the DII News newsletter