Sources of Instability

Instability of perception, mapping, and localization is caused mainly by uncertainty in sensor readings. Object recognition may produce different results over time, even though the scene is static, due to sensor noise. The map may be distorted by erroneous GNSS readings caused by signal reflections. Localization may be degraded by occlusions or unexpected objects.

There are several sources of uncertainty: sensor noise, erroneous readings, model uncertainty, environmental randomness, occlusions, adversarial attacks, and errors in estimating the intentions of other traffic participants. This section categorizes and describes these sources and introduces how they can be handled.

There are two main types of uncertainty: aleatoric and epistemic.

Aleatoric uncertainty represents the stochastic nature, or randomness, of the world. Sensor readings are always affected by various types of noise, and many processes in the environment are inherently stochastic.

Epistemic uncertainty, or systematic uncertainty, arises from imprecise or incomplete models: the model cannot explain the observed data completely or precisely.
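
To make the distinction concrete, the following sketch (illustrative only; all names and numbers are assumptions, not taken from this text) estimates epistemic uncertainty as the disagreement of a bootstrap ensemble of simple regressors. The injected measurement noise is aleatoric and cannot be averaged away by a better model, whereas the ensemble disagreement grows in regions where no training data exists.

<code python>
# Illustrative sketch: epistemic uncertainty as bootstrap-ensemble disagreement.
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of a 1-D process: y = sin(x) + Gaussian noise.
x = rng.uniform(0, 6, 40)
y = np.sin(x) + rng.normal(0, 0.2, x.size)      # 0.2 = aleatoric noise level

x_query = np.linspace(-1, 7, 200)               # includes unobserved regions
preds = []
for _ in range(20):                             # bootstrap ensemble
    idx = rng.integers(0, x.size, x.size)
    coeffs = np.polyfit(x[idx], y[idx], deg=5)
    preds.append(np.polyval(coeffs, x_query))
preds = np.array(preds)

# Epistemic uncertainty: disagreement between ensemble members.
# It grows outside the training range, where the model is uninformed.
epistemic = preds.std(axis=0)

print("epistemic std inside data range :", epistemic[100].round(3))
print("epistemic std outside data range:", epistemic[0].round(3))
</code>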

Sensor noise is a significant source of instability in AV systems. Every sensor that produces a digital signal inevitably adds quantization noise to the measurement, as the real environment is continuous by nature. The effects of quantization can be minimized by using a higher resolution, which, on the other hand, significantly increases computational complexity.
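
As a rough illustration of this trade-off (a hypothetical Python sketch, not part of the original text), an ideal uniform quantizer adds an error with a standard deviation of about q/sqrt(12), where q is the quantization step; each extra bit of resolution halves q:

<code python>
# Illustrative sketch: quantization error shrinks with higher resolution.
import numpy as np

rng = np.random.default_rng(1)
signal = rng.uniform(-1.0, 1.0, 100_000)       # continuous "true" readings

for bits in (8, 12, 16):
    q = 2.0 / (2 ** bits)                      # quantization step over [-1, 1]
    quantized = np.round(signal / q) * q       # ideal uniform quantizer
    error = quantized - signal
    # Theory: uniform quantization error has std ~ q / sqrt(12).
    print(f"{bits:2d} bits: measured std {error.std():.2e}, "
          f"theory {q / np.sqrt(12):.2e}")
</code>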

Quantization is not the only source of noise in sensors, however. Noise is also caused by various physical processes, e.g., interference or statistical quantum fluctuations. These types of noise are random and usually follow a normal distribution. The noise can therefore be reduced by averaging over multiple measurements and similar methods, although this may introduce a time delay and distort the data.
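
The effect of averaging can be sketched in a few lines (illustrative Python; the values are arbitrary assumptions): the standard error of the mean of N independent Gaussian readings falls as sigma/sqrt(N), which is also why the method needs N measurement cycles and thus introduces latency.

<code python>
# Illustrative sketch: averaging N noisy readings reduces the error
# as sigma / sqrt(N), at the cost of N measurement cycles (latency).
import numpy as np

rng = np.random.default_rng(2)
true_value, sigma = 5.0, 0.5

for n in (1, 4, 16, 64):
    # 10 000 repeated experiments, each averaging n noisy readings.
    readings = true_value + rng.normal(0, sigma, (10_000, n))
    estimate = readings.mean(axis=1)
    print(f"N={n:3d}: std of estimate {estimate.std():.3f}, "
          f"theory {sigma / np.sqrt(n):.3f}")
</code>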

Cheaper sensors are usually more prone to noise, but even the best sensors are not noiseless. According to ISO 26262, a hazard analysis and risk assessment (HARA) should be performed, safety goals should be set, and finally, hardware safety requirements should be specified. The hardware specification may include, among other parameters, the sensor's signal-to-noise ratio.

Sensor fusion is often used to cope with sensor noise. Different sensors rely on different physical processes for measurement, and therefore their noise characteristics differ as well. Combining different sensor modalities can improve the measurement.
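
A minimal sketch of one common fusion scheme, inverse-variance weighting, follows (the sensor names and noise levels are illustrative assumptions): the fused estimate has a lower variance than either sensor alone.

<code python>
# Illustrative sketch: fusing two sensors by inverse-variance weighting.
import numpy as np

rng = np.random.default_rng(3)
true_distance = 12.0                     # metres, ground truth
sigma_radar, sigma_camera = 0.5, 1.5     # assumed per-sensor noise std

z_radar = true_distance + rng.normal(0, sigma_radar, 10_000)
z_camera = true_distance + rng.normal(0, sigma_camera, 10_000)

w_radar = 1 / sigma_radar**2
w_camera = 1 / sigma_camera**2
fused = (w_radar * z_radar + w_camera * z_camera) / (w_radar + w_camera)

print("radar  std:", z_radar.std().round(3))
print("camera std:", z_camera.std().round(3))
print("fused  std:", fused.std().round(3))   # below both inputs
# Theoretical fused std: 1 / sqrt(w_radar + w_camera) ~ 0.474
</code>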

Another source of uncertainty is the limitation of sensing, such as occlusion or limited visibility. Limitations of the perception subsystem constrain the performance of every system relying on it, as the information is only partial or missing entirely. There are various approaches to dealing with partial occlusion, such as object tracking and prediction, Kalman filtering, or sensor fusion. Weather is a particularly important factor: different sensor modalities perform better in different weather conditions, and a combination of radar, visible-light cameras, and infrared cameras of different wavelengths can effectively diminish the effects of harsh weather conditions (e.g., fog).
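
The following minimal sketch (illustrative; the motion model, noise values, and measurements are invented for the example) shows how a constant-velocity Kalman filter bridges a short occlusion: while no measurement arrives, only the predict step runs and the position uncertainty grows.

<code python>
# Illustrative sketch: a constant-velocity Kalman filter bridging occlusion.
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])          # constant-velocity motion model
H = np.array([[1, 0]])                   # we measure position only
Q = np.eye(2) * 1e-3                     # process noise
R = np.array([[0.25]])                   # measurement noise (std 0.5 m)

x = np.array([[0.0], [10.0]])            # state: position 0 m, velocity 10 m/s
P = np.eye(2)                            # state covariance

# Measurements; None marks frames where the object is occluded.
zs = [1.1, 2.0, 2.9, None, None, None, 7.1, 8.0]

for z in zs:
    # Predict step (always runs).
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:
        # Update step (only when a measurement is available).
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    print(f"pos {x[0, 0]:5.2f} m, pos var {P[0, 0]:.3f}"
          + ("  (occluded)" if z is None else ""))
</code>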

A similar type of uncertainty arises when we have only partial information about the intentions of other agents in traffic. A prediction of another vehicle's future trajectory is always a combination of the physical limits of the vehicle and the intention of its driver. The physical limits can be known to a high degree of certainty; the driver's intention can only be estimated from observations of previous actions.
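
One simple way to estimate intention from observed actions is a discrete Bayes update over a set of hypotheses. In the sketch below, the intention labels and likelihood values are invented for illustration; each observed action shifts probability mass between hypotheses.

<code python>
# Illustrative sketch: discrete Bayes update over driver intentions.
import numpy as np

intentions = ["keep_lane", "change_left", "change_right"]
belief = np.array([1 / 3, 1 / 3, 1 / 3])       # uniform prior

# Assumed P(observation | intention) for the observation "drifting left".
likelihood_drift_left = np.array([0.1, 0.8, 0.05])

for _ in range(3):                             # three consecutive frames
    belief = belief * likelihood_drift_left    # Bayes rule (unnormalized)
    belief = belief / belief.sum()             # normalize
    print(dict(zip(intentions, belief.round(3))))
</code>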

Another source of uncertainty is traffic regulations themselves. Because the real-world road environment is too intricate to quantify every regulation one by one, traffic regulations contain ambiguities and do not always provide quantitative criteria.

A specific source of instability, especially in the context of neural networks, is the adversarial attack: an intentional modification of the environment that causes a network to misrecognize an object as a different class or not detect it at all. Even a small pattern added to the environment (e.g., to a traffic sign) can cause misrecognition.
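
A well-known example of such an attack on a network's input is the Fast Gradient Sign Method (FGSM). The sketch below is illustrative only: a tiny untrained classifier stands in for a real perception network. FGSM perturbs each pixel by at most epsilon in the direction that increases the loss, which is often enough to change the prediction.

<code python>
# Illustrative FGSM sketch in PyTorch; the model and input are stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)   # stand-in input
label = torch.tensor([3])                              # stand-in true class

# Gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), label)
loss.backward()

epsilon = 0.03                                         # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction      :", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
</code>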

Adversarial attacks are especially threatening for autonomous driving systems, where a failure may harm human life. The robustness of autonomous driving systems against such functional insufficiencies falls under SOTIF (Safety Of The Intended Functionality) and is covered by international standards such as ISO 21448.
