Autonomy is part of the next megatrend in electronics, one likely to change society. As a new technology, it presents a large number of open research problems, which can be classified into four broad categories: autonomy hardware, autonomy software, autonomy ecosystem, and autonomy business models. In terms of hardware, autonomy consists of a mobility component (increasingly electric), sensors, and computation.
Research in sensors for autonomy is rapidly evolving, with a strong focus on “sensor fusion, robustness, and intelligent perception.” One exciting area is “multi-modal sensor fusion,” where data from LiDAR, radar, cameras, and inertial sensors are combined using AI to improve perception in complex or degraded environments. Researchers are developing uncertainty-aware fusion models that not only integrate data but also quantify confidence levels, essential for safety-critical systems. There's also growing interest in “event-based cameras” and “adaptive LiDAR,” which offer low-latency or selective scanning capabilities for dynamic scenes, while self-supervised learning enables autonomous systems to extract semantic understanding from raw, unlabeled sensor data. Another critical thrust is the development of resilient and context-aware sensors. This includes sensors that function in all-weather conditions, such as “FMCW radar” and “polarization-based vision,” and systems that can detect and correct for sensor faults or spoofing in real-time. Researchers are also exploring “terrain-aware sensing,” “semantic mapping,” and “infrastructure-to-vehicle (I2V)” sensor networks to extend situational awareness beyond line-of-sight. Finally, sensor co-design—where hardware, placement, and algorithms are optimized together—is gaining traction, especially in “edge computing architectures” where real-time processing and low power are crucial. These advances support autonomy not just in cars, but also in drones, underwater vehicles, and robotic systems operating in unstructured or GPS-denied environments.
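To make the “uncertainty-aware fusion” idea concrete, below is a minimal sketch of inverse-variance weighting, one of the simplest confidence-weighted fusion schemes: each sensor reports an estimate plus a variance, and higher-confidence readings dominate the fused result. The sensor names and numbers are illustrative, not drawn from any particular system.

```python
# Minimal sketch of uncertainty-aware sensor fusion: each sensor reports a
# range estimate and a variance, and the fused estimate weights each reading
# by its confidence (inverse variance). All values are illustrative.
from dataclasses import dataclass

@dataclass
class Reading:
    source: str
    range_m: float      # estimated distance to object, meters
    variance: float     # sensor's self-reported uncertainty, m^2

def fuse(readings: list[Reading]) -> tuple[float, float]:
    """Inverse-variance weighted fusion; returns (estimate, fused variance)."""
    weights = [1.0 / r.variance for r in readings]
    total = sum(weights)
    estimate = sum(w * r.range_m for w, r in zip(weights, readings)) / total
    return estimate, 1.0 / total  # fused variance shrinks as sensors agree

# Example: radar is noisier than LiDAR in clear weather, so it gets less weight.
readings = [Reading("lidar", 42.1, 0.04), Reading("radar", 43.0, 0.25)]
est, var = fuse(readings)
print(f"fused range = {est:.2f} m, variance = {var:.3f} m^2")
```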
In terms of computation, exciting research focuses on enabling real-time decision-making in environments where cloud connectivity is limited, latency is critical, and power is constrained. One prominent area is the “co-design of perception and control algorithms with edge hardware,” such as integrating neural network compression, quantization, and pruning techniques to run advanced AI models on embedded systems (e.g., NVIDIA Jetson, Qualcomm RB5, or custom ASICs). Research also targets “dynamic workload scheduling,” where sensor processing, localization, and planning are intelligently distributed across CPUs, GPUs, and dedicated accelerators based on latency and energy constraints. Another major focus is on “adaptive, context-aware computing,” where the system dynamically changes its computational load or sensing fidelity based on situational awareness—for instance, increasing compute resources during complex maneuvers or reducing them during idle cruising. Related to this is “event-driven computing” and “neuromorphic architectures” that mimic biological efficiency to reduce energy use in perception tasks. Researchers are also exploring “secure edge execution,” such as trusted computing environments and runtime monitoring to ensure deterministic behavior under adversarial conditions. Finally, “collaborative edge networks,” where multiple autonomous agents (vehicles, drones, or infrastructure nodes) share compute and data at the edge in real time, open new frontiers in swarm autonomy and decentralized intelligence.
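As a concrete example of the model-compression side of this co-design, here is a minimal sketch of post-training 8-bit quantization. The layer weights and the scheme (per-tensor, symmetric) are illustrative assumptions; a production pipeline would use a framework's quantization toolkit rather than hand-rolled code.

```python
# Minimal sketch of post-training int8 quantization, one of the compression
# techniques mentioned above. A random array stands in for a layer's weights.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto int8 with a per-tensor scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)   # hypothetical layer weights
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"4x smaller ({w.nbytes} -> {q.nbytes} bytes), mean error {err:.5f}")
```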
Finally, as the industry shifts toward “software-defined vehicles,” there is an increasing need to develop computing hardware architectures bottom-up, with software reuse and underlying hardware innovation as critical properties. This mirrors the layered computer architectures of information technology, but no such architecture exists in the world of autonomy today.
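As a loose illustration of the software-reuse property such an architecture would provide, the sketch below programs application code against a stable abstraction so sensor hardware can be swapped underneath it without touching the software above. All class and method names are hypothetical.

```python
# Minimal sketch of hardware abstraction for software reuse: the application
# layer is written against a stable interface, so the hardware generation
# underneath can change without changing the software. Names are illustrative.
from abc import ABC, abstractmethod

class RangeSensor(ABC):
    """Stable contract the software layer programs against."""
    @abstractmethod
    def read_ranges_m(self) -> list[float]: ...

class LidarGen1(RangeSensor):
    def read_ranges_m(self) -> list[float]:
        return [12.0, 12.1, 11.9]            # stand-in for a driver call

class LidarGen2(RangeSensor):                # new hardware, same contract
    def read_ranges_m(self) -> list[float]:
        return [12.04, 12.06, 11.95, 12.01]  # higher resolution

def nearest_obstacle_m(sensor: RangeSensor) -> float:
    """Reusable application code: unchanged across hardware generations."""
    return min(sensor.read_ranges_m())

for hw in (LidarGen1(), LidarGen2()):
    print(type(hw).__name__, "->", nearest_obstacle_m(hw))
```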
In terms of software, important system functions such as perception, path planning, and location services sit in the software/AI layer. While somewhat effective, AV stacks remain considerably less capable than a human, who can navigate the world on only about 100 watts of power. Human and machine autonomy differ in a number of ways.
Thus, in addition to traditional machine learning techniques, newer AI architectures combining robustness, power/compute efficiency, and effectiveness are an open research problem.
In terms of ecosystem, key open research problems exist in areas such as safety validation, V2X communication, and ecosystem partnerships.
Verification and validation (V&V) for autonomous systems is evolving rapidly, with key research focused on making AI-driven behavior both “provably safe and explainable.” One major direction involves “bounding AI behavior” using formal methods and developing “explainable AI” (XAI) that supports safety arguments regulators and engineers can trust. Researchers are also focusing on “rare and edge-case scenario generation” through adversarial learning, simulation, and digital twins, aiming to create test cases that challenge the limits of perception and planning systems. Defining new “coverage metrics”—such as semantic or risk-based coverage—has become crucial, as traditional code coverage doesn’t capture the complexity of non-deterministic AI components. Another active area is “scalable system-level V&V,” where component-level validation must support higher-level safety guarantees. This includes “compositional reasoning,” contract-based design, and model-based safety case automation. The integration of digital twins for closed-loop simulation and real-time monitoring is enabling continuous validation even post-deployment. In parallel, “cybersecurity-aware V&V” is emerging, focusing on spoofing resilience and securing the validation pipeline itself. Finally, standardization of simulation formats (e.g., ASAM OpenSCENARIO) and the rise of “test infrastructure-as-code” are laying the groundwork for scalable, certifiable autonomy, especially under evolving regulatory frameworks such as UL 4600 and ISO 21448.
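As one concrete example of a coverage metric beyond code coverage, below is a minimal sketch of semantic coverage: test scenarios are tagged with semantic features, and coverage is the fraction of the feature grid the suite exercises. The features, values, and scenarios are invented for illustration.

```python
# Minimal sketch of a semantic coverage metric: coverage is the fraction of
# all semantic feature combinations hit by at least one test scenario.
from itertools import product

FEATURES = {
    "weather": ["clear", "rain", "fog"],
    "actor": ["pedestrian", "cyclist", "vehicle"],
    "maneuver": ["cut-in", "hard-brake", "jaywalk"],
}

def semantic_coverage(scenarios: list[dict]) -> float:
    """Fraction of the feature grid exercised by the scenario suite."""
    grid = set(product(*FEATURES.values()))
    hit = {tuple(s[k] for k in FEATURES) for s in scenarios}
    return len(hit & grid) / len(grid)

suite = [
    {"weather": "clear", "actor": "pedestrian", "maneuver": "jaywalk"},
    {"weather": "fog", "actor": "vehicle", "maneuver": "cut-in"},
]
print(f"semantic coverage: {semantic_coverage(suite):.1%}")  # 2 of 27 cells
```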
One of the ecosystem aids to autonomy may be connection to infrastructure and, of course, in mixed human/machine environments, the natural Human Machine Interface (HMI). Key research in V2X (Vehicle-to-Everything) for autonomy centers on enabling cooperative behavior and enhanced situational awareness through low-latency, secure communication. A major area of focus is “reliable, high-speed communication” via technologies like “C-V2X and 5G/6G,” which are critical for supporting time-sensitive autonomous functions such as coordinated lane changes, intersection management, and emergency response. Closely linked is the development of “edge computing architectures,” where V2X messages are processed locally to reduce latency and support real-time decision-making. Research is active in “cooperative perception,” where vehicles and infrastructure share sensor data to extend the field of view beyond occlusions, enabling safer navigation in complex urban environments. Another core research direction is the integration of “smart infrastructure and digital twins,” where roadside sensors provide real-time updates to HD maps and augment vehicle perception. This is essential for detecting dynamic road conditions, construction zones, and temporary signage. In parallel, ensuring “security and privacy in V2X communication” is a growing concern. Work is underway on encrypted, authenticated protocols and on methods to detect and respond to malicious actors or faulty data. Standardization and interoperability are also vital for large-scale deployment; efforts are focused on harmonizing communication protocols across vendors and regions and on developing robust, scenario-based testing frameworks that incorporate both simulation and physical validation. Finally, an open research issue is the tradeoff between individual autonomy and dependence on infrastructure; associated with that dependence are open questions of legal liability, business models, and cost.
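To illustrate the latency sensitivity of cooperative perception, here is a minimal sketch in which a vehicle accepts shared detections only when they are fresh enough for real-time use. The message fields and the 100 ms budget are assumptions for illustration, not values from any V2X standard.

```python
# Minimal sketch of cooperative perception over V2X: an infrastructure node
# broadcasts detections, and the vehicle merges only messages still inside an
# assumed freshness budget. Fields and thresholds are illustrative.
import time
from dataclasses import dataclass

LATENCY_BUDGET_S = 0.100   # assumed freshness bound for safety-relevant data

@dataclass
class DetectionMsg:
    sender_id: str
    timestamp: float                   # seconds since epoch, sender clock
    object_class: str
    position_m: tuple[float, float]    # map-frame x, y

def merge_fresh(local: list[DetectionMsg], incoming: list[DetectionMsg],
                now: float) -> list[DetectionMsg]:
    """Extend the local world model with messages inside the latency budget."""
    fresh = [m for m in incoming if now - m.timestamp <= LATENCY_BUDGET_S]
    return local + fresh

now = time.time()
rsu = [DetectionMsg("rsu-17", now - 0.02, "pedestrian", (104.2, 88.9)),
       DetectionMsg("rsu-17", now - 0.50, "vehicle", (91.0, 60.1))]  # stale
world = merge_fresh([], rsu, now)
print(f"{len(world)} of {len(rsu)} shared detections accepted")
```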
Human-Machine Interface (HMI) for autonomy remains an area with several open research and design challenges, particularly around trust, control, and situational awareness. One major issue is how to build “appropriate trust and transparency” between users and autonomous systems. Current interfaces often fail to clearly convey the vehicle’s capabilities, limitations, or decision-making rationale, which can lead to overreliance or confusion. There's a delicate balance between providing sufficient information to promote understanding and avoiding cognitive overload. Additionally, ensuring “safe and intuitive transitions of control,” especially in Level 3 and Level 4 autonomy, remains a critical concern. Drivers may take several seconds to re-engage during a takeover request, and the timing, modality, and clarity of such prompts are not yet standardized or optimized across systems. Another set of challenges lies in maintaining “situational awareness” and designing “adaptive, accessible interfaces.” Passive users in autonomous systems tend to disengage, losing track of the environment, which can be dangerous during unexpected events. Effective HMI must offer context-sensitive feedback using visual, auditory, or haptic cues while adapting to the user’s state, experience level, and accessibility needs. Moreover, autonomous vehicles currently lack effective ways to interact with external actors—such as pedestrians or other drivers—replacing human cues like eye contact or gestures. Developing standardized, interpretable external HMIs, a language of driving, remains an active area of research. Finally, a lack of unified metrics and regulatory standards for evaluating HMI effectiveness further complicates design validation, making it difficult to compare systems or ensure safety across manufacturers.
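As a sketch of one possible takeover-request design, the snippet below escalates the prompt's modality as the response deadline approaches, exactly the timing/modality space the paragraph notes is not yet standardized. The stage timings and messages are illustrative assumptions, not values from any production system.

```python
# Minimal sketch of an escalating takeover-request sequence: the prompt moves
# through modalities as time runs out, ending in a minimal-risk fallback.
from dataclasses import dataclass

@dataclass
class Stage:
    starts_at_s: float   # seconds after the takeover request is issued
    modality: str
    message: str

ESCALATION = [
    Stage(0.0, "visual", "Takeover requested: hands on wheel"),
    Stage(2.0, "visual+auditory", "Chime + takeover banner"),
    Stage(4.0, "visual+auditory+haptic", "Seat vibration + urgent tone"),
    Stage(7.0, "fallback", "Begin minimal-risk maneuver (pull over)"),
]

def active_stage(elapsed_s: float) -> Stage:
    """Return the most urgent stage whose start time has passed."""
    current = ESCALATION[0]
    for stage in ESCALATION:
        if elapsed_s >= stage.starts_at_s:
            current = stage
    return current

for t in (0.5, 3.0, 8.0):
    s = active_stage(t)
    print(f"t={t:.1f}s -> {s.modality}: {s.message}")
```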
Finally, autonomy will have implications for topics such as civil infrastructure guidance, field maintenance, interaction with emergency services, accommodation of disabled and young riders, insurance markets, and, most importantly, the legal profession. Many research issues underlie all of these topics.
In terms of business models, use models and their implications for the supply chain are open research problems. For the supply chain, the critical technology is semiconductors, whose economics are highly sensitive to very high volume. The largest market in mobility, the auto industry, is approximately 10% of semiconductor volume, and the other forms (airborne, marine, space) are orders of magnitude lower. From a supply chain perspective, a small number of SKUs that serve a large market is ideal. The research problem is: what should be the nature of these highly scalable components? In terms of end markets, autonomy in traditional transportation is likely to lead to a reduction in unit volume. Why? With autonomy, one can achieve much higher utilization (versus the <5% utilization of today's automobiles), so fewer vehicles can serve the same demand. However, it is also likely that autonomy unleashes a broad class of solutions in markets such as agriculture, warehouses, distribution, delivery, and more. Micromobility applications in particular offer some interesting options for very high volumes. The exact nature of these applications is an open research problem.
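A back-of-the-envelope sketch of the utilization argument: if the same transport demand is served by vehicles used far more intensively, the fleet (and thus unit volume) shrinks roughly in proportion. All numbers below are illustrative assumptions.

```python
# Minimal sketch of the utilization-vs-unit-volume tradeoff described above.
demand_vehicle_hours_per_day = 1_000_000     # hypothetical regional demand

def fleet_size(utilization: float) -> int:
    """Vehicles needed if each is productive `utilization` of a 24h day."""
    return round(demand_vehicle_hours_per_day / (24 * utilization))

today = fleet_size(0.05)        # ~5% utilization, per the text
autonomous = fleet_size(0.50)   # assumed high-utilization autonomous fleet
print(f"fleet shrinks from {today:,} to {autonomous:,} vehicles "
      f"({today / autonomous:.0f}x fewer units)")
```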