A typical autonomy software stack is organised into hierarchical layers, each responsible for a specific subset of functions, from low-level sensor control to high-level decision-making and fleet coordination. Although implementations differ across domains (ground, aerial, marine), the core architectural logic remains similar.
This layered design aligns closely with both robotics frameworks (ROS 2) and automotive architectures (AUTOSAR Adaptive).
Figure 1 depicts the main software layers and their functions.
Hardware Abstraction Layer (HAL)

The HAL provides standardised access to hardware resources. It translates hardware-specific details (e.g., sensor communication protocols, voltage levels) into software-accessible APIs. This functionality typically includes:
The HAL ensures portability: software modules remain agnostic to specific hardware vendors or configurations [3].
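A minimal sketch of such an abstraction in Python is shown below; the `RangeSensor` interface, the `VendorALidar` driver, and the register address are hypothetical names used only for illustration:

```python
from abc import ABC, abstractmethod


class RangeSensor(ABC):
    """Hardware-agnostic sensor interface exposed to the rest of the stack."""

    @abstractmethod
    def read_range_m(self) -> float:
        """Return the latest range measurement in metres."""


class VendorALidar(RangeSensor):
    """Vendor-specific driver hidden behind the HAL interface (hypothetical)."""

    def __init__(self, port: str = "/dev/ttyUSB0"):
        self._port = port  # vendor-specific serial protocol would live here

    def read_range_m(self) -> float:
        raw_ticks = self._read_register(0x10)  # placeholder register read
        return raw_ticks * 0.001               # convert device ticks to metres

    def _read_register(self, addr: int) -> int:
        # Stand-in for the vendor's wire protocol (e.g., serial or I2C).
        return 1500


# Higher layers depend only on the abstract interface, not on the vendor driver.
def obstacle_too_close(sensor: RangeSensor, threshold_m: float = 0.5) -> bool:
    return sensor.read_range_m() < threshold_m
```

Swapping in a different vendor's device then only requires a new driver class; nothing above the HAL changes.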
Operating System (OS) and Virtualisation Layer

The OS layer manages hardware resources, process scheduling, and interprocess communication (IPC). It also supports real-time operation and raises alerts and triggers through watchdog processes. Parallelising data processing at this level is one of the keys to guaranteeing resources for time-critical applications. Autonomous systems often use:
Time-Sensitive Networking (TSN) extensions and PREEMPT-RT patches ensure deterministic scheduling for mission-critical tasks [4].
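As an illustration of requesting deterministic scheduling from a Linux kernel, the Python sketch below switches the calling process to the fixed-priority SCHED_FIFO policy; the priority value is an assumption, and hard real-time guarantees additionally require a PREEMPT-RT kernel and sufficient privileges:

```python
import os


def request_realtime_priority(priority: int = 80) -> bool:
    """Try to move the current process to fixed-priority FIFO scheduling.

    Linux-only; needs CAP_SYS_NICE or root. Returns False if denied.
    """
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except (PermissionError, OSError):
        return False


if __name__ == "__main__":
    if request_realtime_priority():
        print("Running under SCHED_FIFO; latency-critical loop can start.")
    else:
        print("Falling back to default scheduling (no RT privileges).")
```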
Middleware / Communication Layer

The middleware layer serves as the data backbone of the autonomy stack. It manages communication between distributed software modules, ensuring real-time, reliable, and scalable data flow. In some of the architectures mentioned above, the middleware is the central distinguishing feature. Popular middleware technologies:
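As an illustrative sketch of publish/subscribe communication over such middleware, a minimal ROS 2 (rclpy) node is shown below; the node name, topic, and publishing rate are assumptions chosen for the example:

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class StatusPublisher(Node):
    """Publishes a heartbeat message that any other node can subscribe to."""

    def __init__(self):
        super().__init__("status_publisher")
        # Topic name and queue depth are illustrative assumptions.
        self._pub = self.create_publisher(String, "vehicle/status", 10)
        self._timer = self.create_timer(0.1, self._tick)  # 10 Hz

    def _tick(self):
        msg = String()
        msg.data = "OK"
        self._pub.publish(msg)


def main():
    rclpy.init()
    node = StatusPublisher()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == "__main__":
    main()
```

Any other node on the same DDS domain can subscribe to `vehicle/status` without knowing where the publisher runs, which is exactly the decoupling the middleware layer provides.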
Control & Execution Layer

The control layer translates planned trajectories into actuator commands while maintaining vehicle stability. It closes the feedback loop between command and sensor response. Key modules:
Safety-critical systems often employ redundant controllers and monitor nodes to prevent hazardous conditions [5].
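A minimal sketch of a single-axis PID speed controller, closing the loop between a commanded and a measured velocity, is shown below; the gains, saturation limit, and control rate are illustrative assumptions:

```python
class PidController:
    """Single-axis PID loop: converts a velocity error into an actuator command."""

    def __init__(self, kp: float, ki: float, kd: float, limit: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.limit = limit          # actuator saturation, e.g. max throttle
        self._integral = 0.0
        self._prev_error = 0.0

    def update(self, setpoint: float, measurement: float, dt: float) -> float:
        error = setpoint - measurement
        self._integral += error * dt
        derivative = (error - self._prev_error) / dt
        self._prev_error = error
        command = self.kp * error + self.ki * self._integral + self.kd * derivative
        # Clamp to the actuator's physical limits (saturation as simple anti-windup).
        return max(-self.limit, min(self.limit, command))


# Example: track a 2.0 m/s speed setpoint at a 50 Hz control rate.
controller = PidController(kp=0.8, ki=0.2, kd=0.05, limit=1.0)
throttle = controller.update(setpoint=2.0, measurement=1.6, dt=0.02)
```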
Autonomy Intelligence Layer

This is the core of decision-making in the stack. It consists of several interrelated subsystems:
| Subsystem | Function | Example Techniques / Tools |
|---|---|---|
| Perception | Detect and classify objects, lanes, terrain, or obstacles. | CNNs, LiDAR segmentation, sensor fusion. |
| Localization | Estimate position relative to a global or local map. | SLAM, GNSS, Visual Odometry, EKF. |
| Planning | Compute feasible, safe paths or behaviours. | A*, D*, RRT*, Behavior Trees. |
| Prediction | Forecast the behaviour of surrounding agents and, usually, the vehicle's own dynamics. | Recurrent Neural Networks, Bayesian inference. |
| Decision-making | Choose actions based on mission goals and context. | Finite State Machines, Reinforcement Learning. |
These components interact through middleware and run either on edge computers (onboard) or cloud-assisted systems for extended processing [6].
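To make the planning subsystem concrete, the sketch below implements a minimal grid-based A* search in Python; the 4-connected motion model, unit step costs, and the example grid are assumptions made for illustration:

```python
import heapq


def a_star(grid, start, goal):
    """Grid A* with 4-connected moves; grid cells equal to 1 are obstacles.

    `grid` is a list of rows, `start`/`goal` are (row, col) tuples.
    Returns the path as a list of cells, or None if no path exists.
    """
    def h(cell):                      # Manhattan-distance heuristic (admissible here)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:         # already expanded via a cheaper path
            continue
        came_from[cell] = parent
        if cell == goal:              # reconstruct path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None


# Example: plan across a 4x4 grid with one wall segment.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(a_star(grid, (0, 0), (3, 3)))
```

In a full stack, the same planner would consume an occupancy grid produced by the perception subsystem and hand its path to the control layer for execution.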
Application & Cloud Layer

At the top of the stack lies the application layer, which extends autonomy beyond individual vehicles:
Frameworks like AWS RoboMaker, NVIDIA DRIVE Sim, and Microsoft AirSim bridge onboard autonomy with cloud computation.