Autonomy Software Stack

A typical autonomy software stack is organised into hierarchical layers, each responsible for a specific subset of functions, from low-level sensor control to high-level decision-making and fleet coordination. Although implementations differ across domains (ground, aerial, marine), the core architectural logic remains similar across the layers described below.

This layered design aligns closely with both robotics frameworks (ROS 2) and automotive architectures (AUTOSAR Adaptive).

Figure 1: Typical Autonomy Software Stack (adapted from [1], [2])

Figure 1 depicts the main software layers and their functions.

Hardware Abstraction Layer (HAL)

The HAL provides standardised access to hardware resources. It translates hardware-specific details (e.g., sensor communication protocols, voltage levels) into software-accessible APIs. This functionality typically includes device drivers, uniform sensor and actuator interfaces, and timing and diagnostics services.

The HAL ensures portability: software modules remain agnostic to specific hardware vendors or configurations [3].
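
To make this concrete, here is a minimal sketch of a vendor-neutral sensor interface in Python; the VendorALidar class, its units, and the stubbed transport are hypothetical, not taken from any real vendor SDK.

    from abc import ABC, abstractmethod

    class RangeSensor(ABC):
        """Vendor-neutral interface the rest of the stack codes against."""

        @abstractmethod
        def read_ranges(self) -> list[float]:
            """Return one scan of range measurements in metres."""

    class VendorALidar(RangeSensor):
        """Hypothetical adapter hiding one vendor's wire protocol."""

        def read_ranges(self) -> list[float]:
            raw = self._read_udp_packet()        # vendor-specific transport
            return [r / 1000.0 for r in raw]     # convert mm -> m

        def _read_udp_packet(self) -> list[int]:
            return [1500, 1498, 1502]            # stubbed for the sketch

    def obstacle_ahead(sensor: RangeSensor, threshold_m: float = 1.0) -> bool:
        """Upper layers depend only on the abstract interface."""
        return min(sensor.read_ranges()) < threshold_m

    print(obstacle_ahead(VendorALidar()))        # -> False

Swapping vendors then means writing one new adapter class; nothing above the HAL changes.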

Operating System (OS) and Virtualisation Layer

The OS layer manages hardware resources, process scheduling, and interprocess communication (IPC), and supports real-time operation, with watchdog processes raising alerts and triggers. Parallelising data processing is one of the keys to guaranteeing resources for time-critical applications. Autonomous systems often use real-time operating systems, real-time Linux variants, or hypervisor-based partitioning of mixed-criticality workloads.

Time-Sensitive Networking (TSN) extensions and PREEMPT-RT patches ensure deterministic scheduling for mission-critical tasks [4].
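
As a hedged illustration, on a PREEMPT-RT Linux system a process can request the SCHED_FIFO real-time class through Python's standard os module; the priority value below is illustrative, and the call requires root or the CAP_SYS_NICE capability.

    import os

    def request_realtime(priority: int = 80) -> None:
        """Ask the kernel to schedule this process with SCHED_FIFO."""
        param = os.sched_param(priority)
        try:
            # pid 0 means "the calling process"
            os.sched_setscheduler(0, os.SCHED_FIFO, param)
            print("running under SCHED_FIFO, priority", priority)
        except PermissionError:
            # Needs root or CAP_SYS_NICE; fall back to default scheduling.
            print("real-time scheduling not permitted; staying on SCHED_OTHER")

    request_realtime()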

Middleware / Communication Layer

The middleware layer serves as the data backbone of the autonomy stack. It manages communication between distributed software modules, ensuring real-time, reliable, and scalable data flow. In some of the architectures mentioned here, the middleware is the central distinctive feature. Popular middleware technologies include DDS, ROS 2's DDS-based communication layer, and AUTOSAR Adaptive's service-oriented communication (e.g., SOME/IP).
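
For example, a minimal ROS 2 publisher node using rclpy (it assumes a sourced ROS 2 installation; the topic name and message contents are illustrative):

    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String

    class ObstaclePublisher(Node):
        """Publishes perception output onto the middleware data bus."""

        def __init__(self):
            super().__init__('obstacle_publisher')
            self.pub = self.create_publisher(String, '/perception/obstacles', 10)
            self.timer = self.create_timer(0.1, self.tick)   # 10 Hz

        def tick(self):
            msg = String()
            msg.data = 'pedestrian,12.4m'
            self.pub.publish(msg)

    def main():
        rclpy.init()
        rclpy.spin(ObstaclePublisher())

    if __name__ == '__main__':
        main()

Any other node subscribing to /perception/obstacles receives these messages without knowing which process, host, or language produced them; that decoupling is what makes middleware the backbone of the stack.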

Control & Execution Layer

The control layer translates planned trajectories into actuator commands while maintaining vehicle stability. It closes the feedback loop between command and sensor response. Key modules include trajectory-tracking controllers (e.g., PID or model-predictive control), actuator interface nodes, and stability monitors.
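
A minimal sketch of one such module, a textbook PID speed controller with illustrative gains and a toy plant model (not tuned for any real vehicle):

    class PID:
        """Textbook PID loop closing the gap between setpoint and measurement."""

        def __init__(self, kp: float, ki: float, kd: float):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, setpoint: float, measured: float, dt: float) -> float:
            error = setpoint - measured
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Illustrative use: track a 10 m/s speed setpoint against a toy plant.
    pid, speed, dt = PID(kp=0.8, ki=0.2, kd=0.05), 0.0, 0.05
    for _ in range(100):
        throttle = pid.step(setpoint=10.0, measured=speed, dt=dt)
        speed += (throttle - 0.1 * speed) * dt   # crude vehicle response model
    print(round(speed, 2))                        # approaches 10.0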

Safety-critical systems often employ redundant controllers and monitor nodes to prevent hazardous conditions [5].
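
One common monitor pattern is a heartbeat check: a supervisor trips a safe stop when the controller stops reporting. A minimal sketch, with an arbitrarily chosen timeout:

    import time

    class HeartbeatMonitor:
        """Trips a safe stop if the controller stops reporting in time."""

        def __init__(self, timeout_s: float = 0.2):
            self.timeout_s = timeout_s
            self.last_beat = time.monotonic()

        def beat(self) -> None:
            """Called by the controller on every successful cycle."""
            self.last_beat = time.monotonic()

        def check(self) -> bool:
            """Returns True while the controller is considered alive."""
            return (time.monotonic() - self.last_beat) < self.timeout_s

    monitor = HeartbeatMonitor(timeout_s=0.2)
    monitor.beat()
    time.sleep(0.3)                      # simulate a stalled control loop
    if not monitor.check():
        print("controller silent; engaging safe stop")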

Autonomy Intelligence Layer

This is the core of decision-making in the stack. It consists of several interrelated subsystems:

Subsystem | Function | Example Techniques / Tools
Perception | Detect and classify objects, lanes, terrain, or obstacles. | CNNs, LiDAR segmentation, sensor fusion.
Localization | Estimate position relative to a global or local map. | SLAM, GNSS, Visual Odometry, EKF.
Planning | Compute feasible, safe paths or behaviours. | A*, D*, RRT*, Behavior Trees.
Prediction | Forecast the behaviour of surrounding agents and, often, the system's own dynamics. | Recurrent Neural Networks, Bayesian inference.
Decision-making | Choose actions based on mission goals and context. | Finite State Machines, Reinforcement Learning.
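
To make the Localization row concrete, below is a minimal linear Kalman filter; the EKF listed in the table generalises this predict/update cycle to nonlinear motion and sensor models. All noise values and measurements are illustrative.

    import numpy as np

    # Constant-velocity model along one axis: state x = [position, velocity].
    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])     # motion model
    H = np.array([[1.0, 0.0]])                # we only measure position
    Q = 0.01 * np.eye(2)                      # process noise (illustrative)
    R = np.array([[0.25]])                    # measurement noise (illustrative)

    x = np.array([[0.0], [0.0]])              # initial state estimate
    P = np.eye(2)                             # initial covariance

    for z in [0.11, 0.23, 0.29, 0.42, 0.51]:  # noisy position readings
        # Predict: propagate the state and its uncertainty forward.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: blend the prediction with the new measurement.
        y = np.array([[z]]) - H @ x           # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

    print("position, velocity:", x.ravel().round(3))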

These components interact through middleware and run either on onboard edge computers or on cloud-assisted systems for extended processing [6].

Application & Cloud Layer

At the top of the stack lies the application layer, which extends autonomy beyond individual vehicles to functions such as fleet coordination, remote monitoring and teleoperation, over-the-air updates, and large-scale simulation or digital-twin services [10].

Frameworks like AWS RoboMaker, NVIDIA DRIVE Sim, and Microsoft AirSim bridge onboard autonomy with cloud computation.
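
As one hedged sketch of the vehicle-to-cloud link, the snippet below posts a telemetry record as JSON over HTTP using only the standard library; the endpoint URL and payload fields are hypothetical.

    import json
    import urllib.error
    import urllib.request

    def push_telemetry(endpoint: str, payload: dict) -> None:
        """POST one telemetry record to a fleet-management endpoint."""
        req = urllib.request.Request(
            endpoint,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        try:
            with urllib.request.urlopen(req, timeout=2.0) as resp:
                print("server replied", resp.status)
        except urllib.error.URLError as exc:
            # Buffer locally and retry later rather than blocking the vehicle.
            print("cloud unreachable, buffering record:", exc.reason)

    # Hypothetical endpoint and fields, for illustration only.
    push_telemetry(
        "https://fleet.example.com/api/v1/telemetry",
        {"vehicle_id": "av-042", "speed_mps": 9.8, "battery_pct": 76},
    )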

Data Flow in the Autonomy Software Stack

Autonomy systems rely on data pipelines that move information between layers in real time.

Figure 2: Data Flow in an Autonomy Software Stack

Each stage includes feedback loops to ensure error correction and safety monitoring [7, 8].
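
A toy sketch of such a staged pipeline with a feedback path, using in-process queues (the stage logic is a placeholder, not a real pipeline):

    import queue

    # Toy pipeline: perception -> planning/control, with a feedback queue
    # carrying execution errors back upstream for correction.
    perception_out: "queue.Queue[float]" = queue.Queue()
    feedback: "queue.Queue[str]" = queue.Queue()

    def perceive(raw_range_m: float) -> None:
        perception_out.put(raw_range_m)

    def plan_and_act() -> None:
        range_m = perception_out.get()
        command = min(1.0, range_m / 10.0)    # toy: slow down near obstacles
        achieved = command * 0.8              # toy actuator shortfall
        if abs(achieved - command) > 0.05:
            feedback.put(f"tracking error {command - achieved:.2f}")

    perceive(4.0)
    plan_and_act()
    while not feedback.empty():
        print("feedback:", feedback.get())    # upstream stages can adapt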

Example Implementations

ROS 2-Based Stack (Research and Prototyping)

AUTOSAR Adaptive Platform (Automotive)

MOOS-IvP (Marine Autonomy) [9]

Hybrid Cloud-Edge Architectures

Layer Interaction Example – Autonomous Vehicle

Figure 3: Simplified Interaction Example

This closed-loop data exchange ensures real-time responsiveness, robust error recovery, and cross-module coherence.
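
The closed loop of Figure 3 reduces to a sense-plan-act cycle; the sketch below wires toy stand-ins for the layers together, with every number illustrative:

    def sense() -> float:
        """HAL + perception: distance to the nearest obstacle, in metres."""
        return 7.5

    def plan(distance_m: float) -> float:
        """Intelligence layer: pick a target speed from the obstacle gap."""
        return min(10.0, distance_m)          # toy rule: 1 m/s per metre of gap

    def act(target_speed: float, current_speed: float) -> float:
        """Control layer: move toward the target; returns the new speed."""
        return current_speed + 0.2 * (target_speed - current_speed)

    speed = 0.0
    for _ in range(5):                        # five control cycles
        speed = act(plan(sense()), speed)
    print(round(speed, 2))                    # converging toward 7.5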


[1] Raj, A., & Saxena, P. (2022). Software architectures for autonomous vehicle development: Trends and challenges. IEEE Access, 10, 54321–54345.
[2] AUTOSAR Consortium. (2023). AUTOSAR Adaptive Platform Specification. AUTOSAR.
[3] Lee, E. A., & Seshia, S. A. (2020). Introduction to Embedded Systems: A Cyber-Physical Systems Approach (3rd ed.). MIT Press.
[4] Baruah, S., Baker, T. P., & Burns, A. (2012). Real-time scheduling theory: A historical perspective. Real-Time Systems, 28(2–3), 101–155.
[5] Broy, M., et al. (2021). Modeling Automotive Software and Hardware Architectures with AUTOSAR. Springer.
[6] LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
[7] Thrun, S. (2010). Toward robotic cars. Communications of the ACM, 53(4), 99–106.
[8] Raj, A., & Saxena, P. (2022). Software architectures for autonomous vehicle development: Trends and challenges. IEEE Access, 10, 54321–54345.
[9] Benjamin, M. R., Curcio, J. A., & Leonard, J. J. (2012). MOOS-IvP autonomy software for marine robots. Journal of Field Robotics, 29(6), 821–835.
[10] Wang, L., Xu, X., & Nee, A. Y. C. (2022). Digital twin-enabled integration in manufacturing. CIRP Annals, 71(1), 105–128.