

Autonomy Software Stack

A typical autonomy software stack is organised into hierarchical layers, each responsible for a specific subset of functions — from low-level sensor control to high-level decision-making and fleet coordination. Although implementations differ across domains (ground, aerial, marine), the core architectural logic remains similar:

  • Perception Layer – sensing and understanding the environment.
  • Localisation Layer – determining position and orientation.
  • Planning Layer – deciding what actions to take.
  • Control Layer – executing those actions through actuators.
  • System Layer – managing communication, hardware, and runtime.
  • Infrastructure Layer – providing simulation, cloud services, and DevOps.

This layered design aligns closely with both robotics frameworks (ROS 2) and automotive architectures (AUTOSAR Adaptive).

Figure 1: Typical Autonomy Software Stack (adapted from [1], [2]).

Figure 1 depicts the main software layers and their functions.

Hardware Abstraction Layer (HAL)

The HAL provides standardised access to hardware resources. It translates hardware-specific details (e.g., sensor communication protocols, voltage levels) into software-accessible APIs. This functionality typically includes:

  • Managing device drivers for LiDARs, radars, cameras, IMUs, and other devices used by the autonomous system.
  • Monitoring power systems and diagnostics, including triggering behaviour changes when faults are detected.
  • Providing time-stamped sensor data streams. Accurate timestamps are essential in any data-driven system, since time-series analysis algorithms, including deep learning methods, depend on correctly ordered data.
  • Handling real-time control of actuators (motors, servos, brakes).

The HAL ensures portability — software modules remain agnostic to specific hardware vendors or configurations [3].
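
A minimal sketch of this idea (not taken from the cited sources; all class and method names are hypothetical) shows how a HAL might hide vendor-specific protocols behind a common, time-stamped interface:

  import abc
  import time
  from dataclasses import dataclass

  @dataclass
  class SensorReading:
      """A single time-stamped measurement from any sensor."""
      timestamp: float  # seconds from a monotonic clock
      sensor_id: str
      payload: bytes    # raw, vendor-specific data decoded upstream

  class SensorDriver(abc.ABC):
      """Vendor-agnostic driver interface; higher layers see only this API."""

      def __init__(self, sensor_id: str):
          self.sensor_id = sensor_id

      @abc.abstractmethod
      def read_raw(self) -> bytes:
          """Talk to the device over its native protocol (CAN, UART, ...)."""

      def read(self) -> SensorReading:
          """Return a time-stamped reading, hiding hardware specifics."""
          return SensorReading(time.monotonic(), self.sensor_id, self.read_raw())

  class MockLidarDriver(SensorDriver):
      """Stand-in driver so the sketch runs without real hardware."""
      def read_raw(self) -> bytes:
          return b"\x00\x01\x02"  # placeholder scan payload

  lidar = MockLidarDriver("lidar_front")
  print(lidar.read())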

Operating System (OS) and Virtualisation Layer

The OS layer manages hardware resources, process scheduling, and interprocess communication (IPC), as well as real-time operation and the raising of alerts and triggers via watchdog processes. Parallelised data processing is one of the keys to guaranteeing resources for time-critical applications. Autonomous systems often use:

  • Linux (Ubuntu or Yocto-based) for flexibility.
  • Real-Time Operating Systems (RTOS) like QNX or VxWorks for safety-critical timing.
  • Containerisation (Docker, Podman) for software isolation and modular deployment.

Time-Sensitive Networking (TSN) extensions and PREEMPT-RT patches ensure deterministic scheduling for mission-critical tasks [4].
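
As a brief sketch (assuming a Linux host, ideally with the PREEMPT-RT patch, and a process with sufficient privileges), the snippet below requests the SCHED_FIFO real-time policy via Python's standard os module; the priority value is illustrative:

  import os

  # Illustrative mid-range priority; valid FIFO priorities are typically 1-99.
  RT_PRIORITY = 50

  try:
      # PID 0 means "the calling process"; requires CAP_SYS_NICE or root.
      os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(RT_PRIORITY))
      print("Running under SCHED_FIFO with priority", RT_PRIORITY)
  except PermissionError:
      # Fall back gracefully when deterministic scheduling is unavailable.
      print("Insufficient privileges; continuing with the default policy.")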

Middleware / Communication Layer

The middleware layer serves as the data backbone of the autonomy stack. It manages communication between distributed software modules, ensuring real-time, reliable, and scalable data flow. In some of the architectures mentioned above, the middleware is the central distinguishing feature. Popular middleware technologies and functions include (a minimal publisher sketch follows the list):

  • ROS 2 (Robot Operating System 2): Uses DDS (Data Distribution Service) for modular publish–subscribe communication.
  • DDS (Data Distribution Service): Industry-standard middleware for real-time, QoS-based data exchange.
  • CAN, Ethernet, MQTT: Used for in-vehicle and external communication.
  • Logging and telemetry systems (ROS bags, DDS recorders).
  • Fault detection and recovery mechanisms.
  • QoS (Quality of Service) configuration for bandwidth and latency management.
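
As one minimal sketch of the publish–subscribe pattern (assuming a working ROS 2 installation; the node and topic names are illustrative), the following rclpy node publishes a heartbeat message with an explicit QoS profile:

  import rclpy
  from rclpy.node import Node
  from rclpy.qos import QoSProfile, ReliabilityPolicy
  from std_msgs.msg import String

  class HeartbeatPublisher(Node):
      """Publishes a periodic status message over DDS via ROS 2."""

      def __init__(self):
          super().__init__("heartbeat_publisher")  # node name is illustrative
          # Best-effort QoS trades delivery guarantees for lower latency.
          qos = QoSProfile(depth=10, reliability=ReliabilityPolicy.BEST_EFFORT)
          self.pub = self.create_publisher(String, "vehicle/heartbeat", qos)
          self.timer = self.create_timer(0.1, self.tick)  # 10 Hz

      def tick(self):
          msg = String()
          msg.data = "alive"
          self.pub.publish(msg)

  def main():
      rclpy.init()
      node = HeartbeatPublisher()
      try:
          rclpy.spin(node)
      finally:
          node.destroy_node()
          rclpy.shutdown()

  if __name__ == "__main__":
      main()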

Control & Execution Layer The control layer translates planned trajectories into actuator commands while maintaining vehicle stability. It closes the feedback loop between command and sensor response. Key modules:

  • Low-level control: PID, LQR, or MPC controllers for steering, throttle, braking, or propulsion.
  • High-level control: Converts trajectory plans into real-time setpoints.
  • State estimation: Uses Kalman filters or observers to correct control deviations.

Safety-critical systems often employ redundant controllers and monitor nodes to prevent hazardous conditions [5].
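
To make the low-level control idea concrete, here is a minimal discrete-time PID controller sketch; the gains and the speed-tracking use case are illustrative rather than taken from the cited sources:

  class PID:
      """Discrete-time PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

      def __init__(self, kp: float, ki: float, kd: float):
          self.kp, self.ki, self.kd = kp, ki, kd
          self.integral = 0.0
          self.prev_error = None

      def update(self, setpoint: float, measurement: float, dt: float) -> float:
          error = setpoint - measurement
          self.integral += error * dt
          derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
          self.prev_error = error
          return self.kp * error + self.ki * self.integral + self.kd * derivative

  # Illustrative use: track a 10 m/s speed setpoint at a 100 Hz control rate.
  controller = PID(kp=0.8, ki=0.1, kd=0.05)
  throttle = controller.update(setpoint=10.0, measurement=8.5, dt=0.01)
  print(f"throttle command: {throttle:.3f}")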

Autonomy Intelligence Layer

This is the core of decision-making in the stack. It consists of several interrelated subsystems:

Subsystem | Function | Example Techniques / Tools
Perception | Detect and classify objects, lanes, terrain, or obstacles. | CNNs, LiDAR segmentation, sensor fusion.
Localisation | Estimate position relative to a global or local map. | SLAM, GNSS, Visual Odometry, EKF.
Planning | Compute feasible, safe paths or behaviours. | A*, D*, RRT*, Behavior Trees.
Prediction | Forecast the behaviour of surrounding agents and, usually, the system's own dynamics. | Recurrent Neural Networks, Bayesian inference.
Decision-making | Choose actions based on mission goals and context. | Finite State Machines, Reinforcement Learning.

These components interact through middleware and run either on onboard edge computers or on cloud-assisted systems for extended processing [6].
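
As an illustration of the planning subsystem, the sketch below runs A* over a small 4-connected occupancy grid; the grid, start, and goal are invented for the example:

  import heapq

  def a_star(grid, start, goal):
      """A* over a 4-connected occupancy grid (0 = free, 1 = blocked)."""
      def h(cell):  # Manhattan distance: admissible on a unit-cost grid
          return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

      open_set = [(h(start), 0, start, [start])]  # entries: (f, g, cell, path)
      visited = set()
      while open_set:
          _, g, cell, path = heapq.heappop(open_set)
          if cell == goal:
              return path
          if cell in visited:
              continue
          visited.add(cell)
          r, c = cell
          for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
              if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                  step = (nr, nc)
                  heapq.heappush(open_set, (g + 1 + h(step), g + 1, step, path + [step]))
      return None  # no feasible path

  # Invented 4x4 grid with an obstacle wall between rows 0 and 2.
  grid = [[0, 0, 0, 0],
          [1, 1, 0, 1],
          [0, 0, 0, 0],
          [0, 1, 1, 0]]
  print(a_star(grid, (0, 0), (3, 3)))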

Application & Cloud Layer At the top of the stack lies the application layer, which extends autonomy beyond individual vehicles:

  • Fleet management (monitoring, task assignment).
  • Over-the-air (OTA) updates for software and firmware.
  • Cloud-based simulation and analytics.
  • Data collection for machine learning retraining.

Frameworks like AWS RoboMaker, NVIDIA DRIVE Sim, and Microsoft AirSim bridge onboard autonomy with cloud computation.
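
As a small, hedged sketch of the fleet-telemetry idea (the broker address, topic layout, and payload fields are hypothetical; assumes the third-party paho-mqtt client, version 2.x), a vehicle could publish status samples over MQTT like this:

  import json
  import time

  import paho.mqtt.client as mqtt  # third-party: pip install "paho-mqtt>=2.0"

  BROKER = "fleet.example.com"         # hypothetical fleet-management broker
  TOPIC = "fleet/vehicle42/telemetry"  # hypothetical topic layout

  client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
  client.connect(BROKER, 1883)
  client.loop_start()  # background thread handles the network traffic

  # One illustrative telemetry sample; a real node would publish at a fixed rate.
  sample = {"ts": time.time(), "speed_mps": 12.4, "battery_pct": 81}
  client.publish(TOPIC, json.dumps(sample), qos=1)

  client.loop_stop()
  client.disconnect()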


[1] Raj, A., & Saxena, P. (2022). Software architectures for autonomous vehicle development: Trends and challenges. IEEE Access, 10, 54321–54345.
[2] AUTOSAR Consortium. (2023). AUTOSAR Adaptive Platform Specification. AUTOSAR.
[3] Lee, E. A., & Seshia, S. A. (2020). Introduction to Embedded Systems: A Cyber-Physical Systems Approach (3rd ed.). MIT Press.
[4] Baruah, S., Baker, T. P., & Burns, A. (2012). Real-time scheduling theory: A historical perspective. Real-Time Systems, 28(2–3), 101–155.
[5] Broy, M., et al. (2021). Modeling Automotive Software and Hardware Architectures with AUTOSAR. Springer.
[6] LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.