General Concepts of Architecture for Autonomous Systems

Software architecture represents the high-level structure of a system, outlining the organisation of its components, their relationships, and the guiding principles governing their design and evolution. In autonomous systems, architecture defines how perception, planning, and control modules interact to achieve autonomy while maintaining safety, reliability, and performance [1]. The primary purpose of an autonomous system’s software architecture is to ensure:

  • Scalability: The ability to integrate new sensors, algorithms, or mission modules.
  • Interoperability: Compatibility with other systems and communication protocols.
  • Maintainability: Ease of updating or modifying individual modules.
  • Safety and fault tolerance: Robustness against sensor failure, communication loss, or software bugs.
  • Real-time responsiveness: Capability to process environmental data and respond within strict temporal limits.
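To make the last two qualities concrete, fault tolerance against a silent sensor is often implemented with a watchdog that declares a channel unhealthy when no update arrives within a deadline. The sketch below is a minimal, hypothetical illustration (class and parameter names are ours, not from any specific framework):

```python
class SensorWatchdog:
    """Flags a sensor channel as failed when no update arrives within a deadline."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_update = None  # timestamp of the most recent reading

    def feed(self, now: float) -> None:
        """Called whenever a fresh sensor reading arrives."""
        self.last_update = now

    def is_healthy(self, now: float) -> bool:
        """True only if a reading was seen and it is still within the deadline."""
        return self.last_update is not None and (now - self.last_update) <= self.timeout_s


watchdog = SensorWatchdog(timeout_s=0.1)  # 100 ms deadline
watchdog.feed(now=0.00)
print(watchdog.is_healthy(now=0.05))  # within deadline -> True
print(watchdog.is_healthy(now=0.25))  # deadline missed -> False
```

A real system would run such checks inside the control loop and trigger a degraded mode (e.g., stop or fallback sensor) when a deadline is missed.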

Architectural design in autonomous systems typically follows several universal principles:

  1. Modularity: Systems are divided into well-defined modules (e.g., perception, localisation, path planning, control), allowing independent development and testing. Modularity provides functional isolation, expandability, and higher maintainability.
  2. Abstraction: Functional details are hidden behind interfaces, promoting flexibility and reuse. In essence, the higher the level of abstraction, the easier system development, testing, and application become; abstraction also reduces design complexity at every layer.
  3. Layering: Tasks are grouped by level of abstraction — for instance, hardware interfaces at the lowest level and mission planning at the highest. Beyond functional abstraction, layering allows different technical implementations at each level, which is needed to accommodate different reaction speeds, reduce decision-making delays, and make internal communication and data processing at each level more effective.
  4. Standardisation: Adoption of middleware standards (e.g., ROS, DDS, MOOS) facilitates interoperability across platforms. Standardisation also avoids vendor lock-in and reduces overall costs through increased competition.
  5. Data-centric communication: Modern architectures rely on publish/subscribe paradigms to manage distributed communication efficiently.
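The data-centric principle above can be sketched as a minimal in-process publish/subscribe bus: modules exchange data by topic name rather than by direct reference, which keeps them independently replaceable. This is a simplified illustration of the pattern, not the API of ROS or DDS; all names here are hypothetical.

```python
from collections import defaultdict
from typing import Any, Callable


class MessageBus:
    """Minimal in-process publish/subscribe bus (data-centric communication)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic name -> list of callbacks

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        # Deliver the message to every subscriber of this topic.
        for callback in self._subscribers[topic]:
            callback(message)


bus = MessageBus()
received = []
bus.subscribe("/perception/obstacles", received.append)        # planner listens
bus.publish("/perception/obstacles", {"distance_m": 4.2})      # perception publishes
```

Production middleware adds what this sketch omits: typed messages, quality-of-service policies, discovery, and transport across processes and hosts.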

Middleware and Frameworks

Middleware serves as the backbone that connects diverse modules, ensuring efficient data exchange and synchronisation. Prominent middleware systems in autonomous vehicles include:

  • ROS (Robot Operating System): An open-source framework providing a modular structure for robotic applications, including perception, planning, and control [2].
  • DDS (Data Distribution Service): A real-time communication standard widely used in aerospace and defence systems, supporting deterministic data exchange [3].
  • MOOS-IvP: A marine-oriented autonomy framework designed for mission planning and vehicle coordination in autonomous underwater and surface vehicles [4].
  • AUTOSAR Adaptive Platform: A standard architecture for automotive systems emphasising safety, reliability, and scalability [5].

These middleware platforms not only promote interoperability but also enforce architectural patterns that ensure predictable performance across heterogeneous domains.

Most autonomous systems follow a hierarchical layered architecture:

Layer                        | Function                                                             | Examples
Hardware Abstraction         | Interface with sensors, actuators, and low-level control             | Sensor drivers, motor controllers
Perception                   | Process raw sensor data into meaningful environment representations  | Object detection, SLAM
Decision-Making / Planning   | Generate paths or actions based on goals and constraints             | Path planning, behaviour trees
Control / Execution          | Translate plans into commands for actuators                          | PID, MPC, low-level control loops
Communication / Coordination | Handle data sharing between systems or fleets                        | Vehicle-to-vehicle (V2V), swarm coordination
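The hardware-abstraction layer in the table can be illustrated with an interface that upper layers program against, so that a concrete driver can be swapped without touching perception or planning code. This is a hedged sketch: the class names and the simulated driver are hypothetical.

```python
from abc import ABC, abstractmethod


class RangeSensor(ABC):
    """Hardware-abstraction interface: upper layers depend only on this API."""

    @abstractmethod
    def read_distance_m(self) -> float:
        """Return the distance to the nearest obstacle, in metres."""


class SimulatedLidar(RangeSensor):
    """Stand-in driver; a real one would talk to the device over its bus."""

    def __init__(self, readings):
        self._readings = iter(readings)

    def read_distance_m(self) -> float:
        return next(self._readings)


def clearance_ok(sensor: RangeSensor, min_clearance_m: float = 1.0) -> bool:
    # Perception-layer code written against the interface, not the driver.
    return sensor.read_distance_m() >= min_clearance_m


lidar = SimulatedLidar([3.5, 0.4])
print(clearance_ok(lidar))  # 3.5 m clearance -> True
print(clearance_ok(lidar))  # 0.4 m clearance -> False
```

Replacing `SimulatedLidar` with a real driver leaves `clearance_ok` and everything above it unchanged, which is exactly the point of the layer.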

Depending on its functional tasks, a system's architecture is split into multiple layers to abstract functionality and technical implementation, as discussed above. The schema below illustrates a generic architecture and the typical tasks at each layer.

Figure 1: Generic Autonomous System Architecture

The Role of AI and Machine Learning

Modern autonomous systems increasingly integrate machine learning (ML) techniques for perception and decision-making. Deep neural networks enable real-time object detection, semantic segmentation, and trajectory prediction [6]. However, these data-driven methods also introduce architectural challenges:

  • Increased computational load requiring edge GPUs or dedicated AI accelerators.
  • The need for robust validation and explainability to ensure safety.
  • Integration with deterministic control modules in hybrid architectures.

Thus, many systems adopt hybrid designs that combine traditional rule-based or dynamics-based control with data-driven inference modules, balancing interpretability and adaptability.
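Such a hybrid design can be sketched as a learned module proposing a target and a deterministic, verifiable controller tracking it. Below, the "learned" part is a simple stand-in heuristic (in practice it would be a trained network); the PID controller and all names are illustrative assumptions, not a specific system's implementation.

```python
class PID:
    """Deterministic speed controller: the rule/dynamics-based half of the hybrid."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measured: float) -> float:
        """One control cycle: return the actuation command."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def predicted_safe_speed(obstacle_distance_m: float) -> float:
    """Stand-in for the data-driven half: a trained model would infer a safe
    speed from sensor data; here a toy heuristic capped at 10 m/s."""
    return min(10.0, 2.0 * obstacle_distance_m)


# Hybrid loop: the inference module proposes the setpoint, the deterministic
# controller (whose behaviour can be analysed and validated) tracks it.
pid = PID(kp=0.5, ki=0.1, kd=0.05, dt=0.1)
target = predicted_safe_speed(obstacle_distance_m=3.0)   # -> 6.0 m/s
command = pid.step(setpoint=target, measured=4.0)
```

Keeping the safety-critical tracking loop deterministic while confining the learned component to setpoint generation is one common way to retain verifiability alongside adaptability.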


[1] Bass, L., Clements, P., & Kazman, R. (2021). Software Architecture in Practice (4th ed.). Addison-Wesley.
[2] Quigley, M., Gerkey, B., & Smart, W. D. (2009). Programming Robots with ROS: A Practical Introduction to the Robot Operating System. O'Reilly Media.
[3] Object Management Group. (2023). Data Distribution Service (DDS) Standard. OMG.
[4] Benjamin, M. R., Curcio, J. A., & Leonard, J. J. (2012). MOOS-IvP autonomy software for marine robots. Journal of Field Robotics, 29(6), 821–835.
[5] AUTOSAR Consortium. (2023). AUTOSAR Adaptive Platform Specification. AUTOSAR.
[6] LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
en/safeav/as/general.txt · Last modified: 2025/10/17 08:57 by agrisnik
CC Attribution-Share Alike 4.0 International