Autonomous systems operate across diverse environments that impose unique constraints on perception, communication, control, and safety. While all share a foundation in modular, layered architectures, the operational domain strongly influences how these layers are implemented [1,2]. Some of the most important challenges and differences are listed in the following table:
| Domain | Environmental Constraints | Architectural Challenges |
|---|---|---|
| Aerial | 3D motion, strict safety & stability, limited power | Real-time control, airspace coordination, fail-safes |
| Ground | Structured/unstructured terrain, interaction with humans, complex localisation and mapping | Sensor fusion, dynamic path planning, V2X communication |
| Marine | Underwater acoustics, communication latency, and localisation drift | Navigation under low visibility, adaptive control, and energy management |
Aerial autonomous systems include Unmanned Aerial Vehicles (UAVs), drones, and autonomous aircraft. Their software architectures must ensure flight stability, real-time control, and safety compliance while supporting mission-level autonomy [3]. UAV architectures are often tightly coupled with flight control hardware, leading to a split between low-level, time-critical flight control and higher-level mission autonomy.
Two of the most popular architectures are:
**PX4 Autopilot**: an open-source flight control stack supporting multirotor, fixed-wing, and VTOL aircraft. The PX4 architecture is divided into the Flight Stack (estimation, control) and the Middleware Layer (uORB) for data communication [4]. The implementation provides MAVLink communication and ROS 2 integration, which has made it a very widely used solution.
**ArduPilot**: in comparison, ArduPilot uses a modular architecture with layers for the HAL (Hardware Abstraction Layer), vehicle-specific code, and mission control. Its implementations are widely used by the community, both in research and in commercial UAVs for mapping, surveillance, and logistics [5].
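Both stacks expose telemetry and commands over MAVLink. As a minimal, hedged illustration (not part of either project's documentation), the following Python sketch uses the pymavlink library to connect to an autopilot and print a few attitude estimates; the UDP endpoint 14550 is the usual SITL default and is an assumption here:

```python
# Minimal MAVLink telemetry listener; works against a PX4 or ArduPilot
# simulation streaming MAVLink on UDP port 14550 (an assumed endpoint).
from pymavlink import mavutil

# Connect to the autopilot's MAVLink stream.
master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')

# Block until the first HEARTBEAT arrives so the target system/component are known.
master.wait_heartbeat()
print(f"Heartbeat from system {master.target_system}, component {master.target_component}")

# Print a few ATTITUDE messages produced by the flight stack's estimator.
for _ in range(5):
    msg = master.recv_match(type='ATTITUDE', blocking=True, timeout=5)
    if msg is None:
        break
    print(f"roll={msg.roll:.3f} rad, pitch={msg.pitch:.3f} rad, yaw={msg.yaw:.3f} rad")
```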
Still, challenges remain in this domain, including real-time control under tight power budgets, airspace coordination, and fail-safe design (see the table above).
Ground autonomous systems encompass self-driving cars, unmanned ground vehicles (UGVs), and delivery robots. Their architectures must manage complex interactions with dynamic environments, multi-sensor fusion, and strict safety requirements [7]. A ground vehicle’s software stack integrates high-level decision-making with low-level vehicle dynamics, ensuring compliance with the ISO 26262 functional safety standard [8]. A widely used reference architecture is Autoware.AI (and its successor Autoware.Auto), an open-source autonomous-driving stack built on ROS/ROS 2. It implements the functional modules required for L4 autonomy, including localisation, perception, planning, and control.
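As a simplified illustration of this modular, topic-based pipeline (not actual Autoware code; the topic names and generic message types below are placeholders for Autoware's own interfaces), a planning-to-control hand-off in ROS 2 might look like:

```python
# Sketch of a planning -> control hand-off in a ROS 2 pipeline.
# Topic names and message types are illustrative placeholders, not Autoware's
# real interfaces (Autoware defines its own trajectory and control messages).
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Path
from geometry_msgs.msg import Twist


class ControllerStub(Node):
    """Subscribes to a planned path and publishes a velocity command."""

    def __init__(self):
        super().__init__('controller_stub')
        self.cmd_pub = self.create_publisher(Twist, '/vehicle/cmd_vel', 10)
        self.create_subscription(Path, '/planning/path', self.on_path, 10)

    def on_path(self, path: Path):
        cmd = Twist()
        # Trivial "controller": drive forward only if the planner produced poses.
        cmd.linear.x = 1.0 if path.poses else 0.0
        self.cmd_pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(ControllerStub())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```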
Autoware emphasises modularity, allowing integration with hardware-in-the-loop (HIL) simulators and real vehicle platforms [9]. Currently, the automotive industry relies on several standards, such as AUTOSAR and ISO 26262, to foster the development and practical implementation of future autonomous ground transport systems.
Due to the environmental complexity of this domain, significant challenges remain for autonomous ground vehicles, notably robust sensor fusion in dense, dynamic scenes, dynamic path planning, and reliable V2X communication (see the table above).
Marine autonomous vehicles operate in harsh, unpredictable environments characterised by communication latency, limited GPS access, and energy constraints. They include AUVs (Autonomous Underwater Vehicles), ASVs (Autonomous Surface Vehicles) and ROVs (Remotely Operated Vehicles). These vehicles rely heavily on acoustic communication and inertial navigation, requiring architectures that can operate autonomously for long durations without human intervention [10].
The reference architecture in this domain is the MOOS-IvP (Mission-Oriented Operating Suite, Interval Programming) architecture discussed previously. MOOS provides interprocess communication and logging, while the IvP Helm provides a decision-making engine based on behaviour-based optimisation over IvP functions. The architecture supports distributed coordination (multi-vehicle missions) and robust low-bandwidth communication [11], and it is used extensively in NATO CMRE and MIT marine robotics research [12].
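The behaviour-arbitration idea behind the IvP Helm can be sketched in plain Python. This is a deliberately simplified stand-in: the real helm uses piecewise-defined IvP functions over the full decision space and an interval-programming solver, not the brute-force grid search and hand-written utility functions assumed below.

```python
# Toy illustration of multi-objective behaviour arbitration in the spirit of
# the IvP Helm: each behaviour scores candidate (heading, speed) decisions,
# and the helm picks the decision maximising the weighted sum of utilities.
import math

def waypoint_behaviour(heading, speed, bearing_to_wp=45.0):
    # Prefer headings close to the bearing of the next waypoint, at good speed.
    heading_err = abs((heading - bearing_to_wp + 180) % 360 - 180)
    return (1.0 - heading_err / 180.0) + 0.5 * (speed / 2.0)

def avoid_collision_behaviour(heading, speed, contact_bearing=50.0):
    # Penalise headings pointing at a nearby contact, more so at high speed.
    heading_err = abs((heading - contact_bearing + 180) % 360 - 180)
    return (heading_err / 180.0) - 0.3 * (speed / 2.0)

# (behaviour, priority weight) pairs competing for the same decision variables.
behaviours = [(waypoint_behaviour, 1.0), (avoid_collision_behaviour, 2.0)]

best, best_score = None, -math.inf
for heading in range(0, 360, 5):           # candidate headings, degrees
    for speed in (0.5, 1.0, 1.5, 2.0):     # candidate speeds, m/s
        score = sum(w * b(heading, speed) for b, w in behaviours)
        if score > best_score:
            best, best_score = (heading, speed), score

print(f"helm decision: heading={best[0]} deg, speed={best[1]} m/s")
```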
While the overall trend is to take advantage of modularity, abstraction, and reuse, there are significant differences among the application domains.
| Aspect | Aerial | Ground | Marine |
|---|---|---|---|
| Primary Frameworks | PX4, ArduPilot, ROS 2 | Autoware, ROS 2, AUTOSAR | MOOS-IvP |
| Communication | MAVLink, RF, 4G/5G | Ethernet, V2X, CAN | Acoustic, Wi-Fi |
| Localisation | GPS, IMU, Vision | GPS, LiDAR, HD Maps | DVL, IMU, Acoustic |
| Main Challenge | Real-time stability | Sensor fusion & safety | Navigation & communication delay |
| Safety Standard | DO-178C | ISO 26262 | IMCA Guidelines |
| Emerging Trend | Swarm autonomy | Edge AI | Cooperative fleets |
An important trend in recent years is the convergence of architectures across domains. Unified software platforms (e.g., ROS 2, DDS) now allow interoperability between aerial, ground, and marine systems, enabling multi-domain missions such as coordinated search-and-rescue (SAR) operations. The integration of AI, edge computing, and cloud-based digital twins has blurred domain boundaries, giving rise to heterogeneous fleets of autonomous agents working collaboratively.

Aerial systems prioritise stability, lightweight real-time control, and airspace compliance; open stacks like PX4 and ArduPilot show how flight-critical loops coexist with higher-level autonomy. Ground systems face dense, dynamic scenes, heavy sensor fusion, and functional safety requirements; stacks like Autoware illustrate a full L4 pipeline from localisation to MPC-based control. Marine systems contend with low-bandwidth communication, GPS-denied navigation, and long-endurance missions; MOOS-IvP’s shared-database and behaviour-arbitration approach fits these realities.

In summary, successful autonomy rests on sound software architecture rather than on any single algorithm. The frameworks discussed here provide practical blueprints that can be adapted, mixed, and extended to meet mission demands across air, land, and sea.