BOOK

Authors

The list of book contributors is presented below.

Tallinn University of Technology
  • Rahul Razdan, Ph.D.
  • Mohsen Malayjerdi, Ph.D.
Silesian University of Technology
  • Roman Czyba, Ph.D., D.Sc., Eng.
  • Piotr Czekalski, Ph.D., Eng.
  • Tomasz Grzejszczak, Ph.D., Eng.
Riga Technical University
  • Agris Nikitenko, Ph.D., Eng.
  • Karlis Berkolds, M.Sc., Eng.
  • Larisa Survilo, M.Sc., Eng.
ITT Group
  • Raivo Sell, Ph.D., ING-PAED IGIP
ProDron
  • Tomasz Siwy, CEO
Czech Technical University
  • Ing. Libor Přeučil, CSc. (Ing. = Master of Engineering, CSc. = Ph.D.)
  • Ing. Karel Košnar, Ph.D. (Ing. = Master of Engineering)
Technical Editor
  • Raivo Sell, Ph.D., ING-PAED IGIP
External Contributors
Reviewers


Project Information

This content was implemented under the project: SafeAV - Harmonizations of Autonomous Vehicle Safety Validation and Verification for Higher Education.

Project number: 2024-1-EE01-KA220-HED-000245441.

Consortium

  • ITT Group, Tallinn, Estonia (Coordinator)
  • Silesian University of Technology, Gliwice, Poland
  • Riga Technical University, Riga, Latvia
  • Czech Technical University in Prague, Prague, Czech Republic
  • Tallinn University of Technology, Tallinn, Estonia
  • ProDron, Gliwice, Poland

Erasmus+ Disclaimer
This project has been funded with support from the European Commission.
This publication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

Copyright Notice
This content was created by the SafeAV Consortium 2024–2027.
The content is copyrighted and distributed under the Creative Commons CC BY-NC licence and is free for non-commercial use.


Abbreviations

Abbreviation Meaning
AI Artificial Intelligence
AI/ML Artificial Intelligence / Machine Learning
ADAS Advanced Driver Assistance Systems
AV Autonomous Vehicle
AVSC Autonomous Vehicle Safety Consortium
ASIL Automotive Safety Integrity Level
CBMC C Bounded Model Checker
CI/CD Continuous Integration / Continuous Delivery (or Deployment)
CISPR International Special Committee on Radio Interference
CNN Convolutional Neural Network
CMMI Capability Maturity Model Integration
CTL Computation Tree Logic
DAL Design Assurance Level
DDS Data Distribution Service (for Real-Time Systems)
DO-178C Software Considerations in Airborne Systems and Equipment Certification
ECTS European Credit Transfer and Accumulation System
EMC Electromagnetic Compatibility
EMI Electromagnetic Interference
FCC Federal Communications Commission
FSM Finite State Machine
GNSS Global Navigation Satellite System
HIL Hardware-in-the-Loop
HMI Human–Machine Interface / Interaction
IEC International Electrotechnical Commission
IMU Inertial Measurement Unit
ISO International Organization for Standardization
ITC Industry Technologies Consortia (in SAE ITC)
ITU International Telecommunication Union
JAUS Joint Architecture for Unmanned Systems
KITTI Karlsruhe Institute of Technology and Toyota Technological Institute dataset
LiDAR Light Detection and Ranging
LoD Language of Driving
LTL Linear Temporal Logic
MCU Microcontroller Unit
MIL Model-in-the-Loop
MOOC Massive Open Online Course
MPC Model Predictive Control
MQTT Message Queuing Telemetry Transport
NuScenes “New Scenes” autonomous driving dataset (Motional/nuTonomy)
ODD Operational Design Domain
OTA Over-the-Air (updates)
PBE Physics-Based Execution
PX4 Open-source Autopilot Platform (PX4)
QoS Quality of Service
RL Reinforcement Learning
ROS Robot Operating System (ROS 1)
ROS2 Robot Operating System 2
SBC Single Board Computer
SBOM Software Bill of Materials
SIL Software-in-the-Loop
SLAM Simultaneous Localization and Mapping
SOTIF Safety Of The Intended Functionality (ISO 21448)
SoC System on Chip
SPA Sense–Plan–Act (paradigm)
SPIN Simple Promela Interpreter (model checker)
UML Unified Modeling Language
UAV Unmanned Aerial Vehicle
UL Underwriters Laboratories
UPPAAL Timed-automata-based model checker (UPPAAL tool)
V&V Verification and Validation
V-Model V-shaped development lifecycle model (verification and validation)
Waymo Waymo Open Dataset (autonomous driving)

Introduction

The document presents a structured and adaptable curriculum for Bachelor and Master level studies in Safe Autonomous Vehicles (SafeAV), with a strong focus on Verification and Validation (V&V) of autonomous systems. The framework serves as a foundation that higher education institutions can adapt and expand when designing their own study modules or programmes related to the safety, reliability, and governance of autonomous technologies.

The curriculum follows a modular structure combining theoretical foundations, applied engineering knowledge, and hands-on experimentation. It is supported by two complementary educational resources developed within the SafeAV project:

  • SafeAV Handbook – provides the theoretical and methodological background, including system architectures, sensing, software, and formal V&V methods.
  • SafeAV Hands-on Guide – offers practical laboratory and simulation exercises that allow students to perform verification and validation tasks using real and virtual autonomous platforms.

Terminology note. In this document, the SafeAV curriculum is the unified framework that defines the overall programme architecture, the BSc/MSc progression, and the learning flow from theory to V&V practice, aligning the SafeAV Handbook and the Hands-on Guide into a coherent, modular pathway. The subsequent chapters describe the modules as syllabus-style, course-level maps specifying aims, learning outcomes, topics, assessment, tools, and relevant standards; together these constitute the formal open publication.

The SafeAV curriculum architecture defines the overall structure, modular hierarchy, and learning flow that connects theoretical knowledge, simulation-based validation, and experimental practice. It ensures coherence between study levels and provides a clear path from basic understanding to advanced assurance of autonomous vehicle safety. Modules are organised in pairs: Part 1 (Bachelor) introduces the concepts, while Part 2 (Master) deepens the same topic through practical verification and validation methods. This two-level structure enables a stepwise learning progression across study cycles and gives universities the flexibility to adopt the curriculum or parts of it into existing educational programs.

Each topic therefore exists in two complementary parts:

  • Part 1 (Bachelor level) – introduces the fundamental principles, technologies, and system interactions. Emphasis is on conceptual understanding, component function, and system-level awareness.
  • Part 2 (Master level) – deepens the focus toward verification and validation, including analytical, experimental, and regulatory methods used to demonstrate safety, reliability, and trustworthiness.

For example, in Hardware and Sensing Technologies Part 1, students learn sensor types, signal processing basics, and data acquisition. In Part 2, they perform calibration, fault analysis, redundancy testing, and scenario-based validation using V&V tools and simulation environments. This two-stage progression ensures continuity between study cycles and supports lifelong learning paths in autonomous vehicle engineering.
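The Part 2 activities mentioned above, such as redundancy testing, can be illustrated with a minimal sketch: a plausibility check that cross-compares two redundant range sensors and flags disagreement. The function name, tolerance, and sensor values below are illustrative assumptions, not taken from the SafeAV materials:

```python
# Minimal redundancy check between two range sensors (illustrative sketch).
# A reading pair is accepted only if both sensors agree within a tolerance;
# otherwise the sample is flagged for fault analysis.

def redundancy_check(a_m: float, b_m: float, tol_m: float = 0.1) -> dict:
    """Cross-compare two redundant distance readings (metres)."""
    diff = abs(a_m - b_m)
    ok = diff <= tol_m
    return {
        "fused": (a_m + b_m) / 2 if ok else None,  # simple average when consistent
        "consistent": ok,
        "difference_m": diff,
    }

# Consistent pair: a fused estimate is returned.
print(redundancy_check(4.98, 5.02))
# Disagreeing pair: flagged for fault analysis, no fused value.
print(redundancy_check(4.5, 5.2))
```

Real redundancy schemes add voting over three or more channels and temporal filtering, but the pass/flag decision above is the core idea students exercise in the Part 2 laboratories.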

SafeAV Curriculum

The overall curriculum can be described as three integrated layers:

  • Conceptual layer – theoretical foundations and system-level understanding (covered in the SafeAV Handbook)
  • Practical layer – hands-on experiments, data analysis, and verification in laboratory environments (based on the SafeAV Hands-on Guide)
  • Digital layer – self-study materials, MOOC courses, and AI-supported assistants that guide learning and track individual progress

These layers are interconnected through shared terminology, datasets, and unified learning outcomes across all modules.

Curriculum Composition

The curriculum consists of six interrelated modules that together form a complete 6 ECTS study block (one block for the Bachelor level and one for the Master level) but can also be used independently. Each module represents approximately 25–30 hours of student work, combining lectures, laboratory tasks, and self-study. The modular design allows multiple implementation strategies:

  • full six-module SafeAV course (6 ECTS)
  • selected modules as independent 1 ECTS units
  • integration into existing robotics, AI, or control courses
  • use for lifelong learning or professional training

Each module includes theoretical reading, guided experiments, simulation exercises, and assessment through a report, presentation, or quiz. The same structure is followed in all modules to maintain coherence across institutions.


Bachelor Level (Part 1)

The undergraduate programme introduces the building blocks of autonomous systems and their relation to safety assurance. The emphasis is on understanding system components and basic verification of function. Six modules (1 ECTS each) provide foundational knowledge of vehicle architecture, autonomy levels, sensing, computing, software systems, and human–machine interaction.

Modules – Part 1:

  • Autonomous Vehicles
  • Hardware and Sensing Technologies
  • Software Systems and Middleware
  • Perception, Mapping, and Localization
  • Control, Planning, and Decision-Making
  • Human–Machine Communication

Each module combines reading assignments from the SafeAV Handbook with laboratory or simulation tasks from the Hands-on Guide, such as sensor calibration, perception benchmarking, or control-loop validation. The recommended full scope equals 6 ECTS, yet the modular design allows partial adoption depending on local curricula and student pathways.


Master Level (Part 2)

The Master’s programme deepens the same thematic areas into Part 2 modules that focus on validation, verification, and system governance. Students explore how safety and reliability are demonstrated through structured testing, scenario generation, formal methods, and compliance with standards. Modules are directly linked to the advanced chapters of the SafeAV Handbook and the experimental work described in the Hands-on Guide.

Modules – Part 2:

  • Hardware and Sensing Technologies (Validation and Reliability)
  • Software Systems and Middleware (Safety and Verification)
  • Perception, Mapping, and Localization (Scenario-based Testing)
  • Control, Planning, and Decision-Making (Formal and Simulation-backed Validation)
  • Human–Machine Communication (HMI Safety and V&V)
  • Autonomy Verification and Validation Tools (Integrated Frameworks and Methods)

Students build validation pipelines from model design to field testing, using digital twins and simulation environments. The progression mirrors the V-model lifecycle introduced in the handbook — from design to verification, validation, and governance.

It is important to note that the distinction between Bachelor (Part 1) and Master (Part 2) levels in this curriculum is conditional rather than absolute. Depending on the structure of the base study programme or the learner’s prior knowledge and competences, topics defined at the Master level in the SafeAV curriculum may also be taught within Bachelor-level courses, and vice versa. The actual implementation depends on the educational context of the university and the individual learning path of the student.

For this reason, the SafeAV Handbook presents most topics in two levels of depth. Students who already have sufficient background or wish to advance further can continue directly to the next sub-sections, regardless of the formal level assigned to that topic in this curriculum. Conversely, in some non-technical or related engineering programmes, the same subjects might be addressed at a basic level even within Master studies, corresponding to what the SafeAV framework defines as Bachelor-level content.

Therefore, the level designation in this curriculum should be interpreted as indicative of content depth (Basic versus Advanced) rather than as a strict separation between Bachelor and Master academic degrees.

Learning Environments and Methods

Most modules support flexible learning environments that allow both classroom and remote participation:

  • classroom teaching for theoretical foundations
  • access to the AI-driven hybrid laboratory environment
  • virtual experiments linked to the MOOC platform
  • hybrid sessions combining on-site instruction with online validation tasks

The SafeAV Hands-on Guide defines equipment lists, hybrid lab configurations, and step-by-step procedures. Remote setups ensure that students can conduct verification and validation exercises even without physical access to hardware.

Digital tools, Dokuwiki materials, and the MOOC environment allow integration with AI-based assistants that support self-learning, answer technical questions, and provide feedback on simulation or validation tasks. These environments are shared across all modules, ensuring coherence, accessibility, and continuous feedback through AI-supported methods: each course component is accessed through the same platform, which connects theoretical materials, laboratory tasks, and evaluation into a consistent digital experience.

Key features include:

  • AI tutoring and feedback – AI assistants answer questions, explain concepts, and provide formative feedback.
  • Accessibility and inclusion – automatic transcription, summarisation, translation, and adaptive pacing to support all learners.
  • Integration with laboratories – seamless connection between online content and hybrid laboratory activities.
  • Open-access collaboration – materials and results can be shared, reused, and expanded across institutions.

The MOOC environment also functions as the central tool for monitoring student progress and competence development. It is continuously updated with new content and integrated with AI analysis to track engagement, learning efficiency, and V&V-related skills.

Hybrid Laboratory Environment (AI-driven)

The SafeAV curriculum builds upon the remote and virtual laboratory infrastructure previously developed within earlier Erasmus+ projects (Interstudy, SimLab, Autonomian, IoT.Open Reloaded). This existing framework enables students to perform practical experiments not only in traditional classroom settings but also remotely, even when physical equipment and autonomous platforms are involved.

The hybrid laboratory integrates real test environments, such as sensor and control systems, with cloud-based and virtual simulation platforms. Through this setup, learners can connect to remote hardware, collect data, and carry out validation tasks in real time, regardless of their location. The same infrastructure also supports collaborative use between partner universities, allowing shared access to experiments, datasets, and learning tools.

SafeAV enhances this environment by introducing an AI component that expands the capabilities of the virtual laboratories. AI-based modules enable advanced simulation, automated data analysis, and model validation within digital twin environments. Intelligent assistants help students interpret results, identify anomalies, and generate experiment documentation automatically.

This AI-driven hybrid environment forms the backbone of the SafeAV practical learning concept. It bridges physical and virtual domains, connects theoretical understanding to verification and validation processes, and provides a unified experimental framework for both Bachelor and Master level studies.

AI-Based Methods Supporting the Curriculum

The integration of artificial intelligence (AI) tools into the SafeAV curriculum is a central element for enabling modern, personalized learning experiences. In addition to supporting individualized study paths for typical learners, it also enhances accessibility and provides improved educational opportunities for students with special needs.

AI technologies are implemented at two levels:

  • integration within the learning content to illustrate how AI supports autonomous vehicle V&V (e.g., AI in perception, planning, or safety analysis)
  • integration as pedagogical tools to assist students and lecturers throughout the learning process

The following AI-based methods are used within the SafeAV ecosystem:

  • AI-powered virtual assistants – LLM-based agents embedded in the MOOC and Dokuwiki environment answer course-related questions, explain theoretical concepts, and provide V&V-related guidance.
  • AI-driven interactive simulations and virtual labs – intelligent digital twins and scenario generators support sensor fusion validation, control-loop testing, and human–machine communication studies.
  • Personalized AI tutors – adaptive learning systems analyse student progress and recommend additional materials, exercises, or simulations based on performance.
  • AI-supported content summarization – automatic generation of concise summaries of lectures, reports, and laboratory documentation helps students prepare for assessment and supports accessibility.
  • Automated peer review and feedback – integrated AI tools assist in assessing reports and coding exercises, providing constructive feedback and reducing lecturer workload.

AI-based tools play a significant role in SafeAV by reducing repetitive communication tasks, offering continuous learning support, and improving the overall organization of study activities. These systems provide students with round-the-clock access to guidance and feedback, allowing instructors to focus on higher-level mentoring and project supervision.

To ensure trustworthy and responsible use of AI in education, all implementations follow privacy-by-design principles and comply with relevant data protection regulations. Student data are processed transparently and securely, with anonymized interaction records and clear options to opt out of AI-assisted learning when preferred.

In the long term, the SafeAV approach aims to develop a shared and open AI learning framework that promotes accessibility, multilingual support, and collaboration between partner universities, ensuring sustainable and equitable use of AI technologies in higher education.

Curriculum Implementation and Adaptation

The SafeAV architecture is open and adaptable. Educational institutions may:

  • adopt the complete curriculum as a dedicated SafeAV course
  • integrate selected modules into existing study programmes
  • use materials in non-formal education or industrial training

All materials are licensed under Creative Commons (CC BY-NC), allowing reuse and modification while keeping alignment with European learning standards and ECTS principles. This ensures consistency across partner universities while maintaining flexibility for local adaptation and future extension.

Module structure

Topic Description
Study level Provides the study level for which the module is designed.
ECTS credits Indicates how many ECTS credits can be obtained to complete the module.
Study forms Explains where the module can take place: classroom, online, or hybrid mode.
Module aims States the overall goal(s) or purpose(s) of the module.
Pre-requirements Outlines the necessary background knowledge or completed modules required before taking this course.
Learning outcomes Lists what students are expected to know, understand, and be able to do after completing the module.
Topics Describes the main subjects taught in the module. The content is based on materials developed within the SafeAV project, including the SafeAV Handbook and Hands-on Guide.
Type of assessment Explains how student performance is evaluated, including examinations, reports, projects, or practical demonstrations.
Learning methods Describes the teaching and learning approaches used in the module, ranging from traditional lectures and readings to interactive, experiential, and AI-supported methods.
AI integration Describes how AI methods and tools are integrated into the learning process and technical content of the module, including the use of AI for content creation, adaptive learning, simulation support, or data analysis to enhance both education and verification & validation practices.
Recommended tools and environments Lists the software, hardware, and digital platforms used in the module, including physical, virtual, and remote laboratories. Tools are selected to support hands-on experiments, simulation-based validation, and AI-assisted learning.
Verification and Validation focus Defines the module’s main contribution to the V&V domain, outlining which validation methods, assurance techniques, or safety assessment principles are addressed and what specific competences students will acquire in this area.
Relevant standards and regulatory frameworks Identifies the key international and regional standards, directives, and ethical or legal frameworks that apply to the topic.


Module: Autonomous Vehicles

Study level Bachelor
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims The aim of the module is to introduce the fundamental concepts, architectures and application domains of autonomous vehicles across ground, aerial and marine systems. The course develops students’ system-level understanding of the autonomy stack from perception and localisation to planning and control, highlighting the role of AI, safety and basic verification considerations in real-world deployment.
Pre-requirements Interest in autonomous systems and basic knowledge of programming, signals and control, and electronics or mechatronics. Prior exposure to robotics concepts and Linux/ROS environments, as well as familiarity with linear algebra and probability, is recommended.
Learning outcomes Knowledge
• Explain the Sense–Plan–Act paradigm and the layered autonomy stack.
• Describe and contrast middleware/architectures.
• Summarize AI/ML roles in perception and decision-making, plus limits and safety implications.
• Identify V&V concepts and domain-specific safety standards.
Skills
• Build a minimal autonomy pipeline in simulation and tune it for a given ODD.
• Integrate modules via publish/subscribe interfaces and evaluate latency, determinism, and fault-tolerance trade-offs.
• Design basic experiments to validate algorithms and interpret results.
Understanding
• Reason about distributed vs. centralized architectures and their impact on scalability and reliability.
• Appraise governance, legal/ethical constraints, and cybersecurity risks for AV deployment.
Topics 1. Introduction to autonomous systems and autonomy definitions
2. Sense–Plan–Act and data flow in autonomous vehicles; centralized vs. distributed designs; safety & redundancy
3. Reference architectures and middleware: ROS/ROS2 (DDS), AUTOSAR Adaptive, JAUS, MOOS-IvP
4. Application domains: ground, aerial, and marine; domain challenges
5. AI/ML for perception and decision-making; hybrid model-based, learning-based stacks
6. Validation and Verification introduction (ODD, coverage, field response); simulation, SIL/HIL; safety standards
7. Governance, legal and ethical frameworks for autonomy
8. Cybersecurity for autonomous systems: electronics/firmware, communication, control, operations
Type of assessment A positive grade requires a positive evaluation of the module topics and the presentation of practical work results with the required documentation.
Learning methods Lecture — Conceptual foundations (architectures, middleware, SPA, safety/V&V, governance) with case studies from ground, aerial, and marine domains.
Lab works — Hands-on exercises in simulation (ROS2/Autoware/PX4 or MOOS-IvP) to assemble perception-planning-control pipelines and evaluate behavior.
Individual assignments — Focused mini-projects (e.g., perception module, path planner, DDS QoS study) with short reports on design and results.
Self-learning — Guided readings and video demos on standards and frameworks; independent experimentation to deepen understanding of chosen topics.
AI integration Deep learning for perception (object detection, semantic segmentation, tracking); learning-based prediction; SLAM and sensor fusion with ML components; reinforcement/behavior-tree hybrids for decision-making; data-centric evaluation in simulation.
Recommended tools and environments ROS/ROS2, MOOS-IvP, Autoware, PX4/ArduPilot
Verification and Validation focus
Relevant standards and regulatory frameworks ISO 26262, DO-178C, AUTOSAR, JAUS
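The Sense–Plan–Act data flow listed under topic 2 can be sketched as a minimal closed loop. Everything below (the simulated world, the 2 m safety margin, the function names) is a hypothetical illustration, not an excerpt from any middleware or course material:

```python
# Minimal Sense–Plan–Act loop (illustrative sketch; no real middleware involved).

def sense(world: dict) -> dict:
    """Read the (simulated) environment: distance to the obstacle ahead."""
    return {"obstacle_m": world["obstacle_m"]}

def plan(obs: dict, safe_m: float = 2.0) -> str:
    """Decide an action from the observation: stop when too close."""
    return "stop" if obs["obstacle_m"] < safe_m else "drive"

def act(world: dict, command: str, step_m: float = 0.5) -> None:
    """Apply the command: driving closes the gap to the obstacle."""
    if command == "drive":
        world["obstacle_m"] -= step_m

world = {"obstacle_m": 4.0}
log = []
for _ in range(6):                # run a few Sense–Plan–Act cycles
    command = plan(sense(world))
    act(world, command)
    log.append(command)

print(log)  # the vehicle drives until inside the safety margin, then stops
```

In a real stack each stage would be a separate node communicating over publish/subscribe topics, which is exactly the decomposition the laboratory work in this module explores.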

Module: Hardware and Sensing Technologies (Part 1)

Study level Bachelor
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims The aim of the module is to provide a practical foundation in sensing hardware, embedded communication and navigation/positioning for autonomous systems. The course develops students’ ability to design, integrate and validate multi-sensor and actuator setups on embedded platforms, taking into account interface compatibility, timing, power and electromagnetic constraints to build reliable autonomy-ready platforms.
Pre-requirements Basic knowledge of electronics and programming, as well as introductory control and linear algebra. Ability to work with Linux-based tools and version control is beneficial, while prior experience with microcontrollers or single-board computers is recommended but not mandatory.
Learning outcomes Knowledge
• Explain operating principles and specs of common sensors and actuators.
• Describe embedded communication protocols and timing/synchronisation concepts.
• Outline the hardware integration lifecycle, calibration methods, environmental/EMC testing, and safety/quality standards.
Skills
• Select appropriate sensors/computing units for a given task and justify trade-offs of accuracy, latency, power and cost.
• Configure and bring up device buses, log and interpret sensor data, and perform basic multi-sensor calibration.
• Build a minimal HIL test to validate a perception/control loop and document results.
Understanding
• Recognize integration risks and propose mitigations.
• Appreciate supply chain constraints and obsolescence planning when choosing components.
• Work safely, ethically and reproducibly, documenting configurations and changes.
Topics 1. Sensors, Computing Units, and Navigation Systems:
— Sensor taxonomy and specs (IMU, GNSS, magnetometer, LiDAR, depth, camera); calibration (extrinsics/IMU alignment).
— Embedded computing: MCUs vs. SoCs (CPU/GPU/accelerators), power/thermal design, memory and I/O.
— Navigation and positioning: GNSS/IMU basics, odometry, sensor fusion concepts.
2. Embedded Protocols and Communication Backbones:
— I²C/SPI/UART fundamentals; CAN/CAN-FD; Ethernet, TSN concepts; DDS/ROS2 communications.
3. Integration Lifecycle and Reliability:
— Requirements → interface design → assembly → HIL/SIL → environmental & EMC testing; timing/synchronisation; redundancy.
4. Supply Chain & Lifecycle Considerations:
— Component availability, quality/traceability, cybersecurity (SBOM/firmware signing), and obsolescence planning.
Type of assessment A positive grade requires a positive evaluation of the module topics and the presentation of practical work results with the required documentation.
Learning methods Lecture — Concept overviews with worked hardware schematics and bus timing examples.
Lab works — Hands-on bring-up of sensors and a microcontroller/SBC, bus sniffing, timestamping and calibration; mini HIL demo.
Individual assignments — Short design/calculation tasks (component selection, interface budgets) with a brief technical note.
Self-learning — Curated readings and datasheets; recommended MOOC videos to reinforce embedded and navigation concepts.
AI integration Assisted code scaffolding and debugging, log summarisation, data analysis/visualisation and literature search support. Students must verify outputs, cite use of AI tools, and avoid uploading proprietary or assessment-sensitive data.
Recommended tools and environments STM32 or similar MCU development boards, Raspberry Pi / NVIDIA Jetson, typical sensors (IMU, GNSS, LiDAR, camera), CAN bus and logic analyzers, ROS2-based logging
Verification and Validation focus
Relevant standards and regulatory frameworks ISO 26262, ISO 11452 / CISPR 25 / ISO 7637, ISO 16750, CAN
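The sensor fusion concepts in topic 1 can be illustrated with a classic complementary filter, which blends a gyro-integrated angle (smooth but drifting) with an accelerometer-derived angle (noisy but drift-free). The rates, weights, and loop below are synthetic values chosen for illustration only:

```python
# Complementary filter sketch: fuse gyro rate and accelerometer angle (synthetic data).

def complementary_filter(angle: float, gyro_rate: float, accel_angle: float,
                         dt: float, alpha: float = 0.98) -> float:
    """Blend the integrated gyro (weight alpha) with the accelerometer angle (1 - alpha)."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Stationary platform at 10 degrees: the gyro reports only a small bias, while the
# accelerometer reads the true angle without drift. The estimate settles near 10.
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.01, dt=0.01, accel_angle=10.0)
print(round(angle, 2))  # settles near 10.0 despite the biased gyro
```

The single weight alpha trades gyro smoothness against accelerometer drift correction; the module's laboratory work replaces this hand-tuned blend with calibrated, statistically grounded fusion.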

Module: Software Systems and Middleware (Part 1)

Study level Bachelor
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims The aim of the module is to introduce software architectures, middleware and lifecycle management for cyber-physical and autonomous systems. The course develops students’ understanding of how multi-layer autonomy stacks support reliable sensing, perception, planning and control under real-time, interoperability and safety constraints.
Pre-requirements Basic programming skills and understanding of operating systems, computer networks and data structures. Familiarity with embedded or control systems and Linux-based development tools is recommended.
Learning outcomes Knowledge
• Explain the architecture and purpose of multi-layered autonomy software stacks.
• Describe middleware technologies and their role in deterministic data exchange.
• Identify lifecycle models and configuration management practices for autonomous software.
Skills
• Design modular autonomy software architectures integrating perception, localisation, planning, and control modules.
• Configure and deploy middleware frameworks to support real-time, distributed communication.
• Apply CI/CD and configuration management principles and orchestration tools.
Understanding
• Evaluate safety, verification, and cybersecurity aspects of autonomy software systems.
• Recognize challenges in maintainability, scalability, and interoperability across heterogeneous systems.
• Appreciate ethical, reliable, and transparent AI integration in autonomous decision-making.
Topics 1. Introduction to Autonomy Software Stacks:
– Functional layers: perception, localisation, planning, control, middleware, cloud.
– Characteristics: real-time behaviour, determinism, scalability, resilience, interoperability.
2. Middleware and Communication Frameworks:
– DDS, ROS2, MQTT, AUTOSAR Adaptive, CAN, Ethernet.
– Quality of Service, message scheduling, fault tolerance.
3. Software Lifecycle and Configuration Management:
– Lifecycle models (Waterfall, V-Model, Agile, DevOps, Spiral).
– Configuration management, version control, CI/CD pipelines, baselines.
4. Development and Maintenance Challenges:
– Real-time performance, safety, AI integration, cybersecurity, and continuous updates.
5. Simulation and Testing:
– SIL/HIL methods, virtual environments and digital twins.
6. Ethics and Human–Machine Collaboration:
– Transparency, accountability, and explainability in autonomy.
Type of assessment A positive grade requires a positive evaluation of the module topics and the presentation of practical work results with the required documentation.
Learning methods Lecture — Cover theoretical and architectural foundations of autonomy software stacks and middleware frameworks.
Lab works — Practical exercises in ROS2, DDS, and containerised deployments; simulation of autonomy software using Gazebo or CARLA.
Individual assignments — System design and configuration management case studies applying CI/CD and risk analysis.
Self-learning — Reading standards, research papers, and exploring MOOC content on middleware and DevOps.
AI integration Used for assisting code documentation, simulation setup, performance analysis, and literature review. Students must verify generated outputs, cite AI tool usage transparently, and ensure compliance with academic integrity policies.
Recommended tools and environments ROS2, Gazebo, CARLA, AirSim
Verification and Validation focus
Relevant standards and regulatory frameworks MQTT, AUTOSAR, CAN, V-Model, DevOps, ISO 26262
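The publish/subscribe pattern shared by DDS, ROS2, and MQTT (topic 2) can be sketched as a toy in-process broker. This is a conceptual illustration only; it has none of the QoS, discovery, or transport mechanisms of real middleware, and the topic names are invented:

```python
# Toy topic-based publish/subscribe broker (in-process sketch; not DDS, ROS2, or MQTT).
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subs = defaultdict(list)   # topic name -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        """Register a callback for all future messages on `topic`."""
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        """Deliver `message` to every current subscriber of `topic`."""
        for callback in self._subs[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("/scan", received.append)         # e.g. a planner consuming LiDAR scans
broker.publish("/scan", {"ranges": [4.0, 3.9]})
broker.publish("/odom", {"x": 1.2})                # no subscriber: the message is dropped
print(received)                                    # only the /scan message was delivered
```

Because publishers and subscribers know only topic names, modules can be developed, replaced, and tested independently; the QoS, scheduling, and fault-tolerance topics of this module address what this toy version deliberately omits.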

Module: Perception, Mapping, and Localization (Part 1)

Study level Bachelor
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims The aim of the module is to introduce perception, mapping and localisation methods for autonomous systems. The course develops students’ ability to combine data from multiple sensors to detect and interpret the environment, build maps, estimate vehicle pose in real time and handle uncertainty using modern AI-based perception and sensor fusion techniques.
Pre-requirements Basic knowledge of linear algebra, probability and signal processing, as well as programming skills. Familiarity with control systems, kinematics, Linux/ROS environments or computer vision libraries is recommended but not mandatory.
Learning outcomes Knowledge
• Describe perception, mapping, and localization processes in autonomous systems.
• Explain principles of sensor fusion and simultaneous localization and mapping (SLAM).
• Understand AI-based perception, including object detection, classification, and scene understanding.
Skills
• Implement basic perception and mapping algorithms using data from multiple sensors.
• Apply AI models to detect and classify environmental objects.
• Evaluate uncertainty and performance in localization and mapping using simulation tools.
Understanding
• Appreciate challenges of perception under varying environmental conditions.
• Recognize the role of data quality, calibration, and synchronization in sensor fusion.
• Adopt responsible practices when designing AI-driven perception modules for safety-critical applications.
Topics 1. Cameras, LiDARs, radars, and IMUs in perception and mapping.
2. Sensor calibration, synchronization, and uncertainty modeling.
3. Principles of multi-sensor fusion (Kalman/Particle filters, deep fusion networks).
4. Object recognition and classification under variable conditions.
5. SLAM, Visual Odometry, and GNSS.
6. Map representation and maintenance for autonomous navigation.
7. CNNs, semantic segmentation, and predictive modeling of dynamic environments.
8. Perception under poor visibility, occlusions, and sensor noise.
9. Integration of perception and localization pipelines in ROS2.
Type of assessment A positive grade requires a positive evaluation of the module topics and a presentation of practical work results with the required documentation.
Learning methods Lecture — Theoretical background on perception, mapping, and AI-based scene understanding.
Lab works — Implementation of sensor fusion and mapping algorithms using ROS2, Python, and simulated data.
Individual assignments — Analysis of perception pipeline performance and report preparation.
Self-learning — Study of academic papers, datasets, and open-source AI perception frameworks.
AI involvement AI tools can assist in code debugging, model training, and visualization of perception results. Students must cite AI-generated assistance transparently and verify the correctness of outcomes.
Recommended tools and environments SLAM, CNN, OpenCV, PyTorch, TensorFlow, KITTI, NuScenes
Verification and Validation focus
Relevant standards and regulatory frameworks ISO 26262, ISO 21448 (SOTIF)
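As a minimal illustration of the Kalman-filter fusion named in the topics above, a scalar measurement update can be written in a few lines. The GNSS/LiDAR variances below are invented example numbers, not from the module materials.

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update.
    x, p: prior state estimate and its variance
    z, r: measurement and its variance
    Returns the fused (posterior) estimate and variance."""
    k = p / (p + r)            # Kalman gain: trust the less noisy source more
    x_post = x + k * (z - x)   # blend prior and measurement
    p_post = (1 - k) * p       # fused variance always shrinks
    return x_post, p_post

# Fuse a coarse GNSS fix (variance 4.0) with a precise LiDAR-based
# position estimate (variance 0.25) for the same 1-D coordinate.
x, p = 10.0, 4.0                                  # prior from GNSS
x, p = kalman_update(x, p, z=12.0, r=0.25)        # LiDAR measurement
print(round(x, 2), round(p, 3))                   # -> 11.88 0.235
```

Note that the fused variance (0.235) is smaller than either source's variance, which is the core argument for multi-sensor fusion.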

Module: Control, Planning, and Decision-Making (Part 1)

Study level Bachelor
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims The aim of the module is to introduce control and planning methods for autonomous systems. The course develops students’ ability to design and analyse feedback control, motion planning and decision-making algorithms that generate safe and reliable vehicle behaviour in dynamic environments, using both classical and modern AI-based approaches.
Pre-requirements Basic knowledge of linear algebra, differential equations and control theory, as well as programming skills. Familiarity with system dynamics, robotics or numerical tools (e.g. MATLAB/Simulink) is recommended but not mandatory.
Learning outcomes Knowledge
• Explain classical control principles and their application to vehicle dynamics.
• Describe AI-based control methods, including reinforcement learning and neural network controllers.
• Understand motion planning and behavioral algorithms.
• Discuss safety verification, validation, and certification issues for autonomous control systems.
Skills
• Design, simulate, and tune classical controllers for trajectory tracking and stabilization.
• Implement basic reinforcement learning or hybrid control strategies in simulation environments.
• Develop motion planning pipelines integrating perception, planning, and control layers.
Understanding
• Recognize trade-offs between transparency, performance, and adaptability in control architectures.
• Evaluate robustness, explainability, and ethical implications in AI-driven control.
• Appreciate interdisciplinary approaches to achieve safe and reliable autonomous operation.
Topics 1. Classical Control Strategies:
– Feedback control fundamentals, PID design and tuning, LQR, Sliding Mode Control.
– Model Predictive Control and real-time optimization.
2. AI-Based Control Strategies:
– Reinforcement learning for control, supervised imitation learning.
– Neural network controllers and hybrid architectures.
3. Integration and Safety:
– Verification, validation, and certification of control systems.
– Robustness, interpretability, and failure handling.
4. Motion Planning and Behavioral Algorithms:
– FSMs, Behavior Trees, and rule-based systems.
– Planning methods: A*, D*, RRT, RRT*, and MPC-based trajectory generation.
– Predictive and optimization-based planning for dynamic environments.
5. Future Trends:
– Explainable AI control, safe RL, and human-like behavioral models.
Type of assessment A positive grade requires a positive evaluation of the module topics and a presentation of practical work results with the required documentation.
Learning methods Lecture — Introduce theoretical and mathematical foundations of classical and AI-based control strategies.
Lab works — Implement and compare controllers (PID, LQR, RL) and motion planners (A*, RRT) using simulation tools such as low-fidelity planning simulators or MATLAB/Simulink.
Individual assignments — Design a control or planning pipeline and evaluate safety/performance trade-offs.
Self-learning — Independent exploration of open-source control frameworks and reading of selected research literature.
AI involvement Students may use AI tools to generate code templates, optimize control parameters, or analyze planning performance. All AI-assisted work must be reviewed, validated, and cited properly in accordance with academic integrity standards.
Recommended tools and environments FSM, Behavior Trees, A*, RRT, MPC
Verification and Validation focus
Relevant standards and regulatory frameworks ISO 26262, ISO 21448 (SOTIF), SAE J3016
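The graph-search planners listed in topic 4 (A* in particular) can be sketched on a small occupancy grid; the grid, unit step cost, and 4-connected neighbourhood are assumptions of this sketch.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected occupancy grid (0 = free, 1 = blocked),
    with a Manhattan-distance heuristic. Returns the path as a list of
    (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, cell, path)
    best_g = {start: 0}
    while open_set:
        f, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # detours around the blocked middle row
```

Production planners replace the grid with lattices or sampled states (RRT*) and the step cost with kinodynamic costs, but the open-set/heuristic structure is the same.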

Module: Human–Machine Communication (Part 1)

Study level Bachelor
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims The aim of the module is to introduce human–machine interaction and communication concepts for autonomous vehicles. The course develops students’ understanding of how autonomous systems perceive, interpret and communicate with humans using AI-driven, multimodal and human-centred interfaces that support safety, trust and usability.
Pre-requirements Basic knowledge of human factors or cognitive science and interest in user-centred design. Familiarity with control systems, programming, and AI-based or embedded systems is recommended but not mandatory.
Learning outcomes Knowledge
• Explain the principles of HMI and multimodal communication in autonomous systems.
• Describe human perceptual and cognitive models relevant to interaction with machines.
• Understand the cultural, ethical, and social dimensions influencing communication design.
• Recognize standards and best practices in safety-critical HMI.
Skills
• Design and prototype human–machine interfaces that enhance trust and situational awareness.
• Evaluate user experience using qualitative and quantitative assessment techniques.
• Integrate AI-based dialogue, gesture, and visual communication components within simulation environments.
Understanding
• Appreciate the need for transparency, inclusivity, and cultural sensitivity in AI communication.
• Critically assess the ethical implications of human–autonomy collaboration.
• Foster responsible design thinking in developing communication frameworks for AVs.
Topics 1. Introduction to Human–Machine Interaction in Autonomous Vehicles:
– Role of HMI in safety, trust, and usability.
– Transition from driver-operated to fully autonomous systems.
2. Human Perception and Cognition:
– Human sensory systems, attention, and response time.
– Comparing human and AI perception models.
3. Communication Modalities:
– Visual, auditory, and haptic feedback; external vehicle communication signals.
– Pedestrian and passenger interaction mechanisms.
4. The Language of Driving:
– Concept and design of shared communication languages between humans and AVs.
– Cultural, geographical, and environmental factors in interaction design.
5. AI in Communication:
– Role of conventional and LLM-based AI in HMI design and dialogue management.
6. Standards and Case Studies:
– AVSC best practices, SAE standards, and real-world HMI deployments.
Type of assessment A positive grade requires a positive evaluation of the module topics and a presentation of practical work results with the required documentation.
Learning methods Lecture — Cover theories of human perception, cognitive ergonomics, and communication in autonomous systems.
Lab works — Practical development of HMI prototypes (e.g., dashboard simulation, pedestrian signaling models) using Python or Unity.
Individual assignments — Analytical essays or UX evaluations of existing HMI systems.
Self-learning — Review of international standards and exploration of open-source HMI datasets and design tools.
AI involvement AI tools may be used to design conversational interfaces, simulate interaction scenarios, and analyze user feedback. Students must disclose AI assistance transparently and validate all outputs to maintain research and ethical standards.
Recommended tools and environments Unity, MATLAB, ROS2
Verification and Validation focus
Relevant standards and regulatory frameworks AVSC, SAE ITC

Module: Hardware and Sensing Technologies (Part 2)

Study level Master
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims The aim of the module is to introduce hardware governance, electromagnetic compatibility and sensor validation for cyber-physical and autonomous systems. The course develops students’ ability to apply regulatory frameworks, compliance testing and calibration practices to ensure reliable, safe and market-ready platforms across the product lifecycle.
Pre-requirements Basic knowledge of electrical and electronic engineering, signal processing and embedded or control systems. Familiarity with standards and regulatory frameworks and experience with laboratory instrumentation or simulation tools are recommended but not mandatory.
Learning outcomes Knowledge
• Explain electromagnetic compatibility principles, emission/immunity mechanisms, and EMI mitigation strategies.
• Describe national and international governance structures and their impact on hardware design.
• Understand calibration principles, maintenance cycles, and supply chain dependencies for safety-critical systems.
• Discuss the interrelationship between regulatory compliance, testing methodologies, and product lifecycle management.
Skills
• Conduct pre-compliance EMI tests and analyze sensor and system-level performance under regulated conditions.
• Design hardware layouts and shielding solutions to meet EMC standards.
• Assess supply chain resilience and identify obsolescence or counterfeit risks in semiconductor sourcing.
• Apply calibration procedures and reliability testing across product lifecycle stages.
Understanding
• Recognize the role of regulation and governance in enabling innovation while ensuring public safety.
• Appreciate the complexities of maintaining compliance across global markets.
• Adopt responsible and ethical approaches to supply chain management, data transparency, and sustainability.
Topics 1. Governance Frameworks and Spectrum Management:
– Regulatory evolution: FCC, ITU, and global coordination of electromagnetic spectrum.
– FCC Part 15 vs. Part 18 distinctions and implications for vehicle and sensor manufacturers.
2. Electromagnetic Compatibility Principles:
– EMI mechanisms, emissions/immunity testing, and standards.
– Anechoic chambers, Faraday cages, and instrumentation for compliance validation.
3. Sensor Validation and Calibration:
– Calibration procedures for radar, LiDAR, GNSS, and IMU sensors.
– In-field calibration strategies and digital twin applications.
4. Supply Chain and Lifecycle Governance:
– Semiconductor economics, obsolescence management, and COTS adoption strategies.
– Cybersecurity and software supply chain verification.
5. Maintenance and Long Lifecycle Design:
– OTA updates, redundancy, and sustainability in hardware maintenance.
– Case studies: Automotive, aerospace, and defense sectors.
6. Global Trends and Future Challenges:
– Cross-border regulatory harmonization and AI-driven predictive compliance.
Type of assessment A positive grade requires a positive evaluation of the module topics and a presentation of practical work results with the required documentation.
Learning methods Lecture — Provide theoretical understanding of EMC principles, regulatory frameworks, and governance mechanisms.
Lab works — Conduct EMC pre-compliance tests, sensor calibration exercises, and supply chain simulations using hardware/software tools.
Individual assignments — Develop compliance strategies, perform risk analysis on component lifecycles, and write policy-technical briefs.
Self-learning — Explore international standards, participate in industry webinars, and review current FCC/ITU publications.
AI involvement AI tools may assist in simulating EMI propagation, predicting obsolescence trends, and optimizing calibration schedules. Students must validate all AI-assisted outputs and document methodology in accordance with research integrity and transparency guidelines.
Recommended tools and environments MATLAB/Simulink
Verification and Validation focus
Relevant standards and regulatory frameworks FCC, ITU, CISPR 25, UNECE R10, ISO 11452
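The calibration procedures in topic 3 can be illustrated with the simplest possible case: an ordinary least-squares fit of a gain/offset model against reference measurements. The raw counts and ground-truth distances below are invented for the example.

```python
def fit_linear_calibration(raw, reference):
    """Ordinary least-squares fit of a linear calibration model:
    reference ≈ gain * raw + offset."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in raw)
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, reference))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# A hypothetical radar range channel read against a surveyed target:
# raw sensor counts vs ground-truth distances in metres.
raw = [10.0, 20.0, 30.0, 40.0]
truth = [1.05, 2.02, 3.01, 4.00]
gain, offset = fit_linear_calibration(raw, truth)
corrected = [gain * r + offset for r in raw]
print(round(gain, 4), round(offset, 4))   # -> 0.0984 0.06
```

Real calibration adds nonlinearity, temperature dependence, and cross-axis terms, but acceptance criteria are still phrased as residuals against a traceable reference, as here.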

Module: Software Systems and Middleware (Part 2)

Study level Master
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims The aim of the module is to introduce software verification, validation and testing methods for autonomous, cyber-physical and AI-based systems. The course develops students’ ability to plan, implement and assess V&V strategies across physics-based and data-driven software, in line with relevant safety and governance standards.
Pre-requirements Basic knowledge of software engineering, control or embedded systems and programming skills. Familiarity with system design, testing methodologies, AI/ML concepts or safety-related standards is recommended but not mandatory.
Learning outcomes Knowledge
• Explain the principles of V&V in both physics-based and decision-based execution systems.
• Describe software testing frameworks, including component, integration, and system-level approaches.
• Understand regulatory standards and their role in defining safety and assurance levels.
• Analyze challenges in AI component validation, including training set verification, robustness testing, and anti-specification frameworks.
Skills
• Develop and execute structured test plans and coverage analyses for complex, data-driven systems.
• Use simulation tools to generate and evaluate test scenarios for AI-based and safety-critical applications.
• Apply V&V techniques to assess software reliability and traceability across development lifecycles.
• Critically evaluate AI model performance using robustness, fairness, and explainability metrics.
Understanding
• Appreciate the philosophical and practical differences between deterministic and non-deterministic testing paradigms.
• Recognize the ethical and governance implications of AI deployment in safety-critical systems.
• Demonstrate interdisciplinary reasoning across engineering, regulatory, and societal domains when designing and testing autonomous software systems.
Topics 1. Verification and Validation Fundamentals:
– Overview of physics-based execution (PBE) vs decision-based execution (DBE) paradigms, fault analysis, and safety argument structures.
– Introduction to structured testing: unit, integration, and system-level testing.
2. Safety-Critical Standards and Governance:
– ISO 26262 (Automotive), AS9100 (Aerospace), and CMMI frameworks.
– Automotive Safety Integrity Levels and Design Assurance Levels.
3. Software Testing and Coverage:
– Code coverage, pseudo-random test generation, and scenario-based validation.
– Role of simulation, fault injection, and test automation.
4. AI Component Validation:
– Differences between AI and conventional software validation: coverage, code review, and data governance.
– Training set validation, robustness to noise, and explainable AI.
5. Specification and Anti-Specification Challenges:
– IEEE 2846 and AI driver concepts; ethical, legal, and liability considerations.
– Human-equivalent testing and performance evaluation frameworks.
6. Emerging V&V Trends:
– Continuous integration, simulation-in-the-loop, and AI-assisted verification.
– Case studies: Automotive ADAS, aviation autonomy, and robotics.
Type of assessment A positive grade requires a positive evaluation of the module topics and a presentation of practical work results with the required documentation.
Learning methods Lecture — Present theoretical underpinnings of software and AI testing, covering safety-critical standards and AI V&V challenges.
Lab works — Practical exercises in automated testing, simulation-driven validation, and robustness evaluation using Python/ROS/MATLAB.
Individual assignments — Develop and analyze test strategies, evaluate compliance with ISO/IEEE frameworks, and submit technical reports.
Self-learning — Review international standards, research literature, and case studies of AI validation in autonomous domains.
AI involvement AI tools can assist in generating test cases, simulating complex operational scenarios, and analyzing coverage gaps. Students must validate AI-generated results, maintain traceability, and document AI involvement transparently in compliance with academic ethics.
Recommended tools and environments ROS, MATLAB
Verification and Validation focus
Relevant standards and regulatory frameworks ISO 26262, AS9100, CMMI, IEEE 2846
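Pseudo-random test generation (topic 3) hinges on seeding, so any failure can be replayed from the seed alone. This sketch exercises a trivial `clamp` function against a range oracle; the function, parameter ranges, and campaign size are invented for the example.

```python
import random

def clamp(v, lo, hi):
    """Unit under test: saturate v into [lo, hi]."""
    return max(lo, min(hi, v))

def random_test_campaign(seed, n=1000):
    """Pseudo-random test generation with a fixed seed, so every
    failure is reproducible from the seed alone."""
    rng = random.Random(seed)
    failures = []
    for _ in range(n):
        lo = rng.uniform(-100, 100)
        hi = lo + rng.uniform(0, 100)      # guarantees lo <= hi
        v = rng.uniform(-200, 200)
        out = clamp(v, lo, hi)
        # Oracle: output stays in range, and passes v through when in range.
        if not (lo <= out <= hi) or (lo <= v <= hi and out != v):
            failures.append((seed, lo, hi, v, out))
    return failures

print(len(random_test_campaign(seed=42)))   # -> 0 (correct clamp, no failures)
```

The oracle here is a property, not an expected-value table; that is what lets thousands of generated cases be checked without hand-written expectations.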

Module: Perception, Mapping, and Localization (Part 2)

Study level Master
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims The aim of the module is to introduce instability and uncertainty aspects in perception, mapping and localisation for autonomous systems. The course develops students’ ability to model sensor noise and uncertainty, design robust perception and fusion algorithms, and assess system behaviour in challenging and safety-critical conditions in line with relevant standards.
Pre-requirements Basic knowledge of probability and statistics, linear algebra and perception or sensor fusion concepts, as well as programming skills in Python or C++. Familiarity with robotics, computer vision, control theory, machine learning or ROS-based tools is recommended but not mandatory.
Learning outcomes Knowledge
• Distinguish between aleatoric and epistemic uncertainty and describe their impact on perception and mapping.
• Explain sources of instability such as sensor noise, occlusions, quantization, and adversarial attacks.
• Understand safety frameworks relevant to uncertainty handling.
• Describe the role of sensor fusion and redundancy in mitigating uncertainty and maintaining localization accuracy.
Skills
• Model sensor noise and environmental uncertainty using statistical and probabilistic approaches.
• Apply sensor fusion algorithms for robust localization.
• Evaluate system robustness against occlusions, reflection errors, and adversarial perturbations.
• Design and conduct experiments to quantify uncertainty and validate robustness using simulation and real-world datasets.
Understanding
• Appreciate the trade-offs between computational complexity and robustness in multi-sensor systems.
• Recognize the ethical and safety implications of unstable or unreliable perception in autonomous systems.
• Demonstrate awareness of international safety standards and adopt responsible practices for system validation.
Topics 1. Sources of Instability and Uncertainty:
– Aleatoric vs epistemic uncertainty, stochastic processes, and measurement noise.
– Quantization effects, sensor noise modeling, and environmental randomness.
2. Sensor Noise and Fusion:
– Multi-sensor integration: LiDAR, radar, GNSS, IMU, camera.
– Noise filtering and smoothing techniques.
3. Occlusions and Partial Observability:
– Handling occlusions, weather effects, and incomplete sensor coverage.
– Tracking and prediction in uncertain environments.
4. Adversarial Robustness:
– Adversarial attacks on perception networks and their detection.
– SOTIF (ISO 21448) and safety verification of intended functionality.
5. Validation and Safety Assessment:
– Simulation-based validation and uncertainty quantification.
– Evaluation metrics for perception and localization under uncertainty.
6. Real-world Case Studies:
– Sensor degradation in autonomous vehicles, calibration drift, and redundancy design.
Type of assessment A positive grade requires a positive evaluation of the module topics and a presentation of practical work results with the required documentation.
Learning methods Lecture — Explore theoretical principles of uncertainty, noise modeling, and perception instability.
Lab works — Implement sensor fusion, uncertainty quantification, and robustness evaluation in ROS2 or MATLAB.
Individual assignments — Research and report on adversarial attacks, occlusion handling, and noise modeling strategies.
Self-learning — Study international standards and open-source datasets.
AI involvement AI tools may assist in simulating uncertainty propagation, detecting adversarial patterns, and performing sensitivity analysis. Students must critically validate results, document methodology, and ensure reproducibility and compliance with academic integrity standards.
Recommended tools and environments ROS2, MATLAB, KITTI, NuScenes, Waymo
Verification and Validation focus
Relevant standards and regulatory frameworks ISO 26262, ISO 21448 (SOTIF)
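Uncertainty quantification via Monte Carlo sampling (topic 5) can be sketched for a range-bearing sensor: sample noisy readings, convert to Cartesian, and measure the spread. The noise levels are invented; the point is that bearing noise dominates lateral error at long range.

```python
import math, random

def propagate_uncertainty(r, theta, sigma_r, sigma_theta, n=20000, seed=1):
    """Monte Carlo propagation of sensor noise: sample noisy
    range/bearing readings and measure the spread of the resulting
    Cartesian position estimate (aleatoric uncertainty)."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        rn = r + rng.gauss(0, sigma_r)
        tn = theta + rng.gauss(0, sigma_theta)
        xs.append(rn * math.cos(tn))
        ys.append(rn * math.sin(tn))
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return (mx, my), (sx, sy)

# A 10 m range reading at 0 rad with 0.1 m range noise and 0.05 rad
# bearing noise: lateral spread grows as r * sigma_theta (about 0.5 m here).
(mx, my), (sx, sy) = propagate_uncertainty(10.0, 0.0, 0.1, 0.05)
print(round(sx, 2), round(sy, 2))
```

The same sampling scheme scales to full perception pipelines, where closed-form error propagation is intractable.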

Module: Control, Planning, and Decision-Making (Part 2)

Study level Master
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims The aim of the module is to introduce validation and verification methods for control, planning and decision-making in autonomous systems. The course develops students’ ability to design, execute and interpret simulation-based and formal testing workflows that assess safety, robustness and standards compliance of autonomy controllers.
Pre-requirements Basic knowledge of control theory, optimisation and planning algorithms, as well as programming skills (e.g. in MATLAB). Familiarity with model-based design tools, AI decision-making frameworks or simulation and real-time control environments is recommended but not mandatory.
Learning outcomes Knowledge
• Explain simulation-based and formal validation approaches for control and planning systems.
• Describe the use of model-checking, reachability analysis, and verification frameworks in autonomous systems.
• Understand standards relevant to control and decision-making validation.
• Discuss trade-offs between simulation fidelity, computational efficiency, and real-time constraints.
Skills
• Develop and validate control and planning algorithms in simulation environments.
• Apply formal verification tools to analyze safety and correctness properties.
• Design hybrid validation workflows combining Monte Carlo simulation and symbolic reasoning.
• Evaluate algorithm robustness and decision safety under stochastic and adversarial conditions.
Understanding
• Appreciate the role of rigorous validation in certifying autonomous behaviors and AI-based decision-making.
• Recognize limitations of current simulation and formal verification tools in high-dimensional, data-driven systems.
• Adopt ethical, transparent, and standards-compliant practices in the assurance of autonomy.
Topics 1. Validation of Control and Planning Systems:
– System-level validation frameworks and verification-driven design.
– Simulation fidelity, corner-case testing, and scenario coverage.
2. Simulation Environments and Tools:
– SIL/HIL setups, Monte Carlo analysis, and statistical validation.
– Multi-domain co-simulation for cyber-physical systems.
3. Formal Verification and Model Checking:
– Safety property specification and temporal logic.
– Reachability analysis, invariant verification, and constraint solving.
4. Hybrid and Nonlinear Systems:
– Modeling hybrid automata and nonlinear control loops.
– Formal abstraction and conservative over-approximation techniques.
5. Standards and Safety Frameworks:
– ISO 26262, ISO 21448, IEEE 2846, and ASAM OpenSCENARIO for validation.
6. Case Studies:
– Autonomous driving, UAV flight control, and robotic path planning validation.
Type of assessment A positive grade requires a positive evaluation of the module topics and a presentation of practical work results with the required documentation.
Learning methods Lecture — Cover theory and methodologies for simulation-based and formal validation of control and planning systems.
Lab works — Implement and test controllers in virtual and hybrid environments (ROS2, MATLAB, CARLA, Scenic, CommonRoad, UPPAAL).
Individual assignments — Develop validation pipelines, perform reachability analysis, and document results.
Self-learning — Study research papers and international standards on autonomy verification and formal safety assurance.
AI involvement AI tools may be used to automate scenario generation, identify unsafe trajectories, and optimize validation coverage. Students must validate AI-assisted outcomes, ensure reproducibility, and cite AI involvement transparently in deliverables.
Recommended tools and environments MATLAB/Simulink, ROS2, CARLA, UPPAAL, SPIN, CBMC
Verification and Validation focus
Relevant standards and regulatory frameworks ISO 26262, ISO 21448 (SOTIF), IEEE 2846, ASAM OpenSCENARIO
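Reachability analysis with conservative over-approximation (topic 3) can be shown in its simplest form: interval propagation for a scalar linear system. The dynamics and input bound below are invented, and the monotone endpoint mapping assumes a > 0.

```python
def reach_intervals(a, u_bound, x0, steps):
    """Conservative interval reachability for the scalar system
    x[k+1] = a*x[k] + u[k] with |u[k]| <= u_bound: propagate an
    interval that over-approximates every reachable state.
    Assumes a > 0 so the endpoints map monotonically."""
    lo = hi = x0
    hist = [(lo, hi)]
    for _ in range(steps):
        lo, hi = a * lo - u_bound, a * hi + u_bound
        hist.append((lo, hi))
    return hist

# Stable dynamics (a = 0.5): the reachable set stays bounded and
# converges toward the invariant interval [-u/(1-a), u/(1-a)] = [-2, 2].
hist = reach_intervals(a=0.5, u_bound=1.0, x0=0.0, steps=20)
lo, hi = hist[-1]
print(round(lo, 4), round(hi, 4))   # -> -2.0 2.0
```

A safety property such as "the state never exceeds 3" follows for free once the over-approximating interval stays inside [-3, 3]; that is the verification argument in miniature.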

Module: Human–Machine Communication (Part 2)

Study level Master
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims The aim of the module is to introduce safety, validation and societal aspects of human–machine interaction in autonomous systems. The course develops students’ ability to design and evaluate human-centred, explainable and standards-compliant HMI solutions that support usability, trust and safety.
Pre-requirements Basic knowledge of human factors or HMI design principles and interest in system safety. Familiarity with user interface development, AI concepts, ergonomics or safety-related standards is recommended but not mandatory.
Learning outcomes Knowledge
• Explain safety and reliability concerns in HMI design for autonomous and semi-autonomous systems.
• Describe standards and frameworks for HMI validation.
• Understand social, ethical, and psychological dimensions influencing public trust in AI-driven systems.
• Identify factors affecting cross-cultural and demographic acceptance of automation.
Skills
• Design validation procedures for HMI systems using both experimental and simulation-based testing.
• Evaluate user behavior, workload, and situational awareness using quantitative and qualitative methods.
• Apply AI tools to simulate user interaction, predict response variability, and analyze safety-related feedback.
• Conduct usability assessments and generate compliance reports aligned with HMI safety standards.
Understanding
• Appreciate the ethical importance of transparency, inclusivity, and user autonomy in interface design.
• Recognize human limitations and adapt systems to support shared control and human oversight.
• Develop awareness of public communication, risk perception, and media framing in acceptance of autonomy.
Topics 1. Human–Machine Interaction Safety:
– Human error taxonomy and resilience engineering.
– Shared control and human oversight in automated systems.
2. Verification and Validation of HMI:
– Testing frameworks, simulation methods, and standards.
– Usability metrics: workload, trust, explainability, and accessibility.
3. Public Acceptance and Risk Perception:
– Cultural and social factors influencing acceptance of automation.
– Role of transparency, explainability, and user trust.
4. AI-Assisted Interaction Evaluation:
– Emotion and intent recognition, human-in-the-loop testing.
– Adaptive HMIs and predictive user modeling.
5. Standards and Case Studies:
– AVSC Best Practices, ISO/SAE frameworks, and real-world HMI validation studies.
Type of assessment A positive grade requires a positive evaluation of the module topics and a presentation of practical work results with the required documentation.
Learning methods Lecture — Cover theoretical foundations of safety, public trust, and V&V frameworks in HMI.
Lab works — Implement HMI prototypes and perform usability and safety validation using simulation environments.
Individual assignments — Evaluate and document HMI validation plans for different user scenarios and safety levels.
Self-learning — Review literature on human factors, public acceptance, and ethical design in automation.
AI involvement AI tools may assist in user behavior prediction, emotion recognition analysis, and usability simulation. Students must transparently disclose AI usage, validate data integrity, and comply with academic and ethical standards.
Recommended tools and environments Unity, MATLAB, ROS2
Verification and Validation focus
Relevant standards and regulatory frameworks ISO 26262, ISO 21448, SAE J3016

Module: Autonomy Validation Tools

Study level Master
ECTS credits 1 ECTS
Study forms Hybrid or fully online
Module aims The aim of the course is to introduce the principles, methods and tools used for verification and validation of autonomous and other safety-critical cyber-physical systems. The course develops students’ ability to design, implement and critically assess physical, virtual and hybrid validation workflows in line with relevant industrial practices and standards, preparing them to apply these approaches in advanced engineering projects and research.
Pre-requirements Solid background in control engineering, systems modelling and basic artificial intelligence or machine learning. Ability to program and familiarity with Linux-based development environments. Prior coursework in robotics, autonomous systems or cyber-physical systems is strongly recommended.
Learning outcomes Knowledge
• Explain the role and structure of verification and validation in the autonomy lifecycle.
• Describe international standards and their influence on testing processes.
• Understand the architecture of physical, virtual, and hybrid test environments for autonomous systems.
• Identify limitations and emerging research trends in simulation-based validation and safety case generation.
Skills
• Design and execute test plans using real and simulated environments.
• Apply AI-driven methods for scenario generation, coverage analysis, and failure detection.
• Integrate scenario building toolchains into validation workflows.
• Assess compliance and produce documentation aligned with certification processes.
Understanding
• Appreciate the interdependence of testing, regulation, and ethical assurance in autonomous systems.
• Recognize challenges of validating stochastic, learning-based algorithms.
• Demonstrate accountability, transparency, and critical thinking in evaluating safety and validation data.
Topics 1. Overview of Verification and Validation:
– History and evolution from traditional software testing to AI-based autonomy validation.
– Key principles: verification vs validation, safety cases, and traceability.
– International standards and harmonization of global V&V requirements.
2. Physical and Virtual Testing Environments:
– Real-world validation sites and virtual tools.
– HIL/SIL/MIL testing, sensor simulation, and environmental modeling.
3. Scenario-Based Validation:
– Framework for scenario design and coverage.
– Edge case generation, fault injection, and adversarial testing.
4. AI-Enhanced Validation:
– AI for test optimization, uncertainty quantification, and robustness analysis.
5. Certification and Compliance:
– Safety argumentation, data transparency, and audit readiness.
– Ethical and governance challenges in autonomous validation.
Type of assessment A positive grade requires a positive evaluation of the module topics and a presentation of practical work results with the required documentation.
Learning methods Lecture — Present theories, standards, and frameworks governing verification and validation of autonomous systems.
Lab works — Conduct simulation-based testing using CARLA, MATLAB/Simulink, and OpenSCENARIO; perform hardware-in-the-loop experiments.
Individual assignments — Develop validation plans, compare standards, and write safety assurance documentation.
Self-learning — Review case studies from Pegasus and ZalaZONE; analyze real certification reports and research papers.
AI involvement AI tools may assist in generating test scenarios, automating fault detection, and analyzing coverage metrics. Students must document AI involvement transparently and validate all outputs against engineering and ethical standards.
Recommended tools and environments MATLAB/Simulink, Scenic, CARLA, rFpro, IPG CarMaker, ASAM OpenSCENARIO, Pegasus
Verification and Validation focus
Relevant standards and regulatory frameworks ISO 26262, ISO 21448, DO-178C, UL 4600, IEEE P2851
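Scenario-based validation (topic 3) typically combines seeded scenario sampling with a criticality oracle. This sketch uses an invented cut-in parameterization and a simple time-to-collision threshold; real toolchains express the same idea in ASAM OpenSCENARIO and a full simulator.

```python
import random

def sample_scenarios(n, seed):
    """Draw simple cut-in scenario parameters from fixed ranges,
    seeded for reproducibility."""
    rng = random.Random(seed)
    return [{"ego_speed": rng.uniform(10, 30),      # m/s
             "gap": rng.uniform(5, 50),             # m
             "cutin_speed": rng.uniform(5, 25)}     # m/s
            for _ in range(n)]

def is_critical(sc, ttc_threshold=2.0):
    """Flag a scenario as critical when the time-to-collision with a
    slower cut-in vehicle drops below the threshold."""
    closing = sc["ego_speed"] - sc["cutin_speed"]
    if closing <= 0:
        return False            # not closing in: no collision course
    return sc["gap"] / closing < ttc_threshold

scenarios = sample_scenarios(1000, seed=7)
critical = [sc for sc in scenarios if is_critical(sc)]
print(f"{len(critical)} of {len(scenarios)} sampled scenarios are critical")
```

The critical subset is what edge-case generation then mutates and re-simulates; the fixed seed is what makes the resulting safety evidence auditable.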