This section proposes three significant research vectors, largely inspired by semiconductor design practice, which have the potential to move the field forward. These are: 1) Guardian Accelerant Model, 2) Functional Decomposition, and 3) Pseudo-Physical Scaling Abstraction.
A. GUARDIAN ACCELERANT MODEL
In the field of computer architecture, automated learning methods are common in the form of performance accelerators such as branch or data-value prediction. Prediction algorithms based on machine learning/AI techniques are "trained" by the ongoing execution of software and predict, with very high accuracy, the branch outcomes or data values critical to overall execution performance. However, since a prediction is only a probabilistic guess, a substantial, well-defined "guardian" machinery is built alongside it that detects mispredictions and unwinds the erroneous decisions. The combination yields an enormous acceleration in performance without sacrificing correctness.
Traditionally, the critical elements of autonomous cyber-physical systems are based on the AI training and inference paradigm. As complexity and safety considerations have grown, a non-AI-based safety layer has been growing as well. In fact, one of the more interesting systems consists of independent risk-assessment guardians [29] running in parallel with the core algorithm. Today, these techniques are somewhat ad hoc and often amount to "patching" the current bug in the core AI algorithm. An interesting line of research would be to formalize the AV-and-Guardian framework. In this framework, a formally specified Guardian would set the bounds within which the AV algorithm operates (a minimal sketch follows the list below). This decomposition has some very interesting properties:
1) Training Set Bounding: The core power of AI is to predict reasonable approximations between training points with high probability. However, the training set may be incomplete relative to the current situation. In this decomposition, the AI algorithms can continue to be optimized for the best guess, while the Guardian can be configured with a bounding box of expectations for the AI.
2) Validation and Verification: Given a paradigm for a Guardian and well-established rules for interaction between the Guardian and the AV/AI, a large part of the safety focus moves to the Guardian, a somewhat simpler problem. V&V for the AV then becomes a very hard but non-safety-critical problem of performance validation.
3) Regulation: A very natural role for regulation and standards would be to specify the bounds for the Guardian while leaving performance optimization to industry.
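The decomposition can be made concrete as a runtime check wrapped around the AI planner. The following is a minimal sketch under stated assumptions: the names `Action`, `GuardianEnvelope`, `ai_propose`, and the two bounded quantities are hypothetical and do not come from this work; a real Guardian would carry a far richer, formally specified envelope.

```python
# Minimal sketch of the Guardian/AV decomposition. All names and bounds
# here are illustrative assumptions, not part of the proposed framework.
from dataclasses import dataclass

@dataclass
class Action:
    speed_mps: float        # commanded speed (m/s)
    lateral_accel: float    # commanded lateral acceleration (m/s^2)

@dataclass
class GuardianEnvelope:
    max_speed_mps: float
    max_lateral_accel: float

    def permits(self, action: Action) -> bool:
        """Check a proposed action against the formally specified bounds."""
        return (abs(action.speed_mps) <= self.max_speed_mps and
                abs(action.lateral_accel) <= self.max_lateral_accel)

def guarded_step(ai_propose, envelope: GuardianEnvelope,
                 safe_fallback: Action) -> Action:
    """Run the (probabilistic) AI planner, but let the Guardian veto it.

    Mirrors the branch-predictor analogy: the predictor guesses freely,
    while deterministic machinery detects and unwinds bad guesses.
    """
    proposal = ai_propose()          # best guess from the trained model
    if envelope.permits(proposal):
        return proposal              # inside the bounding box: accept
    return safe_fallback             # outside the envelope: unwind safely
```

In this arrangement the safety argument rests entirely on `permits` and the fallback path, so V&V effort can concentrate there while `ai_propose` is tuned freely for performance, matching the regulatory split suggested in item 3.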
B. FUNCTIONAL DECOMPOSITION
As shown in Figure 5, the cyber-physical problem is stymied by a layer of DBE processing in a world that is fundamentally PBE. This is somewhat akin to the problems caused by numerical approximation described in [30]: the underlying mathematical properties of continuity and monotonicity are broken by rounding, truncation, and digitization. With a deeper understanding of the underlying functions, numerical mathematics has developed techniques to deal with the filter of digitization. Cyber-physical V&V must build a similar model in which the underlying properties of the PBE world are preserved through the DBE layer. This is a rich area of research and can take many forms, including the following (an illustrative sketch for each item follows the list):
1) Invariants: The PBE world implies invariants such as: real-world objects can only move so fast and cannot float or disappear, and important objects (e.g., cars) must be perceivable in any orientation. These invariants can become part of a broader anti-spec and the basis of a validation methodology.
2) PBE World Model: A standard for describing the static and dynamic aspects of a PBE world model would be an interesting direction. If such a standard existed, both the active actors and the infrastructure could contribute to building it. In this universal world model, any actor could communicate safety hazards to all players through a V2X communication paradigm. Note that a universal world model (annotated by source) becomes a very good risk predictor when compared against the world model built by the host cyber-physical system.
3) Intelligent Test Generation: With a focus on the underlying PBE, test generation can concentrate on transformations of the PBE state graph, while the task of bridging PBE/DBE differences is handled by other mechanisms such as the world model described in item 2.
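To illustrate item 1, the sketch below encodes two simple PBE invariants (bounded speed, no disappearing objects) as checks over consecutive world-model snapshots. `TrackedObject`, the speed bound, and the frame period are illustrative assumptions, not values from this work.

```python
# Hedged sketch of PBE invariants as runtime checks over consecutive
# world-model snapshots. The bound and frame period are assumptions.
from dataclasses import dataclass
import math

MAX_SPEED_MPS = 70.0   # assumed bound: real objects can only move so fast
FRAME_DT_S = 0.1       # assumed perception frame period

@dataclass
class TrackedObject:
    obj_id: int
    x: float
    y: float

def violates_invariants(prev: dict[int, TrackedObject],
                        curr: dict[int, TrackedObject]) -> list[str]:
    """Return human-readable violations of simple PBE invariants."""
    violations = []
    for obj_id, obj in curr.items():
        if obj_id in prev:
            p = prev[obj_id]
            dist = math.hypot(obj.x - p.x, obj.y - p.y)
            if dist / FRAME_DT_S > MAX_SPEED_MPS:
                violations.append(f"object {obj_id} moved implausibly fast")
    for obj_id in prev:
        if obj_id not in curr:
            # objects cannot simply disappear; flag for the anti-spec
            violations.append(f"object {obj_id} vanished between frames")
    return violations
```

Each violation marks a place where the DBE layer has broken a PBE property, which is exactly the class of defect a validation methodology built on an anti-spec would target.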
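For item 2, the comparison between a source-annotated universal world model and the host's locally perceived model can be sketched as a set difference. The message layout here is a toy assumption; production V2X messages follow standards such as ETSI CAM/DENM rather than this format.

```python
# Sketch of a source-annotated universal world model compared against the
# host vehicle's locally built model. Layout and fields are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Hazard:
    x: float
    y: float
    kind: str      # e.g. "pedestrian", "debris"
    source: str    # which actor or infrastructure sensor reported it

def unperceived_hazards(universal: set[Hazard],
                        host_local: set[Hazard],
                        radius_m: float = 2.0) -> set[Hazard]:
    """Hazards the universal model knows about but the host does not."""
    def seen_locally(h: Hazard) -> bool:
        return any(abs(h.x - l.x) <= radius_m and abs(h.y - l.y) <= radius_m
                   for l in host_local)
    return {h for h in universal if not seen_locally(h)}
```

Every hazard returned is something another actor or the infrastructure can see but the host cannot, which is precisely the risk-predictor signal described in item 2.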
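For item 3, a minimal sketch of transformation-based test generation: starting from one PBE scenario, apply physically meaningful perturbations to produce a family of tests. `Scenario` and the chosen perturbations are hypothetical.

```python
# Sketch of test generation as transformations over PBE states.
# The scenario fields and perturbation grid are illustrative assumptions.
from dataclasses import dataclass, replace
from itertools import product

@dataclass(frozen=True)
class Scenario:
    ego_speed_mps: float
    lead_gap_m: float
    lead_decel: float   # braking rate of the lead vehicle (m/s^2)

def transformations(base: Scenario):
    """Yield physically meaningful variations of a base PBE state,
    leaving PBE-to-DBE rendering to downstream mechanisms (item 2)."""
    for dv, dgap, ddec in product((-5.0, 0.0, 5.0),
                                  (-10.0, 0.0, 10.0),
                                  (0.0, 2.0)):
        yield replace(base,
                      ego_speed_mps=base.ego_speed_mps + dv,
                      lead_gap_m=max(5.0, base.lead_gap_m + dgap),
                      lead_decel=base.lead_decel + ddec)
```

Because the transformations operate on PBE quantities (speeds, gaps, decelerations) rather than on rendered sensor data, the generated tests remain meaningful regardless of how the DBE layer digitizes them.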