{{:en:iot-open:czapka_b.png?50| Bachelors (1st level) classification icon }}

<todo @rahulrazdan #rczyba:2025-10-12></todo>
{{:en:safeav:as:slide6.jpg?400|}}

Figure 1

In society, products operate within the confines of a legal governance structure. The legal governance structure is one of the great inventions of civilization, and its primary role is to funnel disputes away from unstructured expression, and perhaps even violence, into the domain of the courts (figure 1). To be effective, legal governance structures must be perceived as fair and predictable. Fairness is pursued through methods such as due-process procedures, transparency and public proceedings, and neutral decision-makers (judges, juries, arbitrators). Predictability is achieved through the concept of precedence: past rulings are given heavy weight in decision making, and it is an extraordinary event to diverge from precedence. Precedence gives the legal system stability, and the combination of fairness and predictability shifts the resolution of disputes to an orderly process, which promotes societal stability.

How does this work mechanically, and how does it connect to product development?

{{:en:safeav:as:slide7.jpg?400|}}

Figure 2

As shown in figure 2, there are three major stages. First, legal frameworks are established by law-making bodies (legislators). In practice, legislators cannot specify every aspect, so they empower administrative entities (regulators) to codify the details of the law. Regulators, in turn, often lack the technical knowledge to codify all aspects and rely on independent industry groups such as SAE International (formerly the Society of Automotive Engineers) or the Institute of Electrical and Electronics Engineers (IEEE). Second, disputes arise in the field and must be adjudicated by the legal system, typically through a trial conducted under the strict processes established for fairness. The trial applies the facts of the case to the legal frameworks and renders a judgement, and the facts can lead to three outcomes. In the first, the facts are covered by the existing legal framework, so no further action on the governance structure is needed. In the second, the facts expose an "edge" condition: the court looks for previous cases that fit (the concept of precedence) and uses them in its judgement; if no such case exists, the court's own judgement establishes precedence and weighs on future decisions. Finally, in rare situations, the facts arise in a field so new that there is little body of law. The courts may still render a judgement, but there is often a call for law-making bodies to establish deeper legal frameworks.

{{:en:safeav:as:slide11.jpg?400|}}

Autonomous vehicles (AVs) are considered to be one of these new situations. Why? In traditional automobiles, the body of law for product liability attaches to the car, while liability for actions taken with the car attaches to the driver; further, product liability is often managed at the federal level and driver licensing more locally. Surprisingly, as the accompanying figure shows, there is a body of law dealing with autonomous vehicles that dates from the distant past. In the days of horses, there were accidents, and a sophisticated liability structure emerged: if a person directed his horse into an accident, the driver was at fault; if a bystander did something to "spook" the horse, the bystander was at fault; and there was a concept of "no fault" when a horse unexpectedly went rogue. A discerning reader will notice that this body of law emerged from a deep understanding of the characteristics of a horse; in legal terms, it created an "expectation." What the "expectations" should be for a modern autonomous vehicle is currently a highly debated point in the industry.

Overall, whatever value products provide to their consumers is weighed against the potential harm caused by the product, which leads to the concept of legal product liability. While laws diverge across geographies, the fundamental tenets share the key elements of expectation and harm. Expectation, judged as "reasonable behavior given a totality of the facts," attaches liability: the clear expectation is that a train cannot stop instantly if you stand in front of it, while this is not the expectation for most autonomous driving situations. Harm is the other key element: AI recommendation systems for movies are not held to the same standards as autonomous vehicles. The governance framework for liability is mechanically developed through legislative actions and associated regulations, and it is tested in the court system under the particular circumstances, or facts, of each case. To provide stability, the body of cases and decisions is viewed as a whole under the concept of precedence, and clarification on legal points is provided by the appellate system, where arguments on the application of the law are decided and precedence is set [1,2].

{{:en:safeav:as:slide8.jpg?400|}}
What is an example of this whole structure in action? Consider the airborne sector shown in the figure above, where the governance framework consists of enacted law (in this case, US law) with associated cases providing legal precedence, regulations, and industry standards. Any product in the airborne sector must be compliant with this framework before its solution can be released to the marketplace.

From a product development perspective, the combination of laws, regulations, and legal precedence forms the overriding governance framework around which the system specification must be constructed [3]. The process of validation ensures that a product design meets the user's needs and requirements, and verification ensures that the product is built correctly according to the design specifications.

{{:en:safeav:as:picture1.png?400|}}

Figure 3. V&V and Governance Framework.

The Master V&V (MaVV) process needs to demonstrate that the product has been reasonably tested given the reasonable expectation of causing harm. It does so using three important concepts (made concrete in the sketch after this list):
  - Operational Design Domain (ODD): the environmental conditions and operational model under which the product is designed to work.
  - Coverage: the completeness over the ODD to which the product has been validated.
  - Field Response: when failures do occur, the procedures used to correct product design shortcomings and prevent future harm.
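To make these concepts concrete, the following Python sketch treats a deliberately simplified ODD as discrete bins over a few environmental parameters and computes coverage as the fraction of bins touched by an executed test set. All names, parameters, and bin choices here are invented for illustration and are not taken from the cited references; a real ODD is continuous and far higher-dimensional.

<code python>
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class ToyODD:
    """A deliberately simplified ODD: discrete bins per environmental parameter."""
    speeds_kph: tuple = (30, 50, 70, 90)        # ego-speed bins
    weather: tuple = ("clear", "rain", "snow")  # weather conditions
    lighting: tuple = ("day", "night")          # lighting conditions

    def bins(self):
        """Every combination of parameter bins the product claims to handle."""
        return set(product(self.speeds_kph, self.weather, self.lighting))

def coverage(odd, executed_tests):
    """Fraction of ODD bins exercised by at least one executed test scenario."""
    touched = {(t["speed_kph"], t["weather"], t["lighting"]) for t in executed_tests}
    return len(touched & odd.bins()) / len(odd.bins())

odd = ToyODD()
tests = [
    {"speed_kph": 30, "weather": "rain", "lighting": "night"},
    {"speed_kph": 50, "weather": "clear", "lighting": "day"},
]
print(f"ODD coverage: {coverage(odd, tests):.1%}")  # 2 of 24 bins exercised
</code>

In this picture, field response corresponds to feeding every field failure back into the test set so that the bin it exposed cannot be missed again.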
As figure 3 shows, the Verification & Validation (V&V) process is the key input into the governance structure that attaches liability, and, per that governance structure, each of these elements must show "reasonable due diligence." An example of an unreasonable ODD would be an autonomous vehicle that gives up control a millisecond before an accident.

{{:en:safeav:as:picture2.png?400|}}

Figure 4. Execution space.

Mechanically, MaVV is implemented with a Minor V&V (MiVV) process, sketched in code after this list, consisting of:
  - Test Generation: from the allowed ODD, test scenarios are generated.
  - Execution: the test is "executed" on the product under development; mathematically, a functional transformation that produces results.
  - Criteria for Correctness: the results of the execution are evaluated for success or failure against a crisp criteria-for-correctness.
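A minimal sketch of one MiVV loop follows, assuming a hypothetical braking scenario: ''generate_scenario'' draws from a toy ODD, ''system_under_test'' stands in for the product (an invented stopping-distance model, not a real vehicle stack), and ''is_correct'' applies a crisp criteria-for-correctness. None of these functions come from the cited references.

<code python>
import random

def generate_scenario(rng):
    """Test generation: draw one scenario from the allowed (toy) ODD."""
    return {
        "speed_kph": rng.choice([30, 50, 70, 90]),
        "obstacle_distance_m": rng.uniform(5.0, 100.0),
        "road_friction": rng.uniform(0.3, 1.0),  # 0.3 ~ icy, 1.0 ~ dry
    }

def system_under_test(scenario):
    """Execution: stand-in for the product; returns achieved stopping distance (m)."""
    v = scenario["speed_kph"] / 3.6  # m/s
    return v * v / (2 * 9.81 * 0.8 * scenario["road_friction"])  # invented plant model

def is_correct(scenario, stopping_distance_m):
    """Criteria for correctness: the vehicle must stop before the obstacle."""
    return stopping_distance_m < scenario["obstacle_distance_m"]

def mivv_run(n_tests=1000, seed=0):
    rng = random.Random(seed)
    failures = []
    for _ in range(n_tests):
        scenario = generate_scenario(rng)     # 1. test generation
        result = system_under_test(scenario)  # 2. execution
        if not is_correct(scenario, result):  # 3. criteria for correctness
            failures.append(scenario)
    return failures

print(f"failing scenarios: {len(mivv_run())} of 1000")
</code>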
In practice, each of these steps can involve considerable complexity and cost. Since the ODD can be a very wide state space, generating stimulus intelligently and efficiently is critical. In the beginning, stimulus generation is typically done manually, but this quickly fails to scale. In virtual execution environments, pseudo-random directed methods are used to accelerate the process. In limited situations, symbolic or formal methods can carry large state spaces mathematically through the whole design execution phase; symbolic methods have the advantage of completeness but face computational explosion because many of the underlying operations are NP-complete.
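As one hedged illustration of pseudo-random directed generation, the generator below biases most draws toward a stressful corner of a toy ODD (high speed, low friction) while still occasionally sampling the full space. The bias value and parameter ranges are arbitrary choices for the example, not recommendations from the cited references.

<code python>
import random

def directed_scenario(rng, stress_bias=0.7):
    """Pseudo-random *directed* generation: mostly sample a stressful region
    of the ODD, but keep some probability mass on the full space."""
    if rng.random() < stress_bias:
        speed_kph = rng.uniform(70, 130)  # high-speed corner
        friction = rng.uniform(0.2, 0.5)  # low-friction corner
    else:
        speed_kph = rng.uniform(10, 130)  # full ODD range
        friction = rng.uniform(0.2, 1.0)
    return {"speed_kph": speed_kph, "road_friction": friction}

rng = random.Random(42)
for s in (directed_scenario(rng) for _ in range(5)):
    print(f"speed={s['speed_kph']:5.1f} kph  friction={s['road_friction']:.2f}")
</code>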
The execution stage can be performed physically, but physical testing is expensive, slow, offers limited controllability and observability, and, in safety-critical situations, is potentially dangerous. In contrast, virtual methods have the advantages of cost, speed, full controllability and observability, and no safety risk, and they allow the V&V task to be performed well before the physical product is constructed; this leads to the classic V chart shown in figure 3. However, because virtual methods are a model of reality, they introduce inaccuracy into the testing domain, while physical methods are accurate by definition. Finally, virtual and physical methods can be intermixed with concepts such as software-in-the-loop (SIL) or hardware-in-the-loop (HIL).
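A common way to intermix the two is to hide the plant behind one interface so that the same test harness drives either a simulation (software-in-the-loop) or a physical rig (hardware-in-the-loop). The sketch below is an assumption-laden illustration: the ''Plant'' interface, the toy dynamics, and the unimplemented hardware backend are all invented here.

<code python>
from abc import ABC, abstractmethod

class Plant(ABC):
    """Single interface so the same tests run in SIL or HIL without changes."""
    @abstractmethod
    def apply_brake(self, level: float) -> None: ...
    @abstractmethod
    def read_speed_kph(self) -> float: ...

class SimulatedPlant(Plant):
    """Software-in-the-loop backend: fast, cheap, fully observable toy model."""
    def __init__(self, speed_kph=100.0):
        self.speed = speed_kph
    def apply_brake(self, level):
        self.speed = max(0.0, self.speed - 25.0 * level)  # invented dynamics
    def read_speed_kph(self):
        return self.speed

class HardwarePlant(Plant):
    """Hardware-in-the-loop backend: would command a real test rig.
    Left unimplemented; the point is that the test code does not change."""
    def apply_brake(self, level):
        raise NotImplementedError("send brake command to the rig")
    def read_speed_kph(self):
        raise NotImplementedError("read the wheel-speed sensor")

def brake_to_stop_test(plant):
    """The same MiVV-style test, reusable against either backend."""
    for _ in range(10):
        plant.apply_brake(0.5)
    return plant.read_speed_kph() == 0.0

print("SIL brake test passed:", brake_to_stop_test(SimulatedPlant()))
</code>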
The observable results of the stimulus are captured to determine correctness, which is typically defined by either a golden model or an anti-model. The golden model, typically virtual, is an independently verified model whose results can be compared with those of the product under test; even then, the divergence between the abstraction level of the golden model and that of the product must be managed. Golden-model methods are often used for computer architectures (e.g., ARM, RISC-V). The anti-model approach instead defines error states that the product must never enter, so correct behavior is the state space outside the error states. In the autonomous vehicle space, for example, an error state might be an accident or the violation of any number of other constraints.
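The two notions of correctness can be expressed as two kinds of checks. In this hypothetical sketch, the golden-model check compares the product's output to an independently implemented reference within a tolerance that absorbs abstraction-level differences, while the anti-model check only asserts that no defined error state (here, clearance below an invented threshold) is ever entered.

<code python>
def golden_model_check(product_output, reference_output, tolerance=1e-3):
    """Golden model: correct means agreeing with an independently verified
    reference, within a tolerance for abstraction-level differences."""
    return abs(product_output - reference_output) <= tolerance

def anti_model_check(trajectory, min_clearance_m=0.5):
    """Anti-model: correct means never entering a defined error state;
    here the (invented) error state is obstacle clearance below 0.5 m."""
    return all(step["clearance_m"] >= min_clearance_m for step in trajectory)

print(golden_model_check(3.1416, 3.14159))                             # True
print(anti_model_check([{"clearance_m": 4.0}, {"clearance_m": 0.4}]))  # False
</code>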
The MaVV consists of building a database of the various explorations of the ODD state space and, from that database, constructing an argument for completeness, typically in the form of a probabilistic analysis. After the product is in the field, field returns are diagnosed, and one must always ask: why did the original process not catch this issue? Once the root cause is found, the test methodology is updated so that similar issues are prevented going forward.
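As a simple, hedged illustration of such a probabilistic argument: if N independently drawn scenarios all pass, an approximate 95% upper confidence bound on the per-scenario failure probability is about 3/N (the statistical "rule of three"). The toy results database below is invented for illustration, and the independence assumption is a strong one that real scenario databases rarely satisfy exactly.

<code python>
import math

def failure_rate_upper_bound(n_passing_tests, confidence=0.95):
    """Approximate upper confidence bound on the per-test failure probability
    after n independent tests with zero failures: -ln(1 - c) / n,
    which is about 3/n at 95% confidence (the 'rule of three')."""
    return -math.log(1.0 - confidence) / n_passing_tests

# Toy MaVV results database: one record per executed scenario.
results_db = [{"scenario_id": i, "passed": True} for i in range(10_000)]

if all(r["passed"] for r in results_db):
    bound = failure_rate_upper_bound(len(results_db))
    print(f"95% upper bound on per-scenario failure probability: {bound:.2e}")
</code>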
Ref:

[1] Razdan, R., "Unsettled Technology Areas in Autonomous Vehicle Test and Validation," Jun. 12, 2019, EPR2019001.
[2] Razdan, R., "Unsettled Topics Concerning Automated Driving Systems and the Transportation Ecosystem," Nov. 5, 2019, EPR2019005.
[3] Ross, K., "Product Liability Law and Its Effect on Product Safety," In Compliance Magazine, 2023. [Online]. Available: https://incompliancemag.com/product-liability-law-and-its-effect-on-product-safety/