The Safe and Effective Use of Learning-Enabled Components in Safety-Critical Systems

Kunal Agrawal, Sanjoy Baruah, Alan Burns

Autonomous systems increasingly use components that incorporate machine learning and other AI-based techniques to achieve improved performance. The problem of assuring correctness in safety-critical systems that use such components is considered. A model is proposed in which components are characterized by both their worst-case and their typical behaviors; it is argued that while safety must be assured under all circumstances, it is reasonable to demand a high degree of performance only under typical behaviors. The problem of assuring safety while providing such improved performance is formulated as an optimization problem in which performance under typical circumstances is the objective function to be optimized, while safety is a hard constraint that must be satisfied. Algorithmic techniques are applied to derive an optimal solution to this optimization problem. Via simulation experiments on synthetically generated workloads, this optimal solution is compared with an alternative approach that optimizes for performance under worst-case conditions, as well as with some common-sense heuristics.
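To make the formulation concrete, here is a minimal toy sketch (not the authors' algorithm or model) of the kind of optimization the abstract describes: a hard safety constraint that must hold even under worst-case behavior, with typical-case performance as the objective. The component parameters, the linear performance model, and the grid search are all invented for illustration.

```python
from itertools import product

# Hypothetical components: a worst-case demand that must always be
# covered (safety), and a per-unit performance gain that extra budget
# yields under typical behavior. All numbers are made up.
components = [
    {"wcet": 3.0, "gain": 2.0},  # e.g. a learning-enabled component
    {"wcet": 2.0, "gain": 1.0},
]
CAPACITY = 8.0  # total processing capacity available
STEP = 0.5      # granularity of candidate budgets

def is_safe(budgets):
    """Hard constraint: each budget covers its component's worst-case
    demand, and all budgets together fit within capacity."""
    return (all(b >= c["wcet"] for b, c in zip(budgets, components))
            and sum(budgets) <= CAPACITY)

def typical_performance(budgets):
    """Objective: typical-case performance, modeled here as a linear
    gain for any budget allocated beyond the worst-case minimum."""
    return sum(c["gain"] * (b - c["wcet"])
               for b, c in zip(budgets, components))

# Exhaustive search over a coarse grid: maximize typical-case
# performance subject to the safety constraint.
candidates = [i * STEP for i in range(int(CAPACITY / STEP) + 1)]
best = max(
    (b for b in product(candidates, repeat=len(components)) if is_safe(b)),
    key=typical_performance,
)
```

In this toy instance the slack capacity beyond the worst-case demands goes entirely to the component with the higher typical-case gain, which is the essential trade-off: safety fixes the feasible region, and typical-case performance decides where inside it to sit. The paper itself derives an optimal solution algorithmically rather than by brute-force search.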

The paper will be presented in the session "Locks, neural networks and resilience" – Wednesday, July 8, 15:50 – 16:50 (CET)

https://drops.dagstuhl.de/opus/volltexte/2020/12370/pdf/LIPIcs-ECRTS-2020-7.pdf

Please note: all rights to the videos remain with the authors.