Abstract
High-performance but unverified controllers, e.g., artificial intelligence-based (a.k.a. AI-based) controllers, are widely employed in cyber-physical systems (CPSs) to accomplish complex control missions. However, guaranteeing the safety and reliability of CPSs with such controllers is currently very challenging, which is of vital importance in many real-life safety-critical applications. To cope with this difficulty, we propose in this work a Safe-visor architecture for sandboxing unverified controllers in CPSs operating in noisy environments (a.k.a. stochastic CPSs). The proposed architecture contains a history-based supervisor, which checks inputs from the unverified controller and makes a compromise between the functionality and safety of the system, and a safety advisor that provides a fallback when the unverified controller endangers the safety of the system. Both the history-based supervisor and the safety advisor are designed based on an approximate probabilistic relation between the original system and its finite abstraction. By employing this architecture, we provide formal probabilistic guarantees on preserving safety specifications expressed by accepting languages of deterministic finite automata (DFA). Meanwhile, the unverified controllers can still be employed in the control loop even though they are not reliable. We demonstrate the effectiveness of our proposed results by applying them to two (physical) case studies. (c) 2021 Elsevier Ltd. All rights reserved.
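The sandboxing loop sketched in the abstract can be illustrated roughly as follows. This is a minimal toy sketch, not the paper's actual construction: the scalar dynamics, the one-step-ahead check, the DFA polarity (acceptance marks a violation), and all names here are illustrative assumptions, whereas the paper designs the supervisor and advisor over a finite abstraction with formal probabilistic guarantees.

```python
from dataclasses import dataclass

# Toy DFA encoding a safety specification; here, reaching an accepting
# ("bad") state is assumed to mark a violation (illustrative polarity).
@dataclass
class DFA:
    delta: dict   # (state, label) -> next state
    bad: set      # accepting states interpreted as safety violations

def label(x):
    # Illustrative labelling of the physical state: |x| > 1 is unsafe.
    return "unsafe" if abs(x) > 1.0 else "safe"

def step(x, u):
    # Toy deterministic dynamics standing in for the stochastic CPS.
    return 0.5 * x + u

def safe_visor_run(x0, horizon, unverified, advisor, dfa, q0):
    """Sandboxing loop: at each step the supervisor accepts the input
    proposed by the unverified controller only if the predicted next
    label keeps the DFA out of a bad state; otherwise it falls back to
    the safety advisor (assumed safe by construction)."""
    x, q = x0, q0
    accepted = 0
    for _ in range(horizon):
        u = unverified(x)
        q_next = dfa.delta[(q, label(step(x, u)))]
        if q_next in dfa.bad:
            u = advisor(x)   # supervisor rejects; use fallback input
        else:
            accepted += 1    # unverified input deemed acceptable
        x = step(x, u)
        q = dfa.delta[(q, label(x))]
    return x, accepted
```

With an aggressive unverified controller the supervisor rejects every input and the advisor keeps the state safe; with a benign one, all inputs pass through, illustrating the functionality/safety trade-off mentioned in the abstract.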
Document type: | Journal article |
---|---|
Faculty: | Mathematics, Informatics and Statistics > Informatics |
Subject areas: | 000 Computer science, information science, general works > 004 Computer science |
ISSN: | 1751-570X |
Language: | English |
Document ID: | 103330 |
Date published on Open Access LMU: | 05 Jun 2023, 15:42 |
Last modified: | 05 Jun 2023, 15:42 |