Towards a Framework for Certification of Reliable Autonomous Systems

Date
2021-04-01
Authors
Fisher, Michael
Mascardi, Viviana
Rozier, Kristin Yvonne
Schlingloff, Bernd-Holger
Winikoff, Michael
Yorke-Smith, Neil
Department
Aerospace Engineering; Computer Science; Virtual Reality Applications Center; Electrical and Computer Engineering; Mathematics
Abstract

A computational system is called autonomous if it is able to make its own decisions, or take its own actions, without human supervision or control. The capability and spread of such systems have reached the point where they are beginning to touch much of everyday life. However, regulators grapple with how to deal with autonomous systems; for example, how could we certify an Unmanned Aerial System for autonomous use in civilian airspace? Here we analyse what is needed to provide verified, reliable behaviour of an autonomous system, assess what the state of the art in automated verification can achieve, and propose a roadmap towards developing regulatory guidelines, articulating challenges to researchers, engineers, and regulators. Case studies in seven distinct domains illustrate the article.

Comments

This article is published as Fisher, Michael, Viviana Mascardi, Kristin Yvonne Rozier, Bernd-Holger Schlingloff, Michael Winikoff, and Neil Yorke-Smith. "Towards a Framework for Certification of Reliable Autonomous Systems." Autonomous Agents and Multi-Agent Systems 35, no. 1 (2021): 8. DOI: 10.1007/s10458-020-09487-2. Posted with permission.

Copyright
2020