TEST TALK
Towards a Framework for the Certification of Reliable Autonomous Systems
The topic of this Test Talk is the seminal research paper entitled “Towards a Framework for the Certification of Reliable Autonomous Systems.” Peter Watkins, QA Consultants’ Chief Operating Officer, was joined by Dr. Holger Schlingloff, Chief Scientist and Head of the Systems Quality Center of Fraunhofer FOKUS, to discuss the many challenges of certifying reliable autonomous systems, as well as a proposed three-layer framework of reactions, rules, and principles.
Transcript
Introduction
We want to welcome the viewers to this Test Talk. The topic is the discussion of the seminal research paper entitled Towards a Framework for the Certification of Reliable Autonomous Systems. We’re joined today by one of the primary authors of the paper, Dr. Holger Schlingloff, Chief Scientist and Head of the Systems Quality Center at Fraunhofer FOKUS in Berlin. Welcome, Holger, and I’m delighted you could join us for this Test Talk.
Background of Dr. Schlingloff
Dr. Holger Schlingloff is the Chief Scientist of the Systems Quality Center at the Fraunhofer Institute FOKUS. With over 15 years of experience in testing automotive systems, he has collaborated with major OEMs such as Daimler, Volkswagen, Opel, and Audi, and with Tier 1 suppliers such as Bosch, Siemens, IAV, Carmeq, and Panasonic. He also works with the Daimler Center for Automotive Information Technology Innovations in Berlin. His expertise includes quality assurance of embedded control software, model-based development, model checking, logical verification of requirements, static analysis, and automated software testing.
How Did You Get Into Quality Assurance Research?
Holger: As a student, I started off with philosophy, eager to understand how the world works. I quickly moved into formal philosophy, focusing on logic and reasoning, then transitioned into computer science, applying those logical principles to software verification and quality assurance, which led me to where I am today.
Purpose and Importance of the Research Paper
The paper proposes a framework for certifying reliable autonomous systems, created by an international team from the UK, Italy, the U.S., Germany, New Zealand, and Lebanon. We assembled this team during a seminar at Dagstuhl Castle in Germany, which hosts international research seminars in computer science. In our seminar, we focused on what it takes to verify, test, and certify reliable autonomous systems.
What Is a Reliable Autonomous System?
A reliable autonomous system functions without human intervention and delivers its intended functionality consistently. Examples include reliable phone networks and autonomous vehicles. Verification, testing, quality assurance, and certification are all essential to ensuring that reliability.
Challenges in Certifying Autonomous Systems
The paper outlines four main challenges:
1) There are no existing standards for certifying autonomous systems (with the exception of BS 8611),
2) Current standards are inadequate for these systems,
3) Written standards are difficult to convert into formal specifications for testing (illustrated below), and
4) Certification methods are difficult to apply consistently across domains.
To tackle these issues, our team proposes a three-layer framework of reactions, rules, and principles.
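To illustrate the third challenge, here is a hedged example of our own (it does not appear in the paper): a written requirement such as “whenever an obstacle is detected, the vehicle must eventually come to a stop” might be expressed in linear temporal logic (LTL), a notation widely used in model checking:

```latex
% A hypothetical formalization of a written safety requirement in LTL.
% G means "globally" (at all times), F means "finally" (eventually);
% "obstacle" and "stopped" are assumed atomic propositions of the system model.
\[
  \mathbf{G}\,\bigl(\mathit{obstacle} \rightarrow \mathbf{F}\,\mathit{stopped}\bigr)
\]
```

Producing such formulas from prose standards, and agreeing that they capture the intended meaning, is exactly where the difficulty lies.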
Explaining the Three-Layer Framework
The framework includes three layers (a brief code sketch follows the list):
• Reactions Layer: Immediate responses to sensory input, like a vacuum robot reacting to an obstacle.
• Rules Layer: Following predefined rules, like a vacuum robot ensuring it cleans an entire room.
• Principles Layer: Making decisions beyond predefined rules, such as overriding a rule in an emergency. This layer is akin to human ethical reasoning, where exceptions are made for the greater good.
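Below is a minimal sketch, in Python, of how these layers might be composed for the vacuum-robot example. All names and the priority ordering are our own assumptions; the paper describes the architecture conceptually and does not prescribe an implementation.

```python
# Hypothetical three-layer control loop for the vacuum-robot example.
# Class names, actions, and the override ordering are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Percept:
    obstacle_ahead: bool      # immediate sensor reading
    room_fully_cleaned: bool  # derived from a coverage map
    smoke_detected: bool      # emergency condition

class ReactionsLayer:
    """Immediate, reflex-like responses to sensory input."""
    def act(self, p: Percept) -> Optional[str]:
        return "turn_away" if p.obstacle_ahead else None

class RulesLayer:
    """Predefined rules, e.g. 'clean the entire room'."""
    def act(self, p: Percept) -> str:
        return "continue_cleaning" if not p.room_fully_cleaned else "return_to_dock"

class PrinciplesLayer:
    """High-level principles that may override the rules in exceptional cases."""
    def override(self, p: Percept) -> Optional[str]:
        # An emergency justifies abandoning the cleaning rule.
        return "sound_alarm_and_exit" if p.smoke_detected else None

def decide(p: Percept) -> str:
    """Principles override rules; reactions handle immediate hazards first."""
    emergency = PrinciplesLayer().override(p)
    if emergency is not None:
        return emergency
    reflex = ReactionsLayer().act(p)
    if reflex is not None:
        return reflex
    return RulesLayer().act(p)

# The emergency overrides the rule to finish cleaning the room:
print(decide(Percept(obstacle_ahead=False, room_fully_cleaned=False,
                     smoke_detected=True)))  # -> sound_alarm_and_exit
```

Whether principles should take priority over reactions is itself a design decision; the sketch simply makes one defensible choice explicit.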
Ethical Reasoning in Autonomous Systems
Creating a universal standard for ethical reasoning is challenging, especially when applying it to autonomous systems. Unlike humans, these systems do not develop their behavior independently, so designers have the opportunity to build ethical frameworks directly into their functionality.
Should Autonomous Systems Follow Human Standards?
The paper suggests applying human standards, such as licensing, to autonomous systems: if a system can operate as safely as, or more safely than, a human, it should be certified for public use. We may also need new testing protocols to ensure systems behave adequately in complex scenarios, such as responding to traffic accidents or sudden obstacles; a sketch of such a scenario test follows.
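As a hedged illustration of what a scenario-based protocol might look like (the AutonomousController interface and the scenario names are invented for this sketch; neither the paper nor any existing standard defines them):

```python
# Hypothetical scenario-based certification test sketch.
# The controller and its scenario catalogue are stand-ins, not a real API.
import unittest

class AutonomousController:
    """Stand-in for the system under test."""
    def respond(self, scenario: str) -> str:
        # A real controller would run perception and planning here.
        responses = {
            "sudden_obstacle": "emergency_brake",
            "traffic_accident_ahead": "slow_and_reroute",
        }
        return responses.get(scenario, "proceed")

class ScenarioTests(unittest.TestCase):
    def test_sudden_obstacle_triggers_braking(self):
        self.assertEqual(AutonomousController().respond("sudden_obstacle"),
                         "emergency_brake")

    def test_accident_ahead_triggers_rerouting(self):
        self.assertEqual(AutonomousController().respond("traffic_accident_ahead"),
                         "slow_and_reroute")

if __name__ == "__main__":
    unittest.main()
```

In practice, such scenarios would be drawn from a catalogue agreed with the certifying authority rather than hard-coded by the manufacturer.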
International Challenges in Certification and Verification
Certification standards vary globally. In Europe, independent third parties are often involved in verification, while other countries may only provide guidelines. A harmonized, global approach to autonomous system certification would be ideal but requires significant collaboration.
Can Industry Standards Keep Up with Technological Development?
The rapid development of autonomous systems outpaces current regulatory frameworks. There is a real risk of industry pushing technology that society may not be ready for. Regulators must act swiftly to ensure these technologies are introduced responsibly.
Future Steps for Autonomous Certification
We need a public discussion on the level of autonomy we desire in these systems and the level of confidence we require in them. Engineers should consider incorporating the three-layer framework into system designs to address these concerns. Users should consider how much they trust these systems and explore alternatives or backup solutions.
Conclusion
Thank you, Holger, for joining us for this Test Talk and sharing insights into autonomous system certification.