The first thematic mission of the DEEL project was launched in April 2019 at the IRT Saint Exupéry, under the name AI CERTIFICATION. This Working Group addresses the certification of systems embedding machine learning (ML) algorithms. The term “certification” is used in a broad sense, with or without a certification authority, depending on the domain.
The group is composed of twenty-five specialists in the fields of certification, dependability, Artificial Intelligence and, more generally, embedded systems development. Industrial members represent the domains of aeronautics (Airbus, Safran, Thales, Scalian, DGA, Onera, Apsys), railway (SNCF), and automotive (Continental, Renault). Some members of the Working Group are also involved in the AVSI and SOTIF projects, and in EUROCAE WG114.
The goal is twofold: to identify the major challenges, and possibly blocking points, that could prevent the adoption of ML techniques in these domains, and to propose new approaches to address them.
Towards that goal, Working Group activities are organized along three main axes:
- Sharing knowledge on certification and ML
- Identifying the main difficulties raised by the use of ML in safety-critical systems
- Feeding the IRT Saint Exupéry research teams of the DEEL project (AI and embedded systems) with technical challenges
The group holds a two-day meeting each month, promoting exchanges with the core team and working on “certification” topics.
This activity is part of the WP3 of the DEEL project.
February 27 & 28, 2020, Sprint challenges kick-off – Session Summary
During the session, the Working Group started three new activities, in sub-groups, for the first sprint of 2020. Two of them are challenges, combining safety experts and AI experts to build a structured argumentation on different use cases (surrogate model and computer vision). The last activity was devoted to developing the main challenge identified in the White Paper, on “probabilistic assessment”. The Working Group validated the overall organisation of the White Paper, with the objective of producing a consolidated version for the March session.
An acculturation session on runtime assurance was given by J. Guiochet (LAAS-CNRS). A special focus was put on the paper “A High-Probability Safety Guarantee for Shifted Neural Network Surrogates”, by J. Sen-Gupta, S. Gerchinovitz and M. Ducoffe (DEEL Core Team).
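The kind of high-probability guarantee that paper addresses can be illustrated, in a much simplified form, by a Hoeffding-style bound: the mean error of a surrogate on i.i.d. validation inputs can be bounded with high probability. The sketch below is purely illustrative and is not the method of the cited paper; the reference function, the surrogate, and the error range B are made-up placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference function and a crude surrogate of it.
def reference(x):
    return np.sin(3 * x)

def surrogate(x):
    return 3 * x - (3 * x) ** 3 / 6  # truncated Taylor series

# Draw n i.i.d. validation inputs and measure the surrogate error,
# clipped to a known range [0, B] so Hoeffding's inequality applies.
n, B, delta = 10_000, 2.0, 1e-3
x = rng.uniform(-0.5, 0.5, size=n)
errors = np.clip(np.abs(surrogate(x) - reference(x)), 0.0, B)

# With probability at least 1 - delta over the validation sample,
# the true mean error is below this empirical Hoeffding bound.
bound = errors.mean() + B * np.sqrt(np.log(1 / delta) / (2 * n))
print(f"mean error {errors.mean():.4f}, high-probability bound {bound:.4f}")
```

The bound holds for the *mean* error only; per-input (worst-case) guarantees of the kind certification may require need different tools.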
January 09 & 10, 2020 “Verification, Validation & Uncertainty Quantification” – Session Summary
The “AI acculturation” session was focused on “Verification, Validation, and Uncertainty Quantification” (VVUQ). Bertrand Iooss, from EDF R&D, presented his activities in VVUQ for the qualification of numerical simulation (computer code tools). The presentation was followed by a discussion about the application of VVUQ to AI, in the presence of several ANITI Chair members.
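As a toy illustration of the uncertainty-quantification part of VVUQ, input uncertainties can be propagated through a simulation code by Monte Carlo sampling, yielding a distribution on the code's output. The “code” and the input distributions below are hypothetical placeholders, not EDF's tools:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "computer code": a trivial deflection formula.
def code(load, stiffness):
    return load / stiffness

# Uncertain inputs modelled by probability distributions.
n = 100_000
load = rng.normal(1000.0, 50.0, n)       # N
stiffness = rng.normal(2.0e4, 1.0e3, n)  # N/m

# Propagate the input uncertainty through the code (Monte Carlo).
deflection = code(load, stiffness)

# Summarise the output uncertainty: mean and a 95% interval.
lo, hi = np.percentile(deflection, [2.5, 97.5])
print(f"mean {deflection.mean():.4f} m, 95% interval [{lo:.4f}, {hi:.4f}] m")
```

Real VVUQ studies add code verification, validation against experiments, and sensitivity analysis on top of this propagation step.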
During the session, the Working Group worked on drafting the white paper themes and on cross-presentations. A special session was devoted to the choice of activities that the “certification mission” will lead in 2020.
The Working Group and its activities will be presented by Eric Jenn at the ERTS 2020 conference in Toulouse.
October 24 & 25, 2019 White paper main topics – Session Summary
Several “AI acculturation” presentations were given about robustness and formal proof using SDP (Semi-Definite Programming) relaxation, and about the i.i.d. assumption in machine learning. Moreover, the DEEL core team presented first results about mathematical guarantees (by construction or statistical) for a surrogate neural network.
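SDP relaxation itself takes some machinery to set up, but the underlying idea of robustness verification, soundly over-approximating a network's outputs for every input in a perturbation ball, can be sketched with the simpler (and coarser) interval bound propagation technique. The toy network and its weights below are hypothetical:

```python
import numpy as np

# Toy one-hidden-layer ReLU network with hypothetical weights.
W1 = np.array([[1.0, -2.0], [0.5, 1.0]])
b1 = np.array([0.1, -0.3])
W2 = np.array([[1.0, -1.0]])
b2 = np.array([0.2])

def interval_affine(lo, hi, W, b):
    # Sound interval propagation through an affine layer:
    # positive weights map lo to lo, negative weights map hi to lo.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def output_bounds(x, eps):
    # Bounds valid for ANY input within the L-inf ball of radius eps around x.
    lo, hi = x - eps, x + eps
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    return interval_affine(lo, hi, W2, b2)

x = np.array([0.5, -0.2])
lo, hi = output_bounds(x, eps=0.05)
print(f"output guaranteed within [{lo[0]:.3f}, {hi[0]:.3f}]")
```

SDP relaxations produce tighter bounds than intervals by keeping track of correlations between neurons, at a much higher computational cost.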
During the session, the Working Group sought to identify ten topics related to the certification of AI and to high-level properties, and started drafting them for the White Paper. Another session was devoted to the assessment of this mission for the year 2019 and to the expectations and objectives for the coming year.
September 26 & 27, 2019 “Blocking points” for certification of ML – Session Summary
The “Certification acculturation” session was about the current certification process for complex algorithms that depend on configuration parameters, and its comparison with Machine Learning (ML). An overview of the new EUROCAE WG114 on Artificial Intelligence (AI) was also given. The “AI acculturation” session focused on classical methods for the interpretability of ML, and on the DEEL core team's research directions for interpretability and explainability of ML, with an industrial use case demonstration.
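One classical, model-agnostic interpretability method of the kind discussed is permutation feature importance: shuffle one feature at a time and measure how much the model's error degrades. A minimal sketch, where the synthetic data and the least-squares model are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data: the target depends strongly on x0, weakly on x1, not on x2.
n = 2_000
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Simple model: ordinary least squares.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ coef
baseline = np.mean((predict(X) - y) ** 2)

# Permutation importance: shuffle one feature, measure the MSE increase.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(np.mean((predict(Xp) - y) ** 2) - baseline)

print({f"x{j}": round(imp, 3) for j, imp in enumerate(importance)})
```

The ranking it produces (x0 most important, x2 negligible) matches how the synthetic target was built; on correlated features the method needs more care.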
During the session, the Working Group sought to identify and analyse “Blocking Points” of Machine Learning (ML) techniques in relation to the High-Level Property objectives. “Blocking Points” are difficulties encountered in the development process that lead to a deficit of confidence in the capability of the ML system to perform its intended function, and that must be overcome to permit the certification activities.
August 29 & 30, 2019 “High-level properties” for ML certification – Session Summary
The “Certification acculturation” session was focused on “Assurance Cases” and “Quality Models”. The “AI acculturation” session covered several topics. A presentation was given on a typical Machine Learning system design process, with examples on decision trees, random forests, and neural networks. S. Gerchinovitz, researcher at the Institut de Mathématiques de Toulouse (IMT) and DEEL team member, presented Gaussian linear models and showed some confidence intervals that can be used for safety assessment. The concept of explainability was also presented, with a focus on its interpretation for ML.
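The kind of confidence interval mentioned in that presentation can be sketched for an ordinary Gaussian linear model with known noise standard deviation: the least-squares estimator is exactly Gaussian, so a 95% interval for a prediction follows from its covariance. All the numbers below are illustrative, not from the actual presentation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian linear model y = X @ beta + noise, with known noise std sigma.
n, sigma = 500, 0.5
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=sigma, size=n)

# Least-squares estimate; under Gaussian noise it has this exact covariance.
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
cov = sigma**2 * XtX_inv

# 95% confidence interval for the prediction at a new input x_star.
x_star = np.array([1.0, 0.3])
pred = x_star @ beta_hat
half_width = 1.96 * np.sqrt(x_star @ cov @ x_star)
print(f"prediction {pred:.3f} ± {half_width:.3f} (95% CI)")
```

With unknown noise variance, sigma is replaced by its estimate and the 1.96 Gaussian quantile by a Student-t quantile.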
This month, the activity of the Working Group was focused on the definition of the relevant “high-level properties” for ML certification. Discussions took place about the extent to which the learning phase should be considered for certification.
June 25 & 26, 2019 “A quick overview of deep learning theory” – Session Summary
S. Gerchinovitz and F. Malgouyres, researchers at the Institut de Mathématiques de Toulouse (IMT) and DEEL team members, facilitated an “AI acculturation” session entitled “A quick overview of deep learning theory”. Several scientific papers were also presented by AI experts on formal guarantees for Neural Networks and on dataset generation. A tutorial was given on Deep Learning frameworks and their use for training neural networks. The “Certification acculturation” session focused on the use of formal tools and methods in the certification process. The group continued a use case analysis to address different topics of Machine Learning certification, such as specification refinement, dataset generation, and explainability.