Dependable AI


Artificial intelligence delivers high performance in a wide range of applications. However, because its decisions are difficult to interpret, it is still rarely used in critical applications (e.g., in medicine). To change this, one of the department's research groups focuses on methods for explaining and verifying AI systems and for quantifying uncertainty.

Explainability is the ability to comprehend the inner workings of an AI system and its decision-making process, for example its internal logic or a specific decision. Explanations can be provided globally, i.e., for the model as a whole, or locally, for a specific data instance. In some cases, explainability also includes recommendations for action in the form of conditional “if-then” statements. Last but not least, the explanations must be tailored to the target audience: a developer often needs a different explanation than the end user.
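
As a simple illustration of a local explanation, the sketch below attributes a single model prediction to individual input features by occluding them one at a time. It is a minimal example under placeholder assumptions (a scikit-learn model, a public demo dataset, mean-value occlusion), not the tooling developed by the department.

# Minimal sketch of a local, perturbation-based explanation for one prediction.
# Illustrative only; model, data, and attribution scheme are placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def local_attribution(model, X, instance):
    """Score each feature by how much replacing it with its dataset mean
    changes the predicted probability for this single instance (local scope)."""
    baseline = model.predict_proba(instance.reshape(1, -1))[0, 1]
    scores = np.zeros(instance.shape[0])
    for j in range(instance.shape[0]):
        perturbed = instance.copy()
        perturbed[j] = X[:, j].mean()      # occlude feature j with its mean value
        p = model.predict_proba(perturbed.reshape(1, -1))[0, 1]
        scores[j] = baseline - p           # positive: feature pushed the prediction up
    return scores

scores = local_attribution(model, X, X[0])
top = np.argsort(np.abs(scores))[::-1][:3]
print("Most influential features for this instance:", top, scores[top])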

Verification is about assessing the safety-relevant properties and robustness of an existing AI model; for example, it can be checked how robust an AI system is against malfunctions. Uncertainty quantification makes uncertainties in processes or data measurable so that they can be used directly in algorithms. This improves decision-making and makes it possible to plan and predict situations with inherent uncertainty more effectively and reliably.
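
As a simple illustration of uncertainty quantification, the sketch below uses the disagreement within a small ensemble of independently trained models as an uncertainty estimate and flags high-uncertainty cases for human review. The dataset, model type, ensemble size, and threshold are placeholder assumptions, not the methods developed in the projects described here.

# Minimal sketch of ensemble-based uncertainty quantification: the spread of
# predictions across independently trained models serves as an uncertainty
# estimate. Illustrative only; data, models, and threshold are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a small ensemble on bootstrap resamples of the training data.
rng = np.random.default_rng(0)
ensemble = []
for seed in range(5):
    idx = rng.integers(0, len(X_train), size=len(X_train))
    member = GradientBoostingClassifier(random_state=seed).fit(X_train[idx], y_train[idx])
    ensemble.append(member)

# Mean predicted probability = point estimate; standard deviation = uncertainty.
probs = np.stack([m.predict_proba(X_test)[:, 1] for m in ensemble])
mean_prob, uncertainty = probs.mean(axis=0), probs.std(axis=0)

# Cases with high ensemble disagreement can be routed to a human reviewer
# instead of being decided automatically (threshold chosen for illustration).
flagged = np.where(uncertainty > 0.2)[0]
print(f"{len(flagged)} of {len(X_test)} test cases flagged for review")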

More information on dependable AI is available on the AI lead topic web page.

veoPipe

To build acceptance of and trust in products with AI capabilities, explainability and the verification of AI components must be taken into account in the product development process. To this end, a robust development pipeline is being developed in this project.

ML4SAFETY

In the ML4SAFETY project, an integrated framework for verifying “safeML” is being developed that allows innovative machine learning methods to be used in safety-relevant applications. This helps manufacturers and suppliers of autonomous, safety-critical systems, as well as testing and approval bodies, bring demonstrably safe ML-based systems to market.

ELISE

ELISE is the “European network of Artificial Intelligence research centers”, which cooperates closely with ELLIS (“European Laboratory for Learning and Intelligent Systems”). Its goal is to spread knowledge and methods across science, industry, and society.

Transaction Miner audit with Experian

Acting as an independent partner, Fraunhofer IPA audited the AI component of Experian's Transaction Miner product on behalf of the company.

AIQualify simplifies AI system qualification

The software framework from the “AIQualify” project makes it easier to qualify AI systems systematically, especially for SMEs, which often lack in-house AI expertise.