Reliable, Robust, and Trustworthy AI

Trusting Artificial Intelligence

We enable the efficient development and reliable operation of trustworthy AI systems.

Human agency

Foster users' trust and acceptance through tailored Human-AI interfaces.

Robust systems

Ensure reliability and resilience by building on sound foundations.

Data privacy

Utilize and benefit from data while keeping it protected and confidential.

Architecture diagram of a recommendation system

Case Study: An explainable LLM-based recommendation system

The capabilities of pre-trained Large Language Models (LLMs) open up new possibilities for the development of recommendation systems. However, data protection requirements, limited data availability, and not least the black-box nature of LLMs pose hurdles to their use.

We show how Explainable AI and feedback mechanisms support self-determined use of the recommendation system. In addition, we present strategies for the data-protection-compliant and cost-effective operation of LLM-based solutions.
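The case study does not publish implementation details. As an illustration only, here is a minimal Python sketch of two such operating strategies under our own assumptions: redacting obvious personal data before a prompt leaves the system, and caching responses so that repeated prompts incur neither cost nor an additional data transfer. The names (`redact`, `cached_completion`) and the regex patterns are hypothetical, not taken from the system described above.

```python
import re

# Patterns for two common kinds of personal data in free-text input.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d /-]{7,}\d")

def redact(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

# Simple response cache: identical prompts are answered locally, which
# reduces both API cost and the amount of data leaving the system.
_cache: dict[str, str] = {}

def cached_completion(prompt: str, call_llm) -> str:
    """Redact the prompt, then answer from cache or via the given LLM call."""
    prompt = redact(prompt)
    if prompt not in _cache:
        _cache[prompt] = call_llm(prompt)
    return _cache[prompt]
```

In a production system, redaction would typically rely on proper named-entity recognition rather than regular expressions; the sketch only shows where such a filter sits in the call path.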

Different target groups have different needs and requirements for the explainability of AI systems

Better User Experience with Explainable AI

A structured approach for the development of explanation components has proven successful in numerous projects in research and practice.

In four consecutive phases, we first capture the target group and application context, then identify suitable XAI methods, test them with a prototype, and finally develop them into an application-ready component.
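What such a prototype from the third phase might look like is sketched below under our own assumptions: a function that turns per-feature contributions (as produced by common XAI attribution methods) into a short, user-facing explanation. The function name `explain` and the example features are hypothetical, not part of the process described above.

```python
def explain(recommendation: str, contributions: dict[str, float], top_k: int = 2) -> str:
    """Render a short textual explanation from feature contributions.

    contributions maps feature names to signed attribution scores;
    the strongest drivers (by absolute value) are surfaced to the user.
    """
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    reasons = ", ".join(name for name, _ in top)
    return f"We recommend '{recommendation}' mainly because of: {reasons}."
```

Which features are shown, how many, and in what wording is exactly what the first phase (target group and application context) determines: lay users, domain experts, and auditors each need a different rendering of the same attributions.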

Interplay of MLOps, Model Governance, and Explainable AI.

MLOps, Model Governance, and Explainable AI Ensure Robust Use of Artificial Intelligence

Modern AI systems have a reputation for being black boxes whose inner workings remain hidden from users and developers alike. On the way to the future use of AI, companies therefore face challenges that did not arise in classical software development.

In an article for heise online, our co-founder Kilian Kluge and INNOQ expert Isabel Bär show how companies can build their AI software on the three pillars of MLOps, Model Governance, and XAI in order to control it effectively and operate it in accordance with legal requirements.
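The article itself does not prescribe a data structure, but one recurring Model Governance practice is attaching audit metadata to every deployed model. A minimal sketch under our own assumptions (the class `ModelRecord` and its fields are hypothetical):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    """Governance metadata carried by every deployed model version."""
    name: str
    version: str
    training_data: str  # provenance of the training data
    approved_by: str    # who signed off on deployment
    deployed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def audit_line(self) -> str:
        """One-line summary for audit logs and compliance reports."""
        return (
            f"{self.name} v{self.version} "
            f"(data: {self.training_data}, approved: {self.approved_by})"
        )
```

Keeping such records alongside the MLOps pipeline is what makes a prediction traceable after the fact: which model version produced it, on which data it was trained, and who approved its use.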