
Reliability of AI systems: “We make a complex task tangible”

A post by Christian Meyer

Even though AI has long been in use in many places without question, reservations about AI are widespread. They are understandable and justified. After all, modern AI systems act like a "black box": why a system decides one way or another is not immediately apparent. In some cases, its decisions could run counter to our values and applicable law. This must be avoided at all costs, which is why stricter regulation of AI applications is on the horizon. The European AI regulation, the Artificial Intelligence Act (AI Act), is one such step.

If the rules of the AI Act prove effective, this will also have a positive impact on trust in AI applications – and protect democratic society. Anyone setting up a new AI application now would be well advised to consider the question of "reliability" from the outset.

What does "reliable" mean in the AI context? 

We speak of a "reliable" AI application if it is built in compliance with data protection, makes unbiased and comprehensible decisions, and can be controlled by humans. This is what the AI Act provides for; it is expected to come into force this year or in 2023 at the latest. The AI Act aims to ensure that AI applications are safeguarded against risks: for example, if an AI application is discriminatory and thus violates privacy rights, the company that developed it or the organization that provides it could face lawsuits. To minimize such risks, systems must be built and operated according to the definition of "reliable AI."

Project managers, stakeholders and steering committees therefore need to address the requirements for reliable AI. After all, they are the ones who ultimately bear responsibility for the project and will be held accountable if something goes wrong. To minimize these risks, it is advisable to audit AI systems and AI projects.

New and creative auditing procedures make it possible to verify that the training data does not contain bias, that the system has learned the desired decision behavior, and that it works properly in the real world – not just under laboratory conditions.
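
To make this concrete, here is a minimal sketch of the kind of automated check such an audit might include: measuring the demographic parity gap in a training set. The column names and the threshold logic are illustrative assumptions, not part of msg's actual procedure.

```python
# Minimal sketch of one bias check an audit might run: demographic parity
# on the training labels. Column names ("gender", "label") are illustrative
# assumptions, not taken from msg's audit procedure.
import pandas as pd

def positive_rate_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Share of positive labels per group in the training data."""
    return df.groupby(group_col)[label_col].mean()

def demographic_parity_gap(df: pd.DataFrame, group_col: str, label_col: str) -> float:
    """Largest difference in positive-label rates between any two groups."""
    rates = positive_rate_by_group(df, group_col, label_col)
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    train = pd.DataFrame({
        "gender": ["f", "f", "m", "m", "m", "f"],
        "label":  [1,   0,   1,   1,   0,   0],
    })
    gap = demographic_parity_gap(train, "gender", "label")
    print(f"Demographic parity gap: {gap:.2f}")  # flag if above an agreed threshold
```

A real audit would of course look at many more metrics and at the trained model's behavior, not just the raw labels, but even a simple check like this can surface skewed training data early.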

Six dimensions for reliable AI  

msg has developed such an audit procedure. It is based on the "Guide to the Design of Trustworthy Artificial Intelligence" by Fraunhofer IAIS, which identifies six dimensions that can be relevant for an AI system. The first step, then, is to clarify which of these six dimensions are relevant to the application at hand. This involves the following: 

1. “Fairness”: Is equal treatment of all users of the AI application ensured? 

2. “Autonomy and control”: Do users have control over the AI system? Can they intervene and revise decisions? 

3. “Explainability”: Does the AI system produce comprehensible and explainable results? 

4. “Robustness”: How stable is the AI application? 

5. “Security”: Is the system protected against unauthorized access? 

6. “Data protection”: Is personal data protected?  

Not every dimension applies equally to every application. In some cases, it is appropriate not to treat all users identically but, for example, to give preference to people with impairments. Other applications do not process personal data at all, so the data protection dimension loses relevance. And applications that merely recommend music tracks do not necessarily have to be explainable.

Once the relevant dimensions have been identified, the next step is to examine what risks exist for each of them. For this purpose, our audit procedure provides detailed questionnaires that check all aspects. The result is a clear profile and an assessment from which recommendations for action can be derived.
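
As an illustration of how such questionnaires can feed into a profile, here is a small sketch that aggregates per-question risk scores into a per-dimension risk profile. The dimensions, question IDs, and scoring scale are assumptions chosen for demonstration; they are not msg's actual tooling.

```python
# Illustrative sketch: rolling questionnaire answers up into a
# per-dimension risk profile. Scoring scale (0 = no risk .. 2 = high risk)
# and question IDs are assumptions for demonstration.
from collections import defaultdict

# Each answer: (dimension, question id, risk score)
answers = [
    ("robustness",      "R-01", 0),
    ("robustness",      "R-02", 2),
    ("fairness",        "F-01", 1),
    ("data protection", "D-01", 0),
]

def risk_profile(answers):
    """Average risk score per dimension, the basis for the assessment."""
    scores_by_dim = defaultdict(list)
    for dimension, _question, score in answers:
        scores_by_dim[dimension].append(score)
    return {dim: sum(s) / len(s) for dim, s in scores_by_dim.items()}

for dimension, score in sorted(risk_profile(answers).items()):
    print(f"{dimension:>16}: {score:.2f}")
```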

Audit procedure for transparent AI applications 

In the robustness dimension, for example, the following kinds of aspects have to be addressed: Is it documented and justified which metrics are used to assess robustness? Have qualitative requirements been formulated for the data? Are the following data quality criteria considered: technical requirements, completeness of the data, veracity of the data, correctness of annotations/labels? Our audit procedure lists over 250 such questions (or aspects) that need to be considered. Only then can you get an accurate picture of the reliability of an AI system, identify its weak points, and of course fix them.
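
Some of these data quality questions lend themselves to automated spot checks. The sketch below, with an assumed dataset and label vocabulary, shows how completeness and label correctness might be checked in practice.

```python
# Sketch of automated checks behind two of the data quality questions:
# completeness of the data and correctness of labels. The dataset and
# the set of valid labels are illustrative assumptions.
import pandas as pd

VALID_LABELS = {"approve", "reject"}  # assumed label vocabulary

def completeness(df: pd.DataFrame) -> pd.Series:
    """Fraction of non-missing values per column."""
    return df.notna().mean()

def invalid_labels(df: pd.DataFrame, label_col: str) -> pd.DataFrame:
    """Rows whose label falls outside the agreed vocabulary."""
    return df[~df[label_col].isin(VALID_LABELS)]

if __name__ == "__main__":
    data = pd.DataFrame({
        "income": [42_000, None, 58_000],
        "label":  ["approve", "reject", "aprove"],  # note the typo
    })
    print(completeness(data))             # income column is only 2/3 complete
    print(invalid_labels(data, "label"))  # surfaces the misspelled label
```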

Our audit procedure brings clarity, transparency and auditability to AI applications. We have operationalized the dimensions of AI reliability, making a highly complex task tangible. The relevance analysis of the dimensions and aspects, together with the resulting risk analysis, identifies the weaknesses and yields a profile and ultimately an assessment. We formulate clear recommendations for action on the identified weaknesses and summarize the results in a final report, which can also serve as a basis for project reviews and audits.

Christian Meyer, msg

About the author

Christian Meyer is Principal Consultant at msg systems ag in Hamburg and leads the development of msg's AI testing process. He has been involved with AI for over 20 years and has built and led several startups with AI solutions.
