Quality Assurance of AI
AI has a lot to offer us. Our imagination is required: Where do we use it? What should it do? How should it work? Regardless of the area of...
Artificial intelligence should support people, whether that means counting screws in a factory or assisting the head physician during a complicated operation. These different areas of application, however, place enormously different demands on AI. Ethical principles also differ around the world. So what does fairness actually mean? And where do discrimination and justice begin? AI should act in accordance with our values, and it must be trained to do so; first, however, those values need to be defined.
“If you take a look at the fairness debate surrounding artificial intelligence, there is a very exciting paper by Kleinberg which found that what we so objectively call fairness is partly contradictory” - Marc Hauer, Tobias Krafft
Marc Hauer is a research associate at the Algorithm Accountability Lab at RPTU Kaiserslautern, specializing in the design of responsible AI systems. He heads the DIN SPEC working group “Fairness of AI in Financial Services” and works as a freelance expert and consultant in the field of algorithms and AI.
Tobias D. Krafft is a PhD student in the field of “Algorithm Accountability” at the TU Kaiserslautern and Managing Director of Trusted AI GmbH, focusing on black-box analysis and AI regulation. He heads the DIN working group “Ethics/Responsible AI” and received the Weizenbaum Prize in 2017 for his research on AI in its social context. He is also active in the Gesellschaft für Informatik on behalf of the Socioinformatics degree program.
Today I talk to Marc Hauer and Tobias Krafft about the concept of fair AI and how to operationalize fairness in this context. We explore how an assurance case framework can help ensure fairness in AI systems and why stakeholder engagement is crucial.
In this episode, I take a critical look at the topic of fair artificial intelligence (AI) together with my guests Marc Hauer and Tobias Krafft. What does it actually mean when we talk about ‘fair AI’? This question becomes ever more relevant as AI permeates more and more areas of our lives. The two experts discuss how fairness in AI is not just a quality attribute but also has profound social and ethical dimensions.
Tobias explains that the challenge in assessing fairness is to find objective measures for quantifying subjective concepts. Common practice is to compare quality measures, such as selection or error rates, across different sensitive groups. However, even these approaches have their limitations and often lead to complex debates about their correct application and interpretation.
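To make this concrete, here is a minimal sketch of how such group comparisons are often computed. The data, the two groups A and B, and the choice of selection rate and true positive rate as measures are invented for illustration and are not taken from the episode.

```python
import numpy as np

# Hypothetical predictions and ground truth for two sensitive groups (A and B).
# In practice these would come from a validation set of the model under test.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def selection_rate(pred):
    """Share of positive decisions (e.g. loans granted)."""
    return pred.mean()

def true_positive_rate(true, pred):
    """Share of actual positives that the model also predicts as positive."""
    positives = true == 1
    return pred[positives].mean() if positives.any() else float("nan")

rates = {}
for g in np.unique(group):
    mask = group == g
    rates[g] = {
        "selection_rate": selection_rate(y_pred[mask]),
        "tpr": true_positive_rate(y_true[mask], y_pred[mask]),
    }

# Demographic parity difference: gap in selection rates between the groups.
dp_diff = abs(rates["A"]["selection_rate"] - rates["B"]["selection_rate"])
# Equal opportunity difference: gap in true positive rates between the groups.
eo_diff = abs(rates["A"]["tpr"] - rates["B"]["tpr"])

print(f"Demographic parity difference: {dp_diff:.2f}")
print(f"Equal opportunity difference:  {eo_diff:.2f}")
```

Even this tiny example hints at the debates mentioned above: the two measures can point in different directions, and which gap matters more is a normative question, not a purely technical one.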
An exciting development in the quest for fair AI is the use of the Assurance Case Framework. Originating from safety engineering, this framework provides a structured method for arguing and validating safety assertions or, in this case, fairness assertions. Main claims are refined step by step into sub-claims, which ultimately have to be backed by evidence. This process not only enables deeper reflection on the requirements but also makes it easier to communicate them transparently to stakeholders.
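As a rough illustration of the idea (not of the specific framework discussed in the episode), an assurance case can be modelled as a tree of claims that are either refined into sub-claims or backed by evidence. The claims, file names, and the simple support check below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    """A node in an assurance case: a claim refined into sub-claims or backed by evidence."""
    statement: str
    sub_claims: List["Claim"] = field(default_factory=list)
    evidence: List[str] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim holds if it has direct evidence or all of its sub-claims hold."""
        if self.evidence:
            return True
        return bool(self.sub_claims) and all(c.is_supported() for c in self.sub_claims)

# Hypothetical top-level fairness claim, refined step by step.
fairness_case = Claim(
    "The credit-scoring model treats applicants of all sensitive groups fairly.",
    sub_claims=[
        Claim(
            "Selection rates differ by no more than 5 percentage points between groups.",
            evidence=["report_demographic_parity_2024.pdf"],
        ),
        Claim(
            "Stakeholders agreed on the chosen fairness measures.",
            evidence=["workshop_minutes_stakeholders.pdf"],
        ),
        Claim(
            "The training data was checked for historical bias."
            # No evidence yet, so this branch is not supported.
        ),
    ],
)

print(fairness_case.is_supported())  # False until every branch is backed by evidence
```

The value of the structure lies less in the code than in the discipline it enforces: every sub-claim must eventually bottom out in evidence that stakeholders can inspect.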
A key element in the design of fair AI systems is the involvement of different stakeholders. The definition of what is considered ‘fair’ can vary widely and often depends on the perspectives and needs of the parties involved. Through workshops and discussions with these groups, relevant fairness measures can be identified and translated into testable requirements.
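One way such a translation might look in practice is to turn an agreed fairness measure and threshold into an automated check that runs against every new model version. The threshold, the data, and the pytest-style test below are purely illustrative assumptions, not requirements from the episode.

```python
import numpy as np

# Hypothetical threshold agreed on in a stakeholder workshop:
# the gap in approval rates between groups must stay below 5 percentage points.
MAX_APPROVAL_GAP = 0.05

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-decision rates between the sensitive groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def test_approval_rate_gap_below_threshold():
    # In a real pipeline these arrays would come from the current model's
    # predictions on a held-out validation set.
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
    group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
    assert demographic_parity_difference(y_pred, group) < MAX_APPROVAL_GAP
```

Expressed this way, a stakeholder-defined notion of fairness becomes a regression test: if a retrained model violates the agreed threshold, the pipeline fails and the discussion can start from concrete numbers.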
While Marc and Tobias admit that the concept of a completely fair AI may remain an ideal, they emphasize the importance of ongoing discussions and improvements in this area. In particular, the role of open source initiatives and community-driven projects could help to establish widely accepted standards for fair AI in the future.