The regulation of artificial intelligence (AI) requires clear norms and standards to ensure safety and performance. The standardization of high-risk systems, in which certification bodies play a central role, is particularly challenging. Other important aspects are the documentation requirements and cybersecurity measures for high-performance AI systems. Given the ambitious timetables running up to 2025, the development and implementation of these standards is becoming increasingly important. A look into the future shows what further challenges and opportunities can be expected in the regulation of AI.
In this episode, I talk to Taras Holoyad from the Federal Network Agency about the regulation of artificial intelligence (AI). Taras explains how norms and standards are developed for AI systems to ensure their safety and performance. He emphasizes the challenges of standardization, especially for high-risk systems, and the role of certification bodies. The discussion also covers documentation requirements and cybersecurity measures for high-performance AI systems. We also explore the urgency of developing and implementing these standards by 2025 and take a look into the future.
“In principle, artificial intelligence is unfortunately not comparable to the intelligence level of a human being, but rather a very elaborately designed algorithmic system.” - Taras Holoyad
After studying electrical engineering at TU Braunschweig, Taras Holoyad first worked on the design calculation of electrical machines for road vehicles before moving into the standardization of artificial intelligence at the Federal Network Agency. In his day-to-day work, Taras develops AI regulation strategies, is project leader for the international standard for AI classification “ISO/IEC 42102” and Vice Chair of the European standardization committee ETSI TC “Methods for Testing and Specification”. One of his teams’ goals is to establish a uniform understanding of which systems fall under the term “artificial intelligence” and to explain the associated quality criteria and testing processes. Together with colleagues from research, industry and the public sector, Taras is also developing a package insert for AI systems, a label for AI products and a glossary on artificial intelligence, so that AI systems can be evaluated in an accessible way within our society.
Artificial intelligence (AI) plays a crucial role in software development. It makes it possible to carry out complex data analyses, recognize patterns and make automated decisions. The integration of AI in software applications not only improves efficiency, but also the user experience.
In the podcast with Taras Holoyad, the importance of test descriptions for AI capabilities is discussed in detail.
Taras Holoyad, an expert in the field of AI regulation, shares valuable information on how companies can test and describe their AI capabilities. This is particularly important in light of new regulatory requirements and standards in the field of artificial intelligence.
The Federal Network Agency plays a crucial role in AI regulation in Germany. Its tasks span several areas that are important for ensuring standards and for market surveillance.

Through this work, the Federal Network Agency ensures that AI applications are safe and reliable. This promotes fundamental trust in technologies that increasingly shape our everyday lives.
The standardization of test criteria for AI systems is a decisive step towards ensuring quality and reliability in software development. The international standard ISO/IEC 42102 plays a central role here. This standard defines two dimensions that are relevant for the test description of AI capabilities: methods and capabilities.
In the first dimension, both classic approaches (such as rule-based systems) and modern approaches (such as machine learning) are considered in the AI test description.
These different methods require specific strategies for carrying out and documenting tests. Choosing the right methodology has a significant impact on the effectiveness of the test.
Optimization methods are particularly important for testing, as they help to find the best parameters for algorithms and ensure that models not only work correctly but also run efficiently.
An example of an optimization method is gradient descent, which is used in many machine learning algorithms. This method minimizes the error of a prediction by gradually adjusting the weights of a model.
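As an illustration, here is a minimal sketch of gradient descent for a one-parameter linear model; the toy data, learning rate, and step count are illustrative assumptions rather than anything discussed in the episode.

```python
# Minimal gradient descent sketch: fit y = w * x to toy data by
# repeatedly nudging the weight against the gradient of the squared error.

def gradient_descent(xs, ys, learning_rate=0.01, steps=1000):
    w = 0.0  # initial weight
    n = len(xs)
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w:
        # d/dw (1/n) * sum((w*x - y)^2) = (2/n) * sum((w*x - y) * x)
        grad = (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= learning_rate * grad  # step against the gradient
    return w

# Toy data generated from y = 3x; the fitted weight should approach 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
print(gradient_descent(xs, ys))  # ~3.0
```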
The application of these methods in a standardized format ensures repeatability and consistency in the test results. In this context, the importance of a structured test description is also emphasized. With clearly defined test criteria, testers can ensure that all aspects of an AI capability are adequately evaluated.
The standardization of test criteria according to ISO/IEC 42102 thus not only promotes comparability between different test scenarios, but also creates a basis for the responsible development of AI systems in accordance with regulatory requirements.
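To make this concrete, a structured test description could be captured in machine-readable form. The following sketch uses hypothetical field names; they are illustrative assumptions and do not reproduce the ISO/IEC 42102 schema.

```python
from dataclasses import dataclass, field

# Hypothetical structure for a machine-readable test description.
# The field names are illustrative, not taken from ISO/IEC 42102.

@dataclass
class AITestDescription:
    capability: str              # e.g. "image classification"
    method: str                  # e.g. "supervised machine learning"
    test_data: str               # reference to the evaluation dataset
    metrics: list[str] = field(default_factory=list)  # e.g. ["accuracy"]
    acceptance_threshold: float = 0.0                 # pass/fail criterion

description = AITestDescription(
    capability="image classification",
    method="supervised machine learning",
    test_data="held-out validation set v1",
    metrics=["accuracy", "confidence score"],
    acceptance_threshold=0.95,
)
```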
The capabilities of AI systems are diverse and can be divided into different categories, ranging from recognizing content in images, using techniques such as image classification or object recognition, to generating new data based on learned patterns.

The international standard ISO/IEC 42102 plays a crucial role in the standardization of test criteria for these capabilities. Standardization is particularly important for developing consistent and traceable test descriptions, which allow companies to ensure that their AI systems meet the required quality standards.
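As a sketch of what a consistent, repeatable test of such a capability might look like, the following example measures a classifier's accuracy on labeled test data; the classify function is a hypothetical stand-in for a real model.

```python
# Minimal capability-test sketch for image classification:
# compare predicted labels with ground-truth labels and report accuracy.

def classify(image):
    # Placeholder: a real system would run the image through a trained model.
    return "cat"

def accuracy(test_set):
    correct = sum(1 for image, label in test_set if classify(image) == label)
    return correct / len(test_set)

test_set = [("img_001", "cat"), ("img_002", "dog"), ("img_003", "cat")]
print(f"accuracy: {accuracy(test_set):.2f}")  # 2 of 3 correct -> 0.67
```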
Quality assurance is a central aspect in the development of AI systems. Specific quality criteria play a decisive role in the evaluation of these systems and are measured with dedicated metrics.
An example of a specific metric is the confidence score, which evaluates the reliability of predictions in image processing systems. A high confidence score indicates that the model has a high probability of making correct decisions.
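A minimal sketch of how such a confidence score can be derived from a classifier's raw outputs, assuming a softmax over class logits (the logit values below are illustrative):

```python
import math

# Sketch: derive a confidence score from raw classifier outputs (logits)
# via softmax; the score is the probability of the most likely class.

def softmax(logits):
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

def confidence_score(logits):
    probs = softmax(logits)
    return max(probs)  # probability assigned to the predicted class

logits = [2.0, 0.5, 0.1]           # raw scores for three classes
score = confidence_score(logits)
print(f"confidence: {score:.2f}")  # ~0.73 for the top class
```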
These criteria form the basis for an effective test description and help to evaluate and optimize the performance of AI systems.
The AI Act provides a significant regulatory framework for organizations that develop or deploy artificial intelligence. The implications are profound, particularly for organizations implementing high-risk systems. These systems are classified as safety-relevant and are subject to strict requirements to ensure safety and transparency.
Certification bodies play a crucial role in the implementation of the AI Act. Among other things, they are responsible for the conformity assessment and certification of high-risk AI systems.
These bodies must themselves be evaluated by independent experts to ensure that they have the necessary expertise. The pressure on these institutions is high: they must not only deliver timely audits, but also adapt quickly to rapidly changing technologies.
The future of AI testing methods will be heavily influenced by advances in regulation and standardization. The importance of standardization cannot be overstated, especially when it comes to developing test descriptions for AI capabilities.
The continuous development of artificial intelligence testing methods will be crucial to ensure that software developments meet new regulatory requirements. In the future, a dynamic environment could emerge in which adjustments to standards can be implemented quickly in order to keep pace with technological advances.