Your regular update for technical and industry information
We spoke to Philippe Coution, who works as Business Development Lead for AI Quality at TÜV SÜD and has actively contributed to the company's AI quality framework. In the Digital Services department, he deals with digital transformation and the services associated with it, for example by supporting the development of AI standards and advising customers on how to meet these new requirements and adapt to the new age of AI. In our interview, he addresses questions around the ethical principles of AI: why we need them, the challenges associated with AI, and how to maintain a good balance between AI and regulation.
P. Coution: Let's think the other way around. Why didn't we consider ethics important when earlier human-developed systems took over tasks? Mainly because these systems either lacked autonomy - our cars still have drivers, for example - or because they couldn't produce controversial output; although you could argue that cross-selling recommendations aimed at children on a streaming platform, for example, already cross certain boundaries. Ethical concerns arise from AI's increasing ability to give us recommendations, make decisions and generate content. But autonomy alone cannot be the problem. Even systems that act very autonomously, such as robots in a factory, can be 100 percent predictable and can be used without hesitation in a particular environment. What could such a robot do that has not been planned and programmed in advance?
It is the combination of autonomy and lack of predictability that is problematic. And it is more than this combination that worries us: it is the sheer breadth of deployment that makes it so critical. Ethical concerns first arose when we started to use AI everywhere - invisibly in the background and with harmful or unclear effects on children, for example through recommendation systems and educational apps, or in legally protected spaces through HR applications or facial recognition. When we consider how many people are affected by the results of such AI systems, it becomes clear how important the design and input of these systems are in ethical terms: Is the data used free from unwanted bias? Is it representative of the target groups? Are the applications designed with ethical considerations in mind? Are they used in their intended context? Ethics cannot be applied to an AI system merely as a final check. It must be a fundamental requirement, set before development and enforced throughout the system's lifetime - not an afterthought to mitigate consequences, but a prerequisite for deliberate use.
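To make one of these questions concrete - "Is the data used free from unwanted bias?" - here is a minimal sketch of what a pre-development data check can look like in practice. The toy dataset, column names and the 0.8 threshold (the "four-fifths rule" heuristic) are our illustrative assumptions, not part of TÜV SÜD's framework:

```python
# Illustrative sketch: screening a training dataset for one concrete kind of
# unwanted bias before development starts. Dataset and threshold are assumed.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Share of positive labels per demographic group."""
    return df.groupby(group_col)[label_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group selection rate.
    Values below ~0.8 are a common red flag (the 'four-fifths rule')."""
    return rates.min() / rates.max()

# Hypothetical hiring data: 'hired' is the label, 'gender' a protected attribute.
data = pd.DataFrame({
    "gender": ["f", "f", "f", "f", "m", "m", "m", "m"],
    "hired":  [0,   1,   0,   0,   1,   1,   0,   1],
})

rates = selection_rates(data, "gender", "hired")
ratio = disparate_impact_ratio(rates)
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 -> 0.33, a red flag
```

A check like this is deliberately simple; its point is that the requirement is testable before development begins, which is exactly what treating ethics as a prerequisite rather than a final check means.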
P. Coution: In the same way that companies ensure, for example, legally compliant vehicles on the road or legally compliant medical products on the market: through adapted quality management systems. These are embedded in product development, production and monitoring - even after market launch. But ethics goes beyond mere regulation. Ethical principles should be reflected in corporate values. This is where external ethics experts specializing in AI can provide important support.
The first step, of course, is to raise awareness of these issues at management level and to secure management's willingness to integrate ethical principles at the highest level, beyond the legal requirements. Once ethical principles are integrated into corporate values and strategy, the next step is to empower the technology and quality management teams.
Best practice here is training by experts who not only explain the legal framework but are also familiar with its practical application in AI standards, such as IEEE CertifAIed© on ethics in AI. It is not about abstract values and standards; it is about the practical implementation of projects in a concrete context. Here, the implementation of an AI quality management system, governance and accountability is crucial to ensure ethical, compliant AI systems - not just in the short term, but in the long term, from product definition to post-market monitoring.
P. Coution: AI quality refers to the degree to which an AI system fulfills certain requirements throughout its lifecycle. Ethics is one of these requirements, as are safety, performance, compliance and sustainability. Why is it important to consider the entire lifecycle? Because quality is not a state; quality is a continuous process. Companies should therefore implement AI quality and risk management systems to ensure that all ethical concerns and risks are considered throughout the system's lifecycle. Only AI quality management will ensure trustworthy AI in the long term. And more importantly, it is the only way to check, test and improve the quality of AI systems. In a nutshell: AI quality is essential to ensure that systems comply with regulations and are demonstrably ethical. This allows companies to realize the full potential of AI while controlling the associated risks.
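To illustrate "quality is a process, not a state", here is a minimal sketch of one recurring post-deployment quality check. The requirement names, thresholds and metrics are hypothetical policy values that a quality management system would fix before development and enforce over the lifecycle; this is our illustration, not a prescribed method:

```python
# Illustrative sketch of one monitoring cycle in a continuous AI quality process.
from dataclasses import dataclass

@dataclass
class QualityReport:
    accuracy: float
    positive_rate_gap: float  # max selection-rate gap between demographic groups

# Hypothetical requirements, agreed before development and enforced afterwards.
ACCURACY_FLOOR = 0.90
FAIRNESS_GAP_CEILING = 0.10

def evaluate(report: QualityReport) -> list[str]:
    """Compare a monitoring snapshot against the agreed requirements."""
    findings = []
    if report.accuracy < ACCURACY_FLOOR:
        findings.append(f"accuracy {report.accuracy:.2f} below floor {ACCURACY_FLOOR}")
    if report.positive_rate_gap > FAIRNESS_GAP_CEILING:
        findings.append(f"fairness gap {report.positive_rate_gap:.2f} above ceiling {FAIRNESS_GAP_CEILING}")
    return findings

# One cycle of post-market monitoring: an empty list means no action this cycle.
snapshot = QualityReport(accuracy=0.87, positive_rate_gap=0.12)
for finding in evaluate(snapshot):
    print("escalate to quality management:", finding)
```

Run on a schedule against live monitoring data, a check like this is what turns quality from a one-off certification into the continuous process described above.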
P. Coution: In the tension between innovation and regulation, there are of course many ways to read the draft AI Act. But we should keep in mind that innovation is not a goal. A better product or a better society are goals; innovation is only one means of achieving them. "Regulation" is also not quite the right conceptual frame. Rather, it is about risk management, because that is the backbone of any sustainable development. Would we want nuclear power plants without risk management? No. A smartphone game? Probably not. A smartphone game for children aged 0 to 3? Hardly. That is what it is all about: the right level of control for each individual case. We still remember the failures of self-regulation in the past, Cambridge Analytica for example. There should be no doubt that we cannot leave regulation to profit-oriented Silicon Valley companies. They have failed too many times before.
Nevertheless, to answer the question about the balance between regulation and innovation: I fundamentally support the regulatory approach of the AI Act, even if the draft is far from perfect. I have concerns about the regulation of foundation models. At the same time, I think the chapters on promoting innovation are not ambitious enough. If we want to make the EU a strong innovation region for AI, we need to do more. I am thinking of support for start-ups, for example through protected test phases such as regulatory sandboxes, and special support for small and medium-sized enterprises, for example through transformation programs. We also need infrastructure programs and a clear strategy for critical components such as chips and GPUs in order to take a leading role in AI and benefit from the Brussels effect.*
*The Brussels effect refers to the process of unilateral regulatory globalization that results from the European Union de facto exporting its laws beyond its borders through market mechanisms.