
EU AI Act & ISO IEC IEEE Standards for AI Webinar FAQs

Posted by: Philippe Coution Date: 31 Jan 2024


At the end of our Upcoming EU AI Act and ISO/IEC/IEEE Standards for AI and their Impact on the Industry webinar in October 2023, we had lots of questions from our audience and thought the answers would be helpful to share on our blog.

Is ISO/IEC TR 5469 now published or is it still in draft?


ISO/IEC TR 5469:2024 Artificial intelligence — Functional safety and AI systems was published in January 2024.

In December 2023, ISO/IEC 42001 was published, which enables organisations to implement management systems for Artificial Intelligence. This is a key element of compliance with the EU AI Act.

How will the success of the AI Act be measured and demonstrated?

Governments, organisations and society will each measure success differently. For governments, success means a thriving AI ecosystem, a functioning market and active innovation, while respecting fundamental rights and values. For organisations, compliance with the new regulation can bring benefits such as a level playing field, market access and lower overall exposure to risk. If you have adapted to the changing environment, your interactions with customers and suppliers will also raise expectations and put pressure on organisations to use AI responsibly.

Assessing societal impact may pose challenges, but it's crucial, especially considering the competitive and rapidly growing landscape of AI globally. The act is positioned to address at least the currently foreseeable high risks of AI.

In terms of timeline, the legislator anticipates an experimental phase following enactment. A planned monitoring period will assess how the act is implemented, allowing for adjustments and adaptations as needed. This post-enforcement experimentation phase is part of the legislator's strategy, and stakeholders should be prepared for it.

There are also more technical provisions for later reviews, such as the qualification of foundation models.


Will the AI act stifle or promote innovation?

Europe, including the UK, is not leading AI globally. The existing legal framework, while not stifling innovation, could miss the chance to promote it sufficiently.

In relation to innovation, the European Commission, Council and Parliament are addressing specific details of this risk in both the original and amended proposals. Concerns have been raised about potential restrictions on the EU's innovation environment, for example the extra burden placed on foundation models like ChatGPT in the amended proposals from May 2023. In the ongoing trilogue negotiations between the three institutions, all stakeholders are striving to strike a balance between innovation and the core values of the EU. As always, the EU leads in data privacy, security regulation and structured approaches. By the end of the trilogue, there is hope for more clarity and a reduction of the legal uncertainty, which could promote innovation.

All in all, this will support the creation of a sustainable ecosystem, which is needed for innovation.

What should companies be doing today to prepare for the EU AI act?

For companies already employing AI, the first step is introducing guidelines and processes to ensure awareness when using limited- or high-risk applications. And you need to be aware that this could occur in virtually any organisational function, such as procurement, HR, or product development.

Secondly, there's a parallel need to raise awareness among those procuring or developing AI, emphasising risk mitigation throughout the entire project lifecycle, from defining requirements to deployment and decommissioning. Establishing appropriate safeguards is key.
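For illustration only, the inventory-and-safeguards idea above can be sketched as a simple internal register of AI use cases. The risk tiers, field names and example entries below are our own assumptions, loosely modelled on the act's risk categories; the regulation prescribes no such data structure:

```python
# A minimal sketch of an internal AI use-case register. Tiers loosely mirror
# the EU AI Act's risk categories; everything here is illustrative only.
from dataclasses import dataclass, field

RISK_TIERS = ("prohibited", "high", "limited", "minimal")  # highest risk first

@dataclass
class AIUseCase:
    name: str
    owning_function: str                    # e.g. procurement, HR, product development
    risk_tier: str
    lifecycle_stage: str = "requirements"   # through deployment and decommissioning
    safeguards: list = field(default_factory=list)

    def __post_init__(self):
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

register = [
    AIUseCase("CV screening", "HR", "high", safeguards=["human review"]),
    AIUseCase("chatbot FAQ", "customer service", "limited"),
]

# Sort so the highest-risk use cases surface first for attention.
by_risk = sorted(register, key=lambda u: RISK_TIERS.index(u.risk_tier))
print(by_risk[0].name)  # -> CV screening
```

Even a register this simple makes the point of the advice above: AI use can surface in any organisational function, so the register, not any one department, becomes the anchor for awareness and safeguards.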

Overall, organisations will probably have to update their quality management approach, because AI risk is usually addressed only very narrowly and not across the entire life cycle.

Furthermore, it is crucial to focus on data governance, ensuring it aligns with risk management requirements to avoid pitfalls later on. Currently, the emphasis is on raising awareness amongst teams about the importance of AI, its risk management, and the embedding in the quality management approach. Taking action now is vital, as the deadline for the act to be enforced will come round quickly, and you don't want to be caught unprepared.

At the very least, you should start researching today, even though you might not be implementing it for quite some time. TÜV SÜD is here to help you. 

AI and autonomous systems are already here, especially in the automotive arena. Will compliance with these regulations be retrospective? And if so, how would this be accomplished?

No law is retroactive. The only exception is if you fall into the prohibited category, as you would then have to remove your system from the market. So if your system or product is already on the market, there's no need to worry unless you make a significant change to it after the law has been passed.

For example, if you have system A version 1.6 and you release version 2.0, which is a significant change to your system, then you will need to start the compliance process again. But if you go from 1.6 to 1.65, your product will probably still be considered compliant.
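As a toy sketch of that versioning example, one could treat a major-version bump as a proxy for a "significant change" that re-triggers conformity assessment. The heuristic is purely illustrative; the EU AI Act does not define "significant change" in terms of version numbers:

```python
# Toy heuristic: a major-version increase stands in for a "significant
# change" requiring a fresh compliance process. Illustrative only -- the
# real assessment is about the nature of the change, not the numbering.

def needs_reassessment(old_version: str, new_version: str) -> bool:
    """Return True if the major version number increased."""
    old_major = int(old_version.split(".")[0])
    new_major = int(new_version.split(".")[0])
    return new_major > old_major

print(needs_reassessment("1.6", "2.0"))   # True  -> restart compliance process
print(needs_reassessment("1.6", "1.65"))  # False -> presumed still compliant
```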

In the automotive sector, navigating regulations is tricky. Automotive currently falls under Section B of Annex II of the EU AI Act, which differs from the New Legislative Framework (NLF). The legislator is still uncertain about how sectoral legislation aligns with the EU AI Act, and clarity on this interplay is awaited.

In short: as of now, it's not retrospective, unless substantial changes are made to the product. 




How will adherence to the EU AI act be controlled? Is there any agency or similar for enforcing the regulation?

There are two aspects to this. At the EU level, a European body is responsible for overseeing the different aspects of the EU AI Act. This office will have national counterparts responsible for enforcing the regulation. The European body will also maintain the registration of high-risk systems, which must be registered before market entry. This applies to the top tier of systems.

For others, there's no active registration; you proceed with market entry as usual, depending on your industry, similar to obtaining a CE marking for certain products. If your AI falls under additional requirements for CE marking, a Notified Body can assist with external auditing if needed or desired.

From 2024 onwards, member states hold a key role in the application and enforcement of this Regulation. In this respect, each member state should designate one or more national competent authorities to supervise the application and implementation, as well as carry out market surveillance activities.

To increase efficiency and to set an official point of contact with the public and other counterparts, each member state should designate one national supervisory authority, which will also represent the country in the European Artificial Intelligence Board.

Additional technical expertise will be provided by an advisory forum, representing a balanced selection of stakeholders, including industry, start-ups, SMEs, civil society and academia.

In addition, the Commission will establish a new European AI Office, within the Commission, which will supervise general-purpose AI models, cooperate with the European Artificial Intelligence Board and be supported by a scientific panel of independent experts. 

With functional safety being embedded in automation systems more and more, and AI now being embedded in relation to machine learning, how will users be sure that AI is not affecting safety within a purchased system's firmware? And will manufacturers have to provide additional certification proving that AI is not influencing functional safety code?

If you examine Annex I Part A of the Machinery Regulation, as a manufacturer of a small component you fall under Article 5. And if you're making this yourself, you're already integrating Article 6. The machinery part of the regulation already addresses this aspect. The broad definition encompasses the various ways a component can impact safety functions, so it can be:

  • inside a safety function
  • used during the development of a safety function
  • not part of a safety function, but with indirect impact on the safety function

Make sure you're well informed about functional safety by undertaking functional safety training, or find out how we can support you with testing, inspecting and certifying your products. You also need to inform the next organisation in the value chain and provide relevant proof, documentation and information if you influence the safety situation of the machine, to whatever degree.

If your system involves machine learning or AI and falls into the high-risk category of the EU AI Act, you must meet specific requirements. These include proving that you have fulfilled your responsibilities, such as documentation and record-keeping, and demonstrating efforts to cover or mitigate safety aspects. The ISO/IEC TR 5469 standard aims to guide developers and AI providers in addressing functional safety.

It introduces three classes in the horizontal dimension, representing the extent to which existing functional safety tools can cover AI use cases. The vertical dimension aligns with AI usage levels. Depending on where your AI use case sits on these two dimensions, you would localise it and then define the set of requirements relevant to that use case. While the standard had not yet been published at the time of the webinar, this approach is envisioned as the initial step in implementing proper functional safety for AI systems.
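As a purely hypothetical sketch of that two-dimensional localisation, one could imagine a lookup from (coverage class, usage level) to a requirement set. The class names, levels and requirements below are invented for illustration and are not taken from the standard; consult ISO/IEC TR 5469 itself for the actual taxonomy:

```python
# Hypothetical localisation table: (coverage class, usage level) -> requirements.
# All names and requirement sets are invented for illustration.
REQUIREMENTS = {
    ("class_1", "low"):  ["apply existing functional safety methods"],
    ("class_2", "low"):  ["existing methods plus AI-specific verification"],
    ("class_2", "high"): ["AI-specific verification", "runtime monitoring"],
    ("class_3", "high"): ["architectural constraints", "diverse redundancy"],
}

def localise_use_case(coverage_class: str, usage_level: str) -> list:
    """Look up the requirement set for an AI use case; flag uncatalogued
    combinations for expert review rather than guessing."""
    return REQUIREMENTS.get((coverage_class, usage_level),
                            ["out of catalogue: expert review required"])

print(localise_use_case("class_2", "high"))
```

The design point is simply that both dimensions must be fixed before any requirement set applies, which is what the standard's localisation step is described as doing.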

Do the US and China have their own versions of the act or similar regulations? And if so, are they aligned with each other and/or with the EU? How is the world working together on AI regulation, if at all?

In addressing AI regulation, Europe adopts a horizontal approach with a central piece of legislation forming the core structure, supported by standards. In China, a vertical approach is taken, where specific regulations for each industry or vertical are developed independently.

The US situation is less defined at the Federal level, with discussions and initiatives in progress, including proposals from the White House, such as the Blueprint for an AI Bill of Rights, and efforts within Congress, such as Chuck Schumer's proposal for AI regulation.

There have also been discussions between major AI companies and the White House, including Vice President Kamala Harris. Most concrete regulation, however, has occurred at the state level. Different states have different rules, for example California with the California Consumer Privacy Act (CCPA). States such as California and New York have initiated AI regulations, including employment-related AI rules. While various state-level initiatives exist, a clear Federal approach is not yet evident.

Addressing the challenge of different approaches from major players, a recent piece in the Financial Times by Ian Bremmer and Mustafa Suleyman suggests a global governance model for AI. This proposal envisions a global board for AI governance, similar to those in the financial industry, to manage potential risks across regions and be responsible for addressing emerging issues in China, the US and elsewhere.

I believe that the proposed global governance model for AI is the best solution, because you cannot rely on a decentralised way of governing AI. AI, whether within distinct clusters or in open-source form, can pose risks that transcend borders, affecting regions like Europe or the US. To effectively address these risks, a global framework is necessary. While cooperation between the US and China on this matter may seem doubtful, there's hope for a collective, global effort to tackle AI risks and find solutions for the benefit of humanity.

Want to learn more about the EU AI Act?

Watch the Upcoming EU AI Act and ISO/IEC/IEEE Standards for AI and their Impact on the Industry webinar recording and take a look at our services for AI.

We've also created an AI quality framework to mitigate AI risks. It is based on current and upcoming standards and regulations, business requirements and best practices.

One of the tools of this quality framework is a Readiness Analysis which provides guidance on risks and maturity. We have applied it in an industrial context and our white paper details how it works and shows the benefits of it.

