Enabling revolutionary progress
A PRACTICAL APPROACH TO SIDESTEP AI RISKS
To ensure that AI benefits humanity, people need to actively shape the systems, processes and environments in which AI is applied. Current regulations lag behind the ethical and technical requirements for creating people-centred AI with minimal risk. While I am optimistic about the broad conversations taking place among multiple stakeholders, organisations need more, and they need it now. TÜV SÜD is committed to an interdisciplinary approach to AI governance and has devised clear processes that help businesses reliably assess the quality and trustworthiness of AI.
Artificial intelligence (AI) has entered our lives in ways both big and small. The technology is behind the smart virtual assistants in our mobile phones, the warehouse management that makes our one-day delivery possible, the predictive diagnoses that are saving lives, and more. According to PwC [1], AI could add $15 trillion to the global economy by 2030, offering unprecedented opportunities for individuals, businesses and governments.
Around the world, organisations are realising the revolutionary potential of AI. In December 2020, the EU announced a new financing instrument of up to €150 million to support early and growth-stage AI companies [2], hoping to spur breakthrough applications and related technologies such as blockchain and the Internet of Things.
Organisations are also joining the AI trend. By 2018, 86% of companies already reported mid-stage or advanced AI deployments, viewing the technology as a major facilitator of future business operations [3]. And there is plenty of evidence to support this position. While the potential of AI will differ across countries and industries, it is expected to have a largely positive impact, including lowering costs, improving labour productivity, and enhancing business intelligence and customer experience [4].
As much as AI generates customer benefits and business value, organisations must be aware of the unique risks it introduces. Leaders have a responsibility to hone their knowledge of the societal and organisational risks of AI – or risk being caught off guard when the technology goes wrong.
Some predictions imagine super-intelligent, runaway AI robots taking over the world. These notions are far-fetched, but they hold a nugget of truth: unintended outcomes such as discrimination and indeterminate decision-making can damage reputations and hurt individuals. Take Microsoft’s AI chatbot Tay, which online users manipulated into spewing racist remarks [5].
In another case, IBM’s Watson dished out unsafe and erroneous cancer treatment recommendations after being trained on a small and unreliable dataset [6].
Other concerns revolve around the ethical use of AI. The technology can easily fall into the wrong hands or be developed with malicious intent. AI in the grip of governments and large private companies raises the spectre of surveillance and censorship. And when things go wrong, who is legally liable for matters such as intellectual property rights or societal impact?
These concerns arise partly from what we call the ‘black box’ of AI. In traditional system development, a set of rules is formalised and given to the system to follow; AI development flips the script. Machine learning algorithms generalise and infer rules from a given dataset, resulting in opaque rules whose details even the developers cannot fully specify.
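To make the contrast concrete, here is a minimal sketch of the same decision implemented both ways, using scikit-learn. The loan-approval scenario, thresholds and synthetic data are purely illustrative, not taken from any real system:

```python
# pip install scikit-learn
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Traditional development: the rule is written down and fully inspectable.
def approve_loan(income: float, debt: float) -> bool:
    return income > 50_000 and debt / income < 0.4

# AI development: rules are inferred from historical (here, synthetic) data.
rng = np.random.default_rng(0)
X = rng.uniform([20_000, 0], [120_000, 60_000], size=(500, 2))  # income, debt
y = np.array([approve_loan(i, d) for i, d in X])                # labels

model = DecisionTreeClassifier(max_depth=4).fit(X, y)

# Even for this tiny model, the learned thresholds only approximate the
# intended rule. For deep networks with millions of parameters, no such
# printable rule set exists at all - hence the 'black box'.
print(export_text(model, feature_names=["income", "debt"]))
```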
Emphasising the complications of AI applications, Dr. Saerbeck (CTO of Digital Service at TÜV SÜD) said, “Current machine learning models encode functionality in hundreds of thousands, if not millions, of parameters. We currently lack a robust framework to understand the role and impact of each of these values. This results in uncertainty: we simply don’t know under what conditions a given model will fail. AI governance is currently the only effective mitigation to manage AI risks. We need to update our processes to reliably measure and quantify quality metrics such as robustness, accuracy and predictability for AI.”
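As a rough illustration of what measuring such metrics can look like in code, consider this minimal sketch. The generic `predict` interface, the Gaussian noise model and the metric definitions are assumptions chosen for illustration, not TÜV SÜD’s actual test procedure:

```python
import numpy as np

def quality_report(predict, X, y, noise_scale=0.05, seed=0):
    """Quantify two of the metrics named above: accuracy and robustness.

    predict     -- any callable mapping an (n, d) input array to n labels
    noise_scale -- standard deviation of the Gaussian input perturbation

    'Robustness' here means the fraction of predictions that stay unchanged
    when inputs are slightly perturbed. Both definitions are illustrative.
    """
    rng = np.random.default_rng(seed)
    clean_pred = predict(X)
    perturbed_pred = predict(X + rng.normal(0.0, noise_scale, size=X.shape))
    return {
        "accuracy": float(np.mean(clean_pred == y)),
        "robustness": float(np.mean(perturbed_pred == clean_pred)),
    }
```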
Organisations already know the risks associated with AI – and that awareness is holding them back from the potential benefits of the technology. Among these risks is a lack of transparency, which is hampering the adoption of AI [7]. As Dr. Saerbeck puts it, “Trustworthiness is essential to be able to apply AI in mission-critical applications and reap the benefits of the full potential of AI.”
This elusive ‘trustworthiness’ essentially rests on three pillars:
“AI governance is essential to manage AI quality in high-risk applications. Mitigation through a single control, such as human vetting of reports as in the example of façade inspection, is insufficient due to limited capacity and associated costs. Especially if an avoidable mistake is being made, companies will have a hard time arguing why an incomplete governance plan was in place.”
– Dr. Martin Saerbeck, CTO of Digital Service, TÜV SÜD
Companies still struggle to adequately manage the quality of AI – for example, by strengthening its governance, reducing its bias and monitoring model performance. In an effort to close this gap, several guidelines are being developed, such as the European Commission’s Ethics Guidelines for Trustworthy AI and the new legal framework on AI the Commission released in April 2021 [8], as well as Asia’s first Model AI Governance Framework [9] and Singapore’s Implementation and Self-Assessment Guide for Organisations (ISAGO) [10].
As Dr. Saerbeck explains, these guidelines provide a good framework but lack practical advice for implementing AI applications with confidence. They define high-level goals such as non-discrimination and fairness, but don’t spell out how to achieve them. How such goals translate into the choice of algorithm, the testing processes that need to be in place, or the metrics that need to be tracked is left for businesses to figure out.
What it is:
TÜV SÜD’s smart façade inspection is a system that uses AI to check building façades for deteriorating materials and underlying problems. TÜV SÜD employs a rigorous AI Quality Management framework to sidestep AI risks and deliver precise, actionable inspection reports.
(Infographic: opportunities and risks of smart façade inspection; 3D reconstruction of building scan)
How it works:
A drone captures images along the façade of the building, and AI assists inspectors in analysing the collected data. The algorithms help to detect deteriorating materials and flag potential problem areas for human review, as sketched below.
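For illustration only, here is a minimal sketch of the kind of tiling-and-classification step such a pipeline might use. The tile size, threshold and classifier interface are hypothetical; TÜV SÜD’s actual algorithms are not public:

```python
import numpy as np

def scan_facade(image, classifier, tile=256, threshold=0.8):
    """Slide a fixed-size window over a drone image of a façade and collect
    tiles the model flags as likely defects (e.g. cracks, spalling).

    image      -- (H, W, 3) array of pixel data
    classifier -- any callable returning a defect probability for one tile

    Flagged tiles are routed to a human inspector - the 'human vetting'
    control mentioned in the pull quote above.
    """
    findings = []
    h, w = image.shape[:2]
    for top in range(0, h - tile + 1, tile):
        for left in range(0, w - tile + 1, tile):
            patch = image[top:top + tile, left:left + tile]
            p = classifier(patch)
            if p >= threshold:
                findings.append({"row": top, "col": left, "p_defect": float(p)})
    return findings
```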
What it means:
This case study underlines the challenge of establishing trust in AI. While the opportunities of smart façade inspection outweigh the risks, lapses in this application – and generally across AI use cases – can expose the company to financial and reputational damage. These risks can and must be managed through robust AI governance.
To fill this gap, TÜV SÜD has developed an AI Quality Management System. At a high level, it guides organisations through key questions encompassing AI data, algorithms and models. Drilling deeper, it closely charts the AI lifecycle to help businesses anticipate the risks and pitfalls at each stage of AI development and implementation and meet the requirements laid out in current governance frameworks.
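As a rough sketch of how such lifecycle guidance can be made operational, here is an illustrative checklist structure. The stages and questions are invented for illustration and are not the actual content of TÜV SÜD’s system:

```python
# Illustrative only: stage names and questions are hypothetical.
LIFECYCLE_CHECKLIST = {
    "data": [
        "Is the training data representative of the deployment population?",
        "Are labelling errors and known biases documented?",
    ],
    "model": [
        "Are accuracy and robustness targets defined and tested?",
        "Is model behaviour monitored for drift after deployment?",
    ],
    "operation": [
        "Is there a human escalation path for low-confidence decisions?",
        "Who is accountable when the system errs?",
    ],
}

def open_questions(answers: dict[str, set[str]]) -> dict[str, list[str]]:
    """Return, per lifecycle stage, the checklist questions not yet answered."""
    return {
        stage: [q for q in questions if q not in answers.get(stage, set())]
        for stage, questions in LIFECYCLE_CHECKLIST.items()
    }
```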
For organisations that are keen on building expert in-house teams that can implement safe, ethical and transparent AI, TÜV SÜD provides training workshops on the AI Quality Management System. Attendees will learn about the five major quality pillars (Safety, Security, Ethics, Legal, Performance), their characteristics, dedicated risk assessments and strategies to address them throughout the AI system’s lifecycle. The goal is to provide actionable steps for AI implementation, tailored to an organisation’s specific context.
To understand the key questions of quality AI, please view our infographic here.
The journey to trustworthy AI won’t be easy, and it won’t be a one-off effort. As Dr. Saerbeck explains, “Trust in AI is not a property that can be achieved by a single individual or during a single stage in the system life cycle. From conceptualisation to decommissioning of the AI system, each stakeholder has an important role to play, to make sure that required quality is achieved and maintained. Trust in AI cannot be solved at a technology level but has to involve the entire company.”
But enter the race or lose out – that’s the state of affairs for businesses and AI. Even though regulations are still lagging, the important thing is for leaders to start now and get their enterprises off on the right foot with practical AI governance guidelines and principles.
Biography of Dr. Martin Saerbeck
In his role as CTO of Digital Service at TÜV SÜD, Dr. Martin Saerbeck leads strategic research and development initiatives for novel digital testing solutions in AI, robotics and IoT technology. Dr. Saerbeck holds a degree in Computer Science and a PhD in Industrial Design, and has over 15 years of experience developing technology solutions for both industry and academia.
[1] AI could add $15 trillion to the global economy by 2030, PwC via Industry Week
[2] New EU financing instrument of up to €150 million to support European AI companies, European Commission
[3] Leadership in the age of AI, Infosys
[4] In-depth: Artificial Intelligence 2019, Statista
[5] In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation, IEEE Spectrum
[6] How IBM Watson Overpromised and Underdelivered on AI Health Care, IEEE Spectrum
[7] 20 years inside the mind of the CEO, PwC
[8] Europe fit for the digital age: Commission proposes new rules and actions for excellence and trust in AI, European Commission
[9] Singapore’s approach to AI governance, PDPC
[10] Singapore launches new AI initiatives at World Economic Forum, OpenGov