Enabling revolutionary progress
To ensure that AI benefits humanity, people need to actively shape the system, process and environment of AI applications. Current regulations are lagging behind the ethical and technical requirements necessary to create people-centred AI with minimal risks. While we are optimistic about the broad conversations carried out by multiple stakeholders, organisations need more, now. TÜV SÜD is committed to an interdisciplinary approach to AI governance and has devised clear processes that will help businesses reliably assess the quality and trustworthiness of AI.
Artificial intelligence (AI) has entered our lives in ways both big and small. The technology is behind the smart virtual assistants in our mobile phones, the warehouse management that makes our one-day delivery possible, the predictive diagnoses that are saving lives, and more. According to PwC1, AI could add $15 trillion to the global economy by 2030, offering unprecedented opportunities for individuals, businesses and governments.
Around the world, organisations are realising the revolutionary potential of AI. In December last year, the EU announced a new financing instrument of up to 150 million euros to support early- and growth-stage AI companies2, hoping to spur breakthrough applications and related technologies such as blockchain and the Internet of Things.
Individual businesses are also joining the AI trend. As early as 2018, 86% of companies reported mid-stage or advanced AI deployments, viewing the technology as a major facilitator of future business operations3. And there is plenty of evidence to support this position. While the potential of AI will differ across countries and industries, it is expected to have a largely positive impact, including lowering costs, improving labour productivity, and enhancing business intelligence and customer experience4.
As much as AI generates customer benefits and business value, organisations must be aware of the unique risks that it introduces. Leaders have a responsibility to hone their knowledge of AI's societal and organisational risks, or risk having to deal with the consequences when the technology goes wrong.
Some predictions imagine super-intelligent, runaway AI robots taking over the world. These notions are far-fetched, but they hold a nugget of truth: unintended outcomes such as discrimination and indeterminate decision-making have the potential to damage reputations and hurt individuals. Take Microsoft’s AI chatbot Tay, which online users manipulated into spewing racist remarks5.
In another case, IBM’s Watson dished out unsafe and erroneous cancer treatment recommendations after being trained on a small and unreliable dataset6.
Other concerns revolve around the ethical use of AI. The technology can easily fall into the wrong hands or be developed with malicious intent. AI in the grip of governments and large private companies raises the spectre of surveillance and censorship. And when things go wrong, who is legally liable with regard to matters such as intellectual property rights or societal impact?
These concerns arise partly from what we call the ‘black box’ of AI. Unlike traditional system development, where a set of rules is formalised and given to the system to follow, AI system development flips the script: machine learning algorithms generalise and infer rules from a given dataset, resulting in opaque rules whose specifics even the developers cannot fully explain.
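To make this contrast concrete, the short Python sketch below places a hand-written screening rule next to a trained classifier. It is purely an illustrative example using the open-source scikit-learn library and a public dataset, not tooling referenced in this article: the hand-written rule can be read and audited line by line, while the trained model encodes its ‘rules’ in thousands of fitted parameters.

```python
# Illustrative sketch only: explicit, auditable rules vs. rules inferred from data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Traditional development: a human writes the rule, so its logic is fully transparent.
def rule_based_screen(mean_radius: float) -> int:
    # Threshold chosen by a (hypothetical) human expert.
    return 0 if mean_radius > 15.0 else 1

# Machine learning: the "rules" are inferred from data and stored in thousands of
# fitted parameters (tree splits here), which no developer wrote or can read directly.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("Learned decision nodes:", sum(t.tree_.node_count for t in model.estimators_))
print("Test accuracy:", round(model.score(X_test, y_test), 3))
```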
Emphasising the complications of AI applications, Dr. Saerbeck (CTO of Digital Service at TÜV SÜD) said, “Current machine learning models encode functionality in hundreds of thousands if not millions of parameters. We currently lack a robust framework to understand the role and impact of each of these values. This results in uncertainty. We simply don’t know under what conditions a given model will fail. AI governance is currently the only effective mitigation to manage AI risks. We need to update our processes to reliably measure and quantify quality metrics such as robustness, accuracy and predictability for AI.”
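As one hypothetical illustration of what quantifying such quality metrics could look like, the sketch below trains a simple classifier on synthetic data and reports test accuracy alongside a naive robustness probe that checks how stable predictions remain under small input perturbations. The data, metrics and noise level are assumptions chosen for illustration, not part of any TÜV SÜD assessment procedure.

```python
# Illustrative sketch: accuracy plus a naive robustness probe via input perturbation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy: the usual headline quality metric.
accuracy = model.score(X_test, y_test)

# Robustness (crude proxy): fraction of predictions that stay unchanged when
# small Gaussian noise is added to the test inputs.
rng = np.random.default_rng(0)
noise = rng.normal(scale=0.05 * X_test.std(axis=0), size=X_test.shape)
stability = np.mean(model.predict(X_test) == model.predict(X_test + noise))

print(f"Test accuracy: {accuracy:.3f}")
print(f"Prediction stability under small noise: {stability:.3f}")
```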
Organisations already know the risks associated with AI, and that awareness is holding them back from the potential benefits of the technology. Among these risks is a lack of transparency, which is hindering the adoption of AI7. As Dr. Saerbeck puts it, “Trustworthiness is essential to be able to apply AI in mission-critical applications and reap the benefits of AI’s full potential.”
This elusive ‘trustworthiness’ is essentially supported by three pillars:
“AI governance is essential to manage AI quality in high-risk applications. Mitigation through a single control, such as human vetting of reports as in the example of façade inspection, is insufficient due to limited capacity and associated costs. Especially if an avoidable mistake is being made, companies will have a hard time arguing why an incomplete governance plan was in place.”
– Dr. Martin Saerbeck, CTO of Digital Service, TÜV SÜD
Companies still struggle to adequately manage AI quality, for example by improving governance, reducing bias and monitoring model performance. In an effort to close this gap, several guidelines are currently being developed, such as the European Commission’s Ethics Guidelines for Trustworthy AI and the new legal framework on AI released by the Commission in April 20218, as well as Asia’s first Model AI Governance Framework9 and its accompanying Implementation and Self-Assessment Guide for Organisations (ISAGO)10 from Singapore.
As Dr. Saerbeck explains, these guidelines provide a good framework but lack practical advice for implementing AI applications with confidence. They define high-level goals such as non-discrimination and fairness, but do not spell out how to achieve them. How such goals translate into the choice of algorithm, the testing processes that need to be in place, or the metrics that need to be tracked is left for businesses to figure out.
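As a sketch of what that translation step might look like in practice, the hypothetical example below turns the high-level goal of non-discrimination into one concrete, testable quantity, the demographic parity difference between two groups. The data, the choice of metric and any acceptance threshold are illustrative assumptions; in a real governance process each organisation would have to select, justify and document them itself.

```python
# Hypothetical example: translating a high-level fairness goal into a measurable check.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between group 0 and group 1."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Illustrative model decisions and a protected attribute (e.g. from a loan-approval model).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")

# A governance plan might require this gap to stay below an agreed threshold and
# define the test data, review cadence and escalation path if the check fails.
```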