Digital Economy Dispatch #133 -- Why Responsible AI Starts with You

28th May 2023

It is impossible to participate in a discussion with colleagues, read social media posts, or watch the news on TV without encountering the same question:

What will be the impact of AI on our future?

It is a critical concern given the shockwave induced by ChatGPT and its ilk. Unfortunately, what follows is too often a series of banal statements about out-of-control autonomous decision making and the emergence of “machines that think for themselves”. At best these discussions serve as provocations to ensure we examine the limits of the powerful technologies under development. At worst they are poorly informed speculation about an uncertain future based on little or no evidence.

Yet, beyond the futurology and scare stories, a separate thread of discussion has been emerging as a key debate that is likely to have a deep effect on our understanding of AI and its implications. It is based around the concept of “Responsible AI”.

As a result, while wrestling with many of the same challenges in understanding the impacts of AI, these efforts attempt to place a more reasoned and reasonable framework around the discussion of how to guide, govern, and manage the way AI moves forward. By focusing on the way AI systems are trained and exploring the contexts in which AI is applied, these investigations are broadening our perspectives on how to balance the opportunities and threats posed by AI. They raise the possibility that we can engage in a meaningful conversation about what it means to take a responsible approach to AI.

A Responsible Approach to AI

It has become quite common these days to hear criticisms of the biases embedded in AI, the lack of fairness in its algorithms, and the role of Big Tech in dominating the availability of AI systems. These are serious concerns that demand investigation and discussion. They form part of a much more deeply rooted unease about placing our trust in AI, as seen in recent surveys.

This was clearly highlighted as the fundamental takeaway from the latest IBM Global AI Adoption Index. Conducted in April 2022, the study explores the deployment of AI across 7,502 businesses around the world and shows that a majority of organizations that have adopted AI have not yet taken key steps to ensure their AI is trustworthy and responsible, such as reducing unintended bias (74%), tracking performance variations and model drift (68%), and making sure they can explain AI-powered decisions (61%).

Unsurprisingly, the IBM study points out that the challenges faced by many of these organizations are multi-faceted. Addressing such concerns requires a coordinated approach involving all those in the AI value chain, but particularly:

  1. Developers and Engineers responsible for designing and implementing AI systems that must adhere to ethical standards and legal regulations. By incorporating transparency, fairness, and accountability into the AI algorithms, developers can mitigate biases and promote responsible decision-making.

  2. Data Scientists who must be vigilant in recognizing and addressing potential biases in data that could result in discriminatory outcomes. Additionally, data scientists must strive for transparency and interpretability of AI models to enable meaningful human oversight.
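
To make this concrete, here is a minimal, illustrative sketch (in Python) of one check a data scientist might run: comparing a model’s positive-decision rate across groups. The column names, sample data, and review threshold are hypothetical, and a real bias audit would go much further, but it shows the kind of routine, practical scrutiny that a responsible approach asks of those building these systems.

```python
# Illustrative sketch only: a simple "demographic parity" check comparing a
# model's positive-decision rate across groups. The column names, the sample
# data, and the 0.2 review threshold are hypothetical assumptions.
import pandas as pd

def demographic_parity_difference(scored: pd.DataFrame,
                                  group_col: str,
                                  prediction_col: str) -> float:
    """Largest gap in positive-prediction rate between any two groups."""
    rates = scored.groupby(group_col)[prediction_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical model outputs: one row per applicant, 1 = approved by the model.
scored = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   0,   0],
})

gap = demographic_parity_difference(scored, "group", "prediction")
print(f"Demographic parity difference: {gap:.2f}")

if gap > 0.2:  # assumed threshold; real limits depend on context and policy
    print("Gap exceeds threshold - flag the model for human review.")
```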

While helpful, this engineering perspective must be aligned with a much broader context in which AI is being used. Notably, many organizations and institutions are looking to establish a wider ethical framework that can be used to guide their approach to AI. This has naturally been a priority for the heavily criticised AI technology producers such as Microsoft, Google, and IBM. But it is also true of many other organizations, from the BBC to the NHS.

As an illustration, consider the ethical framework for AI use in the UK’s defence sector that was published in June 2022. This explicitly adopted a broad systems perspective to address a wide set of AI-related issues. Its scope was defined by focusing on 5 key principles:

  1. Human-Centricity: The impact of AI-enabled systems on humans must be assessed and considered, covering the full range of effects, both positive and negative, across the entire system lifecycle.

  2. Responsibility: Human responsibility for AI-enabled systems must be clearly established, ensuring accountability for their outcomes, with clearly defined means by which human control is exercised throughout their lifecycles.

  3. Understanding: AI-enabled systems, and their outputs, must be appropriately understood by relevant individuals, with mechanisms to enable this understanding made an explicit part of system design.

  4. Bias and Harm Mitigation: Those responsible for AI-enabled systems must proactively mitigate the risk of unexpected or unintended biases or harms resulting from these systems, whether through their original rollout, or as they learn, change or are redeployed.

  5. Reliability: AI-enabled systems must be demonstrably reliable, robust and secure.

These 5 principles offer a useful basis for examining the range of effects of AI, and create a great starting point for organizations and institutions to debate the way they should approach AI-based activities to ensure that AI technologies are developed without bias, distributed fairly, and deployed effectively.

However, it seems to me to be missing a fairly obvious point – responsible AI starts with you.

The “I” in AI

There is no doubt that AI brings many benefits. However, with great power comes great responsibility. As AI continues to advance, each of us must accept that we have a personal responsibility to learn more about the technology of AI and the impact of its use.

This became clear to me recently in interactions with several different organizations as (inevitably) the conversation turned to the opportunities and challenges of AI in the workplace. Amongst those taking part, it was evident that there was a very wide spread of knowledge about AI. While some had spent time learning about the technology and considering its implications, many simply had no meaningful basis on which to draw conclusions.

Of course, not everyone can be (or needs to be) an expert in AI. It is a vast field of study with many avenues. However, to participate as a responsible leader, manager, product owner, or citizen in the debate on how AI will impact our future, it is now essential that everyone gains an appreciation of the basic concepts of AI.

Let’s take an example. In a recent conversation with decision makers in a large industrial company, the discussion turned to ChatGPT and its potential impact in areas such as marketing, HR, and financial planning. As often happens, the conversation soon drifted and lacked any cohesion.

To bring things together I posed a simple question: Who knows what the “GPT” stands for in “ChatGPT”? Suddenly all was quiet. Of the dozen or so people on the call, only a handful admitted to knowing the answer. The follow-up question (“And what does ‘Generative Pre-trained Transformer’ actually mean?”) seemed rather redundant. Try this in your next AI discussion and see if you get the same response.
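
For those who want to go one step beyond the vocabulary, here is a minimal sketch of what a generative pre-trained transformer does in practice: given a prompt, a model pre-trained on large amounts of text generates a plausible continuation. It assumes the open-source Hugging Face transformers library and the small, freely available “gpt2” model, which is a distant ancestor of ChatGPT rather than ChatGPT itself.

```python
# Minimal sketch: using a small, openly available pre-trained transformer
# ("gpt2") to generate a continuation of a prompt. This is not ChatGPT, but
# it illustrates the same underlying idea of generative pre-training.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator("Responsible AI starts with", max_new_tokens=20)
print(result[0]["generated_text"])
```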

Why does this matter? With broad agreement that AI is likely to have significant impact on all our lives, it has become imperative that we arm ourselves with some basic vocabulary and concepts with which to engage in meaningful discussion. Calls for a more responsible approach to AI are urgent and important. However, organizational structures and ethical frameworks will only be effective if we each accept our own personal responsibilities to learn more about AI, challenge ourselves to consider the way AI is changing our world, and adopt a more considered approach to the flurry of AI advances that will impact our work and lives.

It’s time for us all to take responsibility for AI.