
Digital Economy Dispatch #160 -- The AI Illusion: Why We Might Be Overestimating the Pace of AI Adoption

3rd December 2023

It can be tough to admit you’re wrong. Especially when you are supposed to know what you’re doing and made a big fuss about what you had to say. So, I have a lot of admiration for Ajay Agrawal, Joshua Gans, and Avi Goldfarb for the follow-up to their best-selling 2018 book “Prediction Machines”. Their second book, “Power and Prediction”, came out at the end of 2022. And it starts with an apology: we got it wrong.

Perhaps we can put it down to over-enthusiasm. Like many people, they were carried away by the possibilities offered by AI and miscalculated the speed at which it would be adopted. The central argument of “Prediction Machines” was that disruption occurs when advancing technology and shifting economics collide. We saw this in the move from human effort to steam-generated power, and again from steam to electricity. The same, they argued, is now happening with AI: the latest digital advances not only revolutionize computing capability, allowing us to analyze vast streams of data, but also redefine the economics of making predictions. As a result, in almost every domain there is a land rush to deploy AI. Surely nothing could now stop the AI juggernaut overtaking every aspect of our lives?

But they were wrong.

Move forward five years and, despite many significant AI advances and a crescendo of hype, the reality is that many organizations remain a long way from rolling out AI in any meaningful way. Whether in healthcare, government, or financial services, major challenges must be overcome. This is confirmed by a Gartner survey published in August 2022, which identified a major gap between the number of AI models organizations develop and the number that actually make it into production. It reported that, on average, only 54% of AI models move from pilot to production. That figure is barely improved on the often-cited 53% that Gartner reported in a 2020 survey.

Far from the excitement and headlines generated by research labs, tech startup accelerators, and venture capital firms, most established organizations remain focused on small-scale AI efforts and pilot projects. In their complex, highly structured environments, the reality of driving substantial change can be overwhelming.

The Power and the Glory

It is in this context that Agrawal, Gans, and Goldfarb’s “Power and Prediction” book was released. In that book they continue their exploration of the economics of AI. They reinforce the argument that AI is not merely a technological advancement but a fundamental shift in the way decisions are made, power is distributed, and value is created.

The authors begin by reviewing the core principles outlined in their previous book about AI’s underlying disruptive potential. They explain how AI-powered prediction machines can gather, analyze, and interpret vast amounts of data, enabling increasingly accurate predictions about consumer behaviour, market trends, and operational efficiency. From this perspective, they explore the far-reaching consequences of this predictive power as AI reshapes industries from finance and retail to healthcare and manufacturing. They also use this lens to warn us why AI has the potential to exacerbate inequality and raise ethical concerns, and give their views on how these fears can be addressed.

It is here that they also admit to their previous failings. They placed too much emphasis on the way that AI’s combined technical-economic revolution would reshape organizations without sufficiently recognizing the disruption that this would cause to existing ways of working. As they noted, “we realized that we must consider not only the economics of the technology itself, but also the systems in which the technology operates”. In other words, they observed that it is the organizational context that most often determines the speed of change. And this is as true for AI as it has been for many other digital and non-digital technologies affecting the way organizations operate.

As a result, in deepening their understanding of AI, the authors make two substantial observations about the slow pace of AI adoption and the barriers organizations must overcome to ensure AI has meaningful impact and delivers value. The first concerns the broader systemic impacts of AI adoption: where the introduction of AI is aimed at broad systemic change, the complexity of those system-level changes demands a clear focus on appropriate change management and governance mechanisms. The second addresses concerns about whether AI is automating decisions and dehumanizing decision making: they re-interpret the critical relationship between AI as the engine for predictions and humans as responsible for providing the judgement required to take appropriate action.

The Distinction Between Point, Application, and System

We are in the early stages of understanding and applying AI in practice. Although many of the core elements of AI have been developed over several decades, it is only in the last few years that we have gained a broader understanding of its widespread use. Agrawal, Gans, and Goldfarb look at this adoption through the lens of economics and emphasize the crucial distinction between point, application, and systems-level adoption of AI. They believe that this distinction is essential for understanding the varying impacts of AI on businesses and society.

  • Point-level adoption involves applying AI to specific tasks or processes within an existing system. This is often the first step for organizations venturing into AI, as it requires minimal disruption to existing workflows. Examples include using AI to detect fraudulent transactions in the financial sector or to optimize ad targeting in the marketing industry.

  • Application-level adoption entails developing AI-powered applications that replace or augment existing applications. This level of adoption requires more significant changes to existing systems but can lead to greater efficiency and effectiveness. Examples include AI-powered customer service chatbots or AI-driven diagnostic tools in healthcare.

  • Systems-level adoption involves fundamentally rethinking and redesigning entire systems around AI capabilities. This level of adoption is the most transformative but also the most challenging to implement. Examples include AI-powered supply chain management systems or AI-driven risk management platforms.

The authors argue that most AI adoption today is at the point level, with some organizations venturing into application-level adoption. This is possible because most early uses of AI simply replace one approach to forecasting with another. Gains are largely measured in traditional ways as efficiencies and cost reduction. However, the most significant impact of AI will come from systems-level adoption where substantial re-evaluation of value creation is required. This is because systems-level adoption allows AI to truly disrupt and transform industries, creating new business models, changing power dynamics, and generating substantial economic value.

Recognizing this distinction is important for organizations to make informed decisions about AI adoption. Point-level adoption can provide quick wins and incremental improvements, but organizations should also consider the potential of application-level and systems-level adoption to achieve more transformative gains. Critically, to achieve success requires that organizations introduce change mechanisms and governance structures appropriate to these different forms of AI use.

Moreover, the authors also suggest that understanding the different levels of AI adoption is crucial for policymakers and regulators to effectively anticipate and manage the societal implications of AI. Systems-level adoption, in particular, raises concerns about data privacy, algorithmic bias, and the distribution of power. Policymakers need to develop frameworks to address these concerns and ensure that AI is harnessed responsibly for the benefit of society.

Decision Making is Prediction + Judgement

One of the most important observations in “Power and Prediction” is that the successful adoption of AI in decision-making processes hinges on making a clear distinction between computer-generated predictions and human-based judgment. While AI excels at analyzing vast amounts of data and producing accurate predictions, it lacks the human capacity for contextual understanding, ethical considerations, and nuanced decision-making. It is with this in mind that the authors argue that AI should not be viewed as a replacement for human judgment but rather as a tool to augment and enhance it.

This human judgement comes in two forms. In the first, the judgement is embedded in the rules coded into the AI systems we use: when the system is developed, the decisions to be made in different circumstances are determined by its developers, so the automation of actions is a consequence of human judgement exercised long before the system is used. In the second, the results of AI prediction are presented to a human who takes action: the decision occurs after the AI system is run, and it is human judgement that determines whether the quality of those predictions is sufficient to justify a specific action in each case.

Recognizing the distinction between prediction and judgement offers us a way forward to improve decision making. To effectively leverage AI in decision making, organizations must implement mechanisms to separate predictions from judgments. This involves:

  1. Transparency and Explainability: AI models should be transparent in their operation and provide explanations for their predictions. This allows decision-makers to understand the logic behind the predictions and assess their reliability.

  2. Human Oversight and Control: Human decision-makers should retain ultimate control over decisions, using AI predictions as inputs rather than as sole determinants. This ensures that ethical considerations and contextual factors are taken into account.

  3. Continuous Monitoring and Evaluation: AI models should be continuously monitored and evaluated to ensure their accuracy, fairness, and relevance. This helps identify potential biases or errors that may impact decision-making.
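The separation the authors describe can be sketched in a few lines of Python. This is purely an illustrative example, not anything from the book: the fraud-screening scenario, the toy scoring rule, and the thresholds are all assumptions chosen to show the prediction/judgement split — the model produces only a score plus an explanation, while the judgement layer (pre-agreed thresholds and a human reviewer for ambiguous cases) decides what to do with it.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Prediction:
    """A model's output, kept deliberately separate from any decision."""
    score: float       # e.g. estimated probability a transaction is fraudulent
    explanation: str   # rationale surfaced to the human reviewer (transparency)

def predict(amount: float) -> Prediction:
    # Hypothetical stand-in for a real model: scores large transactions higher.
    score = min(amount / 10_000, 1.0)
    return Prediction(score, f"score driven by transaction amount {amount:.2f}")

def decide(pred: Prediction, reviewer_blocks: Callable[[Prediction], bool]) -> str:
    # Judgement layer: the thresholds here are human choices made in advance
    # (the first form of judgement), and ambiguous cases are escalated to a
    # person (the second form) rather than auto-actioned.
    if pred.score < 0.3:
        return "approve"   # low risk: action pre-agreed by humans
    if pred.score > 0.9:
        return "block"     # high risk: likewise pre-agreed
    # Middle band: a prediction alone is not a decision; ask a human.
    return "block" if reviewer_blocks(pred) else "approve"
```

In this sketch the model never acts on its own: `predict` only ever returns a score and an explanation, and every action taken traces back to a human judgement, whether encoded in a threshold up front or supplied by the reviewer at decision time.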

By separating computer-generated predictions from human-based judgment, the authors argue, organizations can harness the power of AI while preserving the critical role of human expertise and decision-making. This approach is at the centre of the human-AI collaboration advocated in “Power and Prediction”. It is this recognition that enables organizations to make informed, ethical, and effective decisions in an increasingly AI-driven world.

Ahead with AI

Admitting mistakes takes courage, even for thought leaders in AI. Agrawal, Gans, and Goldfarb's latest book, "Power and Prediction", offers a mea culpa for overestimating AI's rapid adoption. Despite significant strides, the reality remains: organizations grapple with hurdles in implementing AI at scale. The essential insights from this book highlight the transformative potential of AI while recognizing the crucial organizational challenges to be overcome.

To navigate this rapidly changing landscape, the authors offer practical guidance for experienced managers and leaders. They emphasize the importance of understanding the economics of AI, including the costs, benefits, and risks associated with AI adoption. They also place the role of AI as a prediction engine in sharp contrast to the essential human judgement that underpins every AI system.

Their work provides a compelling and thought-provoking examination of AI's transformative power, and a realistic perspective on why many organizations continue to struggle with the disruptive nature of digital transformation. It is an important reminder for all of us seeking to understand and harness the potential of AI to drive innovation, enhance productivity, and make informed strategic decisions.