Is AI about automation, or augmentation? Understanding the difference can guide your AI investments

The phrase “human in the loop” is fast becoming today’s corporate mantra for the adoption of artificial intelligence. AI is primarily an augmenting technology, the thinking goes, that is best deployed alongside human workers, as a co-pilot.

This understanding of AI as a technology and its relationship with humans is a striking departure from the traditional vision of full automation that has successfully propelled the introduction of novel technologies in business. Take the introduction of automated financial market-making in the 1990s, which is all too familiar to us now but has been aptly described as a “transformation of common sense.” Automation in this space rendered human market-makers redundant—thereby making entirely new ways of transacting possible at a global scale.

But which of these two visions of the future—augmentation or full automation—best fits an AI-powered economy?

Answering this question is critical because each approach to technology adoption leads to dramatically different economic outcomes, shaping value creation and competitive advantage—now and in the near future. When organizations commit to a vision of augmentation, for instance, the technology itself changes as it gets designed around human workers. As a result, productivity and performance gains are inherently constrained by what humans—albeit “augmented humans”—can accomplish.

Herbert Simon’s The Sciences of the Artificial provides a good illustration of these limitations. An expert in organizational decision-making, Simon chronicles the U.S. State Department’s 1960s switch from teletypes to line printers, a change designed to improve message handling during crises—and one that failed because humans still had to process the influx of information. The augmentation paradigm of technology adoption can sacrifice much of what is economically valuable about automation—greater standardization, security, speed, and precision.

Given our all-too-human limitations as slow, serial information processors, the gap between computationally powerful machines and even “augmented humans” is only expanding in the age of AI. So, too, is the gap between the economic promise of workflow-level automation and augmentation at the level of its constituent tasks. That’s why understanding where full automation might be feasible tomorrow is a powerful guide for today’s investments—especially with nascent technologies like generative AI. By evaluating the well-known obstacles that impede full automation, we propose a set of investment criteria to help leaders navigate the commingled feelings of uncertainty and promise that define the dawn of GenAI.

The factory floor of cognition

Imagine a bank committed to a “human in the loop” model for AI-augmented lending. Such a bank would need to design risk-assessment algorithms whose outputs are limited to what human workers can reasonably interpret. The volume and speed of credit approvals would also be constrained by workers’ throughput. By comparison, Alibaba’s MyBank, created in 2015, has no loan officers or human risk analysts at all. MyBank’s AI risk models draw on more than 100,000 variables, enabling it to approve loan applications in minutes at a competitive default rate (1.94%) with less than one percent of its peers’ processing costs. The MyBank model is possible only because it takes humans and their cognitive limitations out of the process, freeing the technology to fully automate complex lending decisions.
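
To make the architectural contrast concrete, here is a minimal, purely illustrative Python sketch of the two designs. Every name, feature, and threshold below is a hypothetical stand-in, not a description of MyBank’s actual systems.

```python
# Illustrative sketch only: a stylized contrast between fully automated and
# "human in the loop" lending. All names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Application:
    features: dict[str, float]  # potentially tens of thousands of signals

def automated_decision(app: Application, score: Callable[[dict], float]) -> str:
    # The model may consume as many variables as it likes, and the decision
    # executes immediately: throughput scales with compute, not headcount.
    return "approve" if score(app.features) > 0.8 else "decline"

def human_in_the_loop_decision(app: Application, score: Callable[[dict], float],
                               review_queue: list) -> str:
    # Only a handful of interpretable features reach the reviewer, and every
    # case waits in a queue, so volume is bounded by reviewer capacity.
    explainable = dict(list(app.features.items())[:5])
    review_queue.append((explainable, score(explainable)))
    return "pending human review"
```

The structural point is the one the paragraph makes: in the first design, throughput scales with compute; in the second, it is bounded by what human reviewers can read and clear.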

In the real world of physical operations, the best automation efforts, much like MyBank’s model, are those that tend toward the full automation of complex processes. Japanese manufacturer Fanuc, for instance, uses robots working in lights-out factories to make new robots. Human intervention is required only for routine maintenance and issue resolution. Today, three years after its launch, the system is so efficient that it can operate unsupervised for 30 days at a time and produce 11,000 robots per month. Similarly, China’s Tianjin port—the world’s 7th-largest—partnered with Huawei in 2021 to launch an automated terminal. The port is powered by an AI “brain” that automates and adjusts schedules and remotely operates 76 self-driving vehicles to manage the movements of containers from 200-plus countries with essentially no human intervention (< 0.1% intervention rate versus the industry standard 4%-5%).

Such instances of successful automation are not outliers anymore, and a common pattern runs through them: Machines perform best when they interact with other machines. For machines, humans are far too idiosyncratic, erratic even, making them confounding partners. As Karl Marx rightly observed, “An organized system of machines… is the most developed form of production by machinery.” This is why it’s easier to engineer a fully automated warehouse (as Amazon is doing) than it is to design robots that operate effectively and safely alongside human workers who are apt to do something unexpected. The Tianjin port would not be able to achieve the same performance if half of its fleet were operated by humans.

AI essentially extends this logic to a larger swath of human action, rendering much human cognitive activity (so-called knowledge work) the equivalent of the old factory floor. The performance achieved by LLMs, for example, proves that natural language, for all its subtleties, is sufficiently systematic and pattern-based to be reproducible and, crucially, amenable to automation. AI can of course augment a human workforce, but operating on its own it can also lead a business toward bolder automation efforts.

Limits of automation as a guide to investment

Let’s return to the conundrum faced by many business leaders today: The latest AI technologies are impressive, but beyond now-ubiquitous GenAI-powered chatbots and copilots, where to invest next? Our view is that leaders can make better decisions when they can tell the difference between technological applications that constitute a step towards full automation and those that primarily augment human workers. The best way to do so is to focus on the known roadblocks to full automation, whose presence signals the more modest returns associated with merely augmentative uses of a technology.

Integration constraint: interfaces

Automation is more difficult the more dependent a process is on interfacing with different systems. Consider the infamous case of the London Stock Exchange’s Taurus project, started in 1983 and called off in 1993 with some £75 million in estimated losses. Taurus was meant to automate London’s paper-based stock trading system, but it faltered under the weight of redesign requirements that allowed human registrars to continue playing a “middleman” role in the trading process by using their own systems that then had to interface with the stock exchange’s.

The problem of interfaces may be somewhat alleviated as autonomous agents acquire the ability to directly access, control, and execute changes on external systems (e.g., as GenAI-powered chatbots become able to directly issue refunds, re-book flights, and so on). That technology hasn’t arrived just yet, and until it does, interfaces remain a constraint on automating end-to-end workflows that rely on legacy systems.

Engineering constraint: systematicity

The less structured a given process is, the harder it will be to automate, as a lack of systematicity calls for more involved management of exceptions, which is harder to engineer.

It’s important not to mistake complexity for systematicity. Complex processes involving natural language, such as live interactions with human customers, have been effectively systematized by LLMs. The operation of global supply chains, in contrast, still proves elusive because of their exposure to unpredictable shocks, such as violent conflict, regulatory change, or dramatic climate events. This lack of predictability makes systematization difficult and will require human judgment for the foreseeable future.

Economic constraint: uniqueness

Even when an activity is sufficiently stable and systematic from an engineering or design perspective, it may not be sufficiently repeatable to make automation economically feasible. This is typically the case with “one-off” activities, like construction, whose critical specifications are unique. Erecting a building always involves adapting general blueprints to the specificities of a given terrain, which often makes automating such an adapted design more onerous than having humans do it. Electronics company Foxconn, for example, realized that using robots to produce many consumer electronics often doesn’t pay off in the end: short production cycles and rapidly changing specifications mean that by the time production of a given item can be automated, the manufacturing cycle has moved on to new products.

For executives, these three types of constraints are particularly important to grapple with because they will tend to endure, in some form, even with advances in technology. That’s also why human labor will remain essential to all businesses, and why augmentation strategies will continue to be important despite efforts to achieve full automation. What’s critical for businesses is to understand which workflows are, in fact, amenable to full automation, and to develop a technology strategy that clearly differentiates the “augmentable” from the “automatable.” Wherever automation is indeed feasible, technology deployments should not be designed “around” human workers, if a business is to maximize AI’s value as a source of competitive advantage.

***

Business leaders would do well to reacquaint themselves with the fact that the economic value of technology is greatest when it enables full workflow automation. Those who understand this reality can use it to assess any AI “use case” pitched their way by asking a series of questions. Is the use case part of a broader process that is amenable to full automation? Are the other components in the process systematic enough for a machine to replicate them in the near future? Or are they too unique to make an automated alternative worthwhile? Are they too entangled in different systems, each with its own independent logic? The answers to those questions can help steer leaders towards the most compelling uses of those precious tech deployment dollars.
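
As a rough illustration of how those questions might be put to work, the sketch below encodes them as a simple triage rubric in Python. The field names and pass/fail logic are our own assumptions for illustration, not a formula from this column.

```python
# Hypothetical sketch: turning the screening questions into a crude triage
# rubric. Field names and the scoring rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    interfaces_with_legacy_systems: bool  # integration constraint
    process_is_systematic: bool           # engineering constraint
    work_is_repeatable: bool              # economic constraint

def triage(case: UseCase) -> str:
    blockers = []
    if case.interfaces_with_legacy_systems:
        blockers.append("interfaces")
    if not case.process_is_systematic:
        blockers.append("systematicity")
    if not case.work_is_repeatable:
        blockers.append("uniqueness")
    if not blockers:
        return f"{case.name}: candidate for full automation"
    return f"{case.name}: augmentation for now (constraints: {', '.join(blockers)})"

print(triage(UseCase("invoice matching", False, True, True)))
print(triage(UseCase("custom plant design", True, False, False)))
```

Each question deserves real diligence in practice; the point of the sketch is only that the three constraints can be checked explicitly before committing deployment dollars.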

Read other Fortune columns by François Candelon.

François Candelon is a partner at private equity firm Seven2 and the former global director of the BCG Henderson Institute.

Henri Salha is a former partner and managing director at Boston Consulting Group and former senior vice president of operations at Essilor.

Namrata Rajagopal is a consultant at Boston Consulting Group and an ambassador at the BCG Henderson Institute.

David Zuluaga Martínez is a partner at Boston Consulting Group and an ambassador at the BCG Henderson Institute.

Some of the companies mentioned in this column are past or present clients of the authors’ employers.

This story was originally featured on Fortune.com