What the Sam Altman-OpenAI debacle tells us about the AI industry

OpenAI boss Sam Altman was one of the letter’s signatories (PA) (AP)

When Geoffrey Hinton, the so-called ‘Godfather of AI’, abruptly quit Google in May, among his motivations was his view that Google’s senior team had ceased to be a “proper steward” of AI technologies, and an apparent fear that they were being led astray by commercial motivations.

Last week, a board member of UK-based Stability AI quit the firm, furious at his colleagues’ view that it was acceptable to use copyrighted work without permission to train its products.

We don’t know for certain why Sam Altman was sacked by the OpenAI board – his replacement, Emmett Shear, insists it was not over safety concerns – but it could well be a similar kind of worry.

There is an obvious tension in many AI firms right now between their ambitions as a business and the potential risks that pursuing those ambitions carries, both to the companies themselves and to wider society – risks which Rishi Sunak was keen to point out ahead of his AI safety summit earlier this month.


Some inside these businesses are desperate to take the moral high ground and ensure that models are developed one step at a time, with risk kept to an absolute minimum in the process. Others see the AI industry’s advances as a race to the top, and perceive the biggest risk as losing ground to their big tech rivals in the way that OpenAI may well have ended up doing over the last few days.

Faced with this dilemma, different people within these organisations want to progress at different speeds, and internal tensions can rapidly build up.

In October, Emmett Shear tweeted: “I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down. If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.”

So we can infer from his appointment that the OpenAI board concurs with this approach. Its members – who, thanks to the company’s rather complex corporate structure, are after all directors of a non-profit – may feel more comfortable with the dial turned down to two, and thought Sam Altman had it turned up to eleven.

But these choices are fundamentally a function of the AI industry being so nascent. Executives at a biotech business, for example, might not face a dilemma about how much human harm they are prepared to tolerate in a clinical trial, because strict rules are already laid out for them on this.

AI firms are of course subject to the same laws as everyone else – but quite how to measure harm or risk, when it comes to developing a complex technology whose outputs cannot fully be anticipated, is a little tricky.

Until there is thorough cross-border regulation, though, these kinds of fracas will continue to erupt within different AI businesses. The danger is that the most foolhardy firms end up dominating the industry – and the ones that give a damn are left behind.