Can we afford to let AI companies ask for forgiveness instead of permission?

(Photo by ANDREW CABALLERO-REYNOLDS/AFP via Getty Images)

Asking for forgiveness, rather than permission, is Silicon Valley's favorite business model—from Uber's early days of entering cities without seeking approval from local officials to the social networking companies' loose treatment of user data.

With the AI market booming, the forgiveness cycle is kicking into high gear once again.

Consider Google's latest AI imbroglio. On Thursday, the company published a lengthy blog post explaining why its new AI search—a feature that it automatically activated for all U.S. users this month, without any ability to opt out—was telling people to put glue on their pizzas and to eat rocks.

It turns out AI search isn't smart enough to recognize satirical and troll-y content that exists on the web (especially in online discussion forums like Reddit), Google acknowledged. As a result, the company is now limiting the amount of such content it includes in its AI search results.


The incidents "highlighted some specific areas we needed to improve," Google VP Liz Reid wrote.

Also this week we finally heard from Helen Toner, the former OpenAI board member who was ousted in the fallout of the Sam Altman crisis last year. (The board, as you'll recall, had briefly fired Altman for not being "consistently candid" in his role as CEO.)

According to Toner, one of the reasons the board lost trust in Altman stemmed from the launch of OpenAI's most popular product, ChatGPT, in November 2022. The board was never informed beforehand of the launch, Toner claims, and found out about it after the fact, as people were discussing it on Twitter.

None of these incidents are catastrophic—hopefully, no one was daft enough to add glue to their pepperoni pizza—but they underscore an entrenched behavior in Silicon Valley that shouldn't be glossed over at a time when we're trying to determine how much regulation to impose on the AI industry and how much to allow the industry to regulate itself.

There are signs that tech companies are acting more responsibly. In recent weeks, OpenAI has signed a string of multimillion-dollar deals with publishers such as Vox Media, The Atlantic, and News Corp. The deals allow OpenAI to train its large language models on the content of these publishers, rather than just scraping it all off the web for free.

Of course, OpenAI is currently being sued by the New York Times for allegedly doing exactly that. Would any of these content deals be happening if OpenAI hadn't already been challenged for its behavior?

Alexei Oreskovic

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

Today's edition of Data Sheet was curated by David Meyer.

This story was originally featured on Fortune.com