Ilya Sutskever left OpenAI after mutinying against Sam Altman—now he’s launching his own startup for safe AI

The OpenAI chief scientist who nearly brought down CEO Sam Altman in a failed November mutiny, as brief as it was spectacular, is launching an AI company of his own.

Ilya Sutskever revealed on Wednesday that he was teaming up with OpenAI colleague Daniel Levy and Daniel Gross, a former AI executive at Apple, to found Safe Superintelligence, a name chosen to reflect the startup’s sole purpose.

“SSI is our mission, our name, and our entire product road map, because it is our sole focus,” the three wrote in a statement on the U.S. startup’s bare-bones website. Building safe superintelligence, they went on to argue, was “the most important technical problem of our time.”

Artificial superintelligence, or ASI, is widely seen as the ultimate breakthrough in AI, since experts predict machines will not stop developing once they reach artificial general intelligence, or AGI, the point at which their general-purpose capabilities match those of humans.

Luminaries in the field like computer scientist Geoffrey Hinton believe ASI poses an existential danger to mankind, and building safeguards to align it with our interests as a species was one of Sutskever’s top missions at OpenAI.

His high-profile departure in May came almost six months to the day after he joined independent board members Helen Toner, Tasha McCauley, and Adam D’Angelo in removing Altman as CEO, against the will of board chair Greg Brockman, who immediately resigned.

Sutskever came to regret his role in briefly ousting Altman

The spectacular coup, which Toner recently blamed on a pattern of deception by Altman, threatened to tear the company apart. Sutskever quickly expressed his regret and reversed his position, demanding Altman be reinstated to prevent the potential downfall of OpenAI.

In the aftermath, Toner and McCauley left the nonprofit board, while Sutskever all but vanished from the public eye until the announcement of his departure last month.

In his resignation announcement, he implied he would commit to a project “very personally meaningful to me” and promised to share details at an unspecified later date.

His departure nonetheless set in motion events that quickly revealed deep governance issues that appeared to confirm the board’s initial suspicions.

First, Sutskever’s co-lead Jan Leike resigned, accusing the company of breaking its promise to give their AI safety team 20% of its compute resources. Later it emerged that OpenAI employees had been slapped with watertight gag orders forbidding them from criticizing the company after they left, on penalty of losing their vested equity.

Finally, actress Scarlett Johansson, who voiced an AI assistant in Spike Jonze’s 2013 sci-fi film Her, threatened legal action against the company, claiming OpenAI had effectively stolen her voice for its latest AI product. OpenAI disputed the claim but pledged to change the voice anyway out of respect for her wishes.

These instances suggested OpenAI had abandoned its original purpose of developing AI that would benefit all of humanity—and instead pursued commercial success.

“The people interested in safety like Ilya Sutskever wanted significant resources to be spent on safety; people interested in profits like Sam Altman didn’t,” Hinton told Bloomberg last week.

A leader in the field since AI’s Big Bang moment

Sutskever has long been one of the brightest minds in the field of AI, researching artificial neural networks, which conceptually mimic the human brain and allow computers to learn and generalize from data.

In 2012, he collaborated with Hinton on the landmark development of Alex Krizhevsky’s deep neural network AlexNet, commonly considered AI’s Big Bang moment. It was the first machine learning model to label images accurately at scale, far outstripping earlier approaches and revolutionizing the field of computer vision.
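
For a sense of what such a network actually does, here is a minimal sketch of an AlexNet-style convolutional classifier. It is written in PyTorch purely for illustration, and is this edit’s assumption rather than the original implementation; the 2012 AlexNet was far larger and was trained with custom GPU code.

```python
# Minimal AlexNet-style convolutional classifier, for illustration only.
# The real AlexNet had five conv layers, three dense layers, and ~60M
# parameters; this scaled-down sketch keeps only the core idea:
# stacked convolutions extract visual features, and a classifier
# head assigns a label to the image.
import torch
import torch.nn as nn

class TinyAlexNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),  # 224 -> 55
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # 55 -> 27
            nn.Conv2d(64, 192, kernel_size=5, padding=2),            # 27 -> 27
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # 27 -> 13
        )
        self.classifier = nn.Linear(192 * 13 * 13, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# One forward pass on a random 224x224 RGB "image" yields one score per class.
logits = TinyAlexNet()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 10])
```

The pattern this toy version compresses into a few lines, convolutional feature extractors feeding a classifier and trained end to end on GPUs, is the same design that made AlexNet a breakthrough.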

When OpenAI was founded in December 2015, Sutskever received top billing over cochairs Altman and Elon Musk even though he was only research director. That made sense at the time, as it was formed originally as a nonprofit that would create value for everyone rather than shareholders, prioritizing “a good outcome for all over its own self-interest.”

https://www.youtube.com/watch?v=mqjX8AzfJ_A

Since then, however, OpenAI has effectively become a commercial enterprise, in Altman’s words “to pay the bills” for its compute-heavy operations. In the process it adopted a complicated structure with a new for-profit entity where returns were capped for investors like Microsoft and Khosla Ventures, but control remained with the nonprofit board.
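
To make the capped-returns idea concrete, here is a hypothetical back-of-the-envelope sketch. The 100x multiple was widely reported for OpenAI’s earliest backers, but the dollar figures below are invented for illustration.

```python
# Hypothetical illustration of a capped-return structure. The 100x
# multiple was widely reported for OpenAI's first investor round;
# the investment and exit figures here are invented.
def capped_payout(investment: float, exit_value: float,
                  cap_multiple: float = 100.0) -> float:
    """Investor receives at most cap_multiple times their investment;
    any value above the cap reverts to the controlling nonprofit."""
    return min(exit_value, investment * cap_multiple)

stake = 10_000_000          # dollars invested (hypothetical)
windfall = 2_000_000_000    # value of the stake at exit (hypothetical)
kept = capped_payout(stake, windfall)
print(f"Investor keeps ${kept:,.0f}")              # $1,000,000,000
print(f"Nonprofit keeps ${windfall - kept:,.0f}")  # $1,000,000,000
```

Everything above the cap reverts to the nonprofit, which is how the structure was meant to keep the mission in control even as investor money flowed in.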

Altman called this convoluted governance necessary at the time to keep everyone on board. Recently, The Information reported that he has sought to change OpenAI’s legal structure, opening the door to a controversial IPO.

Sutskever’s new commercial enterprise dedicated to safe superintelligence will be based in Palo Alto, in the heart of Silicon Valley, as well as in Tel Aviv, locations chosen to help it recruit top talent.

“Our team, investors, and business model are all aligned to achieve SSI,” they wrote, pledging there would be “no distraction by management overhead or product cycles.”

How he and his two cofounders aim to create ASI endowed with robust guardrails while also paying the bills and earning a return for their investors was not immediately clear from the statement, however. Whether SSI, too, has a capped for-profit structure, for example, was not revealed.

They said only that the business model of Safe Superintelligence was designed from the outset to be “insulated from short-term commercial pressures.”

This story was originally featured on Fortune.com.