OpenAI's co-founder says at some point it'll be 'quite easy, if one wanted, to cause a great deal of harm' with AI models like ChatGPT

OpenAI's chief scientist and co-founder, Ilya Sutskever, says there will come a time when AI models could be pretty easily exploited to "cause a great deal of harm." Beata Zawrzel/NurPhoto via Getty Images
  • As AI chatbots like ChatGPT take off, there's a growing concern they could be misused.

  • OpenAI co-founder Ilya Sutskever says it'll be "quite easy" to cause "a great deal of harm" with AI models one day.

  • "These models are very potent and they're becoming more and more potent," he said.

OpenAI released its latest version of ChatGPT this week, and while the AI chatbot keeps adding new capabilities, there's growing concern that AI tools like it could also be used for bad purposes.

Ilya Sutskever, OpenAI's chief scientist and co-founder, told The Verge there will come a time when AI could be pretty easily exploited to cause harm.


"These models are very potent and they're becoming more and more potent," he said. "At some point it will be quite easy, if one wanted, to cause a great deal of harm with those models."

He made the remarks while explaining why OpenAI no longer provides detailed information about how it trains these models.

"As the capabilities get higher it makes sense that you don't want to disclose them," he told The Verge. "I fully expect that in a few years it's going to be completely obvious to everyone that open-sourcing AI is just not wise."

OpenAI CEO Sam Altman has voiced similar concerns in the past.

In an interview earlier this year, he said that while the best-case scenario for AI is "so unbelievably good that it's hard for me to even imagine," the worst case is "lights out for all of us."

In a Twitter thread last month, Altman said he thinks AI tools can help people become more productive, healthier, and smarter, but also added that the world may not be "that far away from potentially scary" artificial intelligence tools, so regulating them will be "critical."


Read the original article on Business Insider