Generative AI still mostly experimental, say executives

By Katie Paul

NEW YORK (Reuters) - One year after the debut of ChatGPT created a global sensation, leaders of business, government and civil society said at the Reuters NEXT conference in New York that generative AI technology is still mostly in an experimental stage, with limited exceptions.

While ChatGPT has enchanted consumers with its ability to generate everything from Shakespeare-style sonnets to student term papers, its propensity to "hallucinate" erroneous information has kept it from revolutionizing most areas of industry so far, they said.

"What's been a lesson, I think, is the gap between being able to do something somewhat and being able to do it well enough for a particular purpose," said Anthony Aguirre, founder and executive director of the Future of Life Institute, a nonprofit aimed at reducing catastrophic risks from advanced artificial intelligence.

Aguirre cited self-driving cars as an example of a technology struggling to make the transition to full deployment. The cars "work at some level right now, but they're not reliable enough to replace humans. That has turned out to be much, much harder than anticipated."

Sherry Marcus, director of applied science at Amazon's AWS, said customers were at different stages of progress. “I’ve observed many generative AI applications that are in production while other customers are just beginning their journey.”

One way generative AI was already being deployed widely, highlighted by speakers across industries, was to write computer code.

On Microsoft's GitHub, an online platform for storing code, about half of the programming was written with assistance from an AI tool called Copilot, which automatically suggests lines of code, said Microsoft Corporate Vice President Lili Cheng.

“When we talk to developers, they really feel they're more productive" with Copilot, said Cheng. "I think it's a great example of using a generative model, together with data that's inside of GitHub, to make people feel more effective and make programming more accessible to more people.”

She cited AI-generated summaries of meeting transcripts as another example of how the technology was proving its utility.

Financiers likewise told Reuters they were actively deploying AI models in their businesses for tasks like coding, generating documentation and deploying capital more efficiently, although they said they were moving cautiously because of the regulated nature of financial services.

Gary Marcus, a professor at New York University, said generative AI was error-prone in coding just like in other areas, but that the problem was less of a hindrance in the tech sector because programmers knew how to troubleshoot it.

"The place that it really is revolutionizing is coding, it’s just going fastest, and that’s because coders know how to fix the errors these systems make," said Marcus. "But if you have almost any other kind of business, the hallucinations are a serious problem."

Companies should move slowly and deliberately when integrating the technology into uses where accuracy matters, executives emphasized.

Cisco's Vijoy Pandey said he believed AI had proven its utility for "the low-hanging fruit," uses for which "the cost of being wrong has been pretty low." The challenge now, he said, was to move the technology into a new phase for more sensitive "business-critical use cases," like legal and security.

"We should just assume people will be doing stupid things" and focus in the coming years on building technology, guidelines and frameworks "to protect everybody against stupid actions," said Pandey.

(Reporting by Katie Paul; Editing by David Gregorio)