Microsoft Calls for a New US Agency and Licensing for AI

(Bloomberg) -- Microsoft Corp. is calling for a new US agency to regulate artificial intelligence and licensing requirements to operate the most powerful AI tools, company President Brad Smith said Thursday.

Smith compared AI to the printing press and the elevator for the transformative power of a new technology, and to food safety for the regulatory need to protect against the greatest potential harms. His call for a new agency echoes proposals from OpenAI, the startup behind the wildly popular ChatGPT, which received a $10 billion investment from Microsoft.

“We would benefit from a new agency,” Smith said in a speech in Washington. “That is how we will ensure that humanity remains in control of technology.”

The idea for a government agency with responsibility to set the ground rules for AI gained attention last week in a Senate hearing with Sam Altman, chief executive officer of OpenAI. Altman and many of the senators questioning him agreed that the legislative process is too slow and partisan to keep pace with AI capabilities and potential applications, and an agency would be better-positioned to set rules to protect users.

Although that proposal has sparked conversations on Capitol Hill, it is still far from being turned into legislation. Calls in recent years to regulate social media went nowhere in Congress.

Read more: When Altman Went to Washington and Asked for AI Rules

Critical Infrastructure

Smith also said that rapidly developing AI technology must be transparent, with developers partnering with government and academic researchers to address societal challenges that will emerge. He proposed “safety brakes” for AI technology used in high-risk applications such as critical infrastructure.

“New laws would require operators of these systems to build safety brakes into high-risk AI systems by design,” Smith said in a blog post accompanying his speech. “The government would then ensure that operators test high-risk systems regularly to ensure that the system safety measures are effective.”

The Biden administration has released several non-binding guides for developing and using AI products, although the US lags far behind Europe’s regulatory efforts. The EU’s AI Act was in the final stages of debate when the release of ChatGPT and other generative AI applications cast doubt on rules that focus on how the technology is used, rather than how it is initially developed.

European Conflict

Altman told reporters in London Wednesday that OpenAI could pull its products from Europe if it can’t comply with new rules that have been proposed for general-purpose AI. In a tweet Thursday, EU Commissioner Thierry Breton accused Altman of “attempting blackmail.”

Read more: OpenAI’s Altman Clashes With EU Commissioner Over AI Regulation

Asked about Altman’s threat, Smith said it’s important for the tech industry to explain how proposed regulation would work in practice. He said he’s optimistic that “reason will prevail” in the final version of Europe’s AI Act.

“The legislative process in every democratic country inevitably has its twist and turns,” Smith said in Washington after his speech. “There are days when those of us who might know more about a technical field get up and see something that we quite rightly would want to point out is not likely to work the way that people who wrote it actually intended.”

US tech companies have praised a framework released in January by the National Institute of Standards and Technology, which is focused on how AI technology is used — and the risk level of that application — rather than how it’s developed. Smith held that model up in his speech as a “new intellectual discipline for artificial intelligence” to help measure and manage this technology.

Smith’s speech was attended by several members of Congress. When Democratic Representative Ritchie Torres of New York asked how Congress should balance regulation with innovating to keep ahead of China, Smith urged western democracies to stick together to set a global standard for AI regulation.

“I do share the concern that there may be other parts of the world that don’t adopt the same kinds of guardrails that we do,” Smith said. “It’s so important to bring the European Union and the United Kingdom and the United States and other countries together to say, here is a model, here is a model that not only promotes innovation but protects people, protects humanity, preserves fundamental rights.”

Microsoft’s push into artificial intelligence, including its support for OpenAI, has pressured competitors such as Alphabet Inc.’s Google to more quickly release their own AI applications and integrate the technology into existing products. Last week, Google published its own policy recommendations for responsible development of AI that it said would take advantage of its economic potential while curbing some of the risks to society.

©2023 Bloomberg L.P.