Why tech companies want the government to regulate AI

It’s been almost one year since the release of ChatGPT, the match that ignited the market-transforming AI craze.

And while reality and other inconvenient factors may occasionally put a damper on the hype train, corporate leaders including Alphabet (GOOG, GOOGL) CEO Sundar Pichai, Microsoft (MSFT) president Brad Smith, and OpenAI’s Sam Altman are throwing their support behind AI regulation — traditionally one of the biggest headwinds for new tech.

So why would AI's leaders support regulation? For several reasons. At its core, a set of well-thought-out rules would ensure that AI firms are investing in products they know won't be regulated out of existence in the future.

Regulations also mean that companies would have a unified set of rules rather than a patchwork of disparate AI laws across various states.

“The worst-case scenario for businesses is having 50 different sets of rules at the state level. It would be expensive to devise different software for Idaho as opposed to Illinois,” explained Darrell West, a senior fellow at the Brookings Institution’s Center for Technology Innovation.

OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology & the Law Subcommittee hearing titled 'Oversight of A.I.: Rules for Artificial Intelligence' on Capitol Hill in Washington, May 16, 2023. (Reuters/Elizabeth Frantz)

And, of course, by calling for regulations, companies could have a bigger hand in helping to craft AI rules moving forward. But getting some form of legislation passed anytime soon could prove to be a tall order for Washington.

“They definitely want to help shape it to ensure it’s reasonable and cost-effective from their point of view,” explained Gartner distinguished VP and analyst Avivah Litan. “They want to stave off any heavy imposing regulations that would cause them to spend extraordinary amounts of money in order to remain compliant.”

How regulations can help companies and consumers

It might sound counterintuitive for tech companies to call for AI regulation, but for businesses investing millions, and in some instances billions, of dollars into new technologies, concrete laws ensure their investments won’t end up being outlawed in the future.

“You may not like the regulation but at least you know exactly what to expect and how to optimize your process to be compliant with the regulation,” explained MIT School of Engineering distinguished professor Regina Barzilay.

Alphabet/Google CEO Sundar Pichai, center, departs following a closed-door gathering of leading tech CEOs to discuss the priorities and risks surrounding artificial intelligence and how it should be regulated, at Capitol Hill in Washington, Wednesday, Sept. 13, 2023. (AP Photo/Jacquelyn Martin)

Regulation won’t just give companies a clearer understanding of how to invest in AI; it also gives their customers greater peace of mind that the products they’re using are safe. Without regulation, AI customers are more or less taking a shot in the dark, relying on guarantees from the companies they bought the technology from.

On the consumer end of things, AI regulation could help crack down on AI-driven phone scams, prevent AI-based discrimination in financial services, and eliminate the risk of bias from AI in everything from the justice system to the housing market.

“There's accuracy risks, there's IP risks, there's bias and discrimination risks, consumer protection risks, sustainability risks, and no one can control them except for the companies that produce these models,” Litan said. “And if they're not regulated, they're not going to be incentivized to do this, because it doesn't necessarily improve their revenue in many cases.”

Precedent from abroad

AI regulation isn’t some impossible task. Outside of the US, the European Union is already well on its way to crafting AI legislation. The economic bloc initially proposed its AI rules in April 2021, focusing on the different levels of risk associated with the technology: the more risk involved, the more regulation the AI is subject to.

Out of the gate, the EU's proposed rules outlaw several forms of AI, including systems that categorize people based on characteristics such as race, gender, and religion; predictive policing; and systems that scrape images of people from the web or closed-circuit TV footage.

High-risk applications would need to be assessed by regulators before hitting the market and include those that can impact a person’s health and safety, as well as those used by social media platforms with more than 45 million users to recommend content. Foundation models that power generative AI systems would also need to be registered in an EU database. Other forms of AI would need to meet transparency requirements that help users understand their capabilities.

China has also introduced its own AI regulations. In August, the country finalized guidelines requiring the companies behind China-facing AI platforms to submit their algorithms for review to the appropriate government agencies. However, platforms designed for use outside of China don’t have to abide by the same rules.

Lawmakers in Washington and the Biden administration have begun holding meetings with major AI companies. Biden is also expected to issue an executive order on AI in the coming days.

But they’re going to have to move much faster if they hope to keep up with the pace of innovation.

Daniel Howley is the tech editor at Yahoo Finance. He's been covering the tech industry since 2011. You can follow him on Twitter @DanielHowley.
