
Tech CEOs, lawmakers should prioritize AI regulation, literacy: Professor

Top tech executives Elon Musk (TSLA), Mark Zuckerberg (META), and Bill Gates (MSFT) were among the tech leaders who joined the Senate for its AI forum to discuss the broader global ramifications of artificial intelligence's rapid expansion. Duke University professor and venture capitalist Sultan Meghji joins Yahoo Finance to discuss what the highest regulatory priorities should be in these talks. "There are [already] a variety of existing mechanisms that can be leveraged, so I think modernizing the existing regulatory regimes should probably be job number one," Meghji points out. The former CIO of the FDIC also emphasizes the importance of AI literacy among regulators and lawmakers.

Video Transcript

AKIKO FUJITA: As we mentioned, Elon Musk, Mark Zuckerberg, Google's Sundar Pichai among those that attended today. It took place behind closed doors, raising concerns from some lawmakers and industry watchers. Let's bring in Sultan Meghji, former chief innovation officer at the FDIC and now a venture capitalist and professor at Duke University. It's good to talk to you today. How much expectation should we be placing on these meetings? It is, after all, the executives that are behind the very technology that's about-- that they're trying to regulate.

SULTAN MEGHJI: Absolutely. It's sort of like having the CEO of a major oil company do a closed door meeting with the Senate to decide how to regulate oil and gas. I think, you know, there's rightly a decent amount of skepticism over this. I think we should temper our expectations. You know, proactive regulation is not something that our government is necessarily that great at. Now, does that mean that maybe they can do something differently this time? I think a lot of us are hopeful, but I wouldn't have high expectations at all.

SEANA SMITH: Sultan, when we talk about what should be the priority here, what senators, what lawmakers need to consider when we're talking about potential regulation, you're not too optimistic that a lot is going to come here from today, but in terms of what they need to consider going forward, what tops that list?

SULTAN MEGHJI: Well, what's interesting is so much of the regulatory infrastructure in the United States already has the ability to regulate applications of artificial intelligence in important areas. Financial services, defense, healthcare, et cetera. And I worry that we're going to jump right past centuries of good law and try to create something new from scratch that we don't quite understand, we don't know what the implications are, we don't know the timeline.

You know, the AI death knells of, you know, Skynet and things like that from the "Terminator" movies are quite far away, and there are a variety of existing mechanisms that could be leveraged, and so I think modernizing the existing regulatory regimes should probably be job number one.

Number two should be also understanding that the federal workforce needs to get educated about artificial intelligence, get educated about modern technology, and as a former federal regulator, I can tell you there's a big gap between where they currently are and where they need to be.

AKIKO FUJITA: What about this issue of a new regulatory agency? You know, Elon Musk talked about that. It feels like that's a big division even within the tech community. We've heard the likes of IBM and Google come out and say, we don't necessarily think that's a good idea.

SULTAN MEGHJI: You know, I think the important thing to ask is, what would be the result of that? Like, what value would a system like that create? Could you imagine a single governmental body big enough, broad enough, and funded well enough that could, in essence, regulate everything from artificial intelligence used to create new drugs to the AI used to protect us from criminals on financial transactions on the internet to worrying about offensive cyber actions by other nation states against us using AI? That's an incredibly broad scope, and I think it misses the ability-- misses the opportunity to actually do something meaningful.

SEANA SMITH: This is a closed door meeting. Only a handful here of some of the largest tech CEOs. Who was notably missing from this meeting? Who would you think should have had one of the seats at the table when we're talking about the future of AI?

SULTAN MEGHJI: Well, one of the most advanced uses of AI that wasn't there was in the defense community. So there are a variety of defense contractors, aeronautics firms, et cetera, that are doing incredibly advanced things in artificial intelligence that really are changing how war is fought, and we've seen it in everything from the Russian invasion of Ukraine to how drones flown by our own military here in the US operate. The fact that incredibly advanced technology is not part of this, when they're the ones actually using AI that already has people dying at the end of it, that to me was a huge gap in this.

AKIKO FUJITA: While we're talking about the potential for regulation in the US, the reality is this is a conversation that's been happening globally. The EU, as is often the case, is one step ahead in terms of drafting the regulation. There's a number of things that have come out of that. One, banning real time facial recognition usage, biometrics, but also requiring generative AI companies to really open up their systems. To what extent does that provide a template for US regulators to follow, and how realistic is it to require that here?

SULTAN MEGHJI: You know, having an artificial intelligence that is, in essence, transparent about why it's making the decisions it's making is one of the harder pieces of AI right now. There are a lot of AIs out there that are fundamentally black boxes. They can't tell you why they make a given decision. By requiring that, you are, to a degree, limiting the development and use of certain types of artificial intelligence. That's part one.

Part two is I think it's going to be very difficult to get an American regulatory body to follow the lead of a European body or even establish a framework. For the last 15 years, we've seen very, very strong regulatory frameworks come out of the EU in financial services, and getting American financial regulators to follow those is a bit of a challenge when it's a-- when it's a much narrower focus.

AKIKO FUJITA: You know, we have seen this in other tech regulation discussions before, which is the US is trying to create its own regulation, the EU is doing its own thing, and then you've, of course, got China as well. How effective can regulation be when it's so fragmented? And the larger question, because we haven't necessarily talked about China, how do American lawmakers balance between making sure to keep in check here but not necessarily stifling innovation so these American companies can compete with their Chinese counterparts?

SULTAN MEGHJI: That's an absolutely fantastic framing of that question. I wish more people would frame it like that, because creating a regulatory system doesn't just mean saying no. It means finding opportunities to create a path to say yes incredibly easily, and we're not very good at that here in the United States on the regulatory side. Creating a regulatory framework that leads on innovation, that leads on protecting Americans, and that leads on creating safety nets is important, but that's really not what we're seeing here in the US, and globally there's an incredibly fragmented environment.

Even getting global regulators to focus together in a unified way on something as simple as the spam that hits your email is a multi-year journey that's incredibly difficult. I think getting especially the People's Republic of China to play ball with any global regulatory system is a challenge, and so I think at the US side, we just need to accept that they're not going to participate, or if they are, they're just going to lie about it, and we need to figure out how to lead the world on this.

AKIKO FUJITA: A conversation I'm sure we will continue to have. Sultan Meghji, venture capitalist and professor at Duke University, appreciate the time today.

SULTAN MEGHJI: Thanks for having me.