Nvidia’s cloud service platform will allow companies to ‘rent’ an AI chatbot: Exec
Nvidia Vice President of Enterprise Computing Manuvir Das sits down with Yahoo Finance’s Julie Hyman and Dan Howley to discuss the integration of artificial intelligence into its consumer business models.
JULIE HYMAN: Well, Nvidia is going all in on AI technology at its GTC 2023 developer conference. The company is rolling out new services and hardware advancements in generative artificial intelligence, the metaverse, cloud computing, and more.
By the way, I'm Julie Hyman. I'm joined by Dan Howley, and we are both joined by Manuvir Das. He is vice president of enterprise computing at NVIDIA. Manuvir, thank you so much for joining us. I know you're very busy with the GTC conference.
MANUVIR DAS: It's a pleasure to be here.
JULIE HYMAN: So what's interesting to me is a lot of the conversation around AI thus far has been around consumer-facing AI, right, because people can now see and use ChatGPT. There's a lot of enthusiasm around this.
I actually spoke to an executive at Adobe earlier today about their approach to enterprise AI or AI for enterprise. And I'm curious when you're looking at that aspect of it, what is sort of the biggest revolution that you see in AI capabilities for companies and how it's going to change the way that they work?
MANUVIR DAS: You know, Julie, that is really our focus. I'm so glad you started with that because, look, ChatGPT has really opened the eyes of the world. It's an amazing invention, and it's really one very large AI, one very large model that is used by millions of people across the world at the same time, right? Whereas if you think about companies, a company like Adobe or a bank, what they're really looking for is their own AI, their own model that is based on the knowledge that they have in their company that acquires the skills that they need to run their business and that communicates effectively with their customers for things that their customers are interested in, right? So every company actually needs its own ChatGPT, right, its own AI model, if you will.
And so the approach that we've taken at NVIDIA is how do we help all of these enterprise companies get that, right, each in their own unique way? That's why we're working, for example, with Adobe, and the platforms that we announced today are all about democratizing this, democratizing this across enterprise companies so every company can infuse generative AI into their business.
DAN HOWLEY: Manuvir, I want to ask what you think is the biggest opportunity for companies when it comes to developing their own AI models using this kind of technology.
MANUVIR DAS: You know, it's amazing because you take any function that humans are doing in a company. You know, there is so much help that a human can get in doing that day-to-day job, right, whether it's the mundane activities they do or looking up information, finding the right answer to something right at their fingertips. Maybe you're talking to a customer, helping them troubleshoot something, and you know in the back of your mind some other customer experienced that a year ago. But with these AI models now, you can just get that information right away and help your customer, right?
So wherever there's human productivity, there's an opportunity for generative AI to increase that human productivity and create a better experience.
JULIE HYMAN: Manuvir, if you could, get a little more specific about what you guys have just announced with regard to all of this, how it's different for you because you guys have been operating in AI already, of course, when it comes to chips. But how is this-- how is this different or a step forward for Nvidia?
MANUVIR DAS: Yeah, that's a great question, Julie. So there are two ways in which this is different. So we've always been the provider of the underlying hardware, the GPU that really makes AI go, right, because on GPUs, you can run AI, you know, hundreds, thousands of times faster than you could on a CPU. So we've always done that.
So we have two new things here at GTC. The first is something called DGX Cloud, which is a service that we provide in the cloud. And the idea is, you know, when you do AI as a company, you do some data science with data scientists who have to understand how to do AI. But the much more complex part of it is the engineering because you have to stand up these supercomputers, if you will, that have the horsepower to actually run AI, and that's a very difficult thing to do.
So what we've done with DGX Cloud is it's a new capability that's available, hosted on the public clouds, where companies can come in and rent one of these supercomputers that we've already built on their behalf, and we've done all the engineering work. They can consume as big or as small a piece of it as they want for as long a period of time as they want. And we've taken care of all the engineering, so all they have to worry about is the data science, which is where all the innovation of AI is, right? So that's the first new thing.
The second new thing is a platform called Nvidia AI Foundations. And so really the way to think about that platform is for a company that wants to embrace generative AI for its own business-- which means they have to build their own AI models fed with their data, customized with the skills they care about, with guardrails so that the AI model only does the things that are appropriate for their business. They need a foundry, if you will, to create these models.
And that is what Nvidia AI Foundations is. It's a foundry where companies like Adobe can come in and create their own models, whether it's text to text or text to video, text to images, or even the language of biology, which is used, for example, in drug discovery. So DGX Cloud and Nvidia AI Foundations, these are the new things that Nvidia is bringing to market.
DAN HOWLEY: Manuvir, do you see generative AI as the next big growth opportunity for Nvidia, or do you think it's something that's been very hyped and part of a larger whole?
MANUVIR DAS: We think it absolutely is a big growth opportunity, Dan, and the reason is, you know, Nvidia for years now has worked with companies to adopt a variety of use cases for AI that are specific to their business, but, you know, every company has to get bought in. I think what's happened with ChatGPT and generative AI is we have come to a moment where there's a broad horizontal use case that every company, no matter what industry it's in, can see the value of, and I think this is what is really going to make the adoption of AI take off.
And obviously Nvidia is very excited to be part of that, firstly because, you know, the best way to do generative AI is, of course, on Nvidia GPUs, so we're happy to work with everybody who does generative AI, companies like OpenAI. And secondly, with the new platforms that I just told you about, there's an opportunity for customers to come work with us directly on those platforms, even as we help the entire ecosystem, companies like OpenAI-- who we have the greatest respect for-- be as successful as they can be.
JULIE HYMAN: It's obviously a very interesting time for us, for you, for everyone interested in this area. Manuvir Das, vice president of enterprise computing at Nvidia. Of course our Dan Howley as well. And as that GTC conference goes on, we are going to be speaking with the founder and CEO of Nvidia. Dan and I will be speaking with Jensen Huang tomorrow. That's at 4:10 PM. We'll be talking to him about AI but about much more as well, and we are very excited about that conversation.