AWS CEO talks teaming up with Oracle & what's next for AI

Amazon Web Services (AMZN) is teaming up with Oracle (ORCL) in a new strategic partnership, launching Oracle Database@AWS. AWS CEO Matt Garman says the partnership was driven by the desire to offer something unique and valuable to customers.

One big focus for AWS has been AI. Garman tells Yahoo Finance that, "When the generative AI wave happened, we took a step back and said, 'How do customers really wanna get value out of that?' They want to really secure infrastructure. They have to worry about low cost. They want all of the best capabilities that are possible. And so then we went about and built an enterprise platform for customers to really drive AI into their applications. And what that means is it's not just one model. It's not just one feature. It's not just one particular application. It's a platform where you can use all of those capabilities." He says that by utilizing a platform approach, customers can use a specific large-language model or several large-language models in a way that works best for them.
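Garman's "platform, not one model" argument can be sketched in code: instead of hard-wiring a single LLM, an application talks to a thin routing layer and picks a model per task. The sketch below is hypothetical, not AWS code; the registered callables are stand-ins for hosted models, which on Bedrock would be API calls against model IDs.

```python
from typing import Callable, Dict


class ModelRouter:
    """Minimal sketch of a multi-model platform: callers choose a
    model per task instead of being locked to one provider."""

    def __init__(self) -> None:
        self._models: Dict[str, Callable[[str], str]] = {}

    def register(self, model_id: str, fn: Callable[[str], str]) -> None:
        self._models[model_id] = fn

    def invoke(self, model_id: str, prompt: str) -> str:
        if model_id not in self._models:
            raise KeyError(f"unknown model: {model_id}")
        return self._models[model_id](prompt)


# Stand-ins for hosted models; in a real deployment these would be
# network calls to whichever provider serves the chosen model ID.
router = ModelRouter()
router.register("summarizer-large", lambda p: p[:40])     # pretend large model
router.register("embedder-small", lambda p: str(len(p)))  # pretend small model

# The same application code can mix and match models per request.
print(router.invoke("embedder-small", "label this record"))
```

The point of the design is the one Garman makes: when the best model changes, the caller swaps a model ID rather than rewriting the application.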

On the perception that AWS is lagging Microsoft (MSFT) in the AI race, Garman says, "At Amazon we have a saying that we're willing to be misunderstood for long periods of time. And for us, we felt that it was more important to actually build a baseline platform that customers could really go build real enterprise value into their applications on, as opposed to rushing in and quickly getting out chatbot technology... But that's not really the eventual place that the value is gonna come from this technology. It's not just chatbots."

Watch the video above to hear Garman explain why there's "a ton of potential" for AWS chips.

This post was written by Stephanie Mikulich.

Video Transcript

I'm Madison Mills here at the Goldman Sachs 2024 Communacopia and Technology Conference, and I'm joined by the CEO of Amazon Web Services, Matt Garman.

Matt, thanks so much for making time for us, right off the stage.

Yeah.

Happy to be here.

Thanks for having me.

Of course.

So we've got some big news from AWS that I want to start with.

You just announced this partnership with Oracle, and I'm fascinated by this because historically, Amazon has not always been the most bullish on Oracle.

A little bit more of a competitive relationship.

What did Oracle change?

Well, look, we've always wanted to have a broad set of offerings for customers.

And so we've actually offered Oracle on AWS for the last decade-plus, and you can run Oracle on RDS and have for the last 10 years.

And so part of it was Oracle's willingness to partner with us. We really sat down with them, and what we really thought about was how do we bring the best of AWS together with Oracle databases, where customers do like running on Oracle databases and they're quite good for a variety of use cases.

So we sat down with the team and thought, how do we deliver a product that would be unique and interesting and bring that value?

And so the teams really opened up, and they thought about how integrating backups into S3 could work.

We thought about how integrating AI services into Oracle databases would work.

We thought about how zero-ETL could work, so that customers can seamlessly use their Oracle database running inside an AWS environment with low latency, but actually use that data to do analytics inside of Redshift or other services.

And the Oracle team was open to that.

And so that was what really helped us open up that partnership and think about how do we go deliver something that's unique for customers, that actually delivers that value. It took us a while to get there.

But I do think it's a partnership that strongly benefits customers, and so that's something that's good for AWS.

It's good for Oracle, and we're both really excited about it. And we'll talk about more of the partnerships you all have, but I do want to talk about the conference that we are at, the Goldman tech conference. Goldman released a list of its top AI plays ahead of the event.

They had nine names.

Amazon was not on the list, but I'm curious: is the AI strategy from Amazon such that it's a feature, rather than a flaw, that it's not necessarily being listed as the top AI name in this space? Is it something where you all are offering a lot of different competitive LLMs to customers instead of going full force on your own?

Yeah, so when we think about what customers want to do when they really think about AI, they want to get value out of that in their applications, and our end customers are customers of all sizes.

They're enterprises, they're startups, they're governments. And when the generative AI wave happened, we took a step back and said, how do customers really want to get value out of that?

They want to really secure infrastructure.

They have to worry about low cost.

They want all of the best capabilities that are possible.

And so then we went about and built an enterprise platform for customers to really drive AI into their applications.

And what that means is it's not just one model.

It's not just one feature.

It's not just one particular application.

It's a platform where you can use all of those capabilities.

All of those models and customers are going to want to use all of them and oftentimes multiple of them together.

And so we had this vision from the very beginning about how do you build a platform that helps people go and build interesting AI applications on top of AWS, and we're very excited about that now.

Some of that means that we partner on many of those models, and customers love using Anthropic models, which are today the best-performing models out there in the world. I believe Dario is speaking at this conference, and they're a fantastic partner of ours.

And many customers like to use their models, but customers also like to mix and match.

They like to take those models and then sometimes use Llama open-source models from Meta and combine them together in interesting ways.

And sometimes customers will use small models.

Actually, a very popular model is an Amazon model around embeddings, where customers apply that to their data to better label it and use their data as part of their RAG index.

And so there's a lot of those capabilities that customers are gonna wanna combine together.

And our view in building Bedrock is that it's a platform to help use all of those capabilities together, at all parts of that stack.
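The embeddings-and-RAG-index workflow Garman describes can be illustrated with a toy retriever. The bag-of-words "embedding" below is a deliberately simplified stand-in for a real embedding model (such as the Amazon embeddings model he mentions); the documents and query are invented for illustration.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would call an
    # embedding model here instead of counting words.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# A tiny "RAG index": each document is stored with its embedding.
docs = [
    "invoice totals for oracle database migration",
    "s3 backup schedule for production workloads",
    "redshift analytics dashboard queries",
]
index = [(d, embed(d)) for d in docs]


def retrieve(query: str) -> str:
    # Return the document whose embedding is closest to the query's.
    q = embed(query)
    return max(index, key=lambda pair: cosine(q, pair[1]))[0]


print(retrieve("when do s3 backups run"))
```

In the platform picture Garman sketches, the embedding step, the index, and the generating model are separate interchangeable pieces, which is what lets customers mix small embedding models with larger generative ones.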

And sometimes it's data labelling, like from startups like Scale AI.

Some of it's the models, like Anthropic.

Some of it's chaining some of those things together, like LangChain and other capabilities like that.

There's a lot of those pieces that customers want to use, and we wanna make sure that they can use all of those inside of the AWS environment, run it in their cloud environment, and take advantage of the data that's living in AWS.

And so that's how we approach that. We're also building AI into our own applications, into our own capabilities.

But our focus is how do we help customers get the most value out of the applications that they're building?

Talk to me about the strategy when it comes to your in house large language models.

Some competitors in the space have been very proactive on having a first-party LLM, like Google with Gemini, for example. What is your thinking on that strategically for Amazon?

Yeah, look, we also build our own models, and that's something that we're working on, and I would say that there's more to come there. But again, there's not going to be one model that's going to rule them all.

And I think that is a very different strategy than our other competitors started with.

We started from this position of: there's gonna be a bunch of models here, and some of them will be built by Amazon.

Some of them will be built by other providers out there, and we want customers to be able to pick from the very best models that are available out there in the market today. And so if you start with that view that there's gonna be lots of models, and it's not just my model, it actually really enables the partner ecosystem to get excited about building and innovating on your platform, because they know that they have just as much ability to go win business as you do.

If you're very focused on only your first-party models, and nothing else is there, then maybe you can use other ones if you have to. That's a very different position for your partners than starting from, hey, there's a bunch of models, come and pick the best one. And so we're kind of partner-first on that front, and it doesn't mean that we won't have our own, because we do have our own models.

But that partner-first focus allows us to really have that open platform, and we're seeing that customers like that.

They like the ability to always choose what's the best performing model out there.

And I think in the world today, where it's such a fast-moving technology, the best model today may not be the best model tomorrow. And so if you put all your chips on "this is the one model that I'm gonna bet all of my business on," you're kind of stuck if it's no longer the best model.

But if you have a platform that has a bunch of models available, it's actually relatively easy to take advantage of new ones or add new capabilities from other providers as they come along.

And that's the strategy that we've adopted. More broadly, the ROI on AI has been a huge topic this earnings cycle for investors.

I'm curious, what kind of revenue are you projecting that AWS could see coming in from AI specifically for your own business?

Yeah.

I mean, we see the AI business growing very rapidly, and it's already a multibillion-dollar business for us.

And it's continuing to grow very well.

And so I think, from our perspective, we are fortunate that we make money in a number of different ways, whether it's from selling models or from the infrastructure of providers training large models on top of us.

And so there's a bunch of those pieces where we can go monetise those. I think over time this is a technology that has such huge potential to deliver massive gains for enterprises and for startups and customers of all sizes that I think the payout for us is very similar to our existing business.

Really, it's: we go invest in infrastructure, we invest in compute, we invest in databases, we invest in technology, and then customers come and they pay us by the hour for it.

And so in many ways, for us, the model is not all that different from the core cloud model.

It's a different use case.

It's growing really rapidly.

But it's the model that's kind of how we built the business up to the $105 billion run rate that we're at today. And I hear what you're saying about offering customers this idea of choice.

But it's fascinating, the conversations I've had with analysts even here today.

There seems to be this undertone of an idea that Amazon is behind when it comes to generative AI, that Microsoft has the first-mover advantage.

How worried are you about that? Is that a concern for you, that perception?

Yeah, well, I mean, I would be worried if I was... you know, it's a partnership for them, by the way.

It's not their own technology.

And so that is an interesting struggle for Microsoft that they're also gonna have to deal with, which is it's not their own technology.

And there's actually, you know, interestingly, in their last earnings announcement, I believe OpenAI was listed as a competitor of theirs, which is an interesting dynamic for them.

But, you know, we like to partner, not necessarily compete.

But, you know, look, at Amazon we have a saying that we're willing to be misunderstood for long periods of time.

And for us, we felt that it was more important to actually build a baseline platform that customers could really go build real enterprise value into their applications on, as opposed to rushing in and quickly getting out chatbot technology that, you know, looks cool and allows people to write haikus and other things like that.

But, you know, that's not really the eventual place that the value is gonna come from this technology.

It's not just chatbots.

And so I think a lot of others were quick to go out there, and, by the way, OpenAI has great technology, so you know they have really good models.

But there's a lot of other pieces out there, and I think the more you can be industry-specific and understand how you help customers solve specific problems, the more you have a platform to help customers build really good RAG indexes, like we have with Knowledge Bases, or build agents into your AI so that you can actually go and execute on various different capabilities.

The more you can have some of these other pieces that are not just the model, the better. I think the model is the first thing that the world has built.

And now you're seeing a lot of these other capabilities, and that's why Bedrock, which is that platform for us, is a really rapidly growing business for us. And I'm quite bullish that inference, which is really the piece of AI where the enterprises go and actually build it into their application...

Customers are gonna want to build that on the platform where they have their applications running.

And so we're increasingly seeing more and more customers use the Bedrock platform to actually go build real applications into their actual enterprise applications.

And so, you know, it's OK, you can say what you like, but I think that taking a step back, thinking about what customers are really gonna want over the long term, and taking the time to build that secure, great platform is gonna be great for our customers.

And I think we're already seeing that business really start to accelerate.

And you mentioned your chips business.

So let's talk about chips.

I'm just curious, in terms of your own LLMs, what is your current usage, ratio-wise, of NVIDIA chips versus your own in-house chips for training?

Well, look, we don't share that specifically, but, you know, if you talk to customers out there, still, the large majority of usage is on NVIDIA chips, and they have a great platform.

And they're very popular with customers.

That said, we think that it's a really large market segment, and there's a ton of potential for multiple options.

And so customer choice is super important.

And we're seeing, I would say, an accelerating pace of customers getting excited about both Trainium and Inferentia, which are our custom processors.

And so Inferentia is the chip that's specifically built for AI inference, and that's now been on the market for about four or five years.

And we increasingly see customers use that. Particularly, it's really valuable for a couple of different use cases where there's small inference that needs to be done, where customers can really save costs.

And we see sometimes customers save 60 to 70 percent off of their inference bills when they move to Inferentia, and we're increasingly seeing larger and larger customers make that move. On the training side, we're on our first generation of Trainium, and so a lot of the work that we've done over the last year or two is to get the software stack in a place where it can be really powerful for building some of these really large models.

And so that's where we've seen a lot of the market and customer base progress over the last year or two.

And we have customers like NinjaTech, who built their entire model all on Trainium.

And they're super excited about that platform.

And we have Trainium 2, which will be launching by the end of this year.

And that is a platform that we feel incredibly excited about for really large training clusters, where customers can get outsized price-performance gains relative to the traditional platforms that they've been using around NVIDIA.

Now, that being said, we think that there is going to be a mix of use cases for a long period of time.

And we support Intel.

We support AMD, and we have our own general-purpose processor called Graviton.

But we also support NVIDIA GPUs, and we're going to support Trainium and Inferentia, and we think that there's a really large market segment and room enough for customers to be using the best product for the use case for a long time.

But given the price point of NVIDIA chips, wouldn't it be beneficial for you to continue to rely on your own chips more and more over time? We think so.

And we think it is, for us and for customers both.

So how long until you're fully reliant on your own AWS chips?

Like I said, I actually think there's gonna be use cases for both over time. So I don't know that we'll ever be fully reliant on them, because I think that the NVIDIA team builds some great processors and some great products, and they execute really well.

So I think there's gonna be use cases where NVIDIA GPUs are the best solution for a really long time.

And, you know, we're gonna push really hard to open up and have more and more of those served really well by Trainium.

I want to end on the cloud business because we're running out of time here.

But obviously you've been with AWS since the beginning, kind of part of the founding of that cloud business, and AWS is the market leader in that space.

But you still have anywhere from 80 to 85% of workloads running on-prem instead of on the cloud.

What's the strategy for getting more of that data onto the cloud?

Yeah, well, look, I think it's a big opportunity for lots of customers.

We see, whenever customers make that move and they move from an on-premise world into the cloud and AWS, that their innovation moves more quickly.

We see them increase the agility of the organisation.

And we actually see customers reduce costs and just move faster over time for their business.

And so there's a huge amount of value there. When you layer AI on top, you can take the very best AI models in the world.

And if you point them at an on-prem mainframe, you don't get very good results.

Because there's just not much you can actually do when you access that data.

The more we talk to customers, they realise that if they can move their data from these data islands, whether they're on-prem or their mainframes or databases, and move all of those into the cloud so that they're accessible and labelled in a way that they can actually get value from all of those sources of data, that move to the cloud is really what unlocks that value.

And I think AI is one of those things that's gonna encourage people to do that faster.

But there's a huge potential out there for us to help customers on that journey as they continue that move. And finally, really quickly here.

You mentioned on the main stage just now that most of your customers are not interested in building their own large language models, but they are gonna play in AI. How does that impact AWS's business moving forward?

Well, it's just, you know, if you think about it, most customers don't build their own database either, or, you know, those core kinds of capabilities.

And so there's gonna be a few providers out there, OpenAI and Anthropic and Amazon and Google and Meta and others, that are gonna build these large foundational models.

And then customers are going to use those, and our view is a lot of customers are gonna modify those.

They're going to add data to those and do kind of post-training runs.

They're going to distil the models down to be very specific to their use cases.

They'll combine them in different interesting ways and use them in their applications, so they'll use those models.

But a lot of people will use those core foundational models as those building blocks.

Matt Garman, AWS CEO.

Thank you for making the time for us.

Really appreciate it.