Why Nvidia, Tesla are betting on AI-powered humanoid robots

A new wave of humanoid robots is threatening to shake up the labor market. Built like humans and fueled by artificial intelligence, the machines are learning to tackle complicated tasks at a rapid rate.

The advancement of generative AI is supercharging how quickly humanoids can learn, backed by vast amounts of data and high-powered chips. Chipmaker Nvidia (NVDA) has doubled down on the technology's future, building out an ecosystem that lets robotics makers customize their products. The Isaac robotics platform, which includes generative AI foundation models and tools, allows developers to simulate real-world movements in the digital world, while Nvidia's Thor system-on-a-chip (SoC) provides the computing power needed to drive the transformation.
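To make the simulate-first idea concrete, here is a minimal sketch of rehearsing a skill in a toy simulator before trying it on hardware. It does not use Nvidia's actual Isaac APIs; every function and constant in it is an invented placeholder.

```python
import random

def simulate_step(state: float, action: float) -> float:
    # Toy physics: the simulated joint drifts toward the commanded position.
    return state + 0.5 * (action - state) + random.gauss(0, 0.01)

def train_in_simulation(target: float, trials: int = 1000) -> float:
    # Search, entirely in simulation, for a command gain that reaches the target.
    best_gain, best_error = 1.0, float("inf")
    for _ in range(trials):
        gain = random.uniform(0.5, 2.0)
        state = 0.0
        for _ in range(20):  # 20 simulated control ticks
            state = simulate_step(state, gain * target)
        error = abs(state - target)
        if error < best_error:
            best_gain, best_error = gain, error
    return best_gain

# Cheap, safe, parallelizable rehearsal happens here; only the winning
# parameters would ever be sent to a physical robot.
gain = train_in_simulation(target=0.8)
print(f"gain learned in simulation, ready for a hardware trial: {gain:.3f}")
```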

Already, Tesla (TSLA), Amazon (AMZN) and others are incorporating humanoids into their workspaces. At a recent shareholder meeting, Tesla CEO Elon Musk said the machines had the potential to propel the EV maker to a $25 trillion market cap.

Tesla is looking to capture the market by integrating its Optimus humanoid onto factory floors. Musk has said two robots have already been deployed at the company’s Fremont, California factory and predicted “a few thousand” Optimus robots will be working by 2025.

But the company faces plenty of competition. Austin-based Apptronik has already signed partnerships with logistics firm GXO and Mercedes-Benz to deploy its humanoids on their floors, while Amazon has begun using Agility’s Digit robot in its test facility.

Watch to see what’s NEXT in humanoid robots.

If you’re going to future-proof your portfolio, you need to know what’s NEXT. In this series, Yahoo Finance will feature stories that give a glimpse at the future, and show how companies are making big moves today that will matter tomorrow.

For more on our NEXT series, click here, and tune in to Yahoo Finance Live for more expert insight and the latest market action, Monday through Friday.

Video Transcript

Consider this a glimpse into your future.

Inside robotics maker Apptronik's Austin lab, engineers are perfecting the next generation of workers: built like humans, fueled by artificial intelligence.

Yahoo Finance was invited in for an exclusive look.

Today we've got thousands of robots that do one thing.

The future is one robot that can do thousands of different things.

Need input, input.

All right, you got it.

The idea of a general-purpose humanoid has long been the stuff of science fiction, but generative AI is finally making that a reality, backed by vast amounts of data and high-powered chips.

If this thing started walking towards me in a warehouse, that's a little intimidating.

That new technology is leading to new possibilities.

With chip leader Nvidia right at the centre of it.

There is no reason why every home should not have at least one humanoid or more.

Competition is growing, with companies like Elon Musk's Tesla vying to win a market projected to reach $38 billion, according to Goldman Sachs. That could have significant implications for the labour force.

This is what's next in humanoid robots.

So go ahead and open up that right hand pressing down on the joystick.

All right.

Now, when you reach for those socks, try to overshoot where you think you should be.

Go just a little bit further.

All right, go ahead and close that hand now.

Very nice.

Lift up.

All right.

Oh, go ahead and drop it in that box.

I'm getting a crash course in machine learning, taking control of Apollo the robot through a VR headset and a pair of controllers.

Am I giving it too much force?

No, no, no.

That was perfect.

So close.

It's a process known as teleoperation.

In this case, I'm training Apollo how to pack a box and clear the table.

The way I move its hands, the way I touch an object is all being processed as data to create a mental model for Apollo.

Historically, we've had to train engineers for years to speak the same language that computers speak, which is, you know, programming languages and assembly language at some level. With teleoperation, anyone can go and pick up this rig, jump inside of it, and in five minutes be training the robot, teaching the robot what to do.
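In software terms, a teleoperation rig is a data recorder: every operator motion becomes a labeled sample. Here is a minimal, self-contained sketch of that logging loop; the pose source is simulated, and all names are invented rather than taken from Apptronik's stack.

```python
import json, math

def read_controller_pose(t: float) -> dict:
    # Stand-in for a VR hand controller: the hand traces an arc, then grips.
    return {"x": 0.3 * math.cos(t), "y": 0.3 * math.sin(t), "grip": t > 1.5}

def record_demonstration(duration_s: float = 3.0, hz: float = 10.0) -> list:
    # Log (timestamp, pose) samples that the robot can later learn from.
    samples = []
    for i in range(int(duration_s * hz)):
        t = i / hz
        samples.append({"t": t, "pose": read_controller_pose(t)})
    return samples

demo = record_demonstration()
with open("apollo_demo.json", "w") as f:
    json.dump(demo, f)
print(f"recorded {len(demo)} samples of one packing demonstration")
```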

That advancement led to a recent breakthrough for Apollo.

In this video exclusive to Yahoo Finance, Apptronik says it shows the robot performing these tasks autonomously, without a human operator, a result made possible by just 10 hours of training.

The holy grail for us is what we call zero-shot learning, or the ability to show the robot what to do.

And it can do it the same way that you do that task.

And if we build big data sets of humans doing tasks in these environments, and we have robots with the same morphology as a person, then that allows us to have robots that can do a whole wide range of tasks.
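Learning from recorded demonstrations is often done with behavior cloning: fit a policy that maps what the robot observed to what the demonstrator did. A real pipeline would train a neural network over images and joint states; this deliberately tiny one-dimensional fit only illustrates the idea, and the numbers are made up.

```python
# (observation, demonstrated_action) pairs pulled from logged demonstrations
demonstrations = [(0.1, 0.22), (0.4, 0.81), (0.7, 1.38), (0.9, 1.85)]

# Fit a least-squares line: the simplest possible "policy" over the data.
n = len(demonstrations)
mean_x = sum(x for x, _ in demonstrations) / n
mean_y = sum(y for _, y in demonstrations) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in demonstrations)
         / sum((x - mean_x) ** 2 for x, _ in demonstrations))
bias = mean_y - slope * mean_x

def policy(observation: float) -> float:
    # The cloned policy: do what the demonstrator would have done here.
    return slope * observation + bias

print(f"policy(0.5) = {policy(0.5):.2f}")
```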

Apptronik has spent nearly a decade perfecting Apollo.

It's built like a human so it can work alongside humans.

It's five foot eight and weighs 160 pounds.

It can lift 55 pounds with two arms and two legs.

We want the smallest robot that we can build that still is the most versatile robot we can build.

So we were trying to find this trade-off.

Generative AI has supercharged that development by enabling humanoids to process more data more quickly. That means engineers no longer need to programme every move.

Robots can simply watch and learn.

This is all in one Apollo, so you can see there's a GPU, there's a CPU, and then this is sort of like the brain stem. And this is the nervous system that controls all the motors.

Robots in the age of AI have many more sensors, and so you have to read off of those sensors and then control that network of motors very quickly.
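That read-then-command cycle is the core loop of any robot controller. The sketch below runs it for a single toy joint; real humanoids run loops like this at hundreds of hertz across dozens of joints, and both hardware functions here are simulated stand-ins.

```python
import time

joint_angle = 0.0                      # stand-in for an encoder reading

def command_motor(torque: float) -> None:
    # Stand-in for a motor driver write; the toy joint responds to torque.
    global joint_angle
    joint_angle += 0.02 * torque

TARGET, KP, HZ = 1.0, 4.0, 100         # setpoint (rad), gain, loop rate (Hz)
for _ in range(HZ):                    # one second of control
    error = TARGET - joint_angle       # 1. read the sensor
    command_motor(KP * error)          # 2. command the motor
    time.sleep(1 / HZ)                 # 3. hold the loop rate
print(f"joint settled at {joint_angle:.3f} rad (target {TARGET})")
```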

That's especially important when it comes to Apollo's hands in something called dexterous manipulation.

It's second nature to us humans, but one of the hardest movements for robots to learn.

One year ago, Cardenas says, this wasn't an option.

Now these fingers can grab objects and sense them with a vast network of sensors built into the hand.

There's all sorts of tasks that we do today with our hands, where we're just, you know, we're listening until it clicks.

We're tightening until we feel it stop and we can get it just right.

So we have that high sense of touch, and that's very difficult to get robots to do, to replicate with the same level that people can do these tasks.

Investments are flowing in. Backing from big tech companies like Amazon and Microsoft set new records for funding this year, according to CB Insights, and it's about to get even bigger.

With humanoid robots taking their first steps into the real world, Tesla's already touted its own humanoid, Optimus, working autonomously on factory floors.

Figure AI has trained its humanoid for use at a BMW plant, while Amazon has integrated Oregon-based Agility's Digit robot into a test facility.

I would say we're at step zero.

You kind of have this initial moment of breakthrough.

People saw what was happening with large language models, and it's really taking that and applying it to the physical world.

NVIDIA has accelerated those applications by building an ecosystem for humanoids to run on.

It combines high-powered chips that process data at high speeds with something called the Omniverse, a type of metaverse that allows users to train robots in the digital world for skills applied in the real world.

When you create a digital twin, you are able to simulate all of this in the digital world and then put it in the physical world, and vice versa.

So doing it in simulation is absolutely mandatory because it's faster, safer and cheaper.

In simulation, I could put 1,000 or a million humanoids, and I can test a different version of the algorithm in parallel, because compute is just so fast.
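The parallelism described here can be sketched in a few lines: score many candidate controller variants across simulated copies at the same time, instead of testing one physical robot at a time. The "simulation" below is a toy scoring function and every name is an invented placeholder.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def simulate_humanoid(step_gain: float) -> float:
    # Score one simulated walking trial for a candidate gain (toy model
    # whose best gain is near 1.3, plus a little noise).
    return -(step_gain - 1.3) ** 2 + random.gauss(0, 0.05)

if __name__ == "__main__":
    candidates = [0.5 + 0.01 * i for i in range(200)]  # 200 variants at once
    with ProcessPoolExecutor() as pool:                # parallel "humanoids"
        scores = list(pool.map(simulate_humanoid, candidates))
    best_score, best_gain = max(zip(scores, candidates))
    print(f"best gain {best_gain:.2f} across {len(candidates)} parallel sims")
```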

Talla says NVIDIA is doubling its computing capabilities every six months.

Now the company is developing foundation models to speed up the pace of learning, so robots can copy any human movement just by observing.

Sure, big guy, let's high five.

Can you give us some cuss dirt?

Check this out.

We are basically able to train the robot using text input, or it can take, you know, speech input.

Or it can take live demonstration or videos from the Internet.
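One way to picture that "any input" claim is a thin layer that normalizes text, speech, demonstrations, or video into a single task request for the model. This is a hypothetical sketch, not Nvidia's foundation-model interface; the handlers that would call speech recognition or a video encoder are stubbed out.

```python
from dataclasses import dataclass

@dataclass
class TaskRequest:
    modality: str   # "text", "speech", "demo", or "video"
    payload: str    # the instruction itself, or a path to a recording

def to_instruction(request: TaskRequest) -> str:
    # Normalize any modality into the instruction the policy consumes.
    if request.modality == "text":
        return request.payload
    if request.modality == "speech":
        return f"<transcribe then follow: {request.payload}>"      # ASR stub
    if request.modality in ("demo", "video"):
        return f"<imitate the motion shown in {request.payload}>"  # encoder stub
    raise ValueError(f"unknown modality: {request.modality}")

for req in (TaskRequest("text", "pack the box"),
            TaskRequest("video", "clips/high_five.mp4")):
    print(to_instruction(req))
```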

Talla says that technology has fundamentally changed the use case for humanoids.

It's no longer about robots that can perfect one task, but general purpose robots that can do multiple tasks and learn to do even more.

My prediction is next year we'll have over 1,000, maybe a few thousand, Optimus robots working at Tesla.

Elon Musk is racing ahead with his own plans. At Tesla's Fremont factory, Musk says two Optimus robots have already been deployed.

He predicts there will be 1 billion humanoids working two decades from now.

Autonomous transport is sort of a $5 to $7 trillion market cap situation.

Optimus, I think, is, uh, literally a $25 trillion market cap situation.

So other companies, you know, they need to go out and win contracts, um, to demonstrate the capability, whereas Tesla, at least so far in what we've seen in the video, is just using it in their own factory.

And so they don't need any sales force or anything like that to continue to drive performance and test things out.

They have a factory which is the perfect sandbox for them to keep pushing the limit.

The tech advancement comes as the US faces a labour crunch, especially in manufacturing.

Goldman Sachs estimates the industry is already short 500,000 jobs.

That's expected to grow to 2 million by the end of the decade.

If you have a robot that can really do anything a human can, then that fundamentally changes the economy as we know it, right? Because then there's no more constraint on human labour.

What does the cost need to come down to in order for this to be used on a much larger scale?

Yeah.

So this goes back to that adoption curve where I think capability has to match the price point.

And I think at, uh, roughly $100,000, if it could do 30% of what a human does, uh, then that starts to be an interesting dynamic.

NVIDIA's Talla puts that price tag even lower, saying it needs to be $10,000 to become ubiquitous.
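The arithmetic behind those price points is simple amortization. In the sketch below, the $100,000 price and the 30% capability figure come from the interview; the service life and the annual labor cost are assumptions added for illustration.

```python
ROBOT_PRICE = 100_000           # dollars, the price point cited above
ROBOT_LIFE_YEARS = 5            # assumed service life
HUMAN_FRACTION = 0.30           # does 30% of what a human does (cited)
HUMAN_COST_PER_YEAR = 50_000    # assumed fully loaded annual labor cost

robot_cost_per_year = ROBOT_PRICE / ROBOT_LIFE_YEARS          # $20,000
labor_value_per_year = HUMAN_FRACTION * HUMAN_COST_PER_YEAR   # $15,000

print(f"robot ${robot_cost_per_year:,.0f}/yr vs labor value "
      f"${labor_value_per_year:,.0f}/yr")
# Under these assumptions the robot is near break-even; at the $10,000
# price Talla cites, the same math clears easily.
```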

You sort of mentioned this rethink that's happening about a robot that can do one task versus a robot that can do 1,000 different things.

Why do we need that?

We don't have enough people to do the tasks and jobs that we need to be done, you know, all across the globe.

So if you look at the US, if every unemployed person got a job tomorrow, we would still be short millions of jobs.

So there's this dream of a versatile robot that can come in and fill the gap and do a lot of these jobs that we're increasingly seeing younger generations don't wanna do. Think loading and unloading trucks in the hot Texas sun, for example.

Uh, it's very difficult to get people to do that today.

And so this is, you know, the initial type of application where you can see a robot like Apollo step in.

Apollo is preparing to tackle its first job, with Apptronik deploying the humanoid inside logistics provider GXO's warehouses.

It's also working with Mercedes-Benz to integrate the robots into the carmaker's manufacturing line.

There are still questions about safety: physical safety, with robots working alongside humans, but also cybersecurity and the potential for hacking.

We need to have that safety at the hardware level, at the chip level, in terms of functionality or security-wise.

If anything is not quite right, you shut down, or you gracefully, right, stop.

It needs to be designed at the software layer.

It probably also needs to be defined at the layer in the cloud, where it is not just within the robot, but you have the ability to connect to it and stop it, right, as needed.
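A concrete way to read that answer: independent checks at the hardware, software, and cloud layers, any one of which can halt the robot. The sketch below is an invented illustration of that layered design, with placeholder checks and thresholds, not anyone's shipping safety system.

```python
def hardware_ok(motor_temp_c: float) -> bool:
    return motor_temp_c < 80.0          # chip-level thermal cutoff

def software_ok(joint_velocity: float) -> bool:
    return abs(joint_velocity) < 2.0    # controller-level velocity limit

def cloud_ok(remote_stop_flag: bool) -> bool:
    return not remote_stop_flag         # operators can halt it from outside

def safe_to_move(temp_c: float, velocity: float, remote_stop: bool) -> bool:
    # Every layer must agree; any single failure means a graceful stop.
    return hardware_ok(temp_c) and software_ok(velocity) and cloud_ok(remote_stop)

print(safe_to_move(65.0, 1.2, remote_stop=False))  # True: keep moving
print(safe_to_move(65.0, 1.2, remote_stop=True))   # False: stop command wins
```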

Are we there right now?

I think it's going to be an evolution.

When you think about where this could all go, does that ever scare you?

Does it excite you?

Do you sort of think should we create a limit to this?

For me, it excites me far more than it scares me, in the sense that, you know, in the end, we have multiple checks, checkpoints, that we can have.

First of all, policy could come in place.

And all the companies, all the things that we are trying to create, are mostly trying to solve for all good scenarios, right?

For me, the dream of a versatile robot is things like elder care, where you have a whole series of things that you need a robot to do to be able to take care of us as we get older.

You need basically a nurse, and you need a housekeeper.

You need a variety of different things that you need this robot to do.

And so just one narrow piece of that that robots can do today won't solve the problem.

And so for us to really realise the dream of robots, where we have these helpers that really free us up and enable us to do new things, we need much more versatility than we have today.

So that sounds like, in the future, every house has an Apollo, right?

Is that what you envision?

Yeah.

I think in the future, every house will have a robot.

The question is what form it will take.

I think it will be some sort of humanoid, depending on how you define that.

And, yeah, I think that similar to the personal computer or to smartphones today, I think that it'll be hard to envision our lives without robots in the future.