Google is combining still images from Street View to animate real life

Google Street View (Christian Charisius/Reuters)

In the not-so-distant future, a new deep learning algorithm from Google could change the worlds of virtual reality and film.

Today, directors and developers can only create shots from the camera angles they have physically set up. Bringing deep learning into the field, turning that limited information into real-world structures through what are essentially educated guesses, could help them reach otherwise impossible vantage points.

The algorithm, developed using still images from Google Street View, takes multiple pictures of the same scene and "learns" how they should fit together.


It can then assemble the images into a single fluid animation, as if the viewer were watching a movie.
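To make that concrete: one simple way to turn synthesized views into such an animation is to render the scene at a series of virtual camera positions interpolated between two captured Street View shots, then save the results as frames. The sketch below is a hypothetical illustration, not Google's code; `render_view` is a stand-in for whatever trained model actually predicts each new view.

```python
import numpy as np
import imageio.v2 as imageio  # pip install imageio


def render_view(camera_position):
    """Stand-in for a learned view-synthesis model: given a virtual camera
    position, return an H x W x 3 uint8 image of the scene from that spot."""
    raise NotImplementedError  # hypothetical -- supplied by the trained model


def make_flythrough(start_pos, end_pos, n_frames=60, out_path="flythrough.gif"):
    """Render views at camera positions interpolated between two captured
    shots and stitch them into a single fluid animation."""
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        # Slide the virtual camera from the first capture point to the second.
        camera = (1.0 - t) * np.asarray(start_pos) + t * np.asarray(end_pos)
        frames.append(render_view(camera))
    imageio.mimsave(out_path, frames)  # write the frames out as a short clip
```

The hard part, of course, is `render_view` itself; everything after that is just playing the predicted frames back in order.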

Developers call it DeepStereo. Have a look.

The algorithm takes each pixel in a given picture and compares the colors and depth to the corresponding pixels in related images.

Using five different vantage points, it then recreates a full picture of the world. The upper righthand corner shows the images collected by Street View.
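A stripped-down, hypothetical version of that per-pixel comparison (not the DeepStereo network itself) might work like the sketch below. It assumes the neighbouring views have already been re-projected into the target camera at a set of candidate depths; for each pixel it then keeps the depth at which the views agree most closely on colour and averages their colours there.

```python
import numpy as np


def synthesize_from_aligned_views(aligned):
    """Toy per-pixel view synthesis.

    aligned: float array of shape (n_depths, n_views, H, W, 3) holding the
    neighbouring views re-projected into the target camera at each candidate
    depth (the expensive warping step is assumed to have happened already).

    Returns an (H, W, 3) image plus an (H, W) map of the chosen depth indices.
    """
    # How well do the views agree on colour at each pixel and candidate depth?
    disagreement = aligned.var(axis=1).sum(axis=-1)   # (n_depths, H, W)
    best_depth = disagreement.argmin(axis=0)          # (H, W)

    # Average the views' colours at the winning depth for every pixel.
    mean_colour = aligned.mean(axis=1)                # (n_depths, H, W, 3)
    rows, cols = np.indices(best_depth.shape)
    image = mean_colour[best_depth, rows, cols]       # (H, W, 3)
    return image, best_depth


# Example: 5 neighbouring views, 32 candidate depth planes, 120 x 160 pixels.
aligned = np.random.rand(32, 5, 120, 160, 3)
novel_view, depth_map = synthesize_from_aligned_views(aligned)
```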

Sometimes it still has trouble putting the pieces together: in the algorithm's capture of the Acropolis Museum, a sculpture on the left-hand side comes into view piece by piece.

Google's developers say it is a known problem that will work itself out once they can incorporate more vantage points into their existing model.

The technology could eventually be used in cinematography, teleconferencing, virtual reality, and image stabilization, the researchers write in their report on the algorithm.

"To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery," they explain.

Whether that breakthrough causes motion sickness will be another matter. The algorithm's ride down San Francisco's Lombard St. isn't for the faint of stomach.

