Google is taking immersive media technology to the next level with a practical system for light field video. Wide-field-of-view scenes can be recorded and played back with the freedom to move around within the video after it has been captured, revealing new perspectives.
Developed by a team of leading research scientists and engineers, the new system can record, reconstruct, compress, and deliver high-quality immersive light field video lightweight enough to be streamed over regular Wi-Fi, advancing the state of the art in the rapidly emerging field of immersive augmented reality (AR) and virtual reality (VR) platforms.
“In recent years, the immersive AR/VR field has captured mainstream attention for its promise to give people a truly authentic experience in a simulated environment. Want to really feel like you’re standing among the Redwoods at Yosemite rather than sitting in the living room? Or watch an artist create a sculpture as if you’re with them in the studio? That could be possible with immersive AR/VR technology.”
Although the field is still nascent, the team at Google has addressed important challenges, making major research headway in immersive light field video. The research team, led by Michael Broxton, Google research scientist, and Paul Debevec, Google senior staff engineer, plans to demonstrate the new system at SIGGRAPH 2020.
The conference, which will take place virtually this year, brings together a wide variety of professionals who approach computer graphics and interactive techniques from different perspectives and continues to serve as the industry’s premier venue for showcasing forward-thinking ideas and research.
Another breakthrough in this work involves data compression. The goal is not only to develop a system capable of reconstructing video for a truly immersive AR/VR experience, but also to make that experience accessible on consumer AR and VR headsets and displays, and even in a web browser.
The new system compresses light field video while still preserving its original visual quality, and it does so using conventional texture atlasing and widely supported video codecs. In essence, the team has succeeded in bootstrapping a next-generation media format from today’s image and video compression techniques.
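To make the atlasing idea concrete, here is a minimal, hypothetical sketch rather than the team’s actual pipeline: the tiles belonging to each light field frame (for example, its layers or views) are packed into a single atlas image, and the resulting atlas sequence is then compressed with a standard H.264 encoder. The grid layout, tile size, and the imageio/ffmpeg tooling are illustrative assumptions.

```python
import subprocess
from pathlib import Path

import numpy as np
import imageio.v2 as imageio  # assumed image I/O library

GRID = (4, 4)      # assumed atlas layout: 4x4 tiles per frame
TILE = (256, 256)  # assumed per-tile resolution (height, width)


def pack_atlas(tiles):
    """Pack a list of RGB tiles into one atlas image on a fixed grid."""
    rows, cols = GRID
    h, w = TILE
    atlas = np.zeros((rows * h, cols * w, 3), dtype=np.uint8)
    for idx, tile in enumerate(tiles[: rows * cols]):
        r, c = divmod(idx, cols)
        atlas[r * h:(r + 1) * h, c * w:(c + 1) * w] = tile
    return atlas


def write_atlas_frames(frames_of_tiles, out_dir="atlas_frames"):
    """Write one packed atlas PNG per video frame."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for i, tiles in enumerate(frames_of_tiles):
        imageio.imwrite(out / f"atlas_{i:05d}.png", pack_atlas(tiles))
    return out


def encode_with_standard_codec(frame_dir, out_file="lightfield_atlas.mp4", fps=30):
    """Compress the atlas sequence with H.264, a widely supported codec."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-framerate", str(fps),
            "-i", str(Path(frame_dir) / "atlas_%05d.png"),
            "-c:v", "libx264",
            "-pix_fmt", "yuv420p",
            "-crf", "23",
            out_file,
        ],
        check=True,
    )


if __name__ == "__main__":
    # Synthetic stand-in data: 10 frames, each with 16 random tiles.
    fake_frames = [
        [np.random.randint(0, 255, (*TILE, 3), dtype=np.uint8) for _ in range(16)]
        for _ in range(10)
    ]
    encode_with_standard_codec(write_atlas_frames(fake_frames))
```

Because the atlas is just an ordinary video stream, any device or browser with hardware H.264 decoding can unpack the tiles at playback time, which is what makes delivery over conventional codecs attractive.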