Manufacturers of video cameras and screens are investing heavily in equipment that supports the latest video formats, such as 3D, 4K (ultra HD) and 360° video.
New video formats create numerous software challenges. Not only must new technologies be developed to capture these images; heavy video files also need to be transported more efficiently over the existing Internet infrastructure. Another open question is how these new video formats can help us create an optimal viewing experience.
These are challenges close to iMinds’ heart. iMinds researchers at the universities of Brussels, Ghent, Hasselt and Leuven have collaborated closely for the past eighteen months to realize important breakthroughs. They recently shared their findings with a number of Flemish industrial partners. Some of the most remarkable results are listed below.
300 times faster
Multiview screens let you watch video images from different perspectives by moving around in front of the screen, without the need for any special glasses.
“In theory you would need a separate camera for each of those perspectives, which is impossible in practice. The Holografika screen we use for our experiments, for instance, can accommodate some seventy views,” says Jan Aelterman (iMinds - IPI - Ghent University). “Those views are captured by a limited number of cameras and the transitions between them are calculated mathematically. But this requires massive amounts of processing power and time: a whopping 181 hours for a 45-second video. Thanks to iMinds, these calculations can now be processed 300 times faster. What used to take 181 hours is now done in 37 minutes.”
“Furthermore, our approach allows for more flexibility in terms of camera set-up and provides higher-quality transitions between viewing angles,” adds Steven Maesen (iMinds - EDM - Hasselt University).
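The idea of computing transitions between a limited number of camera views can be pictured with a toy sketch: a virtual camera placed between two real ones is approximated by blending their images. The actual iMinds pipeline uses far more sophisticated, depth-aware mathematics; the linear blend and the function name below are purely illustrative assumptions. (As a sanity check on the figures quoted above: 181 hours divided by 300 is roughly 36 minutes, in line with the stated 37.)

```python
import numpy as np

def synthesize_view(view_a: np.ndarray, view_b: np.ndarray, t: float) -> np.ndarray:
    """Toy stand-in for view synthesis: linearly blend two camera views.

    t = 0.0 returns view_a, t = 1.0 returns view_b; values in between
    approximate a virtual camera placed between the two real ones.
    """
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must lie between 0 and 1")
    blended = (1.0 - t) * view_a.astype(np.float64) + t * view_b.astype(np.float64)
    return blended.astype(view_a.dtype)

# Two dummy 2x2 grayscale "camera views": one dark, one bright
left = np.zeros((2, 2), dtype=np.uint8)
right = np.full((2, 2), 200, dtype=np.uint8)

# A virtual view a quarter of the way from left to right
virtual = synthesize_view(left, right, 0.25)
print(virtual[0, 0])  # 50
```

Real view synthesis additionally needs depth estimates and occlusion handling, which is precisely where the heavy computation quoted above comes from.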
According to some studies, video will account for 80% of all Internet traffic by 2019. And that number will only increase once 4K and 360° video become widely available.
“In order to avoid that our communication networks collapse under this strain, we developed a number of solutions,” says Glenn Van Wallendael (iMinds - Data Science Lab - Ghent University). “One research track started from the assumption that forwarding complete 360° video streams to each individual user does not make sense; instead it is much more efficient to transmit only the parts that are actually being watched. As it stands, this track has resulted in unique algorithms that ‘understand’ what someone is watching and that optimize the video stream accordingly.”
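The principle behind transmitting only the watched parts can be sketched as follows. This is a minimal illustration, not the iMinds algorithm: the tile layout (eight equal columns of an equirectangular panorama), the visibility threshold and the function name are all assumptions.

```python
def visible_tiles(yaw_deg: float, fov_deg: float, n_tiles: int = 8) -> list[int]:
    """Return the indices of the panorama tile columns that fall inside a
    viewer's horizontal field of view.

    The 360° panorama is split into n_tiles equal columns; only the visible
    ones would be streamed at full quality, the rest at low quality or not
    at all.
    """
    tile_width = 360.0 / n_tiles
    half_fov = fov_deg / 2.0
    visible = []
    for i in range(n_tiles):
        center = (i + 0.5) * tile_width
        # Smallest angular distance between the tile centre and the
        # viewing direction, wrapped into [-180, 180]
        diff = abs((center - yaw_deg + 180.0) % 360.0 - 180.0)
        if diff <= half_fov + tile_width / 2.0:
            visible.append(i)
    return visible

# Looking straight ahead (yaw 0°) with a 90° field of view:
print(visible_tiles(0.0, 90.0))  # [0, 1, 6, 7]
```

Even in this crude version, only four of the eight tiles need to be sent at full quality; real systems refine this further by predicting where the viewer will look next.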
“In addition, we developed algorithms that interpret a video stream’s content to assess the required bandwidth. This means that during a video conference, for instance, the channel used to project static slides will be granted less bandwidth than the channel transmitting the actual video,” says Rui Zhong (iMinds - ETRO - VUB). “This dynamic approach allows for optimum usage of scarce bandwidth.”
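One simple way to make this concrete is to measure how much each channel changes from frame to frame and divide the budget accordingly. The sketch below assumes frame differencing as the content measure and proportional allocation; the actual iMinds algorithms are not described in this article, so both choices are illustrative.

```python
import numpy as np

def allocate_bandwidth(streams: dict, total_kbps: float) -> dict:
    """Split a bandwidth budget across video channels in proportion to how
    much their content changes from frame to frame.

    A static slide channel (near-zero frame differences) gets a small share;
    a lively camera feed gets most of the budget.
    """
    motion = {}
    for name, frames in streams.items():
        diffs = [np.abs(a.astype(float) - b.astype(float)).mean()
                 for a, b in zip(frames, frames[1:])]
        motion[name] = sum(diffs) / len(diffs) + 1e-6  # avoid division by zero
    total_motion = sum(motion.values())
    return {name: total_kbps * m / total_motion for name, m in motion.items()}

rng = np.random.default_rng(0)
slides = [np.zeros((4, 4))] * 3                            # static channel
camera = [rng.integers(0, 256, (4, 4)) for _ in range(3)]  # moving channel
shares = allocate_bandwidth({"slides": slides, "camera": camera}, 1000.0)
print(shares["camera"] > shares["slides"])  # True
```

Production systems would of course re-evaluate the allocation continuously and respect per-channel minimums, but the proportional split captures the core idea.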
Curved screens make for a whole new viewing experience and bring the cinema feel into our living rooms. However, projecting images onto a curved screen is not at all obvious and existing methods come with several downsides. For instance, some parts of the image may not fit the screen and others may be stretched.
“We are actually one of the first research groups to properly project images onto a curved screen,” says Ruxandra Florea (iMinds - ETRO - VUB). “Our approach is context- and depth-aware, which means that no objects are cut out or stretched. This technology can already be applied automatically to photographs; what’s next is 2D video.”
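The stretching that naive methods produce can be quantified with a textbook perspective-geometry calculation (this illustrates the problem, not the iMinds solution). If equal-width columns of a flat image are each given the same angular slice of a cylindrical screen viewed from its centre, a column at viewing angle θ is magnified by 1/cos²(θ): no distortion at the centre, severe stretching towards the edges.

```python
import math

def naive_stretch(theta_deg: float) -> float:
    """Magnification a flat image column suffers under a naive equal-angle
    mapping onto a cylindrical screen viewed from the centre.

    At the screen centre (theta = 0) the factor is 1 (no distortion); it
    grows as 1 / cos^2(theta) towards the edges.
    """
    theta = math.radians(theta_deg)
    return 1.0 / math.cos(theta) ** 2

print(round(naive_stretch(0.0), 2))   # 1.0  (centre: undistorted)
print(round(naive_stretch(60.0), 2))  # 4.0  (edge: stretched 4x)
```

A context- and depth-aware method avoids this by distributing the distortion into visually unimportant regions instead of spreading it uniformly.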
“Ever since Thomas Edison recorded one of the first videos with his kinetograph in 1891, not much has changed in the way we handle video: the images we view have been captured as such by our video cameras,” states professor Peter Schelkens (iMinds - ETRO - VUB). “But that is about to change: in the future, captured images will be subject to numerous calculations – computational imaging as we call it. In fact, we will no longer see (only) what our cameras have recorded; a great many images will be created synthetically. And that is the scope of the groundbreaking research that we have been conducting in the past few months.”