13 March 2014

Chen Sagiv: crowdsourcing for creating 3D videos


Advanced graphics processors, new algorithms and advanced mathematics will soon make possible a new 3D video technology that gathers feeds from multiple sources

As smartphones become ubiquitous, they increasingly serve as a link to social networks. Sharing 3D imagery of large concerts or sporting events on those networks will soon be possible thanks to a new technology developed under SceneNet, an EU-funded project due to be completed in 2016. Chen Sagiv is an applied mathematician and the co-founder and CEO of SagivTech, an image and signal processing company based in Ra'anana, Israel; she is also the project coordinator. She talks to youris.com about the promise of this emerging technology, which relies on heavy-duty image processing that requires mathematical wizardry.

What novel technology have you been involved in developing?
The main idea of the EU project is to use the power of the crowd to generate 3D scenes. For example, at rock concerts you see many people taking videos with their smartphones. At the end of the day each person is left with their own video, taken from a specific angle and perhaps of poor quality. We are going to take the input of many participants, captured from different viewing angles. We will upload these videos to a server where they are synchronised, and thus obtain a 3D reconstruction of the scene. This will allow people to view the scene from any angle. They will then be able to share the video on social networks, maybe by creating ad hoc communities.
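Sagiv does not spell out how the synchronisation works. One common way to align crowd-sourced clips of the same event is to cross-correlate their audio tracks, since every phone in the venue records roughly the same sound. The Python sketch below illustrates that idea only; the function name and the use of audio correlation are assumptions for illustration, not SceneNet's actual method.

```python
import numpy as np
from scipy.signal import correlate

def estimate_offset(audio_a, audio_b, sample_rate):
    """Estimate, in seconds, how far clip B is shifted relative to
    clip A by cross-correlating their mono audio tracks.
    Illustrative sketch -- not the SceneNet implementation."""
    # Normalise so loudness differences between phones matter less
    a = (audio_a - audio_a.mean()) / (audio_a.std() + 1e-9)
    b = (audio_b - audio_b.mean()) / (audio_b.std() + 1e-9)
    corr = correlate(a, b, mode="full")
    # The correlation peak's index gives the lag in samples
    lag = np.argmax(corr) - (len(b) - 1)
    return lag / sample_rate
```

Once each clip's offset to a common reference is known, its frames can be placed on a shared timeline before any geometric reconstruction takes place.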

What has been the main challenge in creating 3D videos?
For videos of static objects, we use tools that already exist; the harder part is dynamic scenes. We now have a basic version of the entire pipeline, the sequence of operations leading to the 3D video. We have clients running on mobile devices that capture images and upload them to the server. On the server we have the software to synchronise the different videos and to produce the 3D scene. We need a lot of computing power to do this. We have a preliminary proof of concept, but at this point it is an internal demo, which will be refined for a presentation at the end of the project.
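As a rough, hypothetical illustration of the server-side flow she describes (synchronise the uploads, then hand them to reconstruction), the following Python sketch uses placeholder data structures and reuses the estimate_offset idea from above; none of these names come from SceneNet.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Clip:
    """One user's upload (hypothetical structure, not SceneNet's)."""
    user_id: str
    frames: list        # per-frame image data
    audio: np.ndarray   # mono audio track, used for synchronisation
    fps: float

def align(clip, offset_s):
    """Drop leading frames so the clip starts at a shared time zero.
    Real alignment would also resample; this is only a sketch."""
    skip = max(0, int(round(offset_s * clip.fps)))
    return clip.frames[skip:]

def server_pipeline(clips, offsets):
    """Align each upload to a common timeline, then group frames by
    time step, ready for a multi-view reconstruction stage (not shown)."""
    aligned = [align(c, offsets[c.user_id]) for c in clips]
    # One tuple per time step, containing one frame per viewpoint
    return list(zip(*aligned))
```

The grouping step is the point of the exercise: a multi-view reconstruction algorithm needs, at each instant, one frame from every available viewpoint.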

What are the remaining challenges?
One of our main challenges is to produce a viewer for the dynamic scenes. It would be nice to present them as a hologram; that may be an interesting idea, but it would be very complex and require very sophisticated optics. We would not create such a display system ourselves; we plan to use technology that already exists.

What kind of expertise is required to develop such technology?
We are experts in GPU computing, and this is our leverage in this project. GPUs, or graphics processing units, are processors that are very fast at graphics processing; we also use them for mathematical computation. Our main aim is to develop the technology by looking at the mathematical aspects and developing new algorithms; we are writing new software for synchronising dynamic scenes with a lot of movement.
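The project's GPU code is not shown in the interview, but the general pattern of offloading mathematical computation to a GPU can be sketched in Python with the CuPy library, whose array API mirrors NumPy's. The FFT-filtering workload below is an arbitrary stand-in, not one of SceneNet's algorithms, and CuPy is simply one convenient way to demonstrate the pattern.

```python
import numpy as np
import cupy as cp  # NumPy-compatible arrays that live on the GPU

def gpu_fft_filter(signal_np, keep_fraction=0.1):
    """Low-pass filter a signal via FFT computed on the GPU.
    Arbitrary example workload -- not SceneNet code."""
    x = cp.asarray(signal_np)        # host -> GPU transfer
    spectrum = cp.fft.fft(x)         # runs on the GPU
    cutoff = int(len(x) * keep_fraction)
    spectrum[cutoff:-cutoff] = 0     # zero out high frequencies
    filtered = cp.fft.ifft(spectrum).real
    return cp.asnumpy(filtered)      # GPU -> host transfer

# Usage sketch:
#   noisy = np.random.randn(1_000_000)
#   smooth = gpu_fft_filter(noisy)
```

The payoff comes from keeping data on the device: the transfers at the start and end are the expensive part, so real pipelines chain many operations on the GPU before copying results back.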

What are the potential applications of such technology?
Our project is part of the EU's Future and Emerging Technologies funding scheme. It could be applied in many areas, such as surveillance or tourism. At this point we are looking at the music community: on the EU project's advisory board we have a musician and someone from a company that organises events. As a test case we are using the concept of a musical event due to be organised around January 2016.

youris.com provides its content to all media free of charge. We would appreciate if you could acknowledge youris.com as the source of the content.