In science fiction, robots act like human beings. A close look at state-of-the-art robotics shows that the technology is not yet close to emulating science fiction. But the field has made considerable progress. One of the largest gaps between robots and humans is perception: today, biological systems vastly outperform conventional digital ones. Giorgio Metta, director of the iCub Facility at the Istituto Italiano di Tecnologia in Genoa, worked on an EU-funded research project called Emorph, which aimed at overcoming some of digital systems’ limitations. Metta talks to youris.com about the latest advances in robotics made thanks to this project.
What problem did you tackle?
We focused on the development of specific sensors that give robots a better ability to see and, as a consequence, to interact with their environment. These sensors are cameras that can detect variations of light at a much higher frequency than the cameras normally used in robotics. Such light measurements are fundamental in tasks like catching objects and avoiding obstacles while moving around. In addition to developing new sensors, we also worked on the interface and the algorithms that the robots need to manage all the relevant data for such tasks.
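To make the contrast with conventional cameras concrete, here is a minimal illustrative sketch, not the project's actual code, of how sensors of this kind typically behave: instead of delivering full frames at a fixed rate, each pixel independently reports an "event" whenever the light it measures changes by more than a threshold. The function name and threshold value below are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative only: approximate the asynchronous events such a sensor
# would emit, by comparing two conventional frames. Each event is
# (x, y, timestamp, polarity), where polarity says brighter or darker.

CONTRAST_THRESHOLD = 0.15  # illustrative value; real sensors are tunable

def events_from_frames(prev_frame, next_frame, timestamp):
    """Return the events implied by the change between two frames.

    prev_frame, next_frame: 2-D arrays of intensities in (0, 1].
    """
    eps = 1e-6  # avoid log(0)
    delta = np.log(next_frame + eps) - np.log(prev_frame + eps)
    events = []
    ys, xs = np.where(np.abs(delta) > CONTRAST_THRESHOLD)
    for y, x in zip(ys, xs):
        polarity = 1 if delta[y, x] > 0 else -1
        events.append((x, y, timestamp, polarity))
    return events

# A static scene produces no events; only the change does.
prev_frame = np.full((4, 4), 0.5)
next_frame = prev_frame.copy()
next_frame[:, 2] = 0.9  # a bright edge appears in one column
print(events_from_frames(prev_frame, next_frame, timestamp=0.001))
```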
What are your main achievements?
For the first time, we used this sensor technology in real robots, and not only in a small application. We worked on a robot that is able to move around in a specific environment, avoiding large moving obstacles such as a person. This is possible because the light variations are measured at a very high frequency, as in the human eye. We call these sensors ‘neuromorphic chips’ because they mimic the functioning of the human nervous system. The robot is also able to tell the difference between a sheet of paper lying on the floor and an object that it needs to avoid.
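The interview does not describe the robot's actual obstacle detector, but as a hedged sketch of one strategy an event-driven system might plausibly use: a moving object produces a dense burst of events while the static background stays silent, so regions of the sensor with high event counts over a short time window can be flagged. All names, sizes, and thresholds below are illustrative assumptions.

```python
import numpy as np

GRID = 8    # coarse spatial grid over the sensor (illustrative)
SENSOR = 128  # e.g. a 128x128 event camera (assumed resolution)
WINDOW_EVENTS_THRESHOLD = 30  # illustrative

def moving_regions(events):
    """events: iterable of (x, y, t, polarity). Return flagged grid cells."""
    counts = np.zeros((GRID, GRID), dtype=int)
    cell = SENSOR // GRID
    for x, y, _, _ in events:
        counts[y // cell, x // cell] += 1
    return np.argwhere(counts > WINDOW_EVENTS_THRESHOLD)

# Example: a burst of events concentrated in one spot looks like motion.
rng = np.random.default_rng(0)
burst = [(int(x), int(y), 0.0, 1)
         for x, y in rng.normal(64, 4, size=(200, 2))]
print(moving_regions(burst))
```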
What else would you like to investigate?
We have demonstrated that it is possible to build a complete system using these high-performing sensors instead of a traditional static camera. We also developed the new neuromorphic algorithms that were required to make this system work. Ultimately, we aim to replicate this approach with much lower power consumption. At the moment, we still need very powerful computers, which is a drawback in terms of speed. But I think we need a dedicated project for that.
What kind of new developments stem from this research?
The future sensors we are working on have a higher definition. This does not just mean more pixels per image: we are also trying to measure the local gradient, that is, to make the robot understand how the light at a single pixel relates to that of the surrounding ones. This would allow the system to detect whether a specific area of the image contains an edge and whether that edge is moving. Ultimately, it would give the robot more detailed information about what it sees.
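A minimal sketch of this local-gradient idea, assuming simple central differences between a pixel and its neighbours: a large gradient magnitude suggests an edge in that area of the image. The function names and threshold are illustrative, not the project's actual algorithm.

```python
import numpy as np

def local_gradient(image):
    """Per-pixel horizontal/vertical gradients via central differences."""
    gx = np.zeros_like(image, dtype=float)
    gy = np.zeros_like(image, dtype=float)
    gx[:, 1:-1] = (image[:, 2:] - image[:, :-2]) / 2.0  # left vs right
    gy[1:-1, :] = (image[2:, :] - image[:-2, :]) / 2.0  # above vs below
    return gx, gy

def edge_map(image, threshold=0.2):
    """Mark pixels whose gradient magnitude exceeds a threshold as edges."""
    gx, gy = local_gradient(image)
    return np.hypot(gx, gy) > threshold

image = np.zeros((5, 5))
image[:, 3:] = 1.0  # a vertical step edge
print(edge_map(image).astype(int))
```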
At the moment we are also trying to use the same neuromorphic technology that we developed for vision during the project to mimic tactile sensing. For now we only have a preliminary chip, but the results are encouraging.