The Future of Robotics – 20 years on: How will we be living with our robots?
Transcendence, Her and Metropolis – numerous science fiction films claim to show us what a world with self-learning robots might look like. What is already out there, and what can we expect in the future?
Driving cars, communicating, making music – none of these presents a challenge to robots these days. In autumn 2013, the first opera to feature performing, music-making robots premiered in Philadelphia and New York.
Rolf Lakämper, Professor of Robotics at Temple University in Philadelphia, worked with composer Maurice Wright to create the opera. Almost 30 years ago, the mathematician founded Germany’s first gaming company, “Magic Bytes”, and created the 8-bit game Mission Elevator. Today he develops autonomous robots. Rolf talked to us about what robots can do these days, what they will be able to do in the future – and what they probably won’t.
Rolf, you research and develop autonomous robots. They roll around the Temple University campus in Philadelphia and talk to students. Do you tell them where to go – like with a remote-controlled car – or do they decide for themselves where they want to go?
The robots steer themselves autonomously, of course. I take them to campus and tell them to roam around to explore their world. They have to locate themselves in space. Each one has a built-in camera and a laser scanner to create a map of the area and to identify its location on the map at the same time. It explores the university campus completely on its own.
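The map-building step he describes can be sketched, in a highly simplified form, as marking the grid cells that each laser beam hits. The function name, the half-meter cell size, and the scan format below are all illustrative; a real SLAM system would also correct the pose estimate against the map it has built so far:

```python
import math

def update_map(grid, pose, scan, cell=0.5):
    """Mark the grid cells hit by laser beams as occupied.

    Simplified mapping step with a known pose; full SLAM would also
    refine `pose` by matching `scan` against the existing map.
    """
    x, y, heading = pose
    for angle, dist in scan:  # one (bearing, range) pair per laser beam
        hx = x + dist * math.cos(heading + angle)
        hy = y + dist * math.sin(heading + angle)
        grid[(round(hx / cell), round(hy / cell))] = "occupied"
    return grid

# Robot at the origin, facing along x; two beams: straight ahead and to the left.
grid = update_map({}, (0.0, 0.0, 0.0), [(0.0, 2.0), (math.pi / 2, 1.0)])
```

Repeating this update while the robot drives around gradually fills in an occupancy map of the campus.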
After the robot has moved around the area for a while, could you ask it where the dining hall is?
Yes, I could. But to answer such a question, the robot needs to name, or label, the newly discovered places. That’s why, whenever it discovers a new place in its environment, it asks a student for its name – and then it knows where the dining hall is, for example. From then on, it can solve navigation tasks, such as finding the best way to get to the dining hall.
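Once places are labeled, finding the best way reduces to a search over a graph of named places. A minimal sketch using breadth-first search, which finds the route with the fewest hops (the place names and adjacency map are made up for illustration):

```python
from collections import deque

def best_route(adjacency, start, goal):
    """Breadth-first search over labeled places: fewest hops from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route known yet

campus = {
    "entrance": ["library", "quad"],
    "quad": ["dining hall"],
    "library": ["quad"],
}
print(best_route(campus, "entrance", "dining hall"))
# → ['entrance', 'quad', 'dining hall']
```

With travel distances attached to the edges, the same idea extends to Dijkstra’s algorithm for the shortest route rather than the fewest hops.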
How does a robot talk to students?
It has a laptop with an integrated microphone and speaker that runs speech synthesis and speech recognition software. There is nothing particularly special about it; the software has been around for a while.
“Seeing and speaking are no problem, but understanding and recognition are still tough.”
What is currently the biggest challenge in robot development?
Seeing and speaking are no problem, but understanding and recognition are still tough. In other words: evaluating the content and recognizing connections is the challenge. Robots initially just see a lot of tiny pixels they need to unify. Understanding an environment is quite another task. One step toward it is classification and inference – for example, when a robot classifies the data it sees as a “street”, it will also expect cars to be driving on it.
How do you build your robots, do you have your own workshop?
I don’t build the robots, nor do I develop the speech recognition. Those are off the shelf. I just create the bit that goes in behind the eyes: the programming of algorithms for visual understanding. My robots do not look particularly impressive; they are more like a box on three wheels with technical-looking lasers and cameras on top. However, now that we have 3D printers, I also sometimes print out smaller robots myself using MakerBots, the plastic printers you see everywhere. I print out the plastic components and install tiny motors and computers.
“Now that we have 3D printers, I also sometimes print out smaller robots myself.”
You don’t just work on autonomous robots for fun; you have a serious objective: your research projects are designed to develop robots that can find victims after a catastrophe. Why is that important?
This is the “search and rescue” direction of robotics. It really got into the spotlight after the 9/11 attacks. Back then, 20 to 30 remote-controlled robots were driven into the wreckage to search for survivors. They were controlled via cable and joystick. The attempt was a complete failure though: not only did they not find a single victim, the cables tore and the robots went missing. That is why we are working on creating robots that can move autonomously and know how to get into and out of spaces themselves. The US government is very interested in this research, with funding coming, for example, from grants by government agencies like the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA).
In dangerous situations in particular, sending robots in first to check out the situation is a huge advantage. In what other kinds of situations can they be used as well?
When a tsunami destroys a whole city, for example, robots can tell rescue workers where people need to be rescued, and the safest way to get to them. Japan is a forerunner in this kind of research because of the heightened danger of earthquakes and tsunamis. Germany is fairly far along as well because researchers here have developed quite advanced autonomous robots.
What abilities does the most advanced autonomous robot have today?
It can drive through the US on its own, transporting people from A to B, and is known as the Google Self-Driving Car. Google is the industry leader in this area thanks to German researcher Sebastian Thrun. The self-driving cars look like cute little VW Beetles. They can currently drive fully autonomously along roads in California and Arizona. In these states, self-driving cars are by now even legal.
Doesn’t the car have to be able to recognize other cars in addition to people, street signs and traffic lights?
Yes, and, most importantly, it has to be able to anticipate what will happen next: It has to assess how a pedestrian is likely to move, or if another car is just about to park. The robots we used in the opera run similar software, by the way.
How did this opera come about? Combining robots and classical music seems to be a very unusual idea.
I developed the opera with Maurice Wright, a well-known composer of electronic music. I am very interested in music and he likes technology. Maurice saw my robots on campus and wanted to get them on stage. He has also always wanted to compose a classical opera using only electronic media. So Galatea Reset was born, an opera for three autonomous robots, five singers, dancers, and a choir. The robots have a chassis with a speaker mounted on it. During the piece they semi-autonomously generate music.
Don’t the robots have to respond to people during the opera, or have they been pre-programmed?
That was the big challenge. There were up to 25 people on stage at once, doing different things each time. The robots had to be able to react. They had to recognize where they were in a piece of music, where they needed to come in and fade out, and what actions to perform. They also had to understand instructions during rehearsals. When the director said “Let’s take it again from bar 25”, they had to return to their positions for bar 25.
So they got the same instructions as the singers?
Exactly. We programmed an electronic score and a kind of built-in synthesizer that produced the music. The computer knew the musical commands, like what notes should be played. But during the piece, the robots decided for themselves what sound and expression to give the notes. So parts of the music were played differently during each performance, depending on the mood of the audience and singers. At times they even recorded laughter from the audience and played it back where they found it appropriate.
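One way such a split between score and interpretation could look, purely as an illustration: the electronic score fixes which note to play, while a sensed ambient level and a little randomness shape how loudly it is performed. All parameter choices here are assumptions, loosely following MIDI’s 1–127 velocity scale:

```python
import random

def expressive_velocity(base_velocity, ambient_level, spread=10):
    """Shape a scored note's loudness from the sensed room ambience.

    `base_velocity` comes from the fixed score; `ambient_level` (0.0-1.0)
    is a hypothetical microphone reading; `spread` adds per-performance
    variation so no two performances sound identical.
    """
    scaled = base_velocity * (0.8 + 0.4 * ambient_level)
    jittered = scaled + random.uniform(-spread, spread)
    return max(1, min(127, round(jittered)))  # clamp to MIDI velocity range

# The same scored note comes out slightly differently each performance:
print(expressive_velocity(80, 0.5))
```

The note itself never changes; only its expression does, which matches the description of performances varying with the mood of the audience and singers.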
When does a robot feel it is appropriate to replay laughter?
That is based on an underlying algorithm. Of course robots cannot "feel" the music in a human sense, but at least they can pretend to express a certain feel for music: they can analyze their environment and react to changes. A robot might deem it appropriate when the volume drops or when something funny happens. It looks very convincing, but such reactions are pretty far away from what a human expects them to be.
Unlike human beings, who have to rehearse a lot to perform an opera, robots don’t make mistakes. Are robots better opera singers?
If you want music to be defined through precision: yes. I personally do not, so the answer is: most definitely not. They don’t have what we would call a soul; that is, if ever, still a long way off.
Your robots play music with opera singers and talk to students. How do people respond to such intelligent robots?
The actors on stage were a little shy initially, but they got over it quickly. In the beginning, the star of the show always left a one- to two-meter safety zone between the robot and herself, but after three performances she was standing right next to it and even petting it.
Did she still perceive it as a computer or more as an opera singer?
She said she had developed a personal relationship with it. That tiny bit of unique personality you can program into a robot is often enough to make people believe there is a soul in there.
But can robots be trusted?
As a layperson, you should not fall too hard for these supposed personalities or believe what the science fiction films say. But that doesn’t mean you shouldn’t trust technology. The self-driving car is probably safer than any human driver on the road in the US today. Films often depict robots as a threat that could take over the world. But that is really still very far off, as robots still have no real understanding of the world at all. Right now they are on a path toward becoming helpers for well-defined, repetitive tasks.
"The self-driving car is probably safer than any human driver on the road in the US today."
[Video: Robot folding towels]
Self-learning robots have become a widely discussed issue right now. How intelligent have robots become?
The internet offers a huge data pool robots can draw from. We have a small humanoid robot in the lab, which we gave knowledge about its body features – you could say it knows that it looks human. Now when we tell it the name of a human pose, like the yoga “warrior pose” for example, it is able to use Google image search to look for pictures of people in that pose. It filters the results down to the most common ones, which seem significant – and after around 10 seconds, my robot has learned the pose and is able to perform it. Getting robots to develop an understanding of the functionality of objects in their environment will go very quickly. We will make huge strides here over the next 5 to 10 years.
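The filtering step he mentions might work, in spirit, like this sketch: discard pose estimates that disagree with the per-joint median across the retrieved images, then average the rest. The joint names, angle units, and tolerance are all illustrative; the actual pipeline is not described in detail in the interview:

```python
from statistics import mean, median

def consensus_pose(candidates, tolerance=20.0):
    """Combine noisy per-image pose estimates into one pose.

    `candidates` are hypothetical joint-angle dicts (degrees) extracted
    from image-search results; estimates far from the per-joint median
    (e.g. mislabeled images) are discarded before averaging.
    """
    joints = candidates[0].keys()
    med = {j: median(c[j] for c in candidates) for j in joints}
    kept = [c for c in candidates
            if all(abs(c[j] - med[j]) <= tolerance for j in joints)]
    return {j: mean(c[j] for c in kept) for j in joints}

estimates = [
    {"left_knee": 90, "right_knee": 175},
    {"left_knee": 95, "right_knee": 180},
    {"left_knee": 10, "right_knee": 20},  # a mislabeled search result
]
pose = consensus_pose(estimates)
```

The resulting joint angles can then be sent to the robot’s motors to reproduce the pose.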
What will robots be to us in the future: more a good friend, or a household helper?
Household robots represent a giant movement in robotics right now. There is a robot, for example, that can independently find recipes on the internet, then go to the fridge, retrieve all the ingredients, and prepare the dish. The same robot can also fold towels; the only problem is that it still takes three hours to fold just one towel. But there is just as much research into the social aspects – how robots interact with people – as there is into their physical progress. This is important for household robots too: a robot that empties the dishwasher is not very useful if it talks to you in such a stiff manner that you don’t want to use it. It is extremely important that robots seem human in order for us to accept working with them.
What will our interactions with robots look like twenty years down the road – will they be living in our homes?
I really hope that a robot will be helping me out of the bathtub in a retirement home by then! Robots will have made inroads into elder care in particular, into medicine, our homes and transportation. For instance, I am sure no one will be driving themselves around anymore by then.
"I really hope that a robot will be helping me out of the bathtub in a retirement home by then!"
You said that the military is also interested in the technology. Whose hands should this technology never fall into?
Although personally I am in favor of education and against regulation, at this point anything having to do with weapons has to be regulated and should not really be in anyone’s hands. It is important that the government take preemptive action here. Every new technology can also become a threat if not handled responsibly.
Measurement in Motion
In addition to his research in Philadelphia, Rolf Lakämper is currently working on a start-up, “Measurement in Motion”. Together with Jan Elseberg and Prof. Andreas Nuechter, he is developing a device that can very quickly and precisely create 3D scans. This makes it possible to digitize real places, depicting them on a computer in 3D. It can survey entire cities, the interiors of factories, or archeological excavation sites.
A laser scanner measures distances. It emits rays and measures the time the light takes to return, so it can localize specific points. This scanner can be combined with a conventional camera or heat sensor. Together with a heat sensor, for example, it can create images of cities and show where buildings are poorly insulated. With a normal camera it can create complete 3D images of interiors that can be used to identify passageways. Users can carry the device through a place, mount it underneath a drone, or onto a robotic chassis. It also delivers information for localization.
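The time-of-flight principle behind the scanner is simple to state: the light covers the distance twice (out and back), so the distance is the speed of light times the round-trip time, divided by two. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_seconds):
    """Distance to a point from a laser's round-trip time of flight.

    The pulse travels to the target and back, hence the division by two.
    """
    return C * round_trip_seconds / 2.0

# A return after ~66.7 nanoseconds corresponds to roughly 10 meters.
print(tof_distance(66.7e-9))
```

The tiny times involved are why millimeter accuracy requires picosecond-scale timing electronics: one millimeter of distance corresponds to only about 6.7 picoseconds of round-trip time.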
Unlike previous 3D-scanning technologies, Measurement in Motion provides measurements that are exact down to the millimeter, which makes it particularly interesting for engineers. But not only for them: in Troy, for example, there is great interest in using the device to survey the slowly decaying archeological sites and depict them on the internet. Measurement in Motion is funded by the University of Würzburg and the German government’s EXIST program.