Daniel Lee is having way too much fun at work. In fact, he’s pretty much living the fantasy of any kid who ever loved robots. He builds them.
But it’s not all fun and games. Lee brings a slew of heavyweight credentials to this dream job. He got his PhD in condensed matter physics from MIT, he’s a professor at the University of Pennsylvania’s School of Engineering and Applied Science, and is director of the school’s General Robotics, Automation, Sensing and Perception (GRASP) Laboratory. Now that’s a resume.
Under his supervision, students and faculty at the GRASP Lab created Thor and Trooper, two robots who will be competing in the finals of the Defense Department’s DARPA Robotics Challenge.
Lee would have been a physicist were it not for an incident in a nuclear reactor when he was a grad student. He was working in the reactor at 3 a.m. when there was a sudden emergency shutdown. Just as the door was about to automatically seal the place shut, he managed to escape. His first thought (after “Phew!”) was “Do I really want to spend the rest of my life doing experiments inside a nuclear reactor?”
Not so much. Instead, he decided to apply the principles of physics to studying the human brain — and that’s what led him to robotics.
We spoke with him about why he made that career change, and why building a better robot is all about understanding how our brains work.
Drive the District: Why did you go from studying the brain to robotics?
Dan Lee: Well, we still don’t have much experimental data from the brain because it’s pretty hard to do experiments. So I thought, “If I want to understand the brain, one way to do it is to take it apart. Another way is to build it.” And that naturally led to robotics and machine learning and trying to make computational models for robots.
DTD: In the DARPA Trials, the robots could do so many things we do: climb ladders, use a saw to cut through walls, even drive a car. They already seemed so human.
DL: It’s very easy to anthropomorphize these kinds of machines, the humanoid ones. But the intelligence is lacking. It might have the body of a 20-year-old, but it has the brains of a one-year-old. Kind of like an idiot savant robot.
The same technologies that have enabled smart phones and all sorts of fancy gadgets have made robotics a lot easier. We have better sensors, and we have better computers that are a lot faster and are much smaller and use less power. But we still don’t understand the brain, so we don’t know what to put in the software of these robots.
DTD: One of the difficulties at the DARPA Trials was that all the two-legged robots were tethered, and even then they took some spills. Is it important to build robots that look like us, or do we just get a kick out of it?
DL: If you think about traditional robotics, they started off on the factory floor, like car-making robots. And that’s a place designed specifically for the robot. But you would not want to be on the factory floor when all these robots are whizzing around trying to put together the car. So the issue now is that we want robots to be in places where humans are and to use things that humans would use, like tools and stairs and doors. And if a robot looks like a human, it can basically use the same types of motions that a human does to get around in a world designed for humans.
DTD: What will it take to have them walking completely on their own?
DL: Humans actually use their brains all the time to balance. We’re constantly using the vestibular system in our inner ear. And we use our touch sensors on the bottom of our feet. We also have something called proprioception, which is our muscles telling us exactly how much tone we need and at what angle our joints are. Our brain has to constantly integrate that information so that we stay upright. In robotics, we don’t yet know how to do this control and sensing as well as humans do.
DTD: Aside from balance, what are you aiming for in terms of robot intelligence?
DL: Typically, the way we program a computer is with a set of what we call “If-then-else” statements. If you see this, then do this. So the typical way a robot can open a door is, “If you see this round doorknob, then look at it, put your hand this way, turn the hand clockwise, and then it should open.”
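Lee’s doorknob example can be sketched as a handful of hard-coded rules. This is a hypothetical illustration, not code from any actual robot; the handle types and motion primitives are made-up placeholders.

```python
# Sketch of the rule-based "if-then-else" control Lee describes.
# Handle names and motion primitives are hypothetical, not a real robot API.

def open_door(handle_type):
    """One hand-written branch per handle the programmer anticipated."""
    if handle_type == "round_knob":
        return ["grasp", "turn_clockwise", "push"]
    elif handle_type == "lever":
        return ["grasp", "press_down", "push"]
    else:
        # A handle nobody anticipated: the robot simply has no answer.
        raise ValueError(f"unknown handle type: {handle_type}")

print(open_door("round_knob"))  # ['grasp', 'turn_clockwise', 'push']
```

The failure mode is the `else` branch: a human shown an unfamiliar handle improvises, while this program can only give up, which is exactly the generalization gap Lee describes next.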
Humans and animals, on the other hand, are very good at learning from experience and generalizing from that experience. If you show us a doorknob we’ve never seen before, we’re able to figure out what to do with it. That’s what computers and robots are not able to do at this point. That’s the creativity we don’t know how to put in a machine yet.
DTD: How do you go about even attempting to make a robot think for itself?
DL: The issue is how do we build mathematical models or algorithms that replicate how humans and animals learn to adapt to new environments.
Humans learn by getting feedback from the environment about what’s working well and what’s not, and then we modify our behavior accordingly. When we get a reward, we try to do more of that action so that we can get more reward. And we try to put the same ideas into robotic algorithms so that the robot can essentially learn from this experience.
DTD: Are you having any success in teaching robots to learn through reward and punishment?
DL: To a certain extent. But not as well as a human does.
The robot in the PBS video [below] is named “Darwin-OP” and it learns using reinforcement. When we started, it fell down quite a bit. But each time it fell, it got a small punishment, whereas if it kept on its feet, it was rewarded. Then gradually over time, it was able to learn a set of controller weights that allows it to react properly to pushes. What you see in the video [after several falls] is its behavior after learning.
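The reward/punishment loop Lee describes can be sketched as a toy simulation. Everything here is a stand-in — the one-number "physics," the reward rule, and the step size are invented for illustration and have nothing to do with the actual Darwin-OP controller.

```python
import random

# Toy sketch of learning by reward: a single controller gain is nudged
# toward values that keep a simulated robot upright against random pushes.
# The "physics" (stays_upright) is a made-up stand-in.

def stays_upright(gain, push):
    # Pretend the robot balances only if its corrective gain roughly matches the push.
    return abs(gain - push) < 0.5

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    gain = 0.0
    for _ in range(episodes):
        push = rng.uniform(0.5, 1.5)          # a random shove
        trial = gain + rng.gauss(0, 0.2)      # try a slightly perturbed gain
        if stays_upright(trial, push):
            gain += 0.1 * (trial - gain)      # reward: move toward what worked
        # a fall earns no update -- the "punishment" is the missed reward
    return gain

print(train())  # learned gain drifts toward the typical push magnitude
```

Early on, most trials end in a fall, just as the robot in the video does; over many episodes the rewarded trials pull the gain toward values that survive a typical push.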
DTD: Realistically, where do you think we might be with robotic intelligence in five or ten years?
DL: I think we’re seeing a lot of progress in sensing and control, and we’re doing better in the cognition part. But there’s this high-level reasoning and learning that humans do, and we still haven’t touched that space yet.
I would say that building this higher-level intelligence is not going to be incremental. There has to be some new breakthrough in terms of understanding. And that’s harder to predict. I don’t know. Maybe tomorrow someone will publish a paper that has the secret to understanding how humans reason. We don’t have that knowledge yet.
DTD: You were also involved in a DARPA challenge for cars a few years ago.
DL: Yes, the DARPA Urban Challenge. I’m also director of the University Transportation Center, which is a joint effort between Carnegie Mellon and Penn. We both built self-driving cars at that time. GM was a big sponsor of the Carnegie Mellon team that won [U Penn worked with a different car company]. GM is now putting some of that technology into things like driver safety systems, collision avoidance, lane keeping, and there’s something called Super Cruise coming along that they’re working on. It’s maturing quite quickly. And Raj Rajkumar runs the GM Lab at Carnegie Mellon, and they’re funding him to commercialize some of the autonomous-vehicle work that was started with the DARPA challenge.
DTD: What other kinds of skills are your teams working on at the lab?
DL: Here at Penn in the Robotics Center we have lots of cool projects. We have faculty working on robotic surgery, using the DaVinci robot, which doesn’t feel the patient at this point. It just sees it. So how do you get the [robotic] surgeon to feel the patient?
We have a faculty member who does the flying UAVs [unmanned aerial vehicles]. This is Vijay Kumar. He had a famous TED talk about these little flying robots, quadrotors.
Another faculty member is working on what’s called modular robots. Right now robots are put together for a particular form or function, but imagine having a robot that attaches itself like a set of Lego bricks to do something, and then explodes the bricks and reforms in another shape.
DTD: Wait a minute. You mean like a Transformer?