
The Body Robotic

Form, function, and the future as seen through the eyes and handiwork of three GW roboticists.


To test perception algorithms being designed for high-profile robots like Boston Dynamics' AlphaDog and an autonomous car by Toyota, Dr. Sibley's lab has built five "gymnastic, parkour-style" robots, including this one, nicknamed Herbie.

By Danny Freedman

Photography by Jessica McConnell Burt

One of them aspires to be on the battlefield. Another, in the home as a domestic aide. Others aim to perform surgeries and to chauffeur the tired, the busy, and the infirm.

They are unseasoned immigrants to these parts of the workforce, though, and are about as green as they come. Numb to social cues and the tug of common sense, they bring only a heat-seeking determination.

"In the last couple decades robots started roaming out of the industrial corridor," says Pinhas Ben-Tzvi, an assistant professor of mechanical and aerospace engineering. "They're swimming, they are flying, they are walking."

But in order for robots to fully merge into society's fast lane, as they are being coaxed to do, they will need a depth that is almost more than the sum of their parts: They will need to perceive the world around them and make complicated decisions that power sophisticated maneuvering. And they'll need to do it all inside the tangled and fragile mess called daily life.

Here, three modern-day Geppettos working on the future of robotic perception, control, and mechanics open their workshops and share visions of a new generation of robots: ones that will walk alongside U.S. soldiers, pull victims from rubble, and lend a hand—or something like a hand—around the house.


Gabe Sibley

Department of Computer Science
Sample projects: robotic perception for Boston Dynamics' LS3 robot and for an autonomous car by Toyota


Herbie and the other "ninja cars" are built from off-the-shelf remote-controlled cars, which, over the years, Dr. Sibley says, have become rugged, light, and inexpensive.

We look for algorithms that are the underpinning of perception and action and use them to build autonomous machines that can perform useful tasks.

Using cameras and computer vision we try to teach robots how to "see"—how to understand the spatial and semantic context they share with us. So that's: Where am I? Where and what are objects of interest here?

We have to compute these solutions quickly, fast enough so that, for example, if I'm a robot running through the woods I can avoid hitting trees and stepping in the wrong place.

To do that the robot has to build a mental picture of the world, like an internal Pixar movie. That's then used to make decisions—specifying an action and simulating the physics to predict the result. It's planning over that mental model, like a gymnast visualizing the flip and tumble before they do it; imagining what would happen and using the results to execute the move.

My work focuses on that fast and accurate perception of the world, as well as the high-speed planning and control. It's called perception-driven model-predictive control, and we use it for fast and agile ground vehicles that jump, slide, and bounce over complicated 3D terrain.
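
To make that concrete, here is a minimal sketch of the model-predictive idea in Python. The point-mass model, the sampling scheme, and every number in it are invented for illustration; it is not the lab's controller, only the shape of "simulate candidate actions, score the predicted outcomes, execute the best one, and re-plan."

```python
# A toy illustration of model-predictive control: sample candidate action
# sequences, roll each one forward through a simple physics model, score the
# predicted outcomes, and execute only the first action of the best plan.
# The point-mass model, costs, and parameters here are invented for clarity.
import numpy as np

def simulate(state, actions, dt=0.1):
    """Roll a 2D point mass (x, y, vx, vy) forward under acceleration commands."""
    states = [state]
    for a in actions:
        x, y, vx, vy = states[-1]
        vx, vy = vx + a[0] * dt, vy + a[1] * dt
        states.append(np.array([x + vx * dt, y + vy * dt, vx, vy]))
    return np.array(states)

def cost(trajectory, goal, obstacles, clearance=0.5):
    """Penalize distance to the goal and any predicted obstacle collisions."""
    c = np.linalg.norm(trajectory[-1, :2] - goal)
    for obs in obstacles:
        dists = np.linalg.norm(trajectory[:, :2] - obs, axis=1)
        c += 100.0 * np.sum(dists < clearance)   # heavy penalty for hitting a "tree"
    return c

def plan(state, goal, obstacles, horizon=10, samples=200, rng=np.random.default_rng(0)):
    """Pick the first action of the lowest-cost sampled action sequence."""
    best_actions, best_cost = None, np.inf
    for _ in range(samples):
        actions = rng.uniform(-2.0, 2.0, size=(horizon, 2))   # candidate accelerations
        c = cost(simulate(state, actions), goal, obstacles)
        if c < best_cost:
            best_cost, best_actions = c, actions
    return best_actions[0]   # execute one step, then re-plan from the new state

state = np.array([0.0, 0.0, 0.0, 0.0])
action = plan(state, goal=np.array([5.0, 5.0]), obstacles=[np.array([2.5, 2.5])])
```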

For Toyota's autonomous cars we're working on robust perception, planning, and control algorithms capable of negotiating degraded conditions: driving at night, driving in the rain, or snow.

Cars that drive themselves will need to understand the road in difficult situations. It's not always going to be a sunny drive in California—sometimes it's going to be sliding sideways at night in Sweden. In these radical situations we still want to come up with the right answer. To do that, our goal is to develop perception, planning, and control systems that supersede even Mario Andretti's.

We're doing essentially the same thing for platforms like LS3, nicknamed AlphaDog. [More formally, that is Boston Dynamics' Legged Squad Support System, a four-legged, all-terrain pack mule, being built for the military, that will be capable of traveling autonomously, either alone or trailing soldiers.]

It's a very similar problem. Robust perception is especially hard from a platform that can bound through very rough terrain at high speed, which makes it difficult to see what is going on from the on-board camera. So we have to take that very blurry, noisy image data, inertial data, and other sensor data and make sense of them to build accurate internal world models.
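
A standard way to make sense of fast-but-drifting inertial data plus slower, noisier camera fixes is a Kalman-style filter. The one-dimensional sketch below is only meant to show that blending; the state, noise levels, and sensor model are assumptions, not details of the LS3 work.

```python
# Illustrative 1-D fusion of inertial predictions with noisy position fixes,
# in the spirit of combining blurry image data, inertial data, and other
# sensor data into one estimate. All noise values are made up.
import numpy as np

def fuse(accels, fixes, dt=0.01, accel_var=0.5, fix_var=2.0):
    """Kalman filter over state [position, velocity] driven by acceleration."""
    x = np.zeros(2)                      # state estimate
    P = np.eye(2)                        # estimate covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    H = np.array([[1.0, 0.0]])           # camera measures position only
    Q = accel_var * np.outer(B, B)       # process noise from a noisy accelerometer
    R = np.array([[fix_var]])            # camera fix noise
    estimates = []
    for a, z in zip(accels, fixes):
        x = F @ x + B * a                # predict with inertial data
        P = F @ P @ F.T + Q
        if z is not None:                # correct when a (possibly blurry) fix arrives
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```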

As a test platform for these algorithms we have these small, fast, and robust ground vehicles that can handle high-speed jumps. Imagine putting a small robot inside a skate park and turning it loose.

The students call that project "ninja car," but it's really about the perception, the modeling and tracking of what the robot is seeing. And even more than that, it's about the planning side: coming up with the choices to make in order to hit that jump, and to land at the right spot, at exactly the right angle, so it can hit the next jump and go off the half-pipe at just the right velocity and not crash.

Autonomous cars are going to completely change the way we relate to the automobile. When people are driving about 55 miles per hour, only 12 percent of the highway is occupied by a vehicle—that's because people don't want to get too close to each other. Because it's scary. We're really bad at driving in these machines. And we can build autonomous systems that don't have to look in mirrors to see backwards. They're looking 360 degrees all the time.

Fifty years from now our children will say: Wait a second—did you really get in those metal boxes and push and pull levers and gears to make them move? Wasn't that dangerous? Why didn't you have the computer do it? And they'll be right. When you grab the wheel your insurance premium will just go up.

Of course, there are all sorts of fun sociological issues, legal issues, and policy issues. There's just no practical way to go from kind-of autonomous—where we are now, with systems for things like lane-keeping and parking—to fully autonomous. What we have to do is make cars that are capable of driving in human traffic.

When you commute you can take a nap or maybe do your email. If you're impaired, you don't have to be chauffeured around. It's going to be more efficient. It's going to be better for the environment. Autonomous highways are going to revolutionize society.


Pinhas Ben-Tzvi

Department of Mechanical & Aerospace Engineering
Sample projects: mobile robotics, including work for the military's Defense Advanced Research Projects Agency (DARPA), and robotic surgery


Dr. Ben-Tzvi's novel hybrid mechanism allows both this robot's arm and traction mechanism to be dual-purpose, with either capable of aiding movement or manipulation. A pop-up vision system and a gripper (both visible at right) retract into the arm, enabling the robot to be flat and flippable. Front and back cameras, lights, and sonar are embedded near the tracks to guide a remote operator.

My lab conducts fundamental and applied research in robotics and mechatronics (the synergy of mechanics, electronics, and computer control in an integrated design). The beauty of transforming that fundamental research into different applications is what drives me.

For instance, we are working on developing mobile robotic systems for search-and-rescue applications, for reconnaissance, for inspection, for monitoring; they could also be used for handling improvised explosive devices and bomb disposal. We are also working on medical applications, like robotic systems for surgery.

The goal is to benefit from robots, to have a better, safer, healthier, and easier life.

I was motivated by 9/11. I read a lot of articles about how robots were able to do some useful tasks but weren't dexterous enough or robust enough to reach deeper into the rubble.

The prevailing design of mobile robots is based on a traction mechanism for locomotion, with a separate arm attached on top for manipulation. But if a robot like that is used on rough terrain and it flips, the arm could break.

So I came up with this idea that I call the hybrid mechanism. It has links that can be used interchangeably for locomotion and manipulation—and also do both simultaneously.

It gives the system far greater capabilities. For example, if you're trying to lift something, imagine being able to use your leg as an arm. You'd be able to lift a lot more. The robot has tracks on the bottom so it can travel on rough terrain, and the arm can be used as leverage for things like climbing obstacles, going up and down stairs or over ditches and other terrains. An integrated hand can be used to open doors. The robot is fully symmetrical when the arm is folded down, so it can flip and still continue operating.

It is tele-operated now but we're working on another DARPA project to do those things autonomously. We're using a vision-based navigation system we developed and other sensors—like a camera in the palm of the hand, stereo vision for depth perception, laser scanners for accurate measurements of what's in front of the robot—that allow you to plan the motions needed, for example, in order to climb difficult obstacles.
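
As a simplified illustration of going from range measurements to a motion plan, the sketch below builds a small height map, treats steps taller than the robot can climb as blocked, and searches for a route. The grid, the climbing threshold, and the search method are all hypothetical stand-ins, not the lab's navigation system.

```python
# Toy illustration of planning over a height map built from range sensors:
# cells whose step height exceeds what the robot can climb are treated as
# obstacles, and a breadth-first search finds a route. Values are invented.
from collections import deque
import numpy as np

def plan_path(heights, start, goal, max_step=0.3):
    """Breadth-first search on a grid, refusing moves with too large a height change."""
    rows, cols = heights.shape
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in came_from:
                if abs(heights[nr, nc] - heights[r, c]) <= max_step:
                    came_from[(nr, nc)] = cell
                    frontier.append((nr, nc))
    if goal not in came_from:
        return None                      # no traversable route at this climbing ability
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = came_from[cell]
    return path[::-1]

heights = np.zeros((5, 5))
heights[2, 1:4] = 1.0                    # a ledge across the middle of the map
print(plan_path(heights, start=(0, 0), goal=(4, 4)))
```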

Search-and-rescue robots also have to be able to act and react safely. If a robot has a significant amount of force and acts without feedback on an object—or a person—it's going to do more damage than good.

We're working on continuum mechanisms for these mobile platforms. Typical robot arms are made up of articulated rigid links. Those would be replaced by an arm that is continuously flexible, a unit with multiple segments that allow it to bend like a snake or an elephant trunk.

If the robot exerts too much force, the continuum arm will inherently bend around the object without damaging it. (Rigid-link mechanisms can be made "smart," too, but the added sensors and controls to do that can complicate the system and make it more prone to failures.)
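
The intuition can be put in numbers with a toy spring model: for the same small overshoot into an object, a stiff rigid link transmits a large contact force while a compliant continuum arm transmits very little. The stiffness values below are invented purely for illustration.

```python
# Toy comparison of the contact force when an arm overshoots into an object
# by a small distance: a stiff rigid link transmits a large force, while a
# compliant continuum arm deflects and transmits far less. Stiffness values
# are made up for illustration only.
def contact_force(overshoot_m, stiffness_n_per_m):
    """Treat the arm-object interaction as a linear spring: F = k * x."""
    return stiffness_n_per_m * overshoot_m

overshoot = 0.02                                    # commanded 2 cm past the surface
print("rigid link:    ", contact_force(overshoot, 50_000.0), "N")
print("continuum arm: ", contact_force(overshoot, 500.0), "N")
```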


Employing the brawn of a ready-made PR2 robot, nicknamed Pepe, Dr. Drumwright's lab is focusing on the brainpower needed to make robots more useful in the home, particularly for elder care.

The same idea, only scaled down, can apply to robotic surgery—using the flexible arm to prevent inadvertent damage to organs, for instance, from sudden movements. I'm working with surgeons at GW on developing designs for its use in colorectal surgeries.

Another project is called STORM, for Self-Configurable and Transformable Omni-Directional Robotic Modules. These are robotic pieces that can assemble into a larger robot and later disassemble. [For example, three small modules could link together to create one robot that looks similar to the hybrid mobile robot.]

If you want to tackle a rubble pile you could send a fixed-configuration robot, but what happens if it doesn't fit or encounters a fence? It's stuck. Mission failed.

This robot, using a vision system or other sensors, would see it can't fit and then start disassembling. The robotic pieces would swarm inside the void and communicate with each other wirelessly. If one finds a person stuck under a concrete slab, it can call for the others and connect into a larger configuration capable of handling larger payloads. Once the task is performed, they disassemble and find something else to do.

Robotics research is very multi-disciplinary, so I've created a group that reflects that. Having the ability to do all the disciplines in-house while the students interact and learn from one another, I think that's unique.

I like to see the big picture. You can have a skeleton but, without the tissues, the muscles, the senses, and the brain, it can't do anything. I see robotics in a similar way: the mechanism, the motors and actuators, the sensors, the wire connections, and the processor work together to bring the robotic system to life.


Evan Drumwright

Department of Computer Science
Sample projects: manipulation, efficient control, and balance


Robots opening doors autonomously is one task Dr. Drumwright is trying to crack. Faced with a new type of door, "we want it to push and poke until it figures out how the mechanism works."

We don't have many robots in the home.

We've got the Roomba, and the same company also has a similar thing called Scooba for cleaning floors, and we've got some toy robots. But we really don't have anything that can help us that much in the home. That's one of the things that I'm really interested in.

I use a PR2 [manufactured by Willow Garage], which is a very capable robot. The creator wanted a robot that could make him breakfast. My vision is a robot to help me get around when I'm old and have just a couple marbles rolling around in my head—to help care for me so I don't put that burden on family or have to pay people who don't have a real incentive to do a great job taking care of me.

It may not necessarily be doing health care, but more like changing a light bulb or doing some mild cleaning—you'd be amazed at the number of things that elderly people need help with.

One of the things I've been working on has been manipulation, trying to get the robot to do things dexterously, like a human would do, and to do them at higher speeds and in dynamic environments.

There's a famous video where a PR2 folds towels. It takes, I want to say, about 30 minutes for the robot to fold one towel. People see the promise in what the robot's doing but it's so slow they start to joke: OK, well a $400,000 robot folds one towel in 30 minutes, so this will be practical … when?

The other problem is that the robots are always doing these things in controlled environments. PR2 stands for Personal Robot 2. It's meant to be in your house doing things alongside you.

We don't want to focus on getting the robot to do one thing at a time and creating a library of tasks. More importantly: Is there one thing we can do here that can indicate we can do 10 other things?

One important task we're focusing on is opening doors. It's something that all robots will need to do, and there's been limited success. If you have a particular kind of door—one that the robot has been specially programmed to open—it can open it. It's not anywhere near as smooth as a human, but it can do it.

But think of all the different kinds of doors we encounter. You've got doors where it's not clear even to humans whether you push them or pull them. I was just in a Starbucks in Japan and thought a door was locked, but it turned out to be a sliding door.

What I'd like for the robot to be able to do is open any door autonomously. It's a problem that's part perception (Have I seen this door before?), part modeling (How do I think the mechanism behind the door operates?), and part mechanical.

We want the robot to be able to learn from doors that it's opened before. And if the robot hasn't seen a type of door before, we want it to push and poke until it figures out how the mechanism works.
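
That explore-then-classify loop might look, in cartoon form, like the sketch below. The probing actions, the motion threshold, and the apply_action interface are all hypothetical, not part of Dr. Drumwright's system.

```python
# Toy sketch of "push and poke until it figures out how the mechanism works":
# apply a few small exploratory actions, watch how the door responds, and
# classify the mechanism before committing to a full opening motion.
# The actions, thresholds, and the apply_action interface are invented.

EXPLORATORY_ACTIONS = ["push_gently", "pull_gently", "slide_sideways"]

def classify_door(apply_action, motion_threshold=0.01):
    """Return a guess at the door type from small probing motions.

    apply_action(name) stands in for commanding the arm and returning how
    far the door moved (in meters) in response.
    """
    responses = {action: apply_action(action) for action in EXPLORATORY_ACTIONS}
    moved = {a: d for a, d in responses.items() if d > motion_threshold}
    if not moved:
        return "stuck_or_locked"          # nothing budged; try the handle or give up
    best = max(moved, key=moved.get)
    return {"push_gently": "push_door",
            "pull_gently": "pull_door",
            "slide_sideways": "sliding_door"}[best]

# Example with a fake door that only responds to sliding, like the one in Japan.
fake_door = {"push_gently": 0.0, "pull_gently": 0.0, "slide_sideways": 0.05}
print(classify_door(fake_door.get))
```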

We've also been working on balance. If somebody gave me skis and goggles and a hat—or in the context of helping the elderly, groceries—how would I carry them? How would a robot do that?

One of the ways we did this was through force feedback in the robot's arms. If you're going to pick up an opaque vase, you have an idea of how much that vase weighs. If somebody had unexpectedly put water into it, then all of a sudden you'd get thrown for a loop. The way you determined that things had gone awry was through force feedback in your arms.

We used that to balance objects on the PR2's arms and we ended up being able to do it really well, whereas in the past you would have used the robot's vision, which is just not as well suited for it.
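
In spirit, the force-feedback check works like this sketch: compare the force the arm expects to feel against what the wrist sensor actually reads, and re-estimate the load when the two disagree. The masses, readings, and interface below are made up, not measurements from the PR2.

```python
# Toy version of using force feedback to notice that a vase is heavier than
# expected (someone put water in it) and to update the carried-mass estimate.
# Gravity is the only effect modeled; the sensor values here are invented.

GRAVITY = 9.81  # m/s^2

def expected_vertical_force(expected_mass_kg):
    """Force the wrist sensor should read while holding the object statically."""
    return expected_mass_kg * GRAVITY

def update_mass_estimate(expected_mass_kg, measured_force_n, tolerance_n=1.0):
    """If measurement and expectation disagree, trust the force sensor."""
    if abs(measured_force_n - expected_vertical_force(expected_mass_kg)) > tolerance_n:
        return measured_force_n / GRAVITY    # re-estimate mass from the sensed force
    return expected_mass_kg

# Expected an empty 0.5 kg vase, but the wrist sensor reads ~14.7 N (about 1.5 kg).
print(update_mass_estimate(0.5, 14.7))
```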

We've also been working on efficiency in control. One of my Ph.D. students was just in Italy for three months working with a quadrupedal robot to test our method of inverse dynamics control.

Basically, the idea behind that is accurately determining what forces need to be applied to the robot's motors to make something happen, without using too much energy. If I'm picking up that vase and I know how much water is in it, how much force do I need to apply to pick it up? If I've got a powerlifter picking up a tiny little vase with all his strength, it's not going to be good for energy usage or for accuracy.

It turns out to be a really hard problem to solve because determining what forces to apply to the robot's motors is coupled to determining the friction forces that are at the robot's feet. The two have to be figured out simultaneously, and our method does that.
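
The coupling can be written as one linear system: the motor torques and the ground-contact forces both appear in the same equations of motion, so for a desired acceleration they have to be solved for together. The numerical sketch below shows only that structure, with small made-up matrices standing in for a real robot; an actual controller would also keep the contact forces inside the friction cone.

```python
# Minimal numerical sketch of inverse dynamics with contact: the equations of
# motion  M*qdd + h = S^T*tau + J^T*f  contain both the motor torques tau and
# the contact forces f, so for a desired acceleration qdd they are found by
# one joint solve. All matrices below are small random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_act, n_contact = 6, 4, 2            # toy sizes, not a real robot

M = np.diag(rng.uniform(1.0, 3.0, n_dof))    # inertia matrix (made up)
h = rng.normal(size=n_dof)                   # gravity/Coriolis terms (made up)
S = np.zeros((n_act, n_dof))
S[:, 2:] = np.eye(n_act)                     # only some joints are actuated
J = rng.normal(size=(n_contact, n_dof))      # contact Jacobian (made up)
qdd_des = rng.normal(size=n_dof)             # desired accelerations

# Stack the unknowns x = [tau, f] and solve  [S^T  J^T] x = M qdd_des + h.
A = np.hstack([S.T, J.T])
b = M @ qdd_des + h
x, *_ = np.linalg.lstsq(A, b, rcond=None)
tau, f = x[:n_act], x[n_act:]
print("torques:", tau)
print("contact forces:", f)
```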

This could help improve the accuracy of movement for any robot that comes into contact with its environment—so, basically everything except flying robots—including movement on sticky surfaces, like asphalt, and slick surfaces, like ice and metal.