Issue: Summer 2006

Kinder, Gentler Robots

Computer scientist Maja Mataric is developing a new breed of robots programmed to infiltrate our schools, hospitals, even retirement homes. Forget the Terminator. These machines aren’t killers. They’re care-givers.

By Katie Sweeney

Wearing a white lab coat, Clara approaches the hospital bed. She introduces herself to the patient, and, in a friendly voice, explains that she’ll be leading him through a series of breathing exercises using a spirometer – a tube that measures air intake.

Clara watches carefully as the patient breathes into the tube; she slowly counts out loud as he completes each breath. “Good job!” she praises, after the 10th and final breath is exhaled. With the exercise completed and the results recorded, Clara says goodbye and leaves the room.

A typical nurse-patient interaction? Hardly.

Clara isn’t a nurse. She isn’t even a human being. She’s a robot. And while she’s just a test robot, and the above scene, only a trial run, Clara is part of a vision of the future that could not only redefine robotics but affect the lives of millions of everyday people.

Making this vision a reality is the mission of Maja Mataric, professor of computer science and founding director of USC’s Center for Robotics and Embedded Systems. In Mataric’s idea of the future, robots will play one-on-one supportive roles with the elderly, patients recovering from strokes and heart attacks, even children with social disorders such as autism or attention-deficit disorder.

“This isn’t about replacing nurses or teachers or any human beings with robots,” she stresses. “Far from it. But right now, there simply aren’t enough people out there to provide one-on-one care for every individual who needs it. So we’re looking at ways that technology can help.”

In a field long dominated by men, Mataric, with her long, wavy blond hair, can’t help but stand out. But it’s her ideas, not her looks, that are turning heads.

The notion of using robots to help the human race is hardly new. Robots made the leap from science fiction to established science a long time ago. They’ve been called upon to do everything from disarming bombs at an airport to exploring the surface of Mars to sequencing the human genome. Closer to home, more than a million Roomba robotic vacuums are sweeping our floors; their cousins are popping up in everything from cars to dishwashers. Robotics has even found its way into physical therapy devices that strap onto a patient’s limbs to aid movement.

Mataric’s robots, however, will be fundamentally different from all these. For the past three years, she has pioneered a new field called “socially assistive robotics.” Such machines operate not through physical contact or by performing some physical task, but through sheer social interaction.

“We’ve never done this before,” says George A. Bekey, professor emeritus of computer science, electrical engineering and biomedical engineering and the founder of USC’s robotics program – now one of the largest in the country. “The idea that a robot can be a companion and an encourager, and in a non-contact way, is new ground,” he says.

But aren’t these roles better suited to computers and videos? Is a sociable robot really needed? That’s one question Mataric and her team are actively studying. Though the jury is still out, she strongly suspects that in the final analysis, there’s something about the way a robot shares the physical environment with humans that’s necessary in an electronic care-giver.

“This is kinder, gentler technology that you have a relationship with,” she explains. “It’s more like a pet or a friend. That doesn’t mean you think it’s alive. But it’s something you like interacting and engaging with.”

One area Mataric and her team have been studying intensely is how robots could assist stroke patients in their rehabilitation. Each year, according to the American Heart Association, about 700,000 people in the United States suffer strokes, which typically cause weakness or paralysis on one side of the body. To regain their strength, patients require up to six hours a day of rehabilitation exercises for the first two to three months of their recovery. For the average patient, those exercises are currently “supervised” by a care-giver for only about 39 minutes a week – or about 5 minutes a day.

The robot wouldn’t replace a therapist. But it could offer a patient a lot more supervision to ensure that the needed exercises get done and are performed properly – thus helping patients to recover more fully and have a much better chance of regaining their pre-stroke abilities.

In a preliminary experiment, Mataric and her team introduced a basic, three-wheeled mobile robot to stroke patients. The robot, which Mataric describes as “completely non-cool-looking,” stood knee-high in front of the patients, using its camera and laser range-finder to monitor their exercises. In some cases, it would beep when the patient did the exercise correctly; in others it would give verbal praise or simply move around; in still others, it would do a combination of all three in an effort to “reward” the patient.

How did elderly stroke patients – many of whom had never used a computer – react to a machine coaching them through painful exercises?

“They really liked it,” Mataric says. “I don’t want to say that everybody went up and hugged the robot. But it was surprisingly well accepted, and that gives me a great deal of optimism that this is the right direction to go in.”

When Mataric arrived at USC in 1997 from Brandeis University, using robots to help stroke patients was the farthest thing from her mind. Her expertise was in robotic teams. In fact, she had been the first to use so-called “swarms” of robots. As a doctoral student at MIT, she had attracted a fair amount of attention with the “Nerd Herd” of 20 mobile robots perpetually at her heels.

But in 1999, she had her first child, Helena, and three years later a son, Nicholas. (She is married to Richard Roberts, an associate professor of chemistry, chemical engineering and materials science at USC.) Mataric continued to work, but her view of the world started to shift. As Helena grew, she began to ask questions like: “What do you do at work?”

“I realized that I needed to have a good answer for her,” Mataric recalls. “And I knew that pretty soon, she was going to ask me, ‘Why do you do it?’ And I just couldn’t see the answer being, ‘Mommy builds killer robots.’”

Around the same time, Mataric found herself serving on a Provost’s Committee developing USC’s strategic plan. Among the plan’s main tenets is the assertion that research at the university should aim to help society solve problems. That made perfect sense to Mataric. The more she thought about it, the more she wanted to shift her own research to have a more direct impact on people’s lives. Could robots, she wondered, be designed to act in more socially beneficial ways?

Her “epiphany,” as she describes it, has now overtaken her research. And it’s given her work new meaning.

“She was always passionate, but before it was about the intellectual ideas,” says Rodney A. Brooks, her adviser at MIT and director of MIT’s Computer Science and Artificial Intelligence Lab. “Now there’s this whole new layer there, which is about doing good for people.”

Mataric’s current research is defining a whole new field in robotics. It’s that desire to be at the forefront that separates her from many other researchers, says Gaurav Sukhatme, who co-directs USC’s Robotics Research Laboratory with her.

“Maja has the sort of personality that thrives on challenges,” explains Sukhatme, associate professor of computer science and electrical engineering. “She’s not afraid of breaking new ground. There’s a lot of risk-taking involved when you chart new territory like this, and she’s very good at it.”

Anyone who coaches or teaches knows there’s a tension between being liked and being effective. You have to know how and when to encourage and praise a student – and how and when to push harder. It can be a delicate balancing act, even for humans. So how can a robot be expected to manage?

Mataric knew a “whip-cracking nurse robot” would be a failure. “It’s a robot, after all; you can turn it off,” she says. The trick, she says, is to establish a relationship where the patient “will want to do what the robot says: because it’s fun, it’s a game and it’s a challenge.”

One area Mataric and her students are looking at is how a robot might be programmed to read a patient’s frustration level and respond differently as it escalates. Of course, a robot won’t be able to read a person’s facial expressions or other subtle cues that signal frustration. So Mataric’s team is breaking this information down into data a robot can understand. Team member Emily Mower, a doctoral student in electrical engineering, is studying how physiological responses – such as heart rate, body temperature and sweat levels – correspond to frustration. A wireless arm-band worn by the patient could transmit this kind of data to the robot and help it gauge the patient’s level of frustration.
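To make the idea concrete, here is a minimal sketch of how physiological readings might be turned into a rough frustration estimate that drives the robot’s coaching style. The sensor fields, weights and thresholds are purely hypothetical illustrations of the approach, not the team’s actual model.

```python
# Hypothetical sketch: field names, weights and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class ArmbandReading:
    heart_rate_bpm: float   # beats per minute
    skin_temp_c: float      # skin temperature, Celsius
    sweat_level: float      # skin-conductance proxy, 0.0-1.0

def frustration_score(reading: ArmbandReading, resting_hr: float) -> float:
    """Blend physiological signals into a rough 0-1 frustration estimate."""
    hr_term = max(0.0, (reading.heart_rate_bpm - resting_hr) / resting_hr)
    # Weights would have to be fit to real patient data.
    return min(1.0, 0.6 * hr_term + 0.4 * reading.sweat_level)

def choose_response(score: float) -> str:
    """Pick a coaching style based on the estimated frustration level."""
    if score < 0.3:
        return "encourage"        # patient is calm: keep challenging
    elif score < 0.7:
        return "praise_and_slow"  # rising frustration: ease the pace
    else:
        return "rest_break"       # high frustration: back off entirely

reading = ArmbandReading(heart_rate_bpm=96, skin_temp_c=36.8, sweat_level=0.5)
print(choose_response(frustration_score(reading, resting_hr=70)))
```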

“A lot of robots have one set way of interacting,” Mower explains, but this isn’t good enough in a care-giver robot. As a patient’s emotional state changes, the robot must adapt to keep the interaction interesting and appropriate. “That’s really important for stroke patients,” Mower says, “because they are trying to do things that were once incredibly easy and are now insanely difficult for them.”

Another line of research concerns robot personality types. There are probably few stroke patients who would want to hang out with a robot as gloomy and depressing as Marvin, the paranoid android from The Hitchhiker’s Guide to the Galaxy. But just how cheery should a robot be? How strict? How nurturing? Would different robot personalities complement the different dispositions of individual patients? How will extroverts versus introverts react to different robot personality types?

“This is completely redefining what we know about human-machine interaction,” Mataric says. “No one knows how people will respond or how to stay within the bounds of what’s ethical.” She points to an obvious dilemma: “If we fully succeed in matching robots with humans, and a recovering patient forms a bond with a robot – well, you can’t just take the robot away, now can you?”

Growing up in Belgrade in what was then called Yugoslavia, Mataric had no interest in robots. True, her father was an engineer. But her mother was a professor of English, and Mataric herself was more attracted to art, psychology and languages.

“I am definitely not one of those people who tinkered in their basement and knew at age 3 that I wanted to be a roboticist,” she says, her voice touched with only the faintest hint of an accent.

In 1981, when she was 16, her father died of a brain tumor, and she and her mother moved to the United States. At the urging of her uncle, an engineer, Mataric decided to study computers. She earned her bachelor’s degree in computer science, with a minor in cognitive neuroscience, at the University of Kansas, and went on to MIT’s computer science program for graduate school, focusing on artificial intelligence.

It was an exciting time. Brooks, her adviser, was turning the robotics world upside down with a new approach to programming called Subsumption Architecture. Traditionally, computer scientists had used a sort of “top-down” AI-inspired approach, giving robots specific goals and then furnishing them with the complex reasoning abilities they would need to form a plan of action. The problem is, in the real world making plans often takes too long – especially as the situation keeps changing around the robot.

Brooks’ approach was more biologically inspired and “bottom-up,” a minimalist method that gives robots only the simple rules they need to react to their environment. Such robots were inspired by the simplicity and robustness of insects. But obviously, you won’t find ants and termites playing chess – so these robots were considered limited. For her master’s thesis, Mataric showed that robots could learn maps of their world and find the shortest paths through it using the same philosophy of simple rules and behaviors, instead of symbolic reasoning.
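As a rough illustration of the bottom-up idea, a behavior-based controller can be sketched as a stack of simple rules, with higher-priority behaviors suppressing lower ones whenever they have something to say. The behaviors and sensor names below are hypothetical; this is a sketch of the general philosophy, not Brooks’ actual Subsumption Architecture code.

```python
# Illustrative sketch of bottom-up, behavior-based control (hypothetical behaviors).

def avoid_obstacle(sensors):
    """Highest priority: react immediately if something is too close."""
    if sensors["range_m"] < 0.3:
        return "turn_away"
    return None  # nothing to do; defer to lower layers

def follow_person(sensors):
    """Middle priority: keep the person being coached in view."""
    if not sensors["person_centered"]:
        return "turn_toward_person"
    return None

def wander(sensors):
    """Lowest priority: default behavior when nothing else fires."""
    return "move_forward"

# Layers ordered from highest to lowest priority.
LAYERS = [avoid_obstacle, follow_person, wander]

def control_step(sensors):
    """Run one reactive control cycle: the first layer with an action wins."""
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action

print(control_step({"range_m": 1.2, "person_centered": False}))  # turn_toward_person
```

Because each cycle simply checks a handful of rules, the robot never stops to build an elaborate plan – which is exactly the agility Mataric says a socially assistive robot needs.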

Today, while classical robotics control is still used with certain applications, Brooks’ bottom-up approach is well established in the AI world. It’s especially important to Mataric’s work with socially assistive robots.

“Robots that need to interact with people don’t really have the time to make big plans,” she explains. “People act very dynamically. It’s not like an assembly line. You can’t predict what will happen from one moment to the next. So the robot needs to be agile and dynamic in its responses and behaviors.”

That creates another problem, though: getting a robot to learn the right way to behave. It makes little sense for robots to learn by trial and error: the process is too slow and hardly effective for social interaction. People aren’t inclined to forgive a human who makes too many social faux pas. What chance would there be for a maladroit robot?

Instead, Mataric and her team have worked on getting robots to learn by imitation – by mimicking people or even other robots. This is no simple task. Scientists believe that, apart from chimpanzees and dolphins, no species besides humans learns by imitation. But endowing robots with the ability to ape others has two big advantages: it lets them learn fast, and more importantly, it makes them fun to be with.

“People find it very engaging when a robot can do even the simplest little imitation of them, just as parents love it when their young children imitate them,” Mataric says. “This is what will make robots appealing to people. And they need to be appealing if they’re going to help people.”

There are children with autism who have never smiled at a human being, not even their own mothers. And yet, these same children will smile at a robot.

Why is this? No one really knows, although some experts speculate that humans are too complex. Autistic children often respond more positively to simpler, more predictable forms, such as animals or robots. Whatever the explanation, if it can be clinically shown that autistic children respond differently to robots than other children do, then robots might serve as excellent diagnostic tools. That’s important because autism can be difficult to diagnose, and the earlier it’s diagnosed, the better.

Robots might also help autistic children socialize and better interact with the world. For example, a child could be matched with a robot companion that lives in the family home. This constant companion could teach the child about socialization, and even slowly habituate the child to being more comfortable around certain kinds of human behavior.

Before this can happen, though, the attraction to robots has to be better understood. Mataric and her team are looking at why robots appeal to autistic kids. Is it something about the robot’s behavior that puts the child more at ease? Its appearance?

Once these puzzles are solved, Mataric envisions groups of robots serving as “teacher’s aides” in special-ed classrooms. Each robot could be assigned to help and encourage one particular child with schoolwork. Each robot’s programming could be customized to the preferred learning style and particular disabilities of its “pupil.”

“We’re trying to learn whether a robot can help a child learn some of these things that they otherwise have trouble learning,” Mataric says. “It has to go beyond just being fun. It has to make a real difference.”

So how long before socially assistive robots are routinely helping patients and children? No one knows, but Mataric believes it is not that far away.

“I really do think that it’ll happen in the next one or two decades,” she says. “Not everywhere, and not for everything. But I think we will start to make enough visible strides so that people’s lives will be positively affected, and then it will start to snowball.”

Ultimately, private industry will decide whether the technology is a viable investment. Brooks of MIT says one driving force will be demographic. In much of the developed world populations are aging, and fewer people of working age are available to take care of the elderly. That opens a door for technology to fill in the gaps.

“It’s going to be a tremendous market pull,” he says. “It is early days, but Maja has really carved out a unique place in this space.”

Whenever Mataric talks about her assistive robots, she finds a lot of people listening. Invariably someone asks when this technology will be available in hospitals or schools.

“I tell them, ‘Right now, it’s still in our lab,’” she says. “But we’re working on it. And it’s really not that far out of reach.”

Katie Sweeney ’92 is a freelance writer based in Los Angeles. Her article on congenital heart defects (“Tender Hearts”) appeared in the spring 2006 issue of USC Trojan Family Magazine.