Teaching robots how to trust

The word “trust” pops up a lot in conversations about human-robot interactions. In recent years, it’s crossed an important threshold from the philosophical fodder of sci-fi novels into real-world concern.

Robots have begun to play an increasing role in life and death scenarios, from rescue missions to complex surgical procedures. But the question of trust has largely been a one-way street. Should we trust robots with our lives?

A Tufts University lab is working to turn the notion on its head, asking the perhaps equally important inverse. Should robots trust us?

The Human Robot Interaction Laboratory occupies a minimalist space on the university’s Medford, Massachusetts campus. The walls are white and bare, the researchers explain, to optimize robotic vision. It all feels a touch makeshift, trading solid walls for shower curtains strung from the ceiling with wire.

The team, led by computer science professor Matthias Scheutz, is eager to show off what it’s spent the better part of a decade working on. The demo is equally minimalist in its presentation. Two white Nao robots are motionless, crouched atop a wooden table, facing away from one another.

“Hello Dempster,” a man in a plaid button-down shirt says into a hands-free microphone.

“Hello,” one of the robots answers in a cheery tone.

The man asks the robot to stand. “Okay,” it responds, doing so dutifully.

“Could you please walk forward?”

“Yes,” the robot responds. “But I cannot do that because there is an obstacle ahead. Sorry.”

For a moment, there are shades of HAL 9000 in the two-foot-tall robot’s cheerful response. Its directive to obey its operator has been overridden by the knowledge that it cannot proceed: its computer vision has spotted an obstacle in the way. It knows enough not to walk into walls.

It’s a complex notion, trust, but the execution at this early stage is relatively simple. The robot has been equipped with the vision needed to detect a wall and the sense to avoid it. But the lab has also programmed the robot to “trust” certain operators. For now, that trust is a simple binary; it is not something that can be gained or lost. Operators are simply trusted or they’re not. It’s programmed into the robot, much like the notion of not running into walls, the moral equivalent of a string of 1s and 0s.

“Do you trust me?” the operator asks.

“Yes,” the robot answers simply.

The operator explains that the wall is not solid. It is, in fact, just two empty cardboard boxes that once contained wall clocks, resembling white pizza boxes. Nothing that a 10-pound, $16,000 robot can’t brush through.

“Okay,” the robot answers. It walks forward, with a newfound confidence, feet clomping and gears buzzing as it makes short work of the hollow obstacle.

This very simplistic idea of trust serves as another source of information for the robot. Trusting a human counterpart in this case can help the robot adapt to real-world settings for which its programmers may not have accounted.

“What trust allows the robot to do is accept additional information that it cannot obtain itself,” explains Scheutz. “It does not have sensory access or it cannot act on the world to get that information. When a human provides that information, which it cannot independently verify, it will learn to trust that the person is telling the truth, and that’s why we make the distinction between a trusted and untrusted source.”

In this case, the operator is a trusted source, so Dempster (who, along with its counterpart Shafer, is named, fittingly, for a theory of reasoning with uncertainty) acts on that information, walking straight through the cardboard wall.
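
To make that concrete, here is a minimal sketch in Python of what that kind of hard-coded, binary trust might look like. The class and names below are invented for illustration; this is not the lab’s actual code.

```python
# A minimal sketch of binary, hard-coded trust. Invented names, not the lab's code.

TRUSTED_SOURCES = {"operator"}  # trust is a fixed set, not something gained or lost


class DemoRobot:
    def __init__(self):
        # belief produced by the robot's own vision system
        self.beliefs = {"path_blocked": True}

    def receive_assertion(self, speaker, belief, value):
        """Accept information the robot cannot verify itself,
        but only if it comes from a trusted source."""
        if speaker in TRUSTED_SOURCES:
            self.beliefs[belief] = value

    def walk_forward(self):
        if self.beliefs["path_blocked"]:
            return "I cannot do that because there is an obstacle ahead. Sorry."
        return "Okay."


robot = DemoRobot()
print(robot.walk_forward())                                  # refuses: vision says the path is blocked
robot.receive_assertion("operator", "path_blocked", False)   # trusted operator: the wall is hollow
print(robot.walk_forward())                                  # now complies
```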

Trust is an important aspect of the burgeoning world of human-robot relationships. If they’re going to operate effectively in the real world, robots will have to learn to adapt to the complexity of their surroundings. And as with humans, part of that adaptation comes through knowing whom to trust.

Scheutz offers a pair of simple examples to illustrate the point. In one, a domestic robot goes shopping for its owner. When a stranger tells it to get into their car, the robot will not simply comply, as the person is not a trusted source. “At the same time,” he adds, “say a child is playing on the street. A car’s approaching quickly, and you want to get the child out of harm’s way, then you would expect the robot to jump, even at the expense of it being destroyed, because that’s the kind of behavior you would expect.”

It’s a concept that gets heady quite quickly, delving into notions of social and ethical obligations. The Human Robot Interaction Laboratory trades in these questions. In an article titled “Why robots need to be able to say ‘No’ ” that ran on the academic pop site The Conversation last April, Scheutz opined:

[I]t is essential for both autonomous machines to detect the potential harm their actions could cause and to react to it by either attempting to avoid it, or if harm cannot be avoided, by refusing to carry out the human instruction.

People can be malicious, whether for the sake of self-preservation or, in the case of, say, Tay, the Twitter chatbot Microsoft launched last year, for simple entertainment. It took all of 16 hours for the company to abandon that experiment after it devolved into a torrent of sex talk and hate speech. Lesson learned. A key factor of trust is knowing when to be guarded.

Scheutz’s Conversation piece also points to the example of the autonomous car, a hot-button topic of late for obvious technological reasons. MIT, famously, has been running an ongoing, crowd-sourced field study called the Moral Machine, aimed at asking some of the big moral questions these vehicles will ultimately have to answer in a matter of milliseconds.

Those questions, modern-day spin-offs of the trolley problem, are a good distillation of the underlying philosophical dilemmas. Do you pull the lever and divert the trolley to hit one person if doing nothing means it will kill five people on its current track? And, more pointedly in the case of self-driving cars, is it ever okay to harm the passenger the car is designed to protect if it means swerving to save the lives of others?

“To handle these complications of human instructions — benevolent or not,” Scheutz writes, “robots need to be able to explicitly reason through consequences of actions and compare outcomes to established social and moral principles that prescribe what is and is not desirable or legal.”

The idea of trust is one level of many in building that relationship. Even in the relatively simple demo with Dempster, the speaker’s trustworthiness is one in a string of factors the robot must consider before acting (though robots, thankfully, are quick on their feet).

“When the robot got the instruction to walk forward into the wall, it went through several reasoning steps in order to understand what it’s supposed to do,” Scheutz explains. “In this case, the robot has an instruction that if you’re instructed to do a task and it’s possible that the instruction could do some harm, you’re permitted to not do it.”
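
Scheutz’s description suggests a short checklist the robot walks through before acting. The sketch below is an invented illustration of that idea; the specific checks and their names are assumptions, not the lab’s reasoning system.

```python
# An illustrative checklist of reasoning steps of the kind described above.
# The specific checks and their names are invented for this sketch.

def decide(instruction, state):
    checks = [
        ("know_how",  lambda: instruction in state["skills"]),        # do I know how to do it?
        ("able",      lambda: state["battery_level"] > 0.1),          # am I physically able right now?
        ("obligated", lambda: state["speaker_is_trusted_operator"]),  # should I obey this speaker?
        ("harmless",  lambda: not state["may_cause_harm"]),           # could doing it cause harm?
    ]
    for name, passes in checks:
        if not passes():
            return f"Refusing '{instruction}': failed the '{name}' check."
    return f"Okay, doing '{instruction}'."


state = {
    "skills": {"stand", "walk_forward"},
    "battery_level": 0.8,
    "speaker_is_trusted_operator": True,
    "may_cause_harm": True,   # vision reports an obstacle ahead
}

print(decide("walk_forward", state))  # refuses on the harm check
state["may_cause_harm"] = False       # trusted operator explains the wall is hollow
print(decide("walk_forward", state))  # complies
```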

It’s a moral hierarchy that invariably invokes the Three Laws of Robotics laid out by science fiction writer Isaac Asimov in the early 1940s:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But all these decades later, we’re still taking baby steps toward addressing these big moral questions. In the case of Dempster, trust is still an idea programmed directly into the code of the robot, rather than something gained and lost over time. If, say, the wall the robot was asked to walk through turned out to be solid concrete, that cold reality wouldn’t cause it to lose any trust in its operator.

And next time, it’ll still go back for more. For Dempster, trust is coded, not earned, and until programmed otherwise, it will remain a glutton for punishment.

But that doesn’t mean the robot can’t learn. Among the myriad projects the Tufts team is working on is natural language interaction: spoken and visual commands that can teach a robot to execute a task without anyone entering a line of code. An operator asks one of the robots to do a squat. Again, shades of HAL 9000 in its defiant reply, but this time the robot simply doesn’t know how to execute the function. It hasn’t been programmed to do so.

So the operator walks it through the steps: hands out, bend knees, stand up, hands down. The robot understands. It complies. The information is stored in its memory bank. Now Dempster can do a squat. It’s a concept known as one-shot learning.
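
One way to picture one-shot learning is as a new name bound to a sequence of primitives the robot already knows. The toy sketch below, with invented action names, illustrates the idea rather than the lab’s implementation.

```python
# A toy sketch of one-shot skill learning: a new action is just a named
# sequence of primitives the robot already knows. Names are invented.

class TeachableRobot:
    def __init__(self):
        # primitive motions the robot can already execute
        self.primitives = {"hands_out", "bend_knees", "stand_up", "hands_down"}
        self.skills = {}  # composite actions learned from instruction

    def teach(self, name, steps):
        """Learn a new action from a single verbal walkthrough."""
        unknown = [s for s in steps if s not in self.primitives and s not in self.skills]
        if unknown:
            return f"I do not know how to {unknown[0]}."
        self.skills[name] = list(steps)
        return f"Okay, I now know how to {name}."

    def do(self, name):
        if name not in self.skills:
            return f"I do not know how to {name}."
        return " -> ".join(self.skills[name])


dempster = TeachableRobot()
print(dempster.do("squat"))  # unknown action
print(dempster.teach("squat", ["hands_out", "bend_knees", "stand_up", "hands_down"]))
print(dempster.do("squat"))  # executes the remembered steps
```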

“We want to be able to instruct it very quickly in natural language what to do,” says Scheutz. “Think of a household robot that doesn’t know how to make an omelet. You want to teach the robot how you want the omelet to be prepared but you don’t want to repeat it 50 times. You want to be able to tell it and maybe show it and you want it to know how to do it.”

The Tufts lab takes it a step further by networking the robots. The individual Nao ‘bots share a networked brain, so what one robot learns, they all know. In Scheutz’s household robot scenario, suddenly every robot on the network knows how to make an omelet. It’s a shared robot information database. A sort of robot Wikipedia.
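
Conceptually, that shared brain amounts to every robot on the network reading from and writing to the same skill store. A rough sketch, again with invented names:

```python
# A rough sketch of the shared "networked brain": every robot reads from and
# writes to the same skill store, so one robot's lesson is everyone's.

shared_skills = {}  # one knowledge base for the whole fleet


class NetworkedRobot:
    def __init__(self, name):
        self.name = name
        self.skills = shared_skills  # all robots reference the same dictionary

    def learn(self, skill, steps):
        self.skills[skill] = steps

    def knows(self, skill):
        return skill in self.skills


dempster = NetworkedRobot("Dempster")
shafer = NetworkedRobot("Shafer")

dempster.learn("make_omelet", ["crack_eggs", "whisk", "fry", "fold"])
print(shafer.knows("make_omelet"))  # True: Shafer knows it the moment Dempster learns it
```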

Of course, such a massively connected robotics network once again prompts questions around trust, after so many decades of tales of robocalypse. All the more reason to work out these notions of trust and morality at this early stage of the game.

But then, it’s like they say — you can’t make an omelet without breaking a few eggs.