Feature Story

Robots among us

by Laurence Cruz

Debating the ethics around robots and AI

Robotics panel debates form factors, ethics in autonomous vehicles and the pursuit of artificial general intelligence.

What's the practical value of robots mimicking the human form? Will ethics or legal policy be the guiding principle in robots' decision making? And what is AI anyway?

These were just some of the questions explored in a panel discussion on the topic "The Robots Among Us" at the Los Angeles World Affairs Council back in March. The panel featured a cross-section of robotics entrepreneurs, including makers of robotic toys, social-robot software, warehouse automation systems and farming robots.

On the issue of whether robots should be human-like in form, the speakers were divided. Host August Bradley, creator of the weekly Mind & Machine interview show, suggested humanoid robots can help foster a sense of camaraderie with their wetware counterparts (that's nerd-speak for humans). That sense of connection could be especially useful in, say, scenarios where human trust is necessary for a robot to achieve its task, Bradley said.

It's also a major driver in the booming market for companion robots — not to mention robots that fulfill more exotic functions, said panelist Sabri Sansoy, founder and CEO of Orchanic. Did you know, for example, that "robot priests" are performing Buddhist funeral rites in Japan and giving blessings in Germany? Or that actor Will Smith once went on an awkward date with a robot called Sophia?

But other panelists pointed to the human tendency to anthropomorphize machines as evidence that robots can be effective in all shapes and sizes. For example, people name their cars and even their Roomba vacuuming robots.

"As robotics designers, I don't think we have to work particularly hard to make robots to which people attribute human-like traits," said Rand Voorhies, CTO of inVia Robotics, which specializes in warehouse robots that bear no resemblance to humans. "People will want to form an emotional relationship with them."

Ross Mead, CEO of Semio, which makes software for social robots, agreed. "Robots need to be human-interpretable — not necessarily human-like," he said.

In the case of robotic toys for kids, however, eyes are an essential feature, along with sounds, said Chris Hardouin, who heads Electronic Development at Spin Master, maker of toys like the Meccanoid — winner of the Toy of the Year award. "Having that light and sound connection — eye and sound — adds a human quality," he said.

But Hardouin added that, since our environment is largely designed for humans, there's a case for robots to be human-like in order to interact with it. "In Fukushima, a robot with hands would be useful to turn a wheel," he said, referring to the tsunami-induced accident at the nuclear plant in Japan in 2011.

Scalable Ethics?

The panelists also explored issues around ethics in the context of autonomous vehicles. The issue took center stage in March, when a self-driving Uber test vehicle struck and killed a pedestrian in Arizona. If an out-of-control self-driving car is faced with the option of plowing into an old man or a woman with a baby in a stroller, which should it do? Or should it do neither and instead kill the person in the vehicle? Voorhies noted that this kind of morality is being coded into the software brains of self-driving vehicles.

The panelists also explored, somewhat tongue in cheek, the complexities in this endeavor. For example, what happens when an out-of-control vehicle must choose between hitting a 40-year-old versus a 50-year-old person? Or will vehicles — much like a Netflix or social media algorithm — learn the operator's likes and dislikes, so that their choice of target reflects the operator's prejudices?

"There will be scalable ethics," Hardouin said.

And what happens when the autonomous vehicle companies themselves act unethically but within the law? Will they opt to be guided by legal policy over ethics?

Artificial General Intelligence

The panelists also discussed artificial general intelligence (AGI) — a holy grail in robotics. AGI can be defined as the intelligence of a machine that could successfully perform any intellectual task that a human being can. Robots today are very far from this ideal. AI itself is a highly fragmented field, sometimes broken into 30-plus subcategories, such as machine learning, deep learning, reinforcement learning, robotics, computer vision, IoT and natural language processing.

The panelists themselves are engaged in vastly different applications of AI. Mead noted that Semio's focus is creating software that can make robots good at holding a conversation. That's worlds apart from inVia's "piece-picking" robots, which are designed to efficiently pick totes full of products off warehouse shelves — a key task for ecommerce companies that are feeling the pinch from the vast economies of scale at work in Amazon's warehouse automation technologies. It's also worlds apart from the agricultural robots that Orchanic is developing to pick fruits in collaboration with humans for farms facing labor shortages.

Hardouin recalled a recent presentation he took part in at CES in which speakers struggled to define AI.

"AI is so broad," he said. "It's like, what is love?"


The contents or opinions in this feature are independent and may not necessarily represent the views of Cisco. They are offered in an effort to encourage continuing conversations on a broad range of innovative technology subjects. We welcome your comments and engagement.

We welcome the re-use, republication, and distribution of "The Network" content. Please credit us with the following information: Used with the permission of http://thenetwork.cisco.com/.