When a robot calls

In 2013, a telemarketer called Samantha West made headlines when it was believed she might be a robot. Could a real robot get away with it today?

The prospect of artificial intelligences (AIs) that can get away with fooling humans is leading experts to question whether it's time to regulate telemarketing phone calls from robots.

Wendell Wallach, author of A Dangerous Master and senior adviser to The Hastings Center, says new laws may be needed as the lines between humans and simulations of human activity get blurred.

"If you're basically intelligent you still should be able to deduce when you're talking with a bot," he says. "But perhaps that space is closing more quickly than we would think. I wonder whether we're going to need to signal that it's not a human."

Wallach believes it could still be some time before an AI could phone you up and dupe you into thinking it was a person for long. "There'll be all kinds of inversions of grammar," he says.

But in 2013, a telemarketer calling herself Samantha West raised red flags with several Time magazine reporters. It didn't take long for them to suspect she might be a robot. "Something was fishy," wrote Zeke Miller and Denver Nicks.

"When … asked point blank if she was a real person, or a computer-operated robot voice, she replied enthusiastically that she was real, with a charming laugh. But then she failed several other tests."

Further research revealed that a human operator controlled West's complex voice system, "much like a remote-controlled car."

Firms use the voice software in place of foreign call center workers, whose accents could be off-putting to some.

Nuance Communications, which makes voice software for call centers, says these systems are still being used… and are getting better. "Virtual assistants have certainly moved on significantly from 2013," says Sebastian Reeve, director of product management.

"However, the important role that humans play in the development of virtual assistants has become more refined, rather than being completely made redundant by bots operating as 'lone rangers'."

New spin on robocall 

Many American households are already getting a taste of what could be in store thanks to the "can you hear me" scam. You may have received this call too. You answer your phone, there's a slight pause and a woman's voice says with an apologetic laugh that she's having problems with her headset. The only problem? It's a chatbot designed to trick you into saying "yes", so that a recording of your voice can be used as proof you agreed to a purchase, when that couldn't be further from the truth.

There's no question that today's bots can very quickly become self-reliant in dealing with a great number of queries, says Reeve, and even more so when those queries relate to a given topic, for instance customer service questions.

Chatbots for customer service questions

A case in point is the Swedish banking giant Swedbank. It has a Nuance virtual assistant called Nina. Within three months of going live, Nina engaged in 30,000 chats a month and could sort out eight out of 10 questions.

Even so, says Reeve: "We still believe that the most successful virtual assistants will be those that incorporate human-assisted artificial intelligence, otherwise known as supervised AI.

"In these cases, the human acts as a partner to the bot to accelerate machine learning and, more importantly, ensure that it is learning 'the right things' from humans."

Just how vital that learning process might be was brought home in 2016 when Microsoft launched a Twitter chatbot called Tay.

Tay was built to learn from other Twitter users, but within hours of launch Tay picked up on comments from internet trolls and started spouting abuse. Microsoft had to pull the plug on the hapless bot.

Such fumbles may help smart humans tell machines from people for some time to come, Wallach says. "But there's perhaps a willingness for some of us to be duped," he says.

Forming the kind of bond shown in the Oscar-winning movie Her, some people may come to see AIs as more lifelike than the machines really are. That could put them in danger of being hoodwinked or robbed, says Wallach. "These are areas of real potential corruption."

For now, Wallach hopes current laws and industry self-governance will keep things in check. But he's taking no chances. One of his primary projects is a global move to make sure the use of robots and AI will always be for the good.

Learning from science fiction 

The science fiction author Isaac Asimov tried to do this in literature with his Three Laws of Robotics. But doing it in practice is much harder, says Wallach, whose book Moral Machines looks at how to teach robots what is right and what is wrong.

"What people forget about Asimov's stories is that nearly every one of them is about a breakdown in the laws," Wallach says. "Rather than give us a formula, he gave us a pretty good lesson in why simple rule-based morality doesn't work."

Until the right kinds of protection are in place, all that stands between an unethical AI and a flesh-and-blood victim is the victim's ability to spot what makes a caller human. So, what do you do if the phone rings and you're not sure whether it's a real person at the other end?

Wallach says you should look for a clearly human response, for instance by throwing the caller off their script. "Could you tell me what you thought of the Patriots' game?" might work, he says.

###

The contents or opinions in this feature are independent and may not necessarily represent the views of Cisco. They are offered in an effort to encourage continuing conversations on a broad range of innovative technology subjects. We welcome your comments and engagement.

We welcome the re-use, republication, and distribution of "The Network" content. Please credit us with the following information: Used with the permission of http://thenetwork.cisco.com/.

About Jason Deign @DeigninSpain

Jason Deign is a Barcelona-based business writer, journalist and author.