Feature Story

Why the future of AI needs ethical human judgement

The importance of human judgement in the programming of AI.

Imagine a fraud detection system checking your credit card transactions to see if someone is illegally using your card. Odds are the system is using an AI-powered prediction process. This is part one of fraud detection. Part two requires someone to decide what to do with the data and how to proceed. In other words, part two calls for human judgement. This blending of human intelligence and machine intelligence is the future of AI.
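To make that two-part flow concrete, here is a minimal sketch in Python. The heuristic score, the review threshold, and the transaction fields are illustrative assumptions, not any real vendor’s system; in practice, part one would be a trained model, and part two would route flagged cases to a human analyst.

```python
# Minimal sketch of the two-part fraud-detection flow described above.
from dataclasses import dataclass

@dataclass
class Transaction:
    card_id: str
    amount: float
    merchant: str

def fraud_score(txn: Transaction) -> float:
    """Part one: an AI-powered prediction. A stand-in heuristic returns
    a score in [0, 1]; a real system would use a trained model."""
    score = 0.0
    if txn.amount > 5000:          # unusually large purchase
        score += 0.6
    if txn.merchant == "unknown":  # merchant not seen before on this card
        score += 0.3
    return min(score, 1.0)

def route_transaction(txn: Transaction, review_threshold: float = 0.5) -> str:
    """Part two: decide what to do with the prediction. High scores go
    to a human analyst rather than being auto-blocked."""
    if fraud_score(txn) >= review_threshold:
        return "queue_for_human_review"  # human judgement takes over
    return "approve"

print(route_transaction(Transaction("card-123", 7200.0, "unknown")))
# -> queue_for_human_review
```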

Today, AI is used in three different ways: automated intelligence, autonomous intelligence, and augmented intelligence, which helps humans become more efficient at the tasks they perform.

Automated intelligence occurs when AI replaces a specific task. Autonomous intelligence occurs when AI replaces an entire job, as when self-driving cars replace drivers. Speech-to-text conversion is another example of AI operating without human judgement.

“These are fringe cases,” says Manoj Saxena, executive chairman of CognitiveScale, a creator of software that brings together human and machine intelligence to enhance customer experiences. “The biggest value is pairing human and machine intelligence to solve complex problems. That is what we focus on with our Cortex software.”

CognitiveScale’s software does this across multiple sectors, including banking, insurance, healthcare, and digital commerce. Hospitals that use CognitiveScale’s AI-powered Debt Risk Advisor solution can intervene when a patient’s account goes unpaid and offer a payment plan. The software makes this possible by ingesting millions of patient account records and analyzing each patient’s payment history; using this data, hospitals can prioritize high-risk accounts.
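CognitiveScale has not published the internals of Debt Risk Advisor, but the prioritization step described above could look roughly like this hypothetical sketch, where the account fields and the scoring rule are illustrative assumptions rather than the product’s actual logic.

```python
# Hypothetical illustration of the prioritization step: score unpaid
# patient accounts from payment history and surface the riskiest first.
from dataclasses import dataclass

@dataclass
class PatientAccount:
    account_id: str
    balance_due: float
    missed_payments: int  # drawn from the patient's past payment history

def risk_score(acct: PatientAccount) -> float:
    """Toy risk score: larger balances and more missed payments rank higher."""
    return acct.balance_due * (1 + acct.missed_payments)

accounts = [
    PatientAccount("A-001", 1200.0, 0),
    PatientAccount("A-002", 450.0, 4),
    PatientAccount("A-003", 3000.0, 2),
]

# Hospitals work the highest-risk accounts first, e.g. to offer a payment plan.
for acct in sorted(accounts, key=risk_score, reverse=True):
    print(acct.account_id, round(risk_score(acct), 1))
```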

AI, human judgement, and gender equity

While the aforementioned instances show AI’s value, AI is not a surefire way to eliminate human flaws. “AI fails when we program our biases into it,” says Katica Roy, CEO of Pipeline Equity, a SaaS platform that supervises AI against human experts to compare outputs and identify patterns. 

Regarding gender equity, Roy says AI can be programmed to assess, address, and take action against the unconscious gender biases in the workplace that cost the U.S. $2 trillion each year.

“That’s a win-win-win situation for our economy, our labor force, and the families that depend on our labor force,” she shares.

Pipeline identifies gender bias by breaking talent down into five pillars: hiring, pay, performance, potential, and promotion. The company uses these pillars to organize the data hiding in a company’s HR platforms.

“Generally speaking, companies have at least two HR platforms: a core platform and an applicant-tracking platform,” explains Roy. “When the time comes for a manager to make a decision, such as submitting a pay proposal or writing a performance review, the AI will intercept the decision and provide a recommendation that’s best for the company. The recommendation will be both economically quantifiable and bias-free.”
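Pipeline’s algorithm is proprietary, but the “intercept” step Roy describes can be sketched as follows. The benchmark figure, the tolerance, and the function name are hypothetical, introduced only to show how a pay proposal might be checked against a recommendation before it is finalized.

```python
# Hypothetical sketch of intercepting a manager's pay proposal and
# comparing it to a benchmark derived from the company's HR data.
def review_pay_proposal(proposed: float, role_benchmark: float,
                        tolerance: float = 0.05) -> dict:
    """Flag proposals that deviate from the role benchmark by more than
    the tolerance, and recommend the benchmark-aligned figure instead."""
    deviation = (proposed - role_benchmark) / role_benchmark
    if abs(deviation) > tolerance:
        return {"decision": "flagged",
                "recommendation": role_benchmark,
                "deviation": round(deviation, 3)}
    return {"decision": "accepted", "recommendation": proposed}

# A proposal 12% below the benchmark for the role gets intercepted.
print(review_pay_proposal(proposed=88_000, role_benchmark=100_000))
# -> {'decision': 'flagged', 'recommendation': 100000, 'deviation': -0.12}
```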

How can people, government, and the private sector promote ethical AI?

Saxena says humans are already too dependent on AI, and that unless society makes a point of being mindful of the harm AI can cause when it is not created with sound ethics, more inequality is inevitable.

“Every day there are hundreds of millions of people who put their lives in the hands of an AI,” he shares. “The cellphone is beginning to make us into a whole new species. In everything from how you select music to how you watch a movie or drive, you are dependent on AI. How you see it will determine whether you are dependent on it or empowered by it.”

Roy says that since data scientists are at the heart of AI and of how its algorithms are programmed, the private sector and government both need to pinpoint highly qualified, high-performing data scientists and then screen for their natural biases and ethical sensibilities. This includes ensuring that AI teams are diverse in gender, race/ethnicity, sexual orientation, and age.

“Data scientists hold the power in AI, so we must understand their own biases as well as their sense of ethics,” says Roy. “Beyond that, the private sector and government can develop checks-and-balances and quality-review systems to help ensure that AI algorithms are in fact achieving the positive effects they were designed to achieve. In the age of AI, we have a choice to use it for good or ill. It could be the greatest equalizer we have seen if we use it correctly.”
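As one example of what such a quality-review system could check, the following sketch applies the widely used four-fifths rule of thumb to outcome rates across groups. Neither company describes this specific method; it is shown only as one common fairness screen.

```python
# Compare outcome rates across groups for decisions an algorithm influenced.
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(rates) -> bool:
    """Flag if any group's rate falls below 80% of the highest group's."""
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates, "passes:", passes_four_fifths(rates))
# Group B's rate (0.33) is below 80% of group A's (0.67), so this fails.
```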

###

The contents or opinions in this feature are independent and may not necessarily represent the views of Cisco. They are offered in an effort to encourage continuing conversations on a broad range of innovative technology subjects. We welcome your comments and engagement.

We welcome the re-use, republication, and distribution of "The Network" content. Please credit us with the following information: Used with the permission of http://thenetwork.cisco.com/.