Automation nations: equipping the next generation

“Will robots inherit the earth? Yes, but they will be our children”

Popular culture has done a great disservice to the world of robotics and artificial intelligence (AI) by scaremongering and detracting from the many positive advances the technology is enabling today. So says Professor John Macintyre, one of the UK’s leading lights in adaptive AI and its real-world applications. “Since 2001: A Space Odyssey, movies have portrayed robots and AI as a threat,” he says. Not only is it extremely unlikely that machine intelligence will overtake that of humans, he argues, but the whole field is giving rise to exciting new skills and job opportunities as developments spill out of the labs and into the real world.

Macintyre is a Professor of Adaptive Technology and faculty dean at the University of Sunderland in England. AI has been his life’s work for the last quarter of a century. In that time the technology has evolved from decision-tree-based reasoning (as used to take on chess grandmasters) to more ‘natural’ approaches to problem-solving, which involve learning from and adapting to new situations.

Social skills

Adaptive AI has led to breakthroughs in pattern recognition, opening the way to advanced modelling and forecasting – for predicting everything from crimes and diseases to freak weather conditions and financial crashes. Sunderland University has been at the forefront of many such applications, including a system to predict emerging cases of diabetic retinopathy, a cause of blindness. Other applications involve optimization – for example, managing the dosing at water treatment plants, and improving mobile phone performance (here, adaptive AI learns the best combinations of satellites and cells to ensure a continuous signal, using algorithms that mimic the way ants forage for food).
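For readers who want a feel for the mechanics, here is a minimal sketch of the ant-foraging idea: candidate connections gain ‘pheromone’ when they perform well, the trail slowly evaporates, and the system keeps converging on good options without being locked into them. The link names and quality scores are invented for illustration – this is not the Sunderland system itself.

```python
# Minimal ant-colony-style selection, loosely in the spirit of the signal-optimization
# example above. The candidate "links" and their quality scores are hypothetical;
# a real system would measure these live.
import random

candidates = {"cell_A": 0.60, "cell_B": 0.85, "cell_C": 0.40}  # assumed link quality
pheromone = {name: 1.0 for name in candidates}                 # shared "trail" strength

def choose(pheromone, quality, alpha=1.0, beta=2.0):
    """Pick a link with probability proportional to pheromone^alpha * quality^beta."""
    weights = {k: (pheromone[k] ** alpha) * (quality[k] ** beta) for k in quality}
    total = sum(weights.values())
    r, cum = random.uniform(0, total), 0.0
    for k, w in weights.items():
        cum += w
        if r <= cum:
            return k
    return k

for step in range(200):                        # each "ant" tries a link and reinforces good ones
    pick = choose(pheromone, candidates)
    reward = max(candidates[pick] + random.gauss(0, 0.05), 0.0)  # noisy observed quality
    for k in pheromone:
        pheromone[k] *= 0.95                   # evaporation: forget stale information
    pheromone[pick] += reward                  # reinforcement: strengthen the chosen trail

print(max(pheromone, key=pheromone.get))       # typically converges on "cell_B"
```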

Not only do these applications boost rather than threaten the human experience, they also rely heavily on people to program them.

“No one has yet produced a robot that can do everything, so humans will be needed for a long time yet,” says Ghislaine Boddington, creative director at BDS.

BDS is a small UK design company specializing in the physical interaction between humans and robots or avatars, in any scenario that puts the human body at the center of a digital interaction. This requires close attention to the social elements of robotics, to ensure that people accept the technology.

Boddington’s background is in the performing arts, and it was through this that she developed an interest in telepresence – which enabled her to rehearse group dances at a distance. Today her work takes her into a diverse range of automation applications, including those used in the caring professions – for example, attending to patients’ physical needs, or providing links to physicians who can’t be present in person.

“The whole area of touch is quite complex,” she notes. “In Japan, there are around 20 companies making robots for domestic care use which interact physically with people. They are still fairly limited in their functionality, but in future robots are likely to take on more of a social companion role so the human aspects do need to be addressed.”

In one of the projects Boddington has been involved with, recently demonstrated at FutureFest in London, a ‘Blind Robot’ feels people’s faces. The experiment is designed to see how people respond to being touched so intimately by an automaton.

But there are other psychological responses to be overcome too. These include fears about loneliness in care settings if the need for a real human presence is reduced, and about people’s livelihoods if more manual jobs are automated.

The counter-argument is that robotics and AI will empower and work in harmony with people – so that carers have more time to talk and listen to their charges, for example. Robots will also be able to help by recording and relaying information to care professionals and concerned family members about how a patient is and what they have been doing.

The University of Hertfordshire in England has set up a secret ‘Robot House’. Here, researchers are testing robots in a real home environment on all manner of different tasks, and considering any emerging human issues.

The Kinect effect

In an industrial setting, the positive impact of robots is more obvious. In addition to assuming heavy and repetitive work so that employees can do more of the planning, thinking and creating, intelligent machines remove the risk of humans going into hazardous environments.

Advances in sensing and dexterity have an important role here – ensuring that if a robot drops or mislays something, it can find it again, for example. German company Igus provides bionic/robotic arm joints which its customers use in all sorts of applications. Its grippers aren’t yet precise enough for micro-soldering, but achieve an accuracy of ±1mm in picking and placing. Marry this with the sort of gesture control used in Microsoft’s Xbox Kinect, and the potential for use in contaminated or dangerous environments is considerable.

“Microsoft Kinect has opened a new landscape of opportunities in robotics,” notes Dr. Antonio Espingardeiro, a senior member of IEEE, the international association for technical professionals. “Using the Kinect sensor, a robot is capable of perceiving human silhouettes, detecting colors, building maps and recognizing features that allow it to navigate autonomously. Perception is one of the biggest challenges in robotics and Kinect has helped immensely.”
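As a rough illustration of why depth sensing makes perception so much easier, the sketch below separates a ‘person-shaped’ region from the background of a synthetic depth frame simply by keeping pixels in a plausible distance band and finding connected blobs. It is a toy under stated assumptions, not the Kinect SDK or any particular robot’s pipeline.

```python
# Toy silhouette detection on a synthetic depth frame (values in millimetres).
# A real robot would read frames from the depth sensor instead of inventing them.
import numpy as np
from scipy import ndimage

depth_mm = np.full((240, 320), 4000, dtype=np.uint16)   # empty room ~4 m away
depth_mm[60:220, 140:190] = 1500                        # a person-sized blob ~1.5 m away

foreground = (depth_mm > 400) & (depth_mm < 2500)       # keep a plausible "person" range
labels, count = ndimage.label(foreground)               # group pixels into connected blobs
for i in range(1, count + 1):
    blob = labels == i
    if blob.sum() > 2000:                               # ignore small, noisy regions
        ys, xs = np.where(blob)
        print(f"candidate silhouette: rows {ys.min()}-{ys.max()}, cols {xs.min()}-{xs.max()}")
```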

Using high-powered image-processing algorithms combined with low-cost sensors and commodity hardware, UK product design and development firm Cambridge Consultants has come up with robotic systems that can perform complex picking and sorting tasks involving irregular items – sorting fruit and vegetables, for example, or locating and removing specific weeds among crops in a field.
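The core of such a system can be surprisingly compact. The toy example below – not Cambridge Consultants’ actual software – finds blobs in a synthetic image with OpenCV and routes them to bins by size and average colour; the real engineering lies in lighting, occlusion and real-time control.

```python
# Toy sort-by-size-and-colour on a synthetic image (OpenCV 4.x, where findContours
# returns (contours, hierarchy)). Shapes, sizes and thresholds are assumptions.
import cv2
import numpy as np

img = np.zeros((200, 300, 3), dtype=np.uint8)
cv2.circle(img, (80, 100), 40, (0, 0, 255), -1)    # a large red "tomato"
cv2.circle(img, (220, 100), 15, (0, 255, 0), -1)   # a small green "pea"

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, _ = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.drawContours(mask, [c], -1, 255, -1)       # mask of this blob only
    b, g, r = cv2.mean(img, mask=mask)[:3]         # average colour inside the blob
    bin_name = "reject (too small)" if area < 1000 else ("red bin" if r > g else "green bin")
    print(f"blob area={area:.0f} -> {bin_name}")
```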

Mobile robots at Imperial College London are learning how to map and localize themselves in unfamiliar environments – so that an automated vacuum cleaner can clean a room without knocking over a priceless vase, or devices can investigate inaccessible parts of a building in detail. “It’s not just about knowing where things are, but what they are,” explains Dr. Stefan Leutenegger, a lecturer in robotics and deputy director of Imperial’s Dyson Robotics Laboratory.
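To give a flavour of the mapping half of that problem, here is a deliberately simplified sketch – not the Dyson lab’s method – in which a robot whose position is already known sweeps a range sensor and fills in an occupancy grid, marking the space each beam passes through as free and the spot where it stops as an obstacle. Full simultaneous localization and mapping has to estimate the robot’s position at the same time, which is what makes it hard.

```python
# Toy occupancy-grid mapping with a known robot pose. Grid size, cell size and
# sensor readings are invented for illustration.
import math

GRID, CELL = 20, 0.25                              # 20x20 grid of 25 cm cells (5 m x 5 m)
occupancy = [[0.5] * GRID for _ in range(GRID)]    # 0.5 = unknown

def integrate_beam(x, y, heading, measured_range):
    """Ray-march along one sensor beam, marking free space and the obstacle it hit."""
    step = CELL / 2
    for s in range(int(measured_range / step) + 1):
        d = s * step
        cx = int((x + d * math.cos(heading)) / CELL)
        cy = int((y + d * math.sin(heading)) / CELL)
        if not (0 <= cx < GRID and 0 <= cy < GRID):
            return
        occupancy[cy][cx] = 0.9 if d >= measured_range - CELL else 0.1  # hit vs free

# Pretend the robot sits at (2.5 m, 2.5 m) and sees a wall 2 m away across a 60-degree sweep.
for angle_deg in range(-30, 31, 5):
    integrate_beam(2.5, 2.5, math.radians(angle_deg), 2.0)

print(sum(cell > 0.5 for row in occupancy for cell in row), "cells marked as obstacle")
```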

Training robots’ future masters

The challenge now is finding people skilled in the right combination of disciplines to keep taking robotics and AI to the next level. “There is a shortage, and it is affecting the industry,” Leutenegger notes. “There is currently a big drive to get people interested.” Imperial College is doing its bit by going into schools; it also hosts an annual science fair.

MeArm Robotics, a British start-up crowdfunded via Kickstarter, is on an education mission too. It provides pocket-sized industrial robotic arms that schoolchildren and other enthusiastic learners can program. Founder Ben Grey describes MeArm as the logical next step from Raspberry Pi. “It’s a valuable learning tool and we’d like to see it become part of the STEM [science, technology, engineering, and mathematics] education syllabus,” he says.
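A first programming exercise with a kit like this might look something like the sketch below: sweeping a single servo joint back and forth from a Raspberry Pi. The GPIO pin and the angle-to-duty-cycle mapping are assumptions for illustration, not MeArm’s documented wiring.

```python
# Sweep one hobby servo from a Raspberry Pi using the RPi.GPIO library.
# Pin number and pulse mapping are assumed; check your own wiring and servo datasheet.
import time
import RPi.GPIO as GPIO

SERVO_PIN = 18                      # hypothetical pin carrying the servo signal wire

def angle_to_duty(angle_deg):
    """Map 0-180 degrees to roughly a 2.5%-12.5% duty cycle at 50 Hz."""
    return 2.5 + (angle_deg / 180.0) * 10.0

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)       # 50 Hz control signal, typical for hobby servos
pwm.start(angle_to_duty(90))        # begin at the mid position

try:
    for angle in list(range(0, 181, 30)) + list(range(180, -1, -30)):
        pwm.ChangeDutyCycle(angle_to_duty(angle))
        time.sleep(0.5)             # give the servo time to reach each position
finally:
    pwm.stop()
    GPIO.cleanup()
```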

Without investment in the next generation, the risk is that countries like the UK (a leading nation in artificial intelligence in the 1970s and 1980s) will lose ground. For the last two decades, Sunderland University’s Macintyre has edited the scientific journal Neural Computing and Applications. This has given him a unique view of how AI is advancing globally: today the publication receives around 1,500 submissions a year, with just about every region represented, from Scandinavia to South America. “The real powerhouse now, though, is China,” he says. “That’s not only because of its big industrial need to modernize, but because China’s education system produces very highly skilled mathematicians.”

But robotics and AI need more than one set of skills, and there are opportunities across a range of disciplines beyond the ability to develop complex algorithms. In addition to mechanics, electronics and computer science, specific expertise is needed in image processing and sensing, collaboration and decision-making, and the social aspects of machines.

“The potential applicability is immense, from healthcare diagnostics to self-driving cars, to buying and selling items in the cloud,” notes Dr. Espingardeiro at the IEEE. Another growing area is ‘swarm’ (collaborative) robotics, he adds. “Interesting work here has emerged from EPFL in Switzerland and the University of Bristol in the UK, around coordinating groups of robots to achieve a common goal.”
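The essence of that coordination problem fits in a few lines. In the toy simulation below – not the EPFL or Bristol work itself – each robot repeatedly takes a small step toward the average position of the robots it can sense nearby, and the group gathers at a common meeting point.

```python
# Toy swarm rendezvous: purely local sensing, no central controller.
# Robot count, sensing range and step size are assumptions for illustration.
import random

random.seed(1)
robots = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(12)]
SENSING_RANGE = 6.0                 # hypothetical local sensing radius (metres)

def neighbours(i, positions):
    xi, yi = positions[i]
    return [(x, y) for j, (x, y) in enumerate(positions)
            if j != i and (x - xi) ** 2 + (y - yi) ** 2 <= SENSING_RANGE ** 2]

for step in range(50):
    updated = []
    for i, (x, y) in enumerate(robots):
        near = neighbours(i, robots) or [(x, y)]          # stay put if nobody is in range
        cx = sum(p[0] for p in near) / len(near)
        cy = sum(p[1] for p in near) / len(near)
        updated.append((x + 0.2 * (cx - x), y + 0.2 * (cy - y)))  # nudge toward local centroid
    robots = updated

spread = max(max(x for x, _ in robots) - min(x for x, _ in robots),
             max(y for _, y in robots) - min(y for _, y in robots))
print(f"spread after 50 steps: {spread:.2f}")             # shrinks as the swarm gathers
```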

The opportunity now is to take up these developments and run with them, so that more lab-based breakthroughs can solve real human problems today. But that requires experts who can create the applications, using the more accessible technology platforms that are now available.

“Programming skills are absolutely crucial,” Espingardeiro concludes. “Software will become omnipresent in human life. From Microsoft, Google, Apple and Facebook to small/medium size companies, everyone is hiring software developers.”

Far from replacing humans, then, robotics and AI are creating a wide spectrum of new requirements as potential adopters and visionaries push the boundaries of what’s possible. All of which means that jobs should be in plentiful supply – as long as upcoming generations are equipped with the right skills.

# # #

The contents or opinions in this feature are independent and may not necessarily represent the views of Cisco. They are offered in an effort to encourage continuing conversations on a broad range of innovative technology subjects. We welcome your comments and engagement.

We welcome the re-use, republication, and distribution of "The Network" content. Please credit us with the following information: Used with the permission of http://thenetwork.cisco.com/.


About Sue Tabbitt

Sue Tabbitt is a technology journalist who covers IT and telecommunications.