May 9, 2017

AI and ethics — where to draw the line?

By Mike Hinchey

After a 75-year incubation, Artificial Intelligence (AI) has become a household word, reflected in popular culture through books, movies and even music.

From self-driving vehicles and interactive robots to Apple’s Siri concierge and IBM’s Watson, which is increasingly being used to solve business problems, AI technology is playing a growing role in our day-to-day world.

While true AI systems are still far less common than most people assume (what we call "AI" is often just pre-programmed rules applied in different contexts), impressive advances continue to be made in autonomous, adaptive and AI systems, and their impact will grow over time.

Ensuring trustworthy AI systems

As President of IFIP, the global federation of information and communication technology (ICT) professional societies, I’m conscious that the work our members and others engage in to program these systems is critical to their performance and their trustworthiness.

To ensure that the impacts of AI systems remain positive and constructive, it is essential that we build in certain standards and safeguards.

Take the example of autonomous cars, which rely both on their self-driving functions and on their ability to access and interpret information from their surroundings in order to navigate safely.

While automation functions enable the car to start, accelerate, turn and brake, the way the system interprets additional information from its environment (other vehicles, speed limits, terrain, etc.) determines when and how it takes those actions.

Currently, most autonomous vehicles respond to different situations in predetermined ways. For example, if the car in front brakes, they will also slow. And if the car behind accelerates at the same time as the car in front brakes, they will attempt to change lanes, guided by sensor input about the other vehicles' behaviour. But what happens if changing lanes means hitting another car, a wall or, worse still, a pedestrian?
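A minimal sketch of such predetermined rules might look like the following. The function and parameter names are purely illustrative (they are not drawn from any real vehicle software stack), and a real system would fuse input from many sensors rather than three boolean flags:

```python
def react(front_braking: bool, rear_accelerating: bool, lane_clear: bool) -> str:
    """Illustrative pre-programmed responses for a self-driving car.

    Hypothetical example only: real autonomous-driving stacks are far
    more complex, but the rule-based shape is similar.
    """
    if front_braking and rear_accelerating:
        # Boxed in between two vehicles: change lanes only if sensors
        # report the adjacent lane is clear; otherwise brake anyway.
        return "change_lane" if lane_clear else "brake"
    if front_braking:
        return "brake"
    return "maintain_speed"
```

The open question raised above is exactly the case this sketch dodges: when no rule's outcome is safe, a fixed table of responses has nothing better to offer.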

In circumstances such as these, a human driver might take any one of a number of options (aggression, caution, freezing or evasion), many of which could result in an accident.

The reality is that self-driving cars won’t really be practical until all vehicles are self-driving and the unpredictable human factor has been removed from the equation. But then, given that the true test of an AI application is its ability to learn and make unprogrammed decisions, one wonders how unpredictable AI might be in such a context.

Building in safeguards

Most of my work in autonomous and adaptive systems has related to space exploration through my involvement with NASA and other space agencies. Here, where there are so many unknowns, there is a limit to the number of situations we can predict and, thus, for which we can program.

The solution in this and other cases involving artificially intelligent systems is to define the range of actions or decisions they can make and where they must defer to human judgement.

If we want a system to be truly adaptive, we must give it a range of actions it can take without specifying exactly what it must do, while also prohibiting certain actions. For example, in our self-driving car example, a vehicle with a prime directive to save human life might shut down to avoid causing an accident. While this might be an appropriate action on a quiet back street, it could be catastrophic on a busy highway.
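One way to picture this approach is a safeguard layer that leaves the system free to choose within an allowed set, always rejects prohibited actions, and defers anything unrecognised to human judgement. This is a hypothetical sketch (the action names and sets are invented for illustration), not a real safety architecture:

```python
# Hypothetical safeguard layer: the system chooses freely within ALLOWED,
# PROHIBITED actions are always rejected, and anything outside both sets
# is deferred to a human operator.
ALLOWED = {"brake", "slow_down", "change_lane"}
PROHIBITED = {"shut_down_on_highway", "exceed_speed_limit"}

def authorise(action: str) -> str:
    """Gate a proposed action before the vehicle executes it."""
    if action in PROHIBITED:
        return "rejected"
    if action in ALLOWED:
        return "approved"
    # Novel or unclassified actions are not silently permitted.
    return "defer_to_human"
```

The design choice here is that the default is deferral, not permission: an adaptive system can still surprise us, but only inside boundaries we have drawn in advance.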

It’s also important for AIs and other autonomous systems to incorporate appropriate security and privacy measures, both to ensure they operate ethically and within the law and to protect them from hacks and other intrusions.

As more and more decisions are made without human involvement, it’s important that we specify which behaviours society will accept from AIs and which it won’t.

For humans to accept and trust AI systems and their actions, we need to build some predictability, or at least boundaries, into their behaviour, beyond which they cannot go.

Asimov’s Laws of Robotics may be the stuff of stories, but they provide the kind of certainty that will be a prerequisite for most people to be willing to incorporate AI systems into their daily lives, particularly where safety-critical functions are concerned.

Mike Hinchey
Mike Hinchey is President of IFIP (International Federation for Information Processing) and Vice-Chair (and Chair-Elect) of the IEEE UK & Ireland section. Hinchey holds a B.Sc. in Computer Systems from the University of Limerick, an M.Sc. in Computation from the University of Oxford and a PhD in Computer Science from the University of Cambridge. He is a Chartered Engineer, Chartered Engineering Professional, Chartered Mathematician and Chartered Information Technology Professional, as well as a Fellow of the IET, the British Computer Society and the Irish Computer Society.

© Photographer: Anthony Kwan/Bloomberg via Getty Images

© International Telecommunication Union 1865-2018 All Rights Reserved.
ITU is the United Nations' specialized agency for information and communication technology. Any opinions expressed and statistics presented by third parties do not necessarily reflect the views of ITU.
