June 8, 2017

How can we enhance the privacy, security and ethics of Artificial Intelligence?

By ITU News

Twenty years ago, Artificial Intelligence (AI) made headlines when IBM’s Deep Blue won a prized chess match against the world’s leading (human) chess player, Garry Kasparov.

And notably, last year Google DeepMind's AlphaGo beat one of the world's top players at the complex game of Go.

In the two decades between those two victories, AI has come a long way from the basements and back rooms of Computer Science departments to the forefront of global discussions at the United Nations, such as this week’s AI for Good Global Summit in Geneva, Switzerland.

Indeed, AI is now poised to impact nearly every area of society. But as we prepare to reap the massive benefits of this “Golden Age” of AI, in which an estimated 62% of organizations will be using AI technologies, experts have warned that privacy, security, and ethical questions must be brought to the forefront.

That’s why the AI for Good Global Summit has brought together top industry, government, and academic leaders to focus on those issues in a series of “Breakthrough” sessions that propose paths forward for safe, trusted, ethical AI solutions to make the world a better place.

‘Breakthrough’ sessions to guide an ethical framework

Many of the key breakthrough sessions at the Summit focused on the need for a guiding ethical framework and code of conduct, a theoretical “Hippocratic oath for AI” to direct the design, production, and use of AI and its applications for robotics.

Delegates in the sessions debated key questions around AI applications for autonomous vehicles and drones, biomonitoring, healthcare robotics — and even for robots responsible for the maintenance of public order. And they recognized the challenge of anticipating key societal issues raised by emerging AI technologies that are changing at an exponential pace.

“The difficulty is whether we recognize the inflection points as they come along and we take those opportunities to shape the technology,” said Wendell Wallach, a scholar at Yale University’s Interdisciplinary Center for Bioethics, at the breakthrough session on Ethical Development of AI.


But ethics in the context of AI is more complex than identifying where problems reside. Algorithmic bias, for example, could undermine the outputs of AI analysis.

“We have to look at how we understand the implicit biases and inputs to understand what the biases are in the outputs,” said Wallach. “Can we do that technologically? If we can’t, what kind of restriction should we put on the deployment of the technology?”
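To give a concrete sense of what such a technological check might look like, the short Python sketch below audits a model’s outputs for group-level skew by comparing how often each group receives a favorable decision. It is purely illustrative and was not presented at the Summit; the data, group labels and 0.2 tolerance are hypothetical.

    # Minimal sketch of an output-bias audit: compare how often a model
    # produces a favorable outcome for different demographic groups.
    # The data, group labels and the 0.2 tolerance are hypothetical.
    from collections import defaultdict

    def positive_rate_by_group(predictions, groups):
        """Return the share of favorable (1) predictions for each group."""
        counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
        for pred, group in zip(predictions, groups):
            counts[group][0] += pred
            counts[group][1] += 1
        return {g: fav / total for g, (fav, total) in counts.items()}

    def demographic_parity_gap(predictions, groups):
        """Largest difference in favorable-outcome rates between groups."""
        rates = positive_rate_by_group(predictions, groups)
        return max(rates.values()) - min(rates.values())

    # Hypothetical model outputs (1 = favorable decision) and group labels.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    gap = demographic_parity_gap(preds, groups)
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.2:  # illustrative tolerance, not a standard
        print("Outputs look skewed; inspect the inputs and training data.")

A gap near zero suggests the groups are treated similarly on this one measure; a large gap is a prompt to go back and examine the inputs and training data, which is precisely the question Wallach raises.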

When machines make decisions, where do you draw the line?

AI will soon be helping CEOs, doctors and surgeons make “better” decisions. “We’re delegating decisions to machines, and that is one of the biggest ethical questions: is it possible to draw a line?” asked Luka Omladič of UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology (COMEST).

Another point raised by Wallach is that new technologies are usually developed in the “rich north” and their benefits are skewed in favor of those countries. “Those that have acquired the benefits are not necessarily those who are paying the price,” said Wallach. Emerging countries are often the ones that bear the economic disruption that follows technological advances, he and others pointed out.

Because AI-powered technologies can improve themselves, there is a degree of uncertainty in applying security measures and standards to them.

In the breakthrough session on the privacy and safety of AI, Virginia Dignum, of Delft University of Technology, proposed the adoption of “ART principles” for AI: Accountability – establishing who is held to account; Responsibility – the use and stewardship of data; and Transparency – being able to see beyond the “black box” and question the algorithms behind AI decisions.
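As one way of picturing how the ART principles might be operationalized, the sketch below logs each automated decision together with who is accountable for it, which data it relied on, and a human-readable explanation. The record structure, field names and example values are hypothetical and are not part of Dignum’s proposal.

    # Illustrative only: a minimal "decision record" showing how the ART
    # principles might be logged alongside an automated decision.
    # Field names and example values are hypothetical.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        decision: str              # what the system decided
        accountable_party: str     # Accountability: who answers for it
        data_sources: list         # Responsibility: provenance of the data used
        explanation: str           # Transparency: human-readable rationale
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    record = DecisionRecord(
        decision="loan application declined",
        accountable_party="credit-risk team, Example Bank",
        data_sources=["applicant form", "credit bureau file"],
        explanation="debt-to-income ratio above the configured limit",
    )
    print(record)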

The development of AI is progressing at an exponential rate. The emerging technology could bring tremendous economic growth and social impact, but it could also cause social and economic disruption. Policymakers, innovators and researchers should recognize the potential risks and impact of AI to maximize its benefits for everyone, participants agreed.
