Artificial Intelligence | Emerging Trends
July 20, 2017

Deploying Artificial Intelligence technologies in humanitarian action is not without risks

By Anja Kaspersen

An Artificial Intelligence (AI) algorithm called Libratus recently defeated top human professionals at no-limit poker. The skills Libratus taught itself – such as bluffing in a context of imperfect information – are potentially applicable to a wide range of real-world situations, including conflict negotiations and warfare.

Imagine a future in which each party to a conflict negotiation is represented by an algorithm like Libratus. Humans will not be able to follow the negotiations, and the algorithm will not be able to explain the reasoning behind the solution it identifies.

Our thinking about the risks posed by AI is not as well developed as our thinking about its benefits. This needs to change.

RELATED: Reality check: ‘We are not nearly as close to strong AI as many believe’

AI challenges to moral and legal responsibility

Deep learning algorithms can now diagnose some forms of cancer more accurately than experienced human oncologists, but they cannot explain exactly how they arrived at their diagnosis. They taught themselves to do this by analysing vast datasets of images of tumours, but exactly what they learned remains mysterious. Self-driving vehicles rely on algorithms that teach themselves to drive and navigate through observation, yet those algorithms are unable to share exactly what they observed as the basis for their course of action.

Both examples demonstrate that creating ‘explainable AI’ will become more difficult as self-improving AI grows more advanced and ubiquitous, and that our current means of designing explainable systems may prove insufficient. This ‘black box’ nature of AI reasoning poses profound ethical, and in some cases operational, concerns.
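To make the ‘black box’ point concrete, the sketch below (not from the article) contrasts a self-taught neural network, whose knowledge is stored only as numerical weights, with a small decision tree whose rules can be printed and audited. The dataset, models and library (scikit-learn) are illustrative assumptions, not the diagnostic or driving systems described above.

```python
# Minimal sketch of the "black box" problem: a neural network and a decision
# tree both learn to classify the same medical-style data, but only the tree
# can articulate the rules behind its predictions. Illustrative only; the
# dataset and models are stand-ins, not the systems discussed in the article.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The neural network "teaches itself" a mapping from features to diagnoses...
black_box = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
black_box.fit(X_train, y_train)
print("Neural net accuracy:", black_box.score(X_test, y_test))
# ...but what it learned is just thousands of weights, with no human-readable rationale.
print("Learned parameters:", sum(w.size for w in black_box.coefs_))

# A shallow decision tree is often less accurate, but its decision path can be
# printed, audited and challenged - the property 'explainable AI' is after.
glass_box = DecisionTreeClassifier(max_depth=3, random_state=0)
glass_box.fit(X_train, y_train)
print("Decision tree accuracy:", glass_box.score(X_test, y_test))
print(export_text(glass_box, feature_names=list(load_breast_cancer().feature_names)))
```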

Imagine a deep learning algorithm that proves more capable than humans at distinguishing combatants from civilians. Knowing it will save more civilians than human decision-makers, do we have an ethical obligation to allow this algorithm to make life-and-death decisions? Or would this be morally unacceptable, knowing that the algorithm will not be able to explain the reasoning that led to a mistake, leaving us unable to stop it from repeating that mistake?

When such an algorithm does make mistakes, who should take moral and legal responsibility: the decision-maker(s) who gave the algorithm the authority to make decisions, the programmer who wrote the initial algorithm, or the person who chose the dataset from which the algorithm taught itself?

“AI-powered systems are likely to transform modern warfare as dramatically as gunpowder and nuclear arms.” – Anja Kaspersen, Head of Strategic Engagement and New Technologies, ICRC

AI will complicate warfare

The potential benefits of AI look to be immense, but we are also on the brink of an AI-powered global arms race.

Advances in robotics and the digital transformation of security have already changed the fundamental paradigm of warfare, and AI-powered systems are likely to transform modern warfare as dramatically as gunpowder and nuclear arms.

‘Lethal autonomous weapons’ are one of the highest-profile challenges to be addressed in this arena, but we must be careful not to overlook the nefarious potential of commercial AI applications. These (increasingly connected) applications could be weaponized not only by states, but also by non-state actors, small groups and even individuals, especially as costs decrease and software is democratized by the open-source movement.

AI will create new capabilities to wage cyberwarfare, acting as a powerful tool to search out and exploit weaknesses in connected systems. AI will also make it easier to weaponize narratives, creating and sustaining the spread of misinformation to cause confusion about what is happening and who is responsible.

These threats can be counteracted by developing AI applications to defend against cyberwarfare and detect attempts at misinformation by verifying information from multiple sources. But how these AI arms races will play out is not easy to predict.

RELATED: How can we enhance the privacy, security and ethics of Artificial Intelligence?

The way forward

Understanding AI is fast becoming a key priority for public- and private-sector decision-makers. United Nations agencies are beginning to identify ways in which AI could help to achieve the Sustainable Development Goals, exemplified by the AI for Good Global Summit organized by ITU in June.

Momentum is growing in global efforts to address the risks posed by the deployment of AI technologies. But these efforts are at an early stage, and research and development of AI-based technologies is moving much faster than our understanding of its implications or the shape of appropriate governance mechanisms.

How we might create a regime of agile governance suited to an AI-powered world is a question calling for broader and more inclusive dialogue.

The humanitarian community will need to clarify how AI technologies might assist in delivering aid and engaging with stakeholders more efficiently. Should some of the functions of humanitarian action be automated, and if so, which? And what are the new skills required by the humanitarian workforce to fully integrate AI technologies in an accountable, responsible and transparent manner?

There is a need to improve understanding of the ethical concerns related to the use of AI technologies in humanitarian action. What accountability mechanisms should govern the use of machine learning in humanitarian action? How transparent can we be, and how transparent should we be, with respect to our use of data and advanced algorithms?

One reason AI technologies are becoming more powerful is that we are generating more data for them to analyse. How might we ensure that data is collected and managed transparently and that AI-based technologies are interoperable and protected against adopting the biases of their creators?

There is no doubt AI technologies can be a force for good. But understanding and appropriately governing their deployment is critical to building trust as AI technologies become a common, integrated and useful part of our daily existence. To build that trust, there is an urgent need to address the risks and ensure AI for good.

Anja Kaspersen
Since 2016, Head of Strategic Engagement and New Technologies at the International Committee of the Red Cross. Previously with the World Economic Forum as a member of the Executive Committee. Professional affiliations with the Hastings Centre, the World Policy Institute, the IEEE and the Harvard Future Society AI-Initiative. Her long and varied career spans several continents and includes positions with the Norwegian Government, the UN and other international organisations, as well as work in diplomacy, research, academia and business.

