AI ‘facts’ are recounted from three perspectives: AI’s technological promise, AI’s dystopian potential, and AI’s ethical implications. Together, these accounts of AI promise us everything from self-driving cars to the delivery of the UN Sustainable Development Goals (SDGs) to the annihilation of the human species.
It is difficult to cut through AI claims and counterclaims, but it becomes easier when we remember that AI has yet to deliver self-driving cars, killer robots or magic-bullet solutions to SDGs. What are presented as scientific facts are often uncertain predictions for the future. That’s why AI stories should also be considered as broader political constructs informed by fear, hope, and a desire for freedom.
Fear drives a dystopian account of AI. In this version of the story, states, corporations or even super-intelligent Artificial General Intelligences (AGIs) compete with each other for power and control. Proponents of this version of the story often focus on issues such as the development of killer robots, the fear-driven pursuit of an AI global arms race and the potential of an AGI to annihilate the human species.
Hope informs an ethical account of AI. In this version of the story, AI is controlled by morally driven actors who form an international community and share a cooperative ethos. Because adherents of this narrative believe that AI is for the greater human good, AI must be made safe for humans and must help humans achieve a better life. Hope proponents devise ethical principles and applications to ensure AI will benefit humans. This is what Tegmark’s Beneficial AI Movement and the ITU’s annual ‘AI for Good’ Global Summit do.
The desire for freedom provides an economically and politically techno-optimistic account of AI. Techno-entrepreneurs strive to transform high-tech AI into high freedom. Motivated by a ‘can do, will do’ attitude, they create AI applications to free us from menial tasks like housework and driving by promising us AI cleaners and self-driving cars. The wealth and power techno-entrepreneurs accrue from their inventions also gives them relative freedom from government control, which is the freedom some of them most seek.
These AI political stories overlap in complicated ways. For example, Elon Musk is developing self-driving cars and backing the Beneficial AI Movement, while warning us that WWIII will result from an AI arms race. AI political stories driven by fear, hope and a desire for freedom also overlap with other kinds of political stories around specific issues or events — such as sexual orientation and gender identity politics.
The three lenses through which we view AI appear in various reactions to a disputed recent Stanford University study that claimed AI facial recognition technology could more accurately determine a person’s sexual orientation than the human eye.
People’s reactions to the Stanford study were determined in part by how its findings were taken up in fear, hope and freedom stories. Fear stories emphasized the potential to criminalize ‘machine-read gays and lesbians’, hope stories emphasized the desire to prove the biological existence of gays and lesbians to affirm their human rights, and freedom stories emphasized how the very applications techno-entrepreneurs create limit privacy in an age of widespread state surveillance.
Without resituating AI ‘facts’ and their fantasy applications within the political stories that make them meaningful, we risk misunderstanding competing claims about AI, confusing AI opportunities with AI risks, and championing dangerous norms, standards and regulations around AI — or, even worse, neglecting to adopt any norms or regulations at all.
At this moment when life on every scale is being reimagined through AI, the challenge for global policymakers is to map how factual and fictional terrains of AI are produced not just through science and technology but also through political stories about AI. This is the way forward to create sound, safe, and fair AI policies.
*Any opinions expressed by third parties do not necessarily reflect the views of ITU.