Ahead of the ITU Plenipotentiary Conference 2018 (PP-18) – the top policy-making body of the International Telecommunication Union, taking place from 29 October to 16 November in Dubai – ITU News is highlighting some important and emerging areas of ITU's work. The following is an ITU Plenipotentiary Backgrounder; the original can be found on the PP-18 website.
Software has become significantly smarter in recent years.
The current expansion of AI is the result of advances in a field known as machine learning. Machine learning involves using algorithms that allow computers to learn on their own by looking through data and performing tasks based on examples, rather than by relying on explicit programming by a human.
A machine-learning technique called deep learning, inspired by biological neural networks, finds and remembers patterns in large volumes of data. Deep-learning systems perform tasks by considering examples, generally without being explicitly programmed for each task, and outperform traditional machine-learning algorithms on many problems.
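The idea of "learning from examples rather than explicit programming" can be illustrated with a toy model. The sketch below trains a single artificial neuron (a perceptron) on labelled examples of the logical AND function; the data, function names and learning rate are illustrative, not drawn from any ITU system:

```python
# Minimal sketch of learning from examples: a single artificial neuron
# (perceptron) adjusts its weights from labelled data instead of being
# handed an explicit rule by a programmer.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn two weights and a bias from (inputs, label) example pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = label - prediction      # zero when the guess was correct
            w[0] += lr * error * x1         # nudge weights toward the target
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Labelled examples of logical AND -- the "data" the neuron learns from.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After a handful of passes over the data, the neuron reproduces the AND rule it was never explicitly given. Deep learning stacks many such units into layers, letting the system discover far more complex patterns.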
Big Data, referring to extremely large data sets that can be analysed computationally to reveal patterns, trends and associations, together with the power of AI and high-performance computing, are generating new forms of information and insight with tremendous value for tackling humanity’s greatest challenges.
Below are just a few examples showing how AI can be applied for good:
While the opportunities of AI are great, there are risks involved.
Datasets and algorithms can reflect or reinforce gender, racial or ideological biases. When the datasets (fed by humans) that AI relies on are incomplete or biased, they can lead to biased AI conclusions.
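The mechanism is easy to demonstrate. In the hypothetical sketch below (group names and approval rates are invented for illustration), a trivial "model" fitted to skewed historical loan decisions simply encodes the skew:

```python
# Illustrative only: a model fitted to biased historical decisions
# reproduces the bias. All data here is invented.

from collections import defaultdict

# (group, approved) pairs -- historical decisions skewed against group "B".
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

def fit_majority(rows):
    """'Train' by taking the majority historical outcome for each group."""
    counts = defaultdict(lambda: [0, 0])
    for group, approved in rows:
        counts[group][approved] += 1
    return {g: (1 if c[1] > c[0] else 0) for g, c in counts.items()}

model = fit_majority(history)
# The learned rule encodes the historical skew: approve "A", reject "B".
```

Real machine-learning models are far more sophisticated, but the principle is the same: if the training data carries a bias, the model will faithfully learn it.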
Humans are increasingly using deep-learning technologies to decide who gets a loan or a job. But the workings of deep-learning algorithms are opaque, and do not provide humans with insight as to why AI is arriving at certain associations or conclusions, when failures may occur, and when and how AI may be reproducing bias.
AI can deepen inequalities by automating routine tasks and displacing jobs.
Software, including the software that runs cell phones, security cameras, and electrical grids, can have security flaws. These can lead to thefts of money and identity, or internet and electricity failures.
New threats to international peace and security can also emerge from advances in AI technologies. For example, machine learning can be used to generate fake video and audio to influence votes, policy-making and governance.
The development and adoption of relevant international standards, together with the availability of open-source software, will provide a common language and tools for coordination, facilitating the participation of many independent parties in the development of AI applications. This can help to bring the benefits of AI advances to the entire world, while mitigating its negative effects.
Indeed, it is vital that a diverse range of stakeholders guide the design, development and application of AI systems. Accurate and representative AI conclusions require datasets that are accurate and representative of all. Furthermore, safeguards need to be put in place to promote the legal, ethical, private and secure use of AI and Big Data.
Increased transparency in AI, with the aim of informing legal or medical decision-making, will allow humans to understand why AI is arriving at certain associations or conclusions. This, in turn, will encourage people to use their expertise, experience and intuition to validate conclusions or make a different decision than the one proposed by the machine. While the machine analyses and arrives at conclusions with much greater speed and accuracy than before, it is still humans who have the power to question the machine's conclusions and make final decisions.
To balance the consequences of AI on employment and benefit from the new job opportunities that AI offers, it is essential to create environments that are conducive to acquiring digital skills, be it through formal education or training at the workplace. In particular, AI will bring employment opportunities to people who have the advanced digital skills needed to create, manage, test and analyse ICTs.
Measures that protect the safety, privacy, identity, money, and possessions of the end-user need to be deployed to address AI-related security challenges in areas as diverse as e-finance, e-governance, smart sustainable cities, and connected cars.
Facilitating conducive policy and regulation
As the United Nations' specialized agency for information and communication technologies, ITU brings together stakeholders representing governments, industries, academic institutions and civil society groups from all over the world to gain a better understanding of the emerging field of AI for good.
Building on the success of ITU's first AI for Good Global Summit, the 2018 Summit collaborated with 32 UN family agencies and other global stakeholders to identify strategies to ensure that AI technologies are developed in a trusted, safe and inclusive manner, with equitable access to their benefits. The Summit spawned more than 30 pioneering 'AI for Good' project proposals on expanded and improved health care, enhanced monitoring of agriculture and biodiversity using satellite imagery, smart urban development and trust in AI.
ITU maintains an AI Repository where anyone working in the field of artificial intelligence can contribute key information about how to leverage AI for good. This is the only global repository that identifies AI-related projects, research initiatives, think tanks and organizations that aim to accelerate progress on the 17 United Nations Sustainable Development Goals (SDGs).
ITU regularly brings together heads of ICT regulatory authorities from around the world to share views and developments on AI and other pressing regulatory issues, address questions of governance and strengthen collaboration to use AI for good.
Moving forward, international standards—the technical specifications and requirements that AI and other technologies will need to fulfil to perform well—can help address the risks of AI by allowing machine learning to be ethical, predictable, reliable and efficient.
The ITU Focus Group on Machine Learning for Future Networks, including 5G, has been examining how technical standardization can support emerging applications of machine learning in fields such as Big Data analytics, as well as security and data protection in the coming 5G era. The Group will draft specifications to enable ICT networks and their components to adapt their behaviour autonomously in the interests of ethics, efficiency, security and optimal user experience.
Out of the 2018 AI for Good Global Summit came the call for more standardization for health, in the form of the newly created Focus Group on Artificial Intelligence for Health (FG-AI4H), which aims inter alia to create standardized benchmarks to evaluate Artificial Intelligence algorithms used in healthcare applications.