By ITU News
It all started with an idea: how might ITU create a community that can produce tangible impact in the field of AI and 5G for the communication industry?
After all, emerging and future networks hold the promise of an even more interconnected and intelligent world, with their potential to support a burgeoning global ecosystem of connected devices.
As data volumes and computing power continue to grow, it is important to identify and solve the real-world problems confronting network operators so that 5G can reach its full technical potential in terms of speed and efficiency.
The good news?
Solutions driven by artificial intelligence and machine learning (ML) can help optimize communication networks as the so-called “revolutionary evolution” unfolds.
These can range from supporting 5G networking functions, to dealing with interference, to evaluating the intelligence level of a network.
Over the course of the past year, more than 1,300 problem-solvers from 62 countries worked to solve different network problems using AI and ML.
A total of 23 problem statements were contributed by industry and academia in Brazil, China, India, Ireland, Japan, Russia, Spain, Turkey, and the United States. These “regional hosts” offered resources and expert guidance to support participants in addressing their challenges.
The Challenge also offered an opportunity to apply the AI/ML toolkit defined by new ITU standards, and to demonstrate and validate those standards in practice.
The ITU AI/ML in 5G Challenge was organized with the support of promotion partners LF AI & Data, NGMN and SGInnovate; Gold sponsor the Telecommunications Regulatory Authority of the United Arab Emirates; and Bronze sponsors Cisco and ZTE.
Throughout the year, participants had to deal with a series of obstacles, explained Thomas Basikolo, AI/ML Consultant at ITU. Different time scales, noisy and dynamic network environments, and limited computing resources are just a few of them, he said. One of the major hurdles had to do with data, a critical input for any AI/ML system. Which data and where? How to label? Is it trusted? Are real data sets available?
These are all questions challenge participants had to ask, explained Basikolo.
“In research they often use synthetic data – but real-world data can be difficult to find,” he added.
Fast forward to yesterday, when not one but two teams were awarded first prize. The 10 winning and runner-up teams each presented their innovative solutions tackling network challenges with AI/ML, achieving global recognition and also sharing in a prize fund of 20,000 Swiss francs.
The first-place prize was awarded to two teams from China Mobile Shandong and China Mobile Guizhou, both of which developed innovative solutions to the problem statement on optimizing network topology.
“Existing network topology planning does not fully consider the increasing network traffic and uneven network link capacity utilization, resulting in difficult topology optimization and increasing investments in network construction,” said Zhang Yiwei from Team Weeny Wit, whose members include Han Zengfu, Wang Zhiguo, Wu Desheng and Li Sicong. They used traffic forecasting and topology optimization to drive their solution.
According to Xi Lin from Team No Boundaries, “telecommunications [have] become an indispensable part of our lives” but transmission remains a challenge “since some [links] are overloaded, and some have a lot of capacity.”
Together with Gang Zhouwei, Rao Qianyin, Feng Zezhong and Guo Lin, Lin created a solution based on the ITU Y.3172 architecture, the Breadth First Search (BFS) algorithm and a “greedy algorithm.” Their innovation was able to solve 16 overloaded links in Kaili City (Guizhou, China), Lin said, adding that his team’s proposal saves time, resources, and improves traffic management.
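The article does not describe the team’s implementation, but the general idea of combining BFS with a greedy heuristic to relieve overloaded links can be sketched as follows. This is a minimal illustration with made-up graph data and function names, not the team’s actual algorithm: it greedily picks the most overloaded link first and uses BFS to find a shortest-hop detour around it.

```python
from collections import deque

def bfs_path(adj, src, dst, blocked):
    """Shortest-hop path from src to dst, avoiding blocked links (plain BFS)."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in blocked:
                prev[v] = u
                queue.append(v)
    return None  # no detour exists

def relieve_overloads(adj, load, cap, demand):
    """Greedily shift `demand` units of traffic off each overloaded link.

    load/cap are dicts keyed by frozenset({u, v}). The sketch ignores the
    possibility that a detour itself becomes overloaded.
    """
    overloaded = [e for e in load if load[e] > cap[e]]
    # Greedy order: tackle the most overloaded links first
    overloaded.sort(key=lambda e: load[e] - cap[e], reverse=True)
    for e in overloaded:
        u, v = tuple(e)
        detour = bfs_path(adj, u, v, blocked={e})
        if detour is None:
            continue
        load[e] -= demand
        for a, b in zip(detour, detour[1:]):
            load[frozenset((a, b))] += demand
    return load
```

On a toy triangle topology with one overloaded link, the sketch reroutes part of that link’s traffic over the two remaining links.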
The silver prize was also awarded to two teams, the first being Team AI-Maglev from the Institute of Computing Technology at the Chinese Academy of Sciences. Team members Yuwei Wang and Sheng Sun worked on a problem statement about deep neural network (DNN) inference optimization and came up with an efficient dynamic partition algorithm.
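DNN inference partitioning decides which layers run on the device and which are offloaded to an edge server, trading compute time against the cost of transmitting intermediate activations. The team’s algorithm is not published in this article; the following is only a minimal brute-force illustration of the underlying idea, with hypothetical timing numbers:

```python
def best_split(device_ms, edge_ms, tx_ms):
    """Find the layer index at which to hand a DNN off from device to edge.

    device_ms[i] / edge_ms[i]: compute time of layer i on device / edge.
    tx_ms[k]: time to transmit the data crossing the boundary at split k
    (tx_ms[0] = raw input, tx_ms[n] = final output), so len(tx_ms) == n + 1.
    Split k runs layers 0..k-1 on the device and layers k..n-1 at the edge.
    """
    n = len(device_ms)
    best_k, best_t = 0, float('inf')
    for k in range(n + 1):
        total = sum(device_ms[:k]) + tx_ms[k] + sum(edge_ms[k:])
        if total < best_t:
            best_k, best_t = k, total
    return best_k, best_t
```

A dynamic variant would re-run this search as network bandwidth (and hence `tx_ms`) changes.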
The second-place prize was shared by Salzburg Research, Austria, whose team members Martin Happ, Jia Lei Du, Matthias Herlich, Christian Maier, and Peter Dorfinger aimed to predict mean per-packet delays across networks. They presented a RouteNet modification for estimating these network delays with algorithmic scheduling.
Taking the third-place prize, Team Imperial_IPC1 from Imperial College London presented their solution of ‘neural network-based mmWave beam selection utilizing LIDAR data.’ Team members Mahdi Boloursaz Mashhadi, Mikolaj Jankowski, Tze-Yang Tung, Szymon Kobus, and Deniz Gunduz collaborated to solve an important problem related to the physical layer of modern communication networks by improving beam selection.
Sharing the bronze prize was Team UT-NakaoLab-AI from the University of Tokyo, Japan. Teammates Fei Xia, Aerman Tuerxun, Jiaxing Lu, and Ping Du presented their ‘analysis on route information failure in IP core networks by network functions virtualization (NFV)-based test environment.’
“We can automatically detect network and device failures caused by COVID-19,” said Fei Xia.
Their group focused on a solution that was high-performing, practical and reliable, he added.
The following runner-up teams were awarded fourth prize:
• Team IEC_Research, Instituto Tecnologico de Santo Domingo (INTEC), Dominican Republic (Juan Samuel Perez, Wilmer Quinones, Amin Deschamps, Yobany Diaz)
Solution: Radio link failure (RLF) prediction using weather information.
Predicting radio link failure correctly can result in less network downtime and reduced service degradation for network subscribers, explained team lead Juan Samuel Perez. His team trained a decision tree-based model, opting for simple data pre-processing and highly interpretable predictions that offer actionable information for a network operator.
• Team BeamSoup (Matteo Zecchin, Communication Systems Department, Eurecom, France)
Solution: AI-aided mmWave beam selection for vehicular communication.
“We provide a ML model that combines different data modalities and predicts quality of the communication beams,” Zecchin said.
• Team ATARI, University of Antwerp and Universidad de Antioquia (Paola Soto, David Goez, Natalia Gaviria, Miguel Camelo)
Solution: A graph neural network approach for throughput prediction in next-generation wireless local area networks (WLANs).
The solution applies AI/ML to predict performance in view of the challenges created by high densities of WiFi users and the need to navigate increasingly crowded spectrum.
• Team Link Busters, NEC Corporation, Japan (Dheeraj Kotagiri, Anan Sawabe, Takanori Iwai)
Solution: An augmented model for radio link failure prediction.
This team looked at weather forecasts, with lead Dheeraj Kotagiri suggesting that when it comes to real-world data, we cannot rely on AI/ML models alone.
“We have to augment [ML] with data pre-processing [and] conventional knowledge on radio and networks to predict link failures,” explained Kotagiri.
His team’s approach involved 80 per cent data pre-processing and 20 per cent random forest – a machine learning method that is often used to make predictions by constructing a multitude of decision trees.
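The “80 per cent pre-processing” emphasis is the interesting part: raw weather readings must be turned into informative features before any tree-based model sees them. The sketch below illustrates that split of effort with invented field names (`rain_mm`, `humidity`) and a toy ensemble of threshold “stumps” standing in for a real random forest; it is not the teams’ actual pipeline:

```python
def engineer_features(readings, window=3):
    """Turn raw hourly weather readings into model features.

    readings: list of dicts with 'rain_mm' and 'humidity' (hypothetical fields).
    Emits one vector per hour: rolling rain average, humidity, humidity delta.
    """
    feats = []
    for i, r in enumerate(readings):
        lo = max(0, i - window + 1)
        rain_avg = sum(x['rain_mm'] for x in readings[lo:i + 1]) / (i + 1 - lo)
        hum_delta = r['humidity'] - readings[i - 1]['humidity'] if i else 0.0
        feats.append([rain_avg, r['humidity'], hum_delta])
    return feats

def stump_vote(feats, stumps):
    """Toy stand-in for a random forest: each stump is a (feature_index,
    threshold) pair voting 'link failure' when its feature exceeds the
    threshold; the majority of stump votes decides."""
    preds = []
    for f in feats:
        votes = sum(1 for idx, thr in stumps if f[idx] > thr)
        preds.append(votes > len(stumps) / 2)
    return preds
```

A real pipeline would learn the thresholds from labelled failure data (e.g. with a random forest library) rather than hard-coding them.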
This Grand Challenge fostered an atmosphere of collaboration and brought new opportunities for industry, academia, and especially small and medium-sized enterprises (SMEs) to influence the evolution of ITU standards, said ITU Secretary-General Houlin Zhao.
The Secretary-General referred to IMT-2020/5G systems as the “backbone of tomorrow’s digital economy.”
To continue nurturing the collaborative atmosphere, Vishnu Ram OV from the recently concluded ITU Focus Group on machine learning for 5G and future networks shared his wish list for 2021. He called for more open data, equal access to computing resources and AI/ML tools for machine training and testing, and a distributed ecosystem featuring “a bigger, better, and braver focus on the problems in the 5G arena.”
The data repositories used in the ITU AI/ML in 5G Grand Challenge can be found at: https://github.com/ITU-AI-ML-in-5G-challenge
During the Finale, ITU invited submissions to an upcoming special issue of the ITU Journal on Future and Evolving Technologies (ITU J-FET) focused on AI/ML in 5G and future networks. See the Call for Papers (deadline: 22 February 2021).
To learn more about the challenges and opportunities of adding AI/ML solutions to 5G and future networks, and the value of new ITU standards in support, be sure to read the latest issue of ITU News Magazine – freely available here.