October 24, 2017

What is ‘cloud-native IoT’ and why does it matter?

By Omar Elloumi and Mahdi Ben Alaya

The value proposition of cloud computing is about opening access to sophisticated computing and storage capabilities to a massive number of users.

Cloud users gain access to computing and storage capabilities as a service: one that provides cost optimization, fault tolerance, horizontal scalability, geo-optimization and low latency. Edge computing also provides many of these features. Edge and cloud computing are not mutually exclusive; they often co-exist in solutions to optimize end-to-end service delivery. Cloud infrastructures also provide service-level agreements, and security and isolation levels of enterprise and industrial grade.

The technical and commercial success of cloud computing technology made it feasible to evolve the most demanding information and communication technology (ICT) infrastructures, such as communication networks, from specialized hardware and software to new software paradigms, referred to as ‘cloud-native’. Internet of Things (IoT) virtualization – IoT built on cloud-native principles – is to IoT platforms what Network Function Virtualization (NFV) is to communication networks.

IoT is about connecting devices and applications. One of its key benefits is automation.

IoT is already starting to revolutionize enterprise and industrial digital transformation. Connected devices and applications are being integrated into existing and evolving business processes. IoT is not only about cost reduction for existing operations, where the rationale for automation is clear, but also about new revenue streams achieved through enabling applications and services and monetizing related data.

Whether your company is ‘born in the cloud’ or rooted in an enterprise or industrial setting, cloud-native IoT is emerging as a business imperative for any IoT solution.

What are the most important attributes of cloud-native IoT?

Horizontal scalability: IoT software built to support load balancing has virtually no limit on the number of supported devices or gateways, other than the number of computing instances the cloud infrastructure can allocate. Cloud-native IoT allows computing and storage capacity to be scaled up and down automatically, according to administrator policies, so as to cope with sudden changes in the volume of generated IoT data.
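
Such an administrator policy can be sketched as a simple sizing rule. The capacity figure and replica bounds below are illustrative assumptions, not recommended values; real autoscalers (such as the Kubernetes Horizontal Pod Autoscaler) apply the same idea to live metrics:

```python
import math

def desired_replicas(msgs_per_sec: float,
                     capacity_per_replica: float = 10_000,
                     min_replicas: int = 2,
                     max_replicas: int = 100) -> int:
    """Return how many IoT-platform instances to run for the current load.

    Mirrors an autoscaling policy loop: scale out when the ingest rate
    grows, scale in when it drops, always within administrator-set bounds.
    All thresholds here are illustrative, not standard values.
    """
    needed = math.ceil(msgs_per_sec / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))
```

A sudden burst from 0 to 45,000 messages per second would, under these assumed numbers, grow the platform from its 2-replica floor to 5 replicas, while the ceiling caps runaway growth.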

High throughput: IoT software must handle high volumes of data without degrading system performance. It is important to recognize that IoT traffic can be ‘bursty’ and that data sources can be highly synchronized. Some would counter this argument by saying that IoT is characterized by the low data throughput of constrained, low-power devices. But even then, the resource and application monitoring logs generated by a large number of IoT gateways can be very substantial, typically amounting to several gigabytes of data. These logs need to be transferred, correlated with other logs and acted upon to detect or isolate failures, as well as to make recommendations for predictive maintenance, for example. This is why cloud-native tools such as Apache Kafka rely on massive parallelism to collect and process high-throughput data streams. Another important concept for high throughput is MapReduce, a programming model that splits large datasets into smaller ones and moves processing closer to the data.
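
The MapReduce idea can be sketched in a few lines, here counting log levels across gateway logs. The log format and function names are hypothetical, and the ‘cluster’ is simulated in a single process; a real job (Hadoop, Spark) would distribute the chunks across machines and run the map step next to the data:

```python
from collections import Counter
from functools import reduce

def map_chunk(lines):
    """Map step: each worker counts log levels in its own chunk."""
    return Counter(line.split()[0] for line in lines if line)

def merge(a, b):
    """Reduce step: combine partial counts from two workers."""
    a.update(b)
    return a

def count_log_levels(log_lines, n_chunks=4):
    """Split a large log into chunks, count per chunk, merge the results."""
    size = max(1, len(log_lines) // n_chunks)
    chunks = [log_lines[i:i + size] for i in range(0, len(log_lines), size)]
    return reduce(merge, map(map_chunk, chunks), Counter())
```

Because each chunk is processed independently, the map step parallelizes across as many workers as there are chunks, which is what makes the model attractive for gigabyte-scale gateway logs.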

Low latency: Low latency is essential for applications such as autonomous driving or industrial automation. Edge computing drives latency lower by moving computing resources closer to the field domain where an action takes place. This must be matched by system architecture designs able to support low latency. There is no single mechanism to achieve low latency: programmers and system architects need a toolbox from which they can mix and match tools to meet the different requirements and traffic patterns of their applications. Examples of such tools include brokers capable of routing IoT application messages in (near) real time. There are also successors to MapReduce that make extensive use of in-memory processing to meet low-latency requirements.
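
The routing role of such a broker can be illustrated with a minimal in-memory publish/subscribe sketch. Topic names are illustrative, and production brokers (MQTT servers, Kafka) add quality-of-service levels, persistence and clustering on top of this core idea:

```python
from collections import defaultdict

class Broker:
    """A minimal in-memory publish/subscribe broker.

    A message published to a topic is delivered to every subscriber
    callback without the publisher knowing who consumes it. This
    decoupling is what lets a broker route IoT messages in near
    real time between independently scaled components.
    """
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)
```

A subscriber to `factory/line1/temp` receives only messages on that topic; traffic on other topics never touches it, keeping per-message routing work small.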

No single point of failure: The software components of an IoT server can be decomposed into small components, each optimized around a specific task, referred to as microservices. Microservices communicate using message brokers. Both microservices and brokers run on virtual machines or containers, the latter providing a lightweight software environment for hosting individual processes. The design, orchestration and administration of the microservices that make up an IoT platform, as well as of the broker used for communications, must avoid single points of failure (points whose failure would disrupt the operation of the entire IoT service). This is where open-source container orchestration systems such as Kubernetes or Docker Swarm come into play. Kubernetes, in particular, enables an IoT platform to be monitored, and recovered from failures, independently of the cloud infrastructure. Should parts of the cloud IoT platform suffer a sudden hardware or software failure, the overall operation of the IoT service is not affected.
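
This recovery behaviour follows a reconciliation pattern, sketched below under simplifying assumptions: replica names and states are hypothetical, and a real controller such as Kubernetes continuously watches live cluster state through its API server rather than a dictionary:

```python
def reconcile(pods: dict, desired: int) -> dict:
    """One pass of an orchestrator-style reconciliation loop.

    `pods` maps replica name -> 'healthy' or 'failed'. Failed replicas
    are dropped and the replica count is restored to `desired`, so the
    loss of any single instance never takes down the whole IoT platform.
    """
    healthy = {name: s for name, s in pods.items() if s == "healthy"}
    next_id = len(pods)
    while len(healthy) < desired:
        # Replace lost capacity with a freshly started replica.
        healthy[f"iot-platform-{next_id}"] = "healthy"
        next_id += 1
    return healthy
```

The loop never asks *why* a replica failed; it only drives the observed state back toward the desired state, which is what makes the approach robust to both hardware and software faults.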

What does this mean for my business?

The specifications of IoT standards and protocols have traditionally dealt with interface specifications and related data models, such as device-to-cloud interfaces. The task of developing IoT platforms on cloud-native principles has often been left to IoT implementers. The industrialization of IoT has confronted system architects with significant challenges as they grapple with the complexity of moving from prototypes to resilient, operations-grade IoT implementations.

Cloud-native IoT combines the benefits of cloud-native tools and IoT. It presents both a technical and business value proposition relevant to the industrialization and large-scale deployment of IoT: achieving cost-optimized, highly scalable and resilient IoT.

The convergence of cloud and IoT is just around the corner. Understandably, this convergence is of great interest to architects aiming to build IoT solutions able to reach massive scale to support IoT deployments like smart cities.

The market is still lacking guidelines for IoT virtualization or ‘containerization’. We expect that industry groups dealing with IoT and its applications in different domains will take increasing responsibility for developing such guidelines. These would need to focus not only on functionality but also on operational issues such as cloud portability, a key requirement for avoiding provider lock-in.

Omar Elloumi is an IoT enthusiast and thought leader, oneM2M Technical Plenary Chair and AIOTI Steering Board member, Nokia Bell Labs and CTO. Mahdi Ben Alaya is an IoT entrepreneur, Founder and CEO of Sensinov, and oneM2M Test Working Group Vice-Chair.

Acknowledgments: The authors would like to thank Sharon Oddy of iconectiv, Chris Meering of HPE, Alain Louchez of Georgia Tech and Nicolas Rebierre and Emmanuel Marilly of Nokia for their help in developing this article.

© International Telecommunication Union 1865-2017 All Rights Reserved.
ITU is the United Nations' specialized agency for information and communication technology. Any opinions expressed and statistics presented by third parties do not necessarily reflect the views of ITU.