
EdgeX Foundry Launches Chinese Website


By Melvin Sun (Sr. marketing manager in Intel IoT Group, EdgeX TSC member, co-founder & maintainer of EdgeX China Project) and Kenzie Lan (Operations specialist of EdgeX China community)

EdgeX Foundry, a project under the LF Edge umbrella, is a vendor-neutral open source platform for edge computing.

To support the growing community of EdgeX Chinese developers and ecosystem partners, the project established the EdgeX China Project in Q4’19 to drive collaboration in the region. Since then, the project community in China has set up local meetups, working groups, and events. In addition, local and language-specific developer and social media channels were created to generate awareness about the project and improve the adoption and experience of Chinese users and developers.

As of today, the EdgeX China community has gained more than 300,000 followers on the mainstream developer platforms CSDN and OSChina, as well as six WeChat groups! In addition to developers, users, and ecosystem partners, there are roughly ten commercial offerings from China based on EdgeX.

To further support the EdgeX China community growth, the project officially launched its Chinese website (https://cn.edgexfoundry.org/) on June 30th, 2021 to provide awareness of EdgeX Foundry for new developers and users and to keep the local community up to date on events and news. 

The EdgeX Foundry Chinese site is divided into “Home Page”, “Why EdgeX”, “Getting Started”, “Software”, “Ecosystem”, “Community”, “Latest”, and “EdgeX Challenge China 2021” sections. The site provides a systematic introduction to EdgeX’s architecture, methods of use, services, community, and latest news, plus an overview of the ongoing 2021 China EdgeX Challenge, in which EdgeX users compete for prizes by creating solutions based on EdgeX. Visitors can easily jump from the English website (https://www.edgexfoundry.org/) to the Chinese one through the link between them.

CTOs, architects, developers, students, and business folks all are welcome to learn and exchange practical experience in the EdgeX China community. The EdgeX community will continue to grow and improve in collaboration with members around the world.

Thanks to all involved in creating and maintaining this China-focused version of the EdgeX Foundry website!

Baetyl Issues 2.2 Release, Adds EdgeX Support, New APIs, Debugging, and More


Baetyl, which seamlessly extends cloud computing, data, and services to edge devices, enabling developers to build light, secure, and scalable edge applications, has issued its 2.2 release. Baetyl 2.2, based on community contributions, now contains more advanced functions. Still grounded in cloud-native functionality, Baetyl’s new features continue to build toward an open, secure, scalable, and controllable intelligent edge computing platform.

Specific new features in the 2.2 release include:

Support for working with EdgeX Foundry

Baetyl v2.2 has updated compatibility with its LF Edge sister project, EdgeX Foundry. Through Baetyl’s remote management suite, “Baetyl-cloud,” users can deliver all 14 EdgeX services to the edge. The delivered EdgeX services are submitted by Baetyl to local Kubernetes clusters for deployment, with monitoring synchronized to the cloud.

New API definition, which is needed to support edge cluster environments

In industrial IoT scenarios, many industrial control boxes often together form an edge cluster. Baetyl defines open multi-cluster management APIs. By implementing these APIs, the entire cluster can be reflected on the cloud console. Users can easily deploy applications to defined edge clusters and specify edge node affinity within those clusters.

Support for DaemonSet load type applications

In the context of cluster support, a new load type is needed for deployments such as a service that monitors the status of every node in the cluster, so Baetyl now also supports DaemonSets. With this load type, a single replica of the service is launched on every node in the matched cluster, replicas are automatically added to newly joined nodes, and they are removed when nodes leave (a rough sketch follows).
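
Baetyl generates these manifests through Baetyl-cloud, but as a rough illustration of what a DaemonSet workload means to Kubernetes, here is a minimal sketch using the official Kubernetes Python client; the image and names are hypothetical, not Baetyl’s own API:

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod

    # One replica of a node-monitoring service on every node in the cluster
    daemonset = client.V1DaemonSet(
        api_version="apps/v1",
        kind="DaemonSet",
        metadata=client.V1ObjectMeta(name="node-monitor"),
        spec=client.V1DaemonSetSpec(
            selector=client.V1LabelSelector(match_labels={"app": "node-monitor"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "node-monitor"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="monitor",
                                       image="example/node-monitor:latest"),
                ]),
            ),
        ),
    )

    # Kubernetes then adds/removes pods automatically as nodes join or leave
    client.AppsV1Api().create_namespaced_daemon_set(namespace="default",
                                                    body=daemonset)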

New API definitions for remote debugging and remote log viewing of deployed applications

To facilitate debugging or log viewing operations on edge devices, Baetyl has established an open remote debugging API that can connect with multiple cloud control systems in the future.

New API definitions for GPU monitoring and sharing functions

Support for the GPU covers two main aspects: monitoring of GPU usage, and GPU sharing. Through the GPU monitoring module, Baetyl-core can obtain the current GPU memory usage, temperature, energy consumption, and other information in real time. With the GPU sharing function, multiple applications can share the GPU resources of a device. At present, the definition of the GPU support interface is complete, and only a module implementing the GPU sharing function needs to be provided on the device side to use it.
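
Baetyl’s own GPU module is not shown here, but as an illustrative sketch, the same kinds of metrics can be read on NVIDIA hardware through the NVML bindings (this assumes an NVIDIA GPU and the pynvml package, not Baetyl internals):

    import pynvml

    pynvml.nvmlInit()
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # bytes
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # NVML reports mW
        print(f"GPU {i}: {mem.used}/{mem.total} B used, {temp} C, {watts:.1f} W")
    pynvml.nvmlShutdown()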

More official modules

More official system modules are also provided:

  1. baetyl-init: responsible for activating edge nodes against the cloud and for initializing and guarding baetyl-core; after these tasks are complete, it continues to report and synchronize the core state.
  2. baetyl-rule: implements message routing on the Baetyl edge side, exchanging messages among baetyl-broker (the edge message center), function services, and the IoT Hub (the cloud MQTT broker).

In addition to these new features, Baetyl 2.2 also delivers many smaller optimizations and mechanism improvements: the installation process has been streamlined, system applications can now be configured as needed, and transaction execution and task queue interfaces have been defined.

All these new features will be available immediately with the release of Baetyl 2.2. More information is available here: https://github.com/baetyl/baetyl.

Where the Edges Meet and Apps Land: Akraino Release 4 Public Cloud Edge Interface


Written by Oleg Berzin, Ph.D., Co-chair of the Akraino Technical Steering Committee and Fellow, Technology and Architecture at Equinix

Introduction

The Public Cloud Edge Interface (PCEI) blueprint was introduced in our last blog. This blog describes the initial implementation of PCEI in Akraino Release 4. In summary, PCEI R4 is implemented based on the Edge Multi-Cluster Orchestrator (EMCO, a.k.a. ONAP4K8S) and demonstrates:

  • Deployment of Public Cloud Edge (PCE) Apps from two clouds: Azure (IoT Edge) and AWS (GreenGrass Core).
  • Deployment of a 3rd-Party Edge (3PE) App: an implementation of ETSI MEC Location API Server.
  • End-to-end operation of the deployed Apps using simulated Low Power Wide Area (LPWA) IoT client and Edge software.

Before describing the specifics of the implementation, it is useful to revisit PCEI architectural concepts.

The purpose of Public Cloud Edge Interface (PCEI) is to implement a set of open APIs and orchestration functionalities for enabling Multi-Domain interworking across Mobile Edge, Public Cloud Core and Edge, the 3rd-Party Edge functions as well as the underlying infrastructure such as Data Centers, Compute Hardware and Networks. Interfaces between the functional domains are shown in the figure below:

The detailed PCEI Reference Architecture is shown in the figure below. For the full description of the PCEI Reference Architecture please refer to the PCEI Architecture Document.

There are some important points to highlight regarding the relationships between the Public Cloud Core (PCC) and the Public Cloud Edge (PCE) domains. These relationships influence where the orchestration tasks apply and how the edge infrastructure is controlled. Both domains have some fundamental functions such as the underlying platform hardware, virtualization environment as well as the applications and services. In the Public Cloud context, we distinguish two main relationship types between the PCC and PCE functions: Coupled and Decoupled. The table below shows this classification and provides examples.

The PCC-PCE relationships also involve interactions such as Orchestration, Control and Data transactions, messaging and flows. As a general framework, we use the following definitions:

  • Orchestration: Automation and sequencing of deployment and/or provisioning steps. Orchestration may take place between the PCC service and PCE components and/or between an Orchestrator such as the PCEI Enabler and PCC or PCE.
  • Control: Control Plane messaging and/or management interactions between the PCC service and PCE components.
  • Data: Data Plane messaging/traffic between the PCC service and the PCE application.

The figure below illustrates PCC-PCE relationships and interactions. Note that the label “O” designates Orchestration, “C” designates Control and “D” designates Data as defined above.

The PCEI implementation in Akraino Release 4 shows examples of two App-Coupled PCC-PCE relationships: Azure IoT Edge and AWS GreenGrass Core, and one Fully Decoupled PCE-PCC relationship: an implementation of ETSI MEC Location API Server.

In Release 4, PCEI Enabler does not support Hardware (bare metal) or Kubernetes orchestration capabilities.

PCEI in Akraino R4

Public Cloud Edge Interface (PCEI) is implemented based on Edge Multi-Cluster Orchestrator (EMCO, a.k.a ONAP4K8S). PCEI Release 4 (R4) supports deployment of Public Cloud Edge (PCE) Apps from two Public Clouds: Azure and AWS, deployment of a 3rd-Party Edge (3PE) App: an implementation of ETSI MEC Location API App, as well as the end-to-end operation of the deployed PCE Apps using simulated Low Power Wide Area (LPWA) IoT client and Edge software.

Functional Roles in PCEI Architecture

Key features and implementations in Akraino Release 4:

  • Edge Multi-Cluster Orchestrator (EMCO) Deployment
    • Using ONAP4K8S upstream code
  • Deployment of Azure IoT Edge PCE App
    • Using Azure IoT Edge Helm Charts provided by Microsoft
  • Deployment of AWS Green Grass Core PCE App
    • Using AWS GGC Helm Charts provided by Akraino PCEI BP
  • Deployment of PCEI Location API App
    • Using PCEI Location API Helm Charts provided by Akraino PCEI BP
  • PCEI Location API App Implementation based on ETSI MEC Location API Spec
    • Implementation is based on the ETSI MEC ISG MEC012 Location API described using OpenAPI.
    • The API is based on the Open Mobile Alliance’s specification RESTful Network API for Zonal Presence (a client-side sketch follows this list).
  • Simulated IoT Client Code for end-to-end validation of Azure IoT Edge
  • Azure IoT Edge Custom Software Module Code for end-to-end validation of Azure IoT Edge
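
Since the Location API follows the OMA Zonal Presence resource model (zones, and the users present in them), a client interaction can be sketched roughly as below; the host, port, base path, and zone ID are placeholders, not values from the blueprint:

    import requests

    BASE = "http://edge-host:8080/location/v1"  # placeholder deployment address

    # List the zones the Location API server knows about
    zones = requests.get(f"{BASE}/zones").json()

    # Query the users (terminals) currently present in one zone
    users = requests.get(f"{BASE}/users", params={"zoneId": "zone01"}).json()
    print(zones, users)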

 

End-to-End Validation

The description below shows an example of the end-to-end deployment and validation of operation for Azure IoT Edge. The PCEI R4 End-to-End Validation Guide provides examples for AWS GreenGrass Core and the ETSI MEC Location API App.

 

Description of components of the end-to-end validation environment:

  • EMCO – Edge Multi-Cluster Orchestrator deployed in the K8S cluster.
  • Edge K8S Clusters – Kubernetes clusters on which PCE Apps (Azure IoT Edge, AWS GGC), 3PE App (ETSI Location API Handler) are deployed.
  • Public Cloud – IaaS/SaaS (Azure, AWS).
  • PCE – Public Cloud Edge App (Azure IoT Edge, AWS GGC)
  • 3PE – 3rd-Party Edge App (PCEI Location API App)
  • Private Interconnect / Internet – Networking between IoT Device/Client and PCE/3PE as well as connectivity between PCE and PCC.
  • IoT Device – Simulated Low Power Wide Area (LPWA) IoT Client.
  • IP Network (vEPC/UPF) – Network providing connectivity between the IoT Device and PCE/3PE.
  • MNO DC – Mobile Network Operator Data Center.
  • Edge DC – Edge Data Center.
  • Core DC – Public Cloud.
  • Developer – an individual/entity providing PCE/3PE App.
  • Operator – an individual/entity operating PCEI functions.

The end-to-end PCEI validation steps are described below (the step numbering refers to Figure 5):

  1. Deploy EMCO on K8S
  2. Deploy Edge K8S clusters
  3. Onboard Edge K8S clusters onto EMCO
  4. Provision Public Cloud Core Service and Push Custom Module for IoT Edge
  5. Package Azure IoT Edge and AWS GGC Helm Charts into EMCO application tar files
  6. Onboard Azure IoT Edge and AWS GGC as a service/application into EMCO
  7. Deploy Azure IoT Edge and AWS GGC onto the Edge K8S clusters
  8. Verify that all pods come up and register with the Azure cloud IoT Hub and AWS IoT Core
  9. Deploy a custom LPWA IoT module into Azure IoT Edge on the worker cluster
  10. Successfully pass LPWA IoT messages from a simulated IoT device to Azure IoT Edge, decode the messages, and send them to the Azure IoT Hub

For more information on PCEI R4: https://wiki.akraino.org/x/ECW6AQ

PCEI Release 5 Preview

PCEI Release 5 will feature expanded capabilities and introduce new features as shown below:

For a short video demonstration of PCEI Enabler Release 5 functionality based on ONAP/CDS with GIT integration, NBI API Handler, Helm Chart Processor, Terraform Plan Executor and EWBI Interconnect please refer to this link:

https://wiki.akraino.org/download/attachments/28968608/PCEI-ONAP-DDF-Terraform-Demo-Audio-v5.mp4?api=v2

Acknowledgements

Project Technical Lead: 

Oleg Berzin, Equinix

Committers:

Jian Li, China Mobile
Oleg Berzin, Equinix
Tina Tsou, Arm

Contributors: 

Mehmet Toy, Verizon
Tina Tsou, Arm
Gao Chen, China Unicom
Deepak Vij, Futurewei
Vivek Muthukrishnan, Aarna Networks
Kavitha Papanna, Aarna Networks
Amar Kapadia, Aarna Networks

Embedded IoT 2021 – Intro of the research insight from the presentation based on LF Edge HomeEdge


This blog post initially appeared on the Samsung Research blog.

By Suresh LC and Sunchit Sharma, of SRI-Bangalore

Introduction

Every home is flooded with consumer electronic devices capable of connecting to the internet. These devices can be accessed from anywhere in the world, making the home smart. Per one survey, the smart home market will reach $6 billion by 2022, and a Gartner report projects that 80% of all IoT projects will include AI as a major component.

In a conventional cloud-based setup, the following two factors need to be considered with utmost importance:

Latency – Studies show a latency of about 0.82 ms for every 100 miles data travels (roughly the one-way propagation delay of light in optical fiber)

Data Privacy – The average total cost of a data breach is $3.92 million

With devices of heterogeneous hardware capabilities, processing that was traditionally performed in the cloud is being moved to the devices themselves where possible, as shown below.

This led us to work toward developing an edge platform for smart home systems. As an initial step, the Edge Orchestration project was developed under the umbrella of LF Edge.

Before getting into the details of Edge Orchestration, let us define a few terms commonly used in this context.

Edge computing – a distributed computing paradigm that brings computation and data storage closer to the location where they are needed, to improve response times and save bandwidth.

Devices – our phones, tablets, smart TVs, and the like that are now part of our homes.

Inferencing – the act or process of reaching a conclusion from known facts.

ML perspective – the act of making decisions, giving predictions, or aiding the user based on a pre-trained model.

Home Edge – Target

The Home Edge project defines use cases and technical requirements for a smart home platform. The project aims to develop and maintain features and APIs catering to the above needs through open source collaboration. The minimum viable features of the platform are below:

Dynamic device/service discovery at “Home Edge”

Service Offloading

Quality of Service guarantee in various dynamic conditions

Distributed Machine Learning

Multi-vendor Interoperability

User privacy

Edge Orchestration aims to achieve easy connection and efficient resource utilization among edge devices. Currently, REST-based device discovery and service offloading are supported. Each edge device calculates a score based on its resource information (CPU/memory/network/context). The score is used to select the device for service offloading when more than one device offers the same service. Home Edge also supports Multi Edge Communication for device discovery across multiple NATs.

Problem Statement

As mentioned above, service offloading is based on the scores of all the devices. The score, in turn, is calculated from CPU/memory/network/context values. These values are populated in the database once, when Home Edge is installed on the device, and are therefore essentially static.

Static Score is a function of

Number of CPU cores in a machine

Bandwidth of each CPU core

Network bandwidth

Round trip time in communication between devices

For example, say at time t1 device B has available memory ‘X’, and this value is stored in the database. At time t2, device A requests device B’s score, but by then the available memory on device B is ‘X-Y’. The score, however, is still calculated using the ‘X’ stored initially.

In another scenario, device A requests service offloading to device B. Device B starts executing the service but moves out of the network mid-execution. In such cases the offloading fails.

We can see that in the above situations the main purpose of Home Edge is defeated. This led to the proposal of dynamic score calculation, which is more effective and efficient.

Properties of a Home-Based Distributed Inferencing System

We propose an effective method for score calculation, based on the following points:

1. Model Specific Device Selection

2. Efficient Data Distribution

3. Efficient Churn Prediction

4. Non Rigid and Self Improving

Based on the above four properties, we defined a feedback-based system that can help us achieve optimal data distribution for parallel inferencing in a heterogeneous system.

High Level System

The following diagram shows the high-level architecture of the system. Let us take a use case and walk through the flow of tasks. For example, a user asks the speaker, “Let me know if there were any visitors today.” The query is ingested by the query manager to understand it; here, the intent is to identify the number of visitors who came today. The data source needed for query execution is then identified, say the doorbell camera or the backyard CCTV, followed by the model required for the purpose and the devices that offer that service (model). Devices are selected based on past performance, static score, device trust, and the number of times the model has previously run on the device. Device priorities are calculated, and the data is split among the devices accordingly. Finally, the results from the devices are aggregated to compute the final output.

Understanding Score

Total score is a function of the static score and the memory usage of the device.

Changes in score help us estimate runtimes at the current time.

Understanding Time Estimates

Time estimates are calculated from previous time records and the average and current scores.

Data points which are stored:

1. Average Score

2. Average time taken per inference

3. Number of times a model has been deployed on a device

4. Success probability of a model running on a device – Trust

Estimating runtime on Devices

Runtime estimation is done once sufficient data points have been collected in the system.

Delta factor – a linear relationship factor is calculated to approximate the change in runtime when the score changes.

Estimated time – computed using the delta factor, the average score, the average runtime, and the current score of the device (sketched below).
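
The post does not give the exact formula, but one plausible reading of the linear delta factor is sketched below; the names and sign convention are assumptions, not Home Edge source code:

    def estimate_runtime(avg_runtime, avg_score, current_score, delta):
        # Linear model: when the current score drops below the historical
        # average (fewer free resources), the estimated runtime grows.
        # delta is the learned change in runtime per unit change in score.
        return avg_runtime + delta * (avg_score - current_score)

    # e.g. 120 ms on average at score 0.8; the device is now at 0.6
    print(estimate_runtime(120, 0.8, 0.6, delta=200))  # -> 160.0 ms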

Understanding Churn Probability Estimation

Logical buckets for every five-minute frame are created to track the availability of devices.

If a bucket polls a device for the nth time and the device was available k times before, the new probability would be:

1. (k+1)/(n+1), if the device is available

2. k/(n+1), if the device is unavailable
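
In code, this availability update (a Laplace-style estimate) is a one-liner; the sketch below simply mirrors the rule above:

    def update_availability(k, n, available):
        # k: times the device was available in this bucket's earlier polls;
        # n: index of the current poll within the bucket.
        return (k + 1) / (n + 1) if available else k / (n + 1)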

Calculating Priority

After runtimes are estimated, the probability of each device moving out of the system is checked, and devices whose availability probability falls too low are dropped from the consideration list.

All devices in the consideration list are sorted by the product of trust and the inverse of estimated runtime.

The top K devices out of N are selected and data is distributed among them (see the sketch after the Data Distribution step).

Data Distribution

Data is distributed across the devices in the inverse ratio of their estimated runtimes, to minimize the total time the system takes to complete the full inference.
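
Putting the selection and distribution steps together, a minimal sketch (the field names and the churn threshold are illustrative, not Home Edge source):

    def select_and_split(devices, total_items, k, min_availability=0.5):
        # Drop devices likely to churn out, then rank the rest by
        # trust * (1 / estimated runtime) and keep the top k.
        candidates = [d for d in devices if d["availability"] >= min_availability]
        ranked = sorted(candidates,
                        key=lambda d: d["trust"] / d["est_runtime"],
                        reverse=True)[:k]
        # Split data in inverse ratio of estimated runtimes so that all
        # selected devices finish their share at roughly the same time.
        weights = [1.0 / d["est_runtime"] for d in ranked]
        scale = total_items / sum(weights)
        return [(d["id"], round(w * scale)) for d, w in zip(ranked, weights)]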

Feedback

After execution, the actual runtimes and scores of the devices are used to adjust the averages in the database.

In case of failure, the next device is selected and the failing device’s trust value is adjusted in the database.

Test Run on Simulator

Aim – to demonstrate heuristic-based data distribution among devices.
To test – note the devices that work best for a model, checking the selection irrespective of device scores.

A web-based simulator was developed to demonstrate the proposed method. Below is a sample screenshot of the simulator.

 

In the course of our work we also found that some models execute better on specific devices. To check this, we ran two use cases – camera anomaly detection and person identification – and found that the models used in the two scenarios gave their best execution results on different devices, as shown below.

AgStack joins with LF Edge and EdgeX Foundry to Create Agriculture IoT Solutions


This post was originally published on the AgStack website.

The newest Linux Foundation project, AgStack, announces a liaison with LF Edge and EdgeX Foundry to dock existing EdgeX software for IoT into AgStack’s IoT Extension. AgStack has also agreed to participate in the EdgeX Foundry Vertical Solutions Working Group to explore the adoption and use of the EdgeX IoT/edge platform as an extension to its design of digital infrastructure for agriculture stakeholders.  Through mutual LF project cooperation, the two projects hope to create a complete edge-to-enterprise open-source solution as part of AgStack that can help the global agricultural ecosystem.

The goals of the cooperation between the two projects are to:

  • Utilize the existing EdgeX Foundry framework to quickly accelerate AgStack’s reach into the agriculture edge – providing a universal platform for communicating with the ag industry’s sensors, devices and gateways.
  • Extend the EdgeX framework to handle agricultural edge use cases and unique ag ecosystem protocols, models, data formats, etc.
  • Jointly work on edge-to-enterprise market ready solutions that can be easily demonstrated and used as the foundation for real world products benefiting ag industry creators and consumers.
  • Set up an exchange (as fellow LF projects) to mutually assist and share lessons learned in areas such as project governance, devops, software testing, security, etc.

“We are in the early stages of defining and building the AgStack platform and we prefer not to have to start from scratch or reinvent the wheel as we build our industry leading open-source platform,” said Sumer Johal, Executive Director of AgStack. “EdgeX Foundry gives us the opportunity to leapfrog our IoT / edge efforts by several years and take advantage of the ecosystem, edge expertise, and lessons learned that EdgeX has acquired in the IoT space.”

“As a versatile and horizontal IoT/edge platform, we are excited to partner with AgStack who can help to highlight how EdgeX can be used in agriculture IoT use cases and how the AgStack and EdgeX communities can collaborate to scale digital transformation for the agriculture and food industries,” said Jim White, Technical Steering Committee Chairman of EdgeX Foundry. “Even though a fellow LF project, we view AgStack as one of our vertical customers – applying EdgeX to solve real world problems – and what better place to demonstrate that than in an industry that feeds the world.”

About AgStack

AgStack was launched by the Linux Foundation in May 2021. AgStack seeks to improve global agriculture efficiency through the creation, maintenance and enhancement of free, re-usable, open and specialized digital infrastructure for data and applications. Founding members of AgStack include Hewlett Packard Enterprise, Our Sci, bscompany, axilab, Digital Green, Farm Foundation, Open Team, NIAB and Product Marketing Association. To learn more visit https://agstack.org.

 

Summary: Akraino’s Internet Edge Cloud (IEC) Type 4 AR/VR Blueprint Issues 4th Update


By Bart Dong

IEC Type 4 focuses on AR/VR applications running at the edge. In general, the architecture consists of three layers:

  • We use IEC as the IaaS layer.
  • We deploy the TARS framework as the PaaS layer to govern backend services.
  • And for the SaaS layer, an AR/VR application backend is needed.

There are multiple use cases for AR/VR, itemized below. For Release 4, we focused on building the infrastructure and the virtual classroom application, which I highlight here:

Let’s have a quick look at the Virtual Classroom app. Virtual Classroom is a basic app that lets you experience a virtual-reality simulation of a classroom with teachers and students.

Generally, it has two modes, the Teacher mode, and the Student mode.

  • In Teacher mode
    • You will see the classroom from the teacher’s view.
    • You can see some students in the classroom listening to your presentation.
  • In Student mode
    • You will see the classroom from a student’s view.
    • You can see the teacher and other students on the remote side.

The whole architecture, shown below, consists of three nodes: the Jenkins Master, the TARS Master, and the TARS Agent with the AR/VR Blueprint.

  • For the Jenkins Master, we deployed a Jenkins master in our private lab for testing.
  • For the TARS Master, we deployed the TARS platform for serverless use case integration.
  • For the TARS Agent, we deployed the Virtual Classroom back end on this node, plus two front-end clients, the Virtual Classroom teacher and student, on KVM.

It’s not a very difficult architecture. As I mentioned before, the TARS framework plays an important role as the PaaS layer that governs backend services. Now let’s go a little deeper into TARS.

TARS is a high-performance microservices framework based on a name service and the TARS protocol, with an integrated administration platform and an implemented hosting service with flexible scheduling. TARS supports Arm and x86 and multiple platforms, including macOS, Linux, and Windows.

TARS can quickly build systems and automatically generate code, taking into account ease of use and high performance. At the same time, TARS supports multiple programming languages, including C++, Golang, Java, Node.js, PHP, and Python. TARS can help developers and enterprises quickly build their own stable and reliable distributed applications in a microservices manner to focus on business logic to improve operational efficiency effectively.

TARS operates as PaaS. It can run on physical machines, virtual machines, and containers, including Docker and Kubernetes. You can store TARS service data in a cache, a DB, or a file system.

The TARS framework supports the TARS protocol, TUP, SSL, HTTP/1 and HTTP/2, Protocol Buffers, and other customized protocols. The TARS protocol is IDL-based, binary, extensible, and cross-platform. These features allow servers written in different languages to communicate with each other using RPC, and TARS supports different RPC methods: synchronous, asynchronous, and one-way requests.

TARS has multiple functions to govern services and can integrate with other DevOps tools, such as Jenkins in IEC Type 4.

For Release 4, we updated TARS to version 2.4.13, which supports multiple new features, such as an API gateway (named TarsGateway) that translates the HTTP protocol into the TARS protocol.

We contributed a series of TARS APIs to the Akraino API map, including application management APIs and service management APIs.

And last year, on March 10th, TARS evolved into a non-profit microservices foundation under the Linux Foundation umbrella.

The TARS Foundation is a neutral home for open source microservices projects that empower any industry to quickly turn ideas into applications at scale. It is not only TARS but a whole microservices ecosystem, one that will solve microservices problems with TARS characteristics, such as:

  • Agile Development with DevOps best practices
  • Built-in comprehensive service governance
  • Multiple languages supported
  • High performance with Scalability

Here is a summary of what we have done in release 4:

  • We focused on building the infrastructure and virtual classroom application, which I mentioned before.
  • We deployed the AR/VR Blueprint in Parserlabs, and used Jenkins to make CI/CD available in the Blueprint.
  • For Release 4, we updated TARS to version 2.4.13, which supports multiple new features, such as an API gateway (named TarsGateway) that translates the HTTP protocol into the TARS protocol.
  • We then passed the security scan and the validation lab check. 

For the coming Release 5, we are planning to deploy IEC Type 4 AR/VR on Kubernetes by K8STARS.

K8STARS is a convenient solution to run TARS services in Kubernetes.

It maintains TARS’s native development capabilities and automates the registration and deletion of name service configuration for TARS.

It supports the smooth migration of original TARS services to Kubernetes and other container platforms.

K8STARS is non-intrusive by design, with no coupling to the operating environment.

A service can be run with a single command:

kubectl apply -f simpleserver.yaml

If you are interested in the TARS Foundation or TARS, please feel free to contact us via email at tars@tarscloud.org.

Anyone is welcome to join us in building the IEC Type 4 AR/VR blueprint and microservices ecosystem!  

Innovations and Key Implementations of Kubernetes for Edge: A summary of Kubernetes on Edge Day at KubeCon Europe 2021


By Sagar Nangare

Kubernetes is the key component in the data centers that are modernizing and adopting cloud-native development architecture to deliver applications using containers. Capabilities like orchestrating VMs and containers together make Kubernetes the go-to platform for modern application infrastructure adopters. Telecom operators are also using Kubernetes to orchestrate their applications in a distributed environment involving many edge nodes.

But due to the large scale of telco networks, which include disparate cloud systems, Kubernetes adoption requires different architectures for different use cases. Specifically, if we look at use cases where Kubernetes orchestrates edge workloads, various frameworks and public cloud managed Kubernetes solutions are available, offering different benefits and giving telecom operators the choice to select the best fit.

In the recent Kubernetes on Edge Day sessions at KubeCon Europe 2021, many new use cases of Kubernetes for the edge were discussed, along with showcases of cross-platform integration that may help enterprises adopt 5G edge and help telecom operators scale it.

 

Here is a high-level overview of some of the key sessions.

The Edge Concept

There are different concepts of edge that have been discussed so far by different communities and technology solution experts. But when Kubernetes enters the infrastructure, IT operators need to clearly understand the key pillars on which a Kubernetes deployment can seamlessly deliver low-latency performance in telco or private 5G use cases. First, there should be a strong implementation of Kubernetes management at scale. Second, operators need to choose a lightweight K8s-for-edge solution, preferably one certified by CNCF. And third, a lightweight OS should be deployed at every node, from the cloud to the far edge.

Microsoft’s Akri Project: Microsoft’s Akri project is an innovation that will surely find its way into multiple Kubernetes-based edge implementations. It discovers and monitors brownfield leaf devices at the far edge that lack compute of their own, so that they can become part of a Kubernetes cluster; the Akri platform exposes these devices to the cluster as resources.

AI/ML with TensorFlow: TensorFlow is a machine learning platform that takes inputs and generates insights. It can be deployed on cloud, on-premises, or edge nodes, wherever ML operations need to run. One session showed that Kubernetes clusters deployed in the cloud and at the edge can host an analytics toolset (Prometheus, EnMasse/MQTT, Apache Camel, AlertManager, Jupyter, etc.) to process ML requests with the lowest latency.

Architectures for Kubernetes on the edge: When deploying Kubernetes for the edge, there are many architecture choices that vary per use case, and each architecture poses new challenges. The bottom line is that there is no one-size-fits-all solution, as workloads have different requirements and IT teams focus on the connections between network nodes. So the overall architecture needs to support both centralized and distributed control planes.

Robotics: Kubernetes has also been implemented in robotics. Sony engineers showcased how K8s cluster systems can be used for distributed system integration of robots that perform specific tasks collaboratively.

Laser-based Manufacturing: Another interesting use case, discussed by Moritz Kröger, a researcher at the RWTH Chair for Laser Technology, leverages a Kubernetes-based distributed system. Kubernetes features like automated configuration management and the flexibility to move workloads between clusters give laser manufacturing machines operational benefits.

OpenYurt + EdgeX Foundry: OpenYurt is yet another open-source framework that extends the orchestration features of upstream Kubernetes to the edge. The session showcased integration with EdgeX Foundry in 5G IoT edge use cases, where EdgeX Foundry manages the IoT devices and OpenYurt handles the server environments through its plugin set.

Using GitOps: Kubernetes supports cloud-native application orchestration as well as declarative configuration. It is possible to apply the GitOps approach to achieve zero-touch provisioning at multiple edges from the central data center.

Hong Kong-Zhuhai-Macao Bridge: In another use case, Kubernetes is implemented in edge infrastructure to manage applications that control sensors on the Hong Kong-Zhuhai-Macao Bridge. The use case is unique in that it focuses on how to define the bridge’s sensor devices as CRDs in Kubernetes, how to associate each device with CI/CD, and how to manage and operate the applications deployed on edge nodes.

Node Feature Discovery: A vast number of end devices can be part of the thousands of edge nodes connected to data centers. Similar to the Akri project, the Node Feature Discovery (NFD) add-on can detect hardware features and advertise them to Kubernetes clusters as node labels, so that workloads can be orchestrated across edge servers as well as cloud systems (see the sketch below).
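
For instance, once NFD is running, its node labels can drive placement decisions; in the sketch below, the label key is a standard NFD convention, while the cluster and its nodes are assumed:

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # List the nodes whose CPUs NFD has labeled as AVX2-capable
    nodes = v1.list_node(
        label_selector="feature.node.kubernetes.io/cpu-cpuid.AVX2=true")
    for node in nodes.items:
        print(node.metadata.name)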

Kuiper and KubeEdge: EMQ’s Kuiper is open-source data analytics/streaming software that runs on edge devices with low resource requirements. It can integrate with KubeEdge to produce a combined solution that pairs KubeEdge’s application orchestration capabilities with streaming analytics. The combined solution delivers low latency, saves bandwidth costs, eases the implementation of business logic, and lets operators manage and deploy Kuiper applications from the cloud.

EdgeX Foundry Reaches Four Years!


In four short years, EdgeX has become the global standard at the IoT edge; this blog examines why, and looks to the future.

By Keith Steele, CEO of IOTech, and a member of the LF Edge Board and EdgeX Technical Steering Group.

The emergence of Edge Computing

Back in 2017, “edge computing” was a forecast in a Gartner report that called it “the mother of all infrastructures.” A new computing model, edge computing promised to alter how businesses interact with the physical world.

With edge computing, data from physical devices—whether it be a drone, sensor, robot, HVAC unit, autonomous car, or other intelligent device—is acquired, processed, analyzed, and acted upon by dedicated edge computing platforms. The processed data can be acted upon locally and then sent to the cloud as required for further action and analysis.

Edge computing helps businesses very rapidly and inexpensively store, process, and analyze portions of their data closer to where it is needed, reducing security risks and reaction times, and making it an important complement to cloud computing. It is, however, a complex problem.

As more devices generate more data, existing infrastructure, bandwidth restrictions, and organizational roadblocks can stand in the way of extracting beneficial insights. What is more, there is no one-size solution that fits everyone. Different applications require different types of compute and connectivity and need to meet a variety of compliance and technology standards.

EdgeX – A Strategic Imperative

This inherent complexity was recognized as a major barrier to market take-up at the edge; in the same way that common standards and platforms are applied across most of the IT stack, there was recognition that a common ‘horizontal’ software foundation was needed at the edge.

In June 2017, in Boston, MA, under the auspices of the Linux Foundation, around 60 people from many technology companies gathered from around the world to constitute the EdgeX Foundry open-source project. The attendees had one aim: to establish EdgeX Foundry as the global open edge software platform standard!


At the outset, the EdgeX team saw an open edge software platform as a strategic imperative.  EdgeX enables successful edge implementation strategies and makes the IT/OT boundary a key value-add in building new end-to-end IoT systems. Platforms like EdgeX support heterogeneous systems and ‘real time’ performance requirements on both sides of the boundary, promote choice and flexibility, and enable collaboration across multiple vendors.

Four years later, with literally millions of downloads, thousands of deployments across multiple vertical markets, and a truly global ecosystem of technology and commercialization partners, we can justifiably claim to have achieved our goal.

Over the years the project has had something like 200 code contributors, from companies as diverse as Dell, Intel, HP, IOTech, VMWare, Samsung, Beechwoods, Canonical, Cavium Networks, and Jiangxing Intelligence.  Some made small contributions, while others have stayed for the whole journey. This blog is a tribute to all who have contributed to the project’s continued success.

The Importance of Ecosystem

Open-source projects without a global ecosystem of partners to develop, productize, test, and even deploy stand little chance of success.

The EdgeX ecosystem was greatly enhanced in January 2019 when EdgeX, along with project Akraino, became one of two founding projects in LF Edge, which is an umbrella organization created by the Linux Foundation that “aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system.” This project aims to bring together complementary open-source technologies.

EdgeX standalone, pre-2019, was already of great value. However, as a founding project under the umbrella of LF Edge, it proved even more valuable as momentum increased with additional global collaboration. The added amplification and support across LF Edge projects, community, and members helped turn EdgeX into a real high-velocity project.

     

EdgeX Fundamentals

There are several fundamental technical and business tenets we set out to solve with EdgeX:

Leveraging the power of Edge and Cloud Computing

Our starting point is that edge and cloud complement one another.

Cloud computing and data centers offer excellent support for large data storage and computing workloads that do not demand real-time insights. For example, a company may choose cloud computing to offload long-term data processing and analysis, or resource-intensive workloads like AI training.

However, the latency and potential costs of connections between the cloud and edge devices makes cloud computing impractical for some scenarios – particularly those in which enterprises need faster or real-time insights, or the edge produces massive amounts of data that should first be filtered to reduce cloud costs. In these cases, an edge computing strategy offers unique value.

Open vs. Proprietary

The idea behind EdgeX was to maximize choice so users did not have to lock themselves into proprietary technologies that, by design, limit choice.

Given the implicit heterogeneity at the edge, ‘open’ at a minimum means the EdgeX platform had to be agnostic to silicon, hardware, operating system, analytics engine, software application, and cloud. It would seem odd to embrace IoT diversity at the edge, but then be tied to a single cloud vendor, hardware supplier, application vendor, or chip supplier.

Secure, Pluggable and Extensible Software Architecture

To offer choice and flexibility we chose a modern, distributed, microservices-based software architecture, which we believe supports the inherent complexities of the edge. The other really big thing we defined was a set of open standard APIs that enable ‘plug and play’ of any software application at the edge. Coming back to ‘product quality’, we also wanted these maintained in a way that changes to the APIs did not mean huge rewrites for application providers.

Edge Software Application ‘Plug and Play’

A key promise of EdgeX is that it provides a standard open framework around which an ecosystem can emerge, offering interoperable plug-and-play software applications and value-add services that give users real choice, rather than siloed applications that may require huge systems integration efforts.

EdgeX ensures that any application can be deployed and run on any hardware or operating system, and with any combination of application environments. This enables real flexibility to deliver interoperability between connected devices, applications, and services across a wide range of use cases.

Time-Critical Performance and Scalability

Many of the applications we want to run at the edge, including specialist AI and analytics applications, need access to ‘real-time’ data. These can be very challenging performance constraints, e.g., millisecond or even microsecond response times, often with absolute real-time predictability requirements. Edge systems can also be very large-scale and highly distributed.

The hardware available to run time-critical Edge applications is often highly constrained in terms of memory availability or the need to run at low power. This means edge computing software may need to be highly optimized and have a very small ‘footprint’.

Access to real time data is a fundamental differentiator between the edge and cloud computing worlds. With EdgeX we decided to focus on applications that required round trip response times in the milliseconds rather than microseconds.

Our target operating environments are server- and gateway-class computers running standard Windows or Linux operating systems. We decided to leave it to the ecosystem to address time-critical edge systems, which require ultra-low footprint, microsecond performance, and even hard real-time predictability. (My company, IOTech, just filled that gap with a product called Edge XRT. It is important that real-time requirements are understood in full, as the decisions taken can significantly impact the success or failure of edge projects.)

Connectivity and Interoperability

A major difference between the edge and cloud is inherent heterogeneity and complexity at the edge. This is best illustrated in relation to connectivity and interoperability requirements, south and northbound:

  • Southbound: The edge is where the IT computer meets the OT ‘thing’, and there is a multitude of ‘things’ with which we will want to communicate, using a range of different ‘connectivity’ protocols at or close to real time. Many of these ‘things’ are legacy devices deployed with old systems (brownfield). EdgeX provides reference implementations of some key protocols, north and southbound, along with SDKs that readily allow users to add new protocols where they do not already exist, ensuring acquired data is interoperable despite the differences in device protocols. The commercial ecosystem also provides many additional connectors, making connectivity a configuration task rather than a programming task.
  • Northbound: Across industry, we also have multiple cloud and other IT endpoints; therefore, EdgeX provides flexible connectivity to and from these different environments. In fact, many organizations today use multi-cloud approaches to manage risk, take advantage of technology advances, avoid obsolescence, obtain leverage over cloud price increases, and support organizational and supply-chain integration. EdgeX software provides for this choice by being cloud agnostic.

How does the EdgeX Vendor Ecosystem deliver customer solutions?

There are many companies offering value-add products and services around the baseline open-source product, including mine, IOTech. There are also many examples of live deployments in vertical markets such as manufacturing and process automation, retail, transportation, smart venues, cities, etc. See the EdgeX Adopter Series presentations for some examples.

Where Next for EdgeX?

The EdgeX project goes from strength to strength, with huge momentum behind its V1 release, and we will soon release EdgeX 2.0, a major release that includes an all-new and improved API set (eliminating technical debt that has accrued over four years), more message-bus communication between services (replacing REST where more quality of service and asynchronous behavior are needed), enhanced security services, and new device/sensor connectors. The EdgeX 2.0 release will also emphasize outreach, including much more of a focus on users as well as developers. With this release, the community launches the EdgeX Ready program, a means for organizations and individuals to demonstrate their ability to work with EdgeX.
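
As a small taste of the reworked API set, every EdgeX 2.0 service exposes a v2 ping endpoint; this sketch assumes a stock local deployment with the default port mappings:

    import requests

    # Default localhost ports in a stock EdgeX 2.0 (Ireland) deployment;
    # adjust if your compose file maps them differently.
    services = {"core-data": 59880, "core-metadata": 59881, "core-command": 59882}

    for name, port in services.items():
        resp = requests.get(f"http://localhost:{port}/api/v2/ping", timeout=2)
        print(name, resp.json())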

Some Closing Thoughts

The full promise of IoT will be achieved when you combine the power of cloud and edge computing: delivering real value that allows businesses to analyze and act on their data with incredible agility and precision, giving them a critical advantage against their competitors.

The key challenges at the edge related to latency, network bandwidth, reliability, security, and OT heterogeneity cannot be addressed in cloud-only models – the edge needs its own answers.

EdgeX and the LF Edge ecosystem maximize user choice and flexibility and enable effective collaboration across multiple vertical markets at the edge helping to power the next wave of business transformation. Avoid the risk of getting left behind. To learn more, please visit the EdgeX website and LF Edge website and get involved!

 

Akraino’s Connected Vehicle Blueprint Gets an Update


By Tao Wang

Tencent Future Network Lab, one of Tencent’s laboratories focused on 5G and intelligent transport research, standardization, and commercial rollout, has provided a commercialized 5G Cooperative Vehicle Infrastructure System (CVIS) that relies on road-side units (RSUs), road sensing equipment (such as cameras), and 5G base stations to close the information loop among its connected vehicles. In addition, the edge cloud computing of the system is connected to a central cloud (a traditional data center) through public or dedicated network lines (such as 4G/5G, CPE, or carriers’ leased lines), which brings large-scale network coverage via 5G base stations, makes up for the gaps in roadside network coverage, and achieves vehicle-road-cloud-network integration by leveraging the computing power of the central cloud.

Edge cloud computing infrastructure can improve the data processing capacity of existing roadside infrastructure. With the deployment of an AI processing workstation, the edge cloud infrastructure can analyze all objects, i.e., traffic participants and events, in the video captured by roadside cameras and other sensor equipment, including vehicles, pedestrians, traffic accidents, etc.

The main functions are as follows:

1) Object recognition and detection: recognizes and detects road traffic participants and events, including vehicles, pedestrians, scattered objects, traffic accidents, etc.

2) Object tracking: tracks road objects accurately, and recognizes and excludes duplicated tracking when multiple cameras capture the same target within an overlapping area.

3) Location, velocity, and direction: locates objects in real time and calculates their speed and direction.

4) GNSS high-precision positioning: based on the original GNSS position measured by the vehicle, the edge cloud can realize high-precision positioning without a special RTK terminal. Additionally, the positioning result can be delivered to connected vehicles and to other modules of the edge cloud to determine the scenes that trigger certain CVIS and service functions. Compared to the conventional approach of using GPS as the positioning reference, accuracy is greatly improved without increasing cost.

5) Abnormal scene recognition: recognizes scenes, as shown in the following figures, including emergency braking, parking, abnormal lane changes, left turns, reverse driving, large trucks, vulnerable traffic participants, scattered objects, lane-level congestion, etc.

Visual recognition of emergency braking

Visual recognition of a vehicle left turn

Visual recognition of vulnerable traffic participants

 

As the open platform for the edge cloud and central cloud in Tencent’s Cooperative Vehicle Infrastructure System, we built a vehicle-road cooperative edge computing platform with good ecosystem openness, based on Tencent’s cloud-edge collaborative system called TMEC (Tencent MEC). TMEC complies with the latest 5G standards specified by 3GPP, to which Tencent Future Network Lab has contributed more than 400 standards contributions on 5G V2X, MEC, and other technical areas such as cloud-based multimedia services since 2018.

Functionality includes:

1) Flexible integration of third-party systems: the platform is completely decoupled from, yet friendly to integration with, third-party systems. Within the system, the raw data collected by roadside infrastructure can be delivered directly to third-party systems via transparent forwarding. In addition, the edge cloud and central cloud connect with third-party systems (such as a traffic monitoring center) through specific open data interfaces. The edge cloud can share structured V2X information with third-party systems, while the central cloud can provide a big data interface to third-party systems as well as big data processing and analysis functions for third-party applications.

2) Seamless connection to 5G/V2X networks: the platform lets connected-vehicle services deployed on the edge cloud seamlessly connect to a 5G/V2X network, and it can simultaneously adapt V2X messages into standardized data formats for transmission over 5G base stations (Uu interface) and C-V2X RSUs (PC5 interface). Additionally, the platform supports 5G Internet of Vehicles and cloud gaming in high-speed vehicle scenarios, using the latest 5G network slicing technology, which can be activated by application and device location. AI-based weak-network detection, MEC resource scheduling for internet services, and other technologies have been adopted in the platform to achieve seamless 5G/V2X integration. The platform adopts Tencent’s innovative microservices framework to realize service-oriented governance of edge cloud services for third parties. The microservices framework has been adopted by 3GPP as one of the four main solutions for 5G core network service-based architecture evolution (eSBA) (the other three come from traditional telecom equipment providers) and has been partially adopted in 3GPP standards TS 23.501 and TS 23.502.

3) Low cost: thanks to the openness of the platform, it can make full use of existing infrastructure and flexibly aggregate information from third-party systems, reducing costs for customers. The current system integrates Intel’s low-cost computing hardware in the edge cloud, cutting computing costs by 50% and edge cloud energy consumption by 60%. In addition, the cloud-based GNSS high-precision positioning approach can reduce the cost of connected vehicle terminals by about 90% compared to the traditional RTK solution. The solution slightly reduces positioning accuracy but still meets the requirements of lane-level positioning.

To learn more about Akraino’s Connected Vehicle blueprint, visit the wiki here: https://wiki.akraino.org/pages/viewpage.action?pageId=9601601 .

Telco Cooperation Accelerates 5G and Edge Computing Adoption


Re-posted with permission from section.io

Cooperation among telco providers has been ramping up over the last decade in order to catalyze innovation in the networking industry. A variety of working groups have formed and in doing so, have successfully delivered a host of platforms and solutions that leverage open source software, merchant silicon, and disaggregation, thereby offering a disruptive value proposition to service providers. The open source work being developed by these consortiums is fundamental in accelerating the development of 5G and edge computing.

Telco Working Groups

 

Working groups in the telco space include:

Open Source Networking

The Linux Foundation hosts over twenty open source networking projects in order to “support the momentum of this important sector”, helping service providers and enterprises “redefine how they create their networks and deliver a new generation of services”. Projects supported by Open Source Networking include Akraino (working toward an open source software stack that supports high-availability cloud services optimized for edge computing systems and applications), FRRouting (an IP routing protocol suite developed for Linux and Unix platforms, which includes protocol daemons for BGP, IS-IS, and OSPF), and the IO Visor Project (which aims to help enable developers in the creation, innovation, and sharing of IO and networking functions). The ten largest networking vendors worldwide are active members of the Linux Foundation.

LF Networking (LFN)

LFN focuses on seven of the top Linux Foundation networking projects with the goal of increasing “harmonization across platforms, communities, and ecosystems”. The Foundation does so by helping facilitate collaboration and high-quality operational performance across open networking projects. The seven projects, all open to participation (pending acceptance by each group), are:

  • FD.io – The Fast Data Project, focused on data IO speed and efficiency;
  • ONAP – a platform for real-time policy-driven orchestration and automation of physical and virtual network functions (see more below);
  • Open Platform for NFV – OPNFV – a project and community that helps enable NFV;
  • PNDA – a big data analytics platform for services and networks;
  • Streaming Networks Analytics System – a framework that enables the collection, tracking and accessing of tens of millions of routing objects in real time;
  • OpenDaylight Project – an open, extensible and scalable platform built for SDN deployments;
  • Tungsten Fabric Project – a network virtualization solution focused on connectivity and security for virtual, bare-metal or containerized workloads.

ONAP

One of the most significant LFN projects is the Open Network Automation Platform (ONAP). 50 of the world’s largest tech providers, network and cloud operators make up ONAP, together representing more than 70% of worldwide mobile subscribers. The goal of the open source software platform is to help facilitate the orchestration and management of large-scale network workloads and services based on Software Defined Networking (SDN) and Network Functions Virtualization (NFV).

Open Networking Foundation (ONF)

The Open Networking Foundation is an independent operator-led consortium, which hosts a variety of projects with its central goal being “to catalyze a transformation of the networking industry” via the promotion of networking through SDN and standardizing the OpenFlow protocol and related tech. Its hosted projects focus on white box economics, network disaggregation, and open source software. ONF recently expanded its reach to include an ecosystem focus on the adoption and deployment of disruptive technologies. The ONF works closely with over 150 member companies; its Board includes members of AT&T, China Unicom, and Google. It merged with the ON.Lab in 2016. The group has released open source components and integrated platforms built from those components, and is working with providers on field trials of these technologies and platforms. Guru Parulkar, Executive Director of ONF, gave a really great overview of the entire ONF suite during his keynote at the Open Networking Summit.

Open Mobile Evolved Core (OMEC)

One of the most notable projects coming out of ONF is the Open Mobile Evolved Core (OMEC). OMEC is the first open source Evolved Packet Core (EPC) that has a full range of features, and is scalable and capable of high performance. The EPC connects mobile subscribers to the infrastructure of a specific carrier and the Internet.

Major service providers, such as AT&T, SK Telecom, and Verizon, have already shown support for CORD (Central Office Re-architected as a Datacenter) which combines NFV, SDN, and the elasticity of commodity clouds to bring datacenter economics and cloud agility to the Telco Central Office. This allows the operator to manage their Central Offices using declarative modeling languages for agile, real-time configuration of new customer services. OMEC has been specifically developed to cope with the “ensuing onslaught of devices coming online as part of the move to 5G and IoT”. It can be used either as a stand-alone EPC or via the COMAC platform.

A Shift Toward Container Network Functions

Network functions virtualization (NFV) has been an important factor in enabling the move towards a 5G future by helping to virtualize the network’s different appliances, including firewalls, load balancers, and routers. Specifically, it looks to enable network slicing, allowing multiple virtual networks to be created atop a shared physical infrastructure, empowering numerous use cases and new services. NFV has also been seen as important in the enabling of the distributed cloud, aiding in the creation of programmable, flexible networks for 5G.

Over the last couple of years, however, there has been a shift towards cloud-native network functions (CNF) and containerization as a lighter-weight alternative to NFV software. This follows a decision made by ONAP and Kubernetes working groups within The Linux Foundation to work together on a CNF architecture.

According to a report in Container Journal, “Arpit Joshipura, general manager for networking at The Linux Foundation, says that while telecommunications providers have been making extensive investments in NFV software based on virtual machines, it’s become apparent to many that containers offer a lighter-weight approach to virtualizing network services.”

Joshipura says CNFs offer greater portability and scalability and many organizations are beginning to load VNFs in Docker containers or deploy them using Kubernetes technologies like KubeVirt to make them more portable.

This has gone even further recently, according to ONF, with telecom operators increasingly adopting cloud-native solutions in their data centers and moving their operations away from traditional Central Office (CO). In a recent blog post, ONF says, “3GPP standardized 5G core will be cloud-native at start, which means telcos should be able to run and operate Containerized Network Functions (CNF) in their CO in the near future.” At a recent ONOS/CORD/K8s tech seminar in Seoul, eight telcos, vendors and research institutions demonstrated this in action, sharing stories about their containerization journeys.

Kubernetes Enabling 5G

News out of AT&T in February has led to industry speculation that Kubernetes is now set to become the de facto operating system for 5G. The carrier said it would be working in conjunction with open source infrastructure developer Mirantis on Kubernetes-controlled infrastructure to underlie its 5G network.

In an interview with Telecompetitor, Mirantis Co-Founder and Chief Marketing Officer Boris Renski said he believes that other carriers will adopt a similar approach as they deploy 5G, and described Kubernetes as the next logical step after cloud computing and virtual machines. Large companies like Google need to run applications such as search across many servers, and Kubernetes enables exactly that.

Renski said that Kubernetes is becoming the go-to standard for telco companies looking to implement mobile packet core and functionality like VNFs because it is open source. A further key appeal of Kubernetes and containerization is that it offers the potential for a flexible, affordable and streamlined backend, essential factors to ensuring scalable and deployable 5G economics.

There are still a number of areas to be worked out as the shift to Kubernetes happens, as pointed out by AvidThink’s Roy Chua in Fierce Wireless recently. Network functions will need to be rethought to fit the architecture of microservices, for instance, and vendors will have to explore complex questions, such as how to best deploy microservices with a Kubernetes scheduler to enable orchestration and auto-scaling of a range of services.

Democratizing the Network Edge

Nonetheless, the significant amounts of cooperation between telcos is indeed helping accelerate the move towards 5G and edge computing. The meeting point of cloud and access technologies at the access-edge offers plenty of opportunities for developers to innovate in the space. The softwarization and virtualization of the access network democratizes who can gain access – anyone from smart cities to manufacturing plants to rural communities. Simply by establishing an access-edge cloud and connecting it to the public Internet, the access-edge can open up new environments, and in doing so enable a whole new wave of innovation from cutting-edge developers.

There are several reasons to be hopeful about the movement towards this democratization: first, there is demand for 5G and edge computing – enterprises in the warehouse, factory, and automotive space, for instance, increasingly want to use private 5G networks for physical automation use cases; secondly, as we have looked at here, the open-sourcing of hardware and software for the access network offers the same value to anyone and is a key enabler of 5G and edge computing for the telcos, cable and cloud companies; finally, spectrum is becoming increasingly available with countries worldwide starting to make unlicensed or gently licensed spectrum for 5G deployments available. The U.S. and Germany are leading the way.