Akraino R5: Spotlight on Integrated Edge Cloud, Type2 — LightWeight Multi-Access Edge Cloud (MEC) on AWS

By Akraino, Blog

By Vinothini Raju

Integrated Edge Cloud (IEC) is a family of blueprints within the Akraino project, which is developing a fully integrated edge infrastructure solution. This open source software stack provides critical infrastructure to enable high performance, reduce latency, improve availability, lower operational overhead, provide scalability, address security needs, and improve fault management within edge computing. The IEC project addresses multiple edge use cases and industries, not just telco: the group is developing solutions and support for carriers, providers, and IoT networks. The IEC Type2 blueprint, specifically, focuses on medium-scale edge cloud deployments.

Meanwhile, Multi-Access Edge Cloud (MEC) offers cloud computing capabilities at the edge of the network. Collecting and processing data closer to subscribers reduces latency and data congestion and improves the subscriber experience with real-time updates. A cloud native implementation at the edge realizes the full potential of the cloud, allowing developers to focus on writing scalable, highly reliable applications instead of worrying about operational constraints. Developing applications at the edge goes beyond scalability requirements: edge native applications need real-time processing, are latency-sensitive, and are hungry for high bandwidth.

Modern developer platforms should offer a unified experience for building both cloud and edge native applications. Using these cloud and edge native sandbox environments, developers should be able to simulate a real-time environment to test applications for mobility, caching, performance, and so on. Out-of-the-box integrations, such as AI/ML frameworks like Kubeflow, data processing frameworks like EdgeX Foundry to synthesize data at the edge, and monitoring and logging stacks like Prometheus, Grafana, and EFK/ELK, along with a no code/low code experience, can accelerate time to delivery and promote developer innovation.

Apart from providing a rich developer experience, these sandbox environments should be lightweight, quick to provision, and extensible later into a production-ready environment.

Additionally, leveraging public cloud providers like AWS for a distributed cloud at the edge can greatly reduce the CAPEX/OPEX to set up a MEC Cloud.

Considering these requirements, the gopaddle team has proposed and developed an Akraino blueprint that provisions a lightweight Multi-Access Edge Cloud on AWS, leveraging microk8s.

A word about microk8s

Microk8s is a lightweight Kubernetes distribution from Canonical. It uses the snap package manager to spin up a Kubernetes cluster in less than a minute. The snap installer consumes as little as 192 MB of RAM, and the Kubernetes distribution as little as 540 MB. It is fairly simple to spin up a single-node cluster, which can later be extended to a multi-node cluster. Once there are three or more nodes, a High Availability mode can be enabled, making the cluster production-ready. Microk8s offers out-of-the-box toolchains such as Kubeflow for machine learning workloads, Prometheus for monitoring, a built-in image registry, and more.
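For orientation, the lifecycle described above can be sketched with a few commands (a sketch based on the upstream microk8s tooling; the add-on names and node details are illustrative and may differ by microk8s version):

```shell
# Install microk8s via the snap package manager
sudo snap install microk8s --classic

# Wait for the single-node cluster to come up
microk8s status --wait-ready

# Enable out-of-the-box add-ons, e.g. monitoring and the built-in registry
microk8s enable prometheus registry

# Print a join command to extend to a multi-node cluster
microk8s add-node
# Run the printed "microk8s join <ip>:25000/<token>" on each new node.
# With three or more nodes the HA control plane can come up;
# verify with "microk8s status".
```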

While there are other lightweight Kubernetes distributions, microk8s offers better CPU, memory, and disk utilization. Thus, microk8s stands out as a good candidate for a developer sandbox environment that can be scaled to a production environment.

Reference: Sebastian Böhm and Guido Wirtz, Distributed Systems Group, University of Bamberg, Bamberg, Germany. http://ceur-ws.org/Vol-2839/paper11.pdf

AWS Wavelength – Bringing Cloud Experience to the Edge

AWS has partnered with telecom carriers such as Verizon and Vodafone to offer edge environments called AWS Wavelength Zones in selected regions. AWS VPCs can be extended into Wavelength Zones, bringing the AWS experience to the edge environment.

Using an AWS carrier gateway, developers can connect to the carrier network and use its 5G network for developing AI/ML or embedded applications. This gives developers a real-time environment in which to validate application performance, latency, and caching, and it provides a unified experience across enterprise cloud development and edge development.
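As an illustrative sketch of what this looks like in practice (resource types from the Terraform AWS provider; the VPC reference, CIDR block, and Wavelength Zone name are placeholders):

```hcl
# Attach the VPC to the carrier network via a carrier gateway
resource "aws_ec2_carrier_gateway" "edge" {
  vpc_id = aws_vpc.main.id
}

# Subnet placed in a Wavelength Zone (zone name is illustrative)
resource "aws_subnet" "wavelength" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.8.0/24"
  availability_zone = "us-east-1-wl1-bos-wlz-1"
}

# Route edge subnet traffic through the carrier gateway
resource "aws_route_table" "edge" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block         = "0.0.0.0/0"
    carrier_gateway_id = aws_ec2_carrier_gateway.edge.id
  }
}
```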

gopaddle – No Code for Cloud & Edge Native Development

gopaddle is a no code platform to build, deploy, and maintain cloud and edge native applications across hybrid cloud and edge environments. The platform eases application onboarding with intelligent scaffolding, provides out-of-the-box DevSecOps automation through integrations with 30+ third-party tools, offers pre-built, ready-to-use application templates such as EdgeX Foundry, and provides centralized provisioning and governance of multi-cloud and hybrid Kubernetes environments.

LightWeight MEC Blueprint

The blueprint leverages three main building blocks, microk8s, AWS, and gopaddle, to provision a lightweight MEC that acts as a developer sandbox and can be extended for production deployments. Provisioning of the microk8s-based MEC environment on AWS is automated using Terraform templates. These templates can be uploaded to and centrally managed through gopaddle, and multiple environments can be provisioned from them.
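The shape of such a template might look like the following (a hedged sketch, not the blueprint's actual template; the AMI, instance type, and bootstrap script are placeholders):

```hcl
variable "ami_id" {} # Ubuntu AMI for the target region (placeholder)

resource "aws_instance" "mec_node" {
  ami           = var.ami_id
  instance_type = "t3.medium"

  # Bootstrap microk8s on first boot
  user_data = <<-EOF
    #!/bin/bash
    snap install microk8s --classic
    microk8s status --wait-ready
  EOF

  tags = {
    Name = "microk8s-mec-sandbox"
  }
}
```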

For more information on the IEC Type 2 Akraino blueprint available with the R5 release, visit this page: https://wiki.akraino.org/display/AK/IEC+Type+2+Architecture+Document+for+R5

Akraino R5: Spotlight on the AI Edge — Federated Machine Learning at the Edge


This is the first in a series of brief posts focused on Akraino use cases and blueprints. Thanks to the many community members for creating the blueprints and providing these helpful summaries! 

By: Enzo Zhang, Zifan Wu, Haihui Wang, and Ryan Anderson

In Federated Machine Learning (ML), the attributes of each data row come from more than one edge provider. No single side can, on its own, hold all of the data required for the model to function as designed, and specific edge sites can be quite limited in edge computing capability for a variety of reasons and conditions.

The application must therefore span more than one edge while collecting and exchanging data between edges, and a method must be applied to ensure that communication between these edges is secure enough for all participants.

Here are some characteristics of AI edge application use cases that illustrate the challenge:

  • Deployment across more than one edge
  • No spare storage space
  • Only a few computing units
  • Data processed locally
  • Limited network capability
  • Data shared between edges
  • Edges working together on ML
  • Data security requirements

Applications group data from the edges based on security methods, while data privacy is still managed by the Federated Machine Learning framework. Data, as input to an ML model, resides at a central site; data exchange occurs after the ML processes are done, and the required data is sent back to each edge for its next processing step.

Several key components are required to make this use case work well and machine learning run successfully:

  1. Data transfer and grouping
  2. Data collection and distribution
  3. Federated Machine Learning framework
  4. Security encryption algorithms

This functionality, integrated with the framework, enables edge applications to run correctly and securely.

For more information on the Federated Machine Learning blueprint, please visit this section of the Akraino wiki.

Akraino Project Holds Inaugural Akraino Community Awards


The Akraino community recently held its first community awards program to highlight and recognize some of the outstanding work happening all across the project. 

The awards, which comprise four categories of three winners each, were presented virtually in a private session, but you can read below to learn more about the award categories, the winners, and the work involved in earning these awards.
Award Categories & 2021 Winners:

Women of the year

This award recognizes outstanding women within the (mostly male) Akraino community. It identifies individuals who have been champions for the Akraino community and have provided significant contributions as committers, contributors, sub-committee members, and/or TSC members.

Winners: Jane Shen, Tina Tsou (Arm), and Yolanda Robla Mota (Red Hat)

Top Blueprints of the year

This is an exclusive award category recognizing the blueprint teams that have generated a huge impact within the Akraino community. The award recognizes blueprints that showcase significant and rich technical features; successful collaborations (within the Akraino community or with universities and NGOs); and landed applications whose features were successfully showcased through PoCs, field trials, and production.

Winners:  ICN, PCEI, and Private LTE/5G 

Top Akrainoians

Champion of Akraino! This individual is a great help to the entire Akraino community in building and spreading awareness of Akraino blueprints within the industry. This person has really made an impact in the community and helped to drive additional interest and excitement, strengthen the existing community, and promote the value of becoming part of Akraino. 

Winners:  Tina Tsou (Arm),  Deepak Kataria (IP Junction),  and Randy Stricklin (Trane)

Top Committers

This award recognizes those who excel in their technical contributions to Akraino projects. The Top Committers have made significant contributions to key features benefitting the Akraino project as a whole, contributions so important that they have both uplifted the Akraino community and created an external impact.

Winners: James Li (China Mobile), Deepak Kataria (IP Junction), and Ricardo Noriega De Soto (Red Hat)

Congratulations to the entire Akraino community for all of the hard work, with an extra shout-out to this year’s winners!

 

Where the Edges Meet and Apps Land: Akraino Release 4 Public Cloud Edge Interface


Written by Oleg Berzin, Ph.D., Co-chair of the Akraino Technical Steering Committee and Fellow, Technology and Architecture at Equinix

Introduction

The Public Cloud Edge Interface (PCEI) blueprint was introduced in our last blog. This post describes the initial implementation of PCEI in Akraino Release 4. In summary, PCEI R4 is implemented based on the Edge Multi-Cluster Orchestrator (EMCO, a.k.a. ONAP4K8S) and demonstrates:

  • Deployment of Public Cloud Edge (PCE) Apps from two clouds: Azure (IoT Edge) and AWS (GreenGrass Core).
  • Deployment of a 3rd-Party Edge (3PE) App: an implementation of the ETSI MEC Location API Server.
  • End-to-end operation of the deployed Apps using a simulated Low Power Wide Area (LPWA) IoT client and edge software.

Before describing the specifics of the implementation, it is useful to revisit PCEI architectural concepts.

The purpose of Public Cloud Edge Interface (PCEI) is to implement a set of open APIs and orchestration functionalities for enabling Multi-Domain interworking across Mobile Edge, Public Cloud Core and Edge, the 3rd-Party Edge functions as well as the underlying infrastructure such as Data Centers, Compute Hardware and Networks. Interfaces between the functional domains are shown in the figure below:

The detailed PCEI Reference Architecture is shown in the figure below. For the full description of the PCEI Reference Architecture please refer to the PCEI Architecture Document.

There are some important points to highlight regarding the relationships between the Public Cloud Core (PCC) and the Public Cloud Edge (PCE) domains. These relationships influence where the orchestration tasks apply and how the edge infrastructure is controlled. Both domains have some fundamental functions such as the underlying platform hardware, virtualization environment as well as the applications and services. In the Public Cloud context, we distinguish two main relationship types between the PCC and PCE functions: Coupled and Decoupled. The table below shows this classification and provides examples.

The PCC-PCE relationships also involve interactions such as Orchestration, Control and Data transactions, messaging and flows. As a general framework, we use the following definitions:

  • Orchestration: Automation and sequencing of deployment and/or provisioning steps. Orchestration may take place between the PCC service and PCE components and/or between an Orchestrator such as the PCEI Enabler and PCC or PCE.
  • Control: Control Plane messaging and/or management interactions between the PCC service and PCE components.
  • Data: Data Plane messaging/traffic between the PCC service and the PCE application.

The figure below illustrates PCC-PCE relationships and interactions. Note that the label “O” designates Orchestration, “C” designates Control and “D” designates Data as defined above.

The PCEI implementation in Akraino Release 4 shows examples of two App-Coupled PCC-PCE relationships: Azure IoT Edge and AWS GreenGrass Core, and one Fully Decoupled PCE-PCC relationship: an implementation of ETSI MEC Location API Server.

In Release 4, PCEI Enabler does not support Hardware (bare metal) or Kubernetes orchestration capabilities.

PCEI in Akraino R4

Public Cloud Edge Interface (PCEI) is implemented based on Edge Multi-Cluster Orchestrator (EMCO, a.k.a ONAP4K8S). PCEI Release 4 (R4) supports deployment of Public Cloud Edge (PCE) Apps from two Public Clouds: Azure and AWS, deployment of a 3rd-Party Edge (3PE) App: an implementation of ETSI MEC Location API App, as well as the end-to-end operation of the deployed PCE Apps using simulated Low Power Wide Area (LPWA) IoT client and Edge software.

Functional Roles in PCEI Architecture

Key features and implementations in Akraino Release 4:

  • Edge Multi-Cloud Orchestrator (EMCO) Deployment
    • Using ONAP4K8S upstream code
  • Deployment of Azure IoT Edge PCE App
    • Using Azure IoT Edge Helm Charts provided by Microsoft
  • Deployment of AWS Green Grass Core PCE App
    • Using AWS GGC Helm Charts provided by Akraino PCEI BP
  • Deployment of PCEI Location API App
    • Using PCEI Location API Helm Charts provided by Akraino PCEI BP
  • PCEI Location API App Implementation based on ETSI MEC Location API Spec
    • Implementation is based on the ETSI MEC ISG MEC012 Location API described using OpenAPI.
    • The API is based on the Open Mobile Alliance’s specification RESTful Network API for Zonal Presence
  • Simulated IoT Client Code for end-to-end validation of Azure IoT Edge
  • Azure IoT Edge Custom Software Module Code for end-to-end validation of Azure IoT Edge
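For flavor, querying a Zonal Presence style Location API looks roughly like this (host, port, and resource paths are illustrative assumptions; the normative resource definitions are in ETSI MEC012 and the OMA specification):

```shell
# List the UEs currently present in a given zone (paths are placeholders)
curl "http://mec-host:8080/location/v1/users?zoneId=zone01"

# Retrieve status information for the zone itself
curl "http://mec-host:8080/location/v1/zones/zone01"
```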

 

End-to-End Validation

The description below shows an example of the end-to-end deployment and validation of operation for Azure IoT Edge. The PCEI R4 End-to-End Validation Guide provides examples for AWS GreenGrass Core and the ETSI MEC Location API App.

 

Description of components of the end-to-end validation environment:

  • EMCO – Edge Multi-Cloud Orchestrator deployed in the K8S cluster.
  • Edge K8S Clusters – Kubernetes clusters on which PCE Apps (Azure IoT Edge, AWS GGC), 3PE App (ETSI Location API Handler) are deployed.
  • Public Cloud – IaaS/SaaS (Azure, AWS).
  • PCE – Public Cloud Edge App (Azure IoT Edge, AWS GGC)
  • 3PE – 3rd-Party Edge App (PCEI Location API App)
  • Private Interconnect / Internet – Networking between IoT Device/Client and PCE/3PE as well as connectivity between PCE and PCC.
  • IoT Device – Simulated Low Power Wide Area (LPWA) IoT Client.
  • IP Network (vEPC/UPF) – Network providing connectivity between the IoT Device and PCE/3PE.
  • MNO DC – Mobile Network Operator Data Center.
  • Edge DC – Edge Data Center.
  • Core DC – Public Cloud.
  • Developer – an individual/entity providing PCE/3PE App.
  • Operator – an individual/entity operating PCEI functions.

The end-to-end PCEI validation steps are described below (the step numbering refers to Figure 5):

  1. Deploy EMCO on K8S
  2. Deploy Edge K8S clusters
  3. Onboard Edge K8S clusters onto EMCO
  4. Provision Public Cloud Core Service and Push Custom Module for IoT Edge
  5. Package Azure IoT Edge and AWS GGC Helm Charts into EMCO application tar files
  6. Onboard Azure IoT Edge and AWS GGC as a service/application into EMCO
  7. Deploy Azure IoT Edge and AWS GGC onto the Edge K8S clusters
  8. Verify that all pods come up and register with the Azure IoT Hub and AWS IoT Core
  9. Deploy a custom LPWA IoT module into Azure IoT Edge on the worker cluster
  10. Pass LPWA IoT messages from a simulated IoT device to Azure IoT Edge, decode the messages, and send them to the Azure IoT Hub
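Steps 3-7 are typically driven through EMCO's command-line client; as a hedged sketch (the configuration and resource file names and their contents are placeholders, not the exact PCEI artifacts):

```shell
# Register clusters, projects, and the app's Helm charts with EMCO
emcoctl --config emco-cfg.yaml apply -f prerequisites.yaml -v values.yaml

# Instantiate the composite app (e.g. Azure IoT Edge) on the edge clusters
emcoctl --config emco-cfg.yaml apply -f instantiate.yaml -v values.yaml
```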

For more information on PCEI R4: https://wiki.akraino.org/x/ECW6AQ

PCEI Release 5 Preview

PCEI Release 5 will feature expanded capabilities and introduce new features as shown below:

For a short video demonstration of PCEI Enabler Release 5 functionality based on ONAP/CDS with GIT integration, NBI API Handler, Helm Chart Processor, Terraform Plan Executor and EWBI Interconnect please refer to this link:

https://wiki.akraino.org/download/attachments/28968608/PCEI-ONAP-DDF-Terraform-Demo-Audio-v5.mp4?api=v2

Acknowledgements

Project Technical Lead: 

Oleg Berzin, Equinix

Committers:

Jian Li, China Mobile
Oleg Berzin, Equinix
Tina Tsou, Arm

Contributors: 

Mehmet Toy, Verizon
Tina Tsou, Arm
Gao Chen, China Unicom
Deepak Vij, Futurewei
Vivek Muthukrishnan, Aarna Networks
Kavitha Papanna, Aarna Networks
Amar Kapadia, Aarna Networks

Summary: Akraino’s Integrated Edge Cloud (IEC) Type 4 AR/VR Blueprint Issues 4th Update


By Bart Dong

IEC Type 4 focuses on AR/VR applications running at the edge. In general, the architecture consists of three layers:

  • We use IEC as the IaaS layer.
  • We deploy the TARS framework as the PaaS layer to govern backend services.
  • For the SaaS layer, an AR/VR application backend is needed.

There are multiple use cases for AR/VR. For Release 4, we focused on building the infrastructure and the virtual classroom application, highlighted here:

Let’s have a quick look at the Virtual Classroom app. Virtual Classroom is a basic app that lets you experience a virtual reality simulation of a classroom with teachers and students.

Generally, it has two modes: Teacher mode and Student mode.

  • In Teacher mode
    • You will see the classroom from the teacher’s view.
    • You can see some students in the classroom listening to your presentation.
  • In Student mode
    • You will see the classroom from a student’s view.
    • You can see the teacher and other students on the remote side.

The whole architecture, shown below, consists of three nodes: Jenkins Master, Tars Master, and TARS Agent with AR/VR Blueprint.

  • For the Jenkins Master, we deployed a Jenkins master in our private lab for testing.
  • For the TARS Master, we deployed the TARS platform for serverless use case integration.
  • For the TARS Agent, we deployed the Virtual Classroom back end on this node, plus two front-end clients (the Virtual Classroom teacher and student) on KVM.

It’s not a very complicated architecture. As mentioned before, the TARS framework plays an important role as the PaaS layer, governing backend services. Let’s go a little further into TARS.

TARS is a high-performance microservice framework based on a name service and the TARS protocol, with an integrated administration platform and hosted services scheduled flexibly. TARS supports Arm and x86, and multiple platforms including macOS, Linux, and Windows.

TARS can quickly build systems and automatically generate code, balancing ease of use and high performance. At the same time, TARS supports multiple programming languages, including C++, Golang, Java, Node.js, PHP, and Python. TARS helps developers and enterprises quickly build their own stable, reliable distributed applications in a microservices manner, so they can focus on business logic and effectively improve operational efficiency.

TARS operates at the PaaS layer. It can run on physical machines, virtual machines, and containers, including Docker and Kubernetes. TARS service data can be stored in a cache, a database, or a file system.

The TARS framework supports the Tars protocol, TUP, SSL, HTTP 1/2, protocol buffers, and other customized protocols. The TARS protocol is IDL-based, binary, extensible, and cross-platform. These features allow servers written in different languages to communicate with each other using RPC, and TARS supports different RPC styles: synchronous, asynchronous, and one-way requests.
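As a small example of the IDL, a TARS interface definition (adapted from the upstream TARS quick-start; the module and method names are illustrative) looks like:

```
// hello.tars: the tars2cpp/tars2go tools generate RPC stubs from this
module TestApp
{
    interface Hello
    {
        int testHello(string sReq, out string sRsp);
    };
};
```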

TARS has multiple functions for governing services and can integrate with other DevOps tools, such as Jenkins in the IEC Type 4 blueprint.

For Release 4, we updated TARS to version 2.4.13, which supports multiple new features, such as an API gateway (named TarsGateway) that supports translating HTTP to the TARS protocol.

We contributed a series of TARS APIs to the Akraino API map, including application management APIs and service management APIs.

And last year, on March 10th, TARS evolved into a non-profit microservices foundation under the Linux Foundation umbrella.

The TARS Foundation is a neutral home for open source microservices projects that empower any industry to quickly turn ideas into applications at scale. It is not only TARS but a microservices ecosystem that solves microservices problems with TARS characteristics, such as:

  • Agile Development with DevOps best practices
  • Built-in comprehensive service governance
  • Multiple languages supported
  • High performance with Scalability

Here is a summary of what we have done in release 4:

  • We focused on building the infrastructure and the virtual classroom application, as described above.
  • We deployed the AR/VR blueprint in Parserlabs and used Jenkins to make CI/CD available in the blueprint.
  • We updated TARS to version 2.4.13, which supports multiple new features, such as an API gateway (named TarsGateway) that supports translating HTTP to the TARS protocol.
  • We passed the security scan and the validation lab check.

For the coming Release 5, we are planning to deploy IEC Type 4 AR/VR on Kubernetes using K8STARS.

K8STARS is a convenient solution for running TARS services in Kubernetes. It maintains the native development capabilities of TARS, automatically handles name-service registration and configuration deletion, and supports smooth migration of existing TARS services to Kubernetes and other container platforms. K8STARS is a non-intrusive design, with no coupling to the operating environment.

A service can be run with a single command:

kubectl apply -f simpleserver.yaml
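In essence, simpleserver.yaml is an ordinary Kubernetes manifest; a minimal illustrative shape (not the actual file from the K8STARS repository) would be:

```yaml
# Illustrative only: a Deployment running a TARS service image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simpleserver
  template:
    metadata:
      labels:
        app: simpleserver
    spec:
      containers:
        - name: simpleserver
          image: example/simpleserver:latest  # placeholder image
```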

If you are interested in the TARS Foundation or TARS, please feel free to contact us via email at tars@tarscloud.org.

Anyone is welcome to join us in building the IEC Type 4 AR/VR blueprint and microservices ecosystem!  

Akraino’s Connected Vehicle Blueprint Gets an Update


By Tao Wang

Tencent Future Network Lab, one of Tencent’s laboratories focused on 5G and intelligent transport research, standardization, and commercial rollout, has delivered a commercial 5G Cooperative Vehicle Infrastructure System (CVIS). The system relies on road-side units (RSUs), road sensing equipment (such as cameras), and 5G base stations to close the information loop among its connected vehicles. In addition, the system’s edge cloud is connected to a central cloud (a traditional data center) through public or dedicated network lines (such as 4G/5G, CPE, or carriers’ leased lines). This brings large-scale network coverage via 5G base stations, compensates for gaps in roadside network coverage, and enables vehicle-road-cloud-network integration by using the computing power of the central cloud.

Edge cloud computing infrastructure can improve the data processing capacity of existing roadside infrastructure. With the deployment of an AI processing workstation, the edge cloud infrastructure can analyze all objects, i.e. traffic participants and events, in the video captured by road-side cameras and other sensor equipment, including vehicles, pedestrians, traffic accidents, etc.

The main functions are as follows:

1) Object recognition and detection: recognizes and detects road traffic participants and events, including vehicles, pedestrians, scattered objects, traffic accidents, etc.

2) Object tracking: tracks road objects accurately, and recognizes and excludes duplicate tracking when multiple cameras capture the same target within an overlapped area.

3) Location, velocity & direction: locates objects in real time and calculates their speed and direction.

4) GNSS high-precision positioning: based on the original GNSS position measured by the vehicle, the edge cloud can achieve high-precision positioning without a special RTK terminal. Additionally, the positioning result can be delivered to connected vehicles and to other modules of the edge cloud to determine the scenes that trigger certain CVIS and service functions. Compared to the conventional approach of using GPS as a positioning reference, accuracy is greatly improved without increasing cost.

5) Abnormal scene recognition: recognition of scenes, as shown in the following figures, including emergency braking, parking, abnormal lane changes, left turns, reverse driving, large trucks, vulnerable traffic participants, scattered objects, lane-level congestion, etc.

Visual recognition of emergency braking

Visual recognition of vehicle left turn

Visual recognition of vulnerable traffic participants

 

With Tencent’s Cooperative Vehicle Infrastructure System’s open platform for edge cloud and central cloud systems, we built a vehicle-road cooperative edge computing platform with good ecological openness, based on Tencent’s cloud-edge collaborative system, TMEC (Tencent MEC). TMEC complies with the latest 5G standards specified by 3GPP, to which Tencent Future Network Lab has contributed more than 400 standard contributions on 5G V2X, MEC, and other technical areas such as cloud-based multimedia services since 2018.

Functionality includes:

1) Flexible integration of third-party systems: the platform is completely decoupled from, yet friendly to integration with, third-party systems. Raw data collected by roadside infrastructure can be delivered directly to third-party systems via transparent forwarding. In addition, the edge cloud and central cloud connect with third-party systems (such as a traffic monitoring center) through specific open data interfaces: the edge cloud can share structured V2X information with third-party systems, while the central cloud can provide a big data interface, as well as big data processing and analysis functions, for third-party applications.

2) Seamless connection to 5G/V2X networks: the platform allows connected vehicle services deployed in the edge cloud to seamlessly connect to a 5G/V2X network, and it can simultaneously adapt V2X messages into a standardized data format for transmission over 5G base stations (Uu interface) and C-V2X RSUs (PC5 interface). The platform can also support 5G Internet of Vehicles and cloud gaming in high-speed vehicle scenarios, using the latest 5G network slicing technology, which can be activated by application and device location. AI-based weak-network detection, MEC resource scheduling for Internet services, and other technologies have been adopted in the platform to achieve seamless 5G/V2X integration. The platform adopts Tencent’s innovative microservice framework to provide service-oriented governance of edge cloud businesses for third parties. This microservice framework has been adopted by 3GPP as one of the four main solutions for 5G core network service architecture evolution (eSBA) (the other three are traditional telecom equipment provider solutions) and has been partially adopted into 3GPP standards TS 23.501 and TS 23.502.

3) Low cost: owing to the openness of the platform, it can make full use of existing infrastructure and flexibly aggregate information from third-party systems, reducing cost for customers. The current system integrates Intel’s low-cost computing hardware in the edge cloud, which reduces computing cost by 50% and the energy consumption of the edge cloud by 60%. In addition, using the cloud-based GNSS high-precision positioning approach, the cost of connected-vehicle terminals can be reduced by about 90% compared to the traditional RTK solution. This slightly reduces positioning accuracy but still meets the requirements of lane-level positioning.

To learn more about Akraino’s Connected Vehicle blueprint, visit the wiki here: https://wiki.akraino.org/pages/viewpage.action?pageId=9601601

Akraino Release 4 Enables Kubernetes Across Multiple Edges, Integrates across O-RAN, Magma, and More

By Akraino, Announcement
  • 7 new Akraino R4 blueprints (25+ total)
  • Akraino is Kubernetes-ready with K8s-enabled blueprints across 4 different edge segments (Industrial IoT, ML, Telco, and Public Cloud)
  • New and updated blueprints also target ML, Connected Car, Telco Edge, Enterprise, AI, and more

SAN FRANCISCO, February 25, 2021 – LF Edge, an umbrella organization within the Linux Foundation that creates an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced the availability of Akraino Release 4 (“Akraino R4”). Akraino’s fourth release enables additional blueprints that support various deployments of Kubernetes across the edge, from Industrial IoT, to Public Cloud, Telco, and Machine Learning (ML).

Launched in 2018, and now a Stage 3 (or “Impact” stage) project under the LF Edge umbrella, Akraino delivers an open source software stack that supports a high-availability cloud stack optimized for edge computing systems and applications. Designed to improve the state of carrier edge networks, edge cloud infrastructure for enterprise edge, and over-the-top (OTT) edge, it enables flexibility to scale edge cloud services quickly, maximize applications and functions supported at the edge, and to improve the reliability of systems that must be up at all times. 

“Now on its fourth release, Akraino is a fully-functioning open edge stack,” said Arpit Joshipura, general manager, Networking, Edge and IoT, the Linux Foundation. “This latest release brings new blueprints that focus on Kubernetes-enablement across edges, as well as enhancements to other areas like connected cars, ML, telco and more. This is the power of open source in action and I am eager to see even more use cases of blueprint deployments.”  

About Akraino R4

Building on the success of R3, Akraino R4 again delivers a fully functional open source edge stack that enables a diversity of edge platforms across the globe. With R4, Akraino blueprints enable additional use cases, deployments, and PoCs with support for new levels of flexibility that scale Public Cloud Edge, Cloud Native Edge, 5G, Industrial IoT, Telco, and Enterprise Edge Cloud services quickly, by delivering community-vetted and tested edge cloud blueprints to deploy edge services.  

In addition, new use cases and new and existing blueprints provide an edge stack for Industrial Edge, Public Cloud Edge Interface, Federated ML, KubeEdge, Private LTE/5G, SmartDevice Edge, Connected Vehicle, AR/VR, AI at the Edge, Android Cloud Native, SmartNICs, Telco Core and Open-RAN, NFV, IOT, SD-WAN, SDN, MEC, and more.

Akraino R4 includes seven new blueprints, for a total of 25+, all tested and validated in real hardware labs supported by users and community members, along with an API map.

The 25+ “ready and proven” blueprints include both updates and long-term support for existing R1, R2, and R3 blueprints, and the introduction of seven new blueprints.

More information on Akraino R4, including links to documentation, code, installation docs for all Akraino Blueprints from R1-R4, can be found here. For details on how to get involved with LF Edge and its projects, visit https://www.lfedge.org/

Looking Ahead

The community is already planning R5, which will include more implementation of public cloud and telco synergy, more platform security architecture, increased alliance with upstream and downstream communities, and development of IoT Area and Rural Edge for tami-COVID19. Additionally, the community is expecting new blueprints as well as additional enhancements to existing blueprints. 

Upcoming Events

The Akraino community is hosting its developer-focused Spring virtual Technical Meetings, March 1-3. To learn more, visit https://events.linuxfoundation.org/akraino-technical-meetings-spring/ 

Don’t miss the Open Networking and Edge Executive Forum (ONEEF), a virtual event happening March 10-12, where subject matter experts from Akraino and other LF Edge communities will present on the latest open source edge developments. Registration is now open! 

Join Kubernetes on Edge Day, co-located with KubeCon + CloudNativeCon, to get in on the ground floor and shape the future intersection of cloud native and edge computing. The virtual event takes place May 4 and registration is just $20 US. 

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

 

Kaloom’s Startup Journey (Part 1)

By Akraino, Blog

Written by Laurent Marchand, Founder and CEO of Kaloom, an LF Edge member company

This blog originally ran on the Kaloom website. To see more content like this, visit their website.

When we founded Kaloom, we embarked upon our journey with the vision of creating a truly P4-programmable, cloud- and open standards-based software defined networking fabric. Our name, Kaloom comes from the Latin word “caelum” which means “cloud.”

Armed solely with our intrepid crew of talented and experienced engineers, we left our good-paying, secure jobs at one of the world’s largest incumbent networking vendors and set off on our adventure. This two-part blog is the story of why we founded Kaloom and where we stand today.

In our previous blog, “SDN’s Struggle to Summit the Peak of Inflated Expectations,” Sonny outlined the challenges posed by the initial vision of SDN and the attempt to solve them with VM-based overlays. Upcoming 5G deployments will only amplify those challenges, creating an urgent need for lightweight container-based solutions that satisfy 5G’s cost, space and power requirements.

5G Amplifying SDN’s Challenges

With theoretical peak data rates up to 100 times faster than current 4G networks, 5G enables incredible new applications and use cases such as autonomous vehicles, AR/VR, IoT/IIoT and the ability to share real-time medical imaging – just to name a few. Telecom, cloud and data center providers alike look forward to 5G forming a robust edge “backbone” for the industrial internet to drive innovative new consumer apps, enterprise use cases and increased revenue streams.

These new 5G-enabled apps require extremely low latency, which demands a distributed edge architecture that puts applications close to their data source and end users. When we founded Kaloom, incumbents were still selling the same technology platforms and architectural approach as they had with 4G; however, the economics of 5G are dramatically different.

The Pain Points are the Main Points – Costs, Revenues and Latency

For example, service providers’ revenues from smartphone users running 3G/4G were about $50 per month. However, connected cars will only generate about $1 or $2 per month, and installing 5G requires enormous upfront investment (CAPEX), not only in antennas but also in the backend servers, storage and networking switches required to support these apps. So, it was clear to us that the cost point for deploying 5G had to be much lower than for 4G. Even if 5G costs were the same as 4G rollouts, service providers still could not economically justify their revenue models throttling down to $1 or $2 per month.
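The squeeze described above can be made concrete with a back-of-the-envelope payback calculation. The $50 and $1-$2 monthly figures come from the article; the CAPEX amount, subscriber count, and the simple payback formula below are illustrative assumptions (real models would include OPEX, churn, and financing costs):

```python
def payback_months(capex: float, monthly_revenue_per_sub: float, subscribers: int) -> float:
    """Months of revenue needed to recover upfront CAPEX.

    Deliberately simplistic: ignores operating costs, churn, and financing.
    """
    return capex / (monthly_revenue_per_sub * subscribers)

CAPEX = 10_000_000   # hypothetical regional 5G buildout cost, USD
SUBS = 100_000       # hypothetical subscriber/device count

phones = payback_months(CAPEX, 50.0, SUBS)  # smartphone-era revenue
cars = payback_months(CAPEX, 1.5, SUBS)     # connected-car revenue

print(f"phones: {phones:.0f} months, connected cars: {cars:.0f} months")
```

At $50 per subscriber the hypothetical buildout pays back in a couple of months; at $1.50 per device it takes over five years, which is why the 5G cost point has to fall so sharply.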

The low latency requirement is another pain point, mainly because 5G-enabled high-speed apps won’t work properly with low throughput and high latency. Many of these new applications require an end-to-end latency below 10 milliseconds; unfortunately, typical public clouds are unable to fulfill such requirements.

To take advantage of lower costs via centralization, most hyperscale cloud providers have massive regional deployments, with only about six to 10 major facilities serving all of North America. If your cloud hub is 1,000 miles away, even the speed of light in fiber optics does not transfer data fast enough to meet the low-latency requirements of 5G apps such as connected cars. For example, the latency from New York to Amazon Web Services or Microsoft Azure in Northern Virginia is greater than 20 milliseconds.
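The physics behind this claim is easy to check. The sketch below estimates the best-case round-trip time over fiber; the refractive-index figure (~1.468 for glass fiber, so light travels roughly a third slower than in vacuum) is a standard approximation, and the result is a floor that ignores switching, routing, and queuing delays:

```python
# Back-of-the-envelope fiber latency estimate (illustrative, not measured).
C_VACUUM_KM_S = 299_792          # speed of light in vacuum, km/s
FIBER_INDEX = 1.468              # typical refractive index of glass fiber

def fiber_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, ignoring all equipment delays."""
    speed_km_s = C_VACUUM_KM_S / FIBER_INDEX   # ~204,000 km/s in fiber
    one_way_s = distance_km / speed_km_s
    return 2 * one_way_s * 1000                # round trip, milliseconds

# ~1,000 miles (about 1,609 km), e.g. a user far from a regional cloud hub
print(f"{fiber_rtt_ms(1609):.1f} ms")
```

The propagation delay alone comes out around 16 ms round trip; once real-world switching and queuing are added, a sub-10 ms end-to-end budget is simply unreachable from a hub 1,000 miles away, which is the argument for a distributed edge.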

Today, the VM-based network overlay has dumb, or unmanaged, switches; then all the upper layer functions such as firewalling, load balancing and 5G UPF run on VMs in servers. Because you couldn’t previously run these functions at Tbps speeds on switches, they were put on x86 servers. High-end servers cost about $25,000 and consume 1,000 watts of power each. With x86 servers’ costs and power consumption, we could see that the numbers were way off from what was needed to run a business-viable environment for 5G providers. Rather than have switching and routing run via VMs on x86, Kaloom’s vision was to be able to run all networking functions at Tbps speeds to meet the power- and cost-efficient points needed for 5G deployments to generate positive revenues. This is why Kaloom decided to use containers, specifically open source Kubernetes, and the programmability of P4-enabled chips as the basis for our solutions.

Solving the latency problem required a paradigm shift to the distributed edge, which places compute, storage and networking resources closer to the end user. Because of these constraints, we were convinced the edge would become far more important. As seen in the figure below, there are many edges.

Many Edges

Kaloom’s Green Logo and Telco’s Central Office Constraints

While large regional cloud facilities were not built for the new distributed edge paradigm, service providers’ legacy Central Office (CO) architectures are even more ill-suited for the shift. Containing an odd mishmash of old and new equipment from each decade going back at least 50 years, these facilities are also typically near the limit of their space, power and cooling requirements.

There are about 25,000 legacy COs in the US and they need to be converted, or updated in some way, to sustain low latency networks that support 5G’s shiny new apps. Today, perhaps 10 percent of data is processed outside the CO and data center infrastructure. In a decade, this will grow to 25 percent which means telcos have their work cut out for them.

This major buildup of required new 5G-supporting edge infrastructure will have an extremely negative impact on the environment in terms of energy consumption. According to the Linux Foundation’s State of the Edge 2020 report, by 2028 edge infrastructure will consume 102,000 megawatts of power, and over $700 billion in cumulative CAPEX will be spent within the next decade on edge IT infrastructure and data center facilities. The power consumption of a single 5G base station alone is three times that of its 4G LTE predecessor; 5G needs three times the base stations for the same coverage as LTE due to higher frequencies; and 5G base stations cost four times the price of LTE, according to mobile executives and an analyst quoted in this article at Light Reading. Upgrading a CO’s power supply involves permits, closing and digging up streets, and running new high-capacity wiring to the building, which is very costly. Unless all of society’s power sources suddenly turn sustainable, our founding team could see we were going to have to create a technology that could dramatically and rapidly reduce the power needed to provide these new 5G services and apps to consumers and enterprises.

Have you noticed that Kaloom’s logo is green?

This is the reason why. We have set out to solve all the issues facing telcos as they work to bring their infrastructure into the 21st century, including power.

P4 and Containers vs. ASICs and VM-Overlays

Where we previously worked, we were using a lot of switches that were not programmable. We were changing hardware every three to four years to keep pace with changes in the mobile, cloud and networking industries that constantly require more network, compute and storage resources. So, we were periodically throwing away and replacing very expensive hardware. We thought: why not avoid that churn by building a 100 percent programmable networking fabric? This, again, was driven by costs.

Before we left our jobs, we were part of a team running 4G Evolved Packet Gateway (EPG) using virtual machines (VMs) on Red Hat’s software platforms. Fortunately, we were asked to investigate the difference in the performance and other characteristics of containers versus VMs. That’s when we realized that, by using containers, the same physical servers could sustain several times more applications as compared to those running VM-based overlay networks.

This meant that the cost per user and cost per Gbps could be lowered dramatically. Knowing the economics of 5G would require this vastly reduced cost point, containers were the clear choice. The fact that they were more lightweight in terms of code meant that we could use less storage to house the networking apps and less compute to run them. They’re also more easily portable and quicker to instantiate. When a VM crashes it can take four or five minutes to resume, but you can restore a container in a few seconds.

Additionally, because VM-based overlays sit on top of the fabric, they have no direct interaction with the network underlay below in terms of visibility and control. If there is congestion or a bottleneck in the lower hardware’s fabric, the overlays may not know when, where, or how bad the problem is. Delivering Quality of Service (QoS) for service providers and their end users is a key consideration.

Kaloom’s Foundations

As a member of LF Edge, Kaloom demonstrated innovative containerized edge solutions for 5G along with IBM at the Open Networking and Edge Summit (ONES) 2020. Kaloom also presented further details of this optimized edge solution to the Akraino TSC as an invited presentation. Kaloom is participating in the Public Cloud Edge Interface blueprint work, as it is interesting and well aligned with our technology directions and customer requirements.

We remain committed to addressing the various pain points, including networking’s costs, the need to reduce 5G’s growing energy footprint so it has less negative impact on the environment, and the need to deliver much-needed software control and programmability. Looking to solve all the networking challenges facing service providers’ 5G deployments, we definitely had our work cut out for us.

What do LF Edge, Akraino, OPNFV, ONAP, Fishing and Fairy Tales Have in Common?

By Akraino, Blog

Written by Aaron Williams, LF Edge Developer Advocate

What do LF Edge, Akraino, OPNFV, ONAP, fishing and fairy tales have in common?  They all came up during our interview with the new Chair and Co-Chair of LF Edge’s project Akraino.

LF Edge’s Akraino project recently wrapped up our semi-annual face to face and welcomed newly elected Technical Steering Committee Chairs.

After two terms of serving as a co-chair, Tina Tsou, Enterprise Architect at Arm, was elected to be the Chair this year and Oleg Berzin, Technology Innovation Fellow under the Office of the CTO at Equinix, was elected Co-Chair by the Akraino TSC members.  We sat down with Tina and Oleg to learn more about them, what they are looking forward to next year, and how they see Akraino growing in 2021.

Tina Tsou

Oleg Berzin

How did you first get involved with Akraino?  

[Tina Tsou] My introduction to Akraino happened when I was working on an Edge use case for one of our customers at Arm. But I’m no stranger to the open source communities and working groups. Before Akraino, I directed the OPNFV (Open Platform for Network Function Virtualization) Auto project as PTL, integrating ONAP onto OPNFV (upon both x86 and Arm architecture and hardware) with a focus on edge cloud use cases. I was also the Chair of the Open Source SDN Breckenridge Working Group.

I am currently an active member of the Linux Foundation Networking (LFN) Technical Advisory Committee (TAC) and also lead the VPP/AArch64 activities in FD.io representing Arm.

[Oleg Berzin] Akraino aligned with my interest in the Edge, and more specifically in the multi-domain nature of the Edge (spanning from devices to networks, to aggregation, to data centers, to clouds). I was involved in the ONAP (Open Network Automation Platform) project in the past (as an operator/user). With the diverse and complex nature of edge deployments, we need a community-supported set of capabilities integrated as blueprints so customers/users can deploy these solutions with minimum friction.

What was the first blueprint that you worked on?

[Tina Tsou] For Akraino, I worked closely on the Integrated Edge Cloud (IEC) blueprint family, which is part of the Edge Stack of Akraino. The blueprint intends to develop a fully integrated edge infrastructure solution for edge computing. This open source software stack provides critical infrastructure to enable high performance, reduce latency, improve availability, lower operational overhead, provide scalability, address security needs, and improve fault management. The IEC project will address multiple edge use cases beyond the telco industry. IEC intends to develop solutions and support the needs of carriers, service providers, and IoT networks.

[Oleg Berzin] The first blueprint I worked on was the Public Cloud Edge Interface (PCEI). The idea behind PCEI is to enable interworking between mobile operators, public clouds and edge infrastructure/application providers, so that operators have a systematic way to give their customers access to public and third-party edge compute resources and applications via open APIs. These APIs facilitate the deployment of telco and edge compute functions, the interconnection of these functions, and the intelligent orchestration of workloads so that the expected performance characteristics can be achieved.

What is a Blueprint that you find interesting (I know that you don’t have a “favorite”)?

[Tina Tsou] You’re right. It is hard to choose a favorite blueprint since all of them are interesting and serve purposeful needs for many use cases. But here are two that I find very interesting:

School/Education Video Security Monitoring Blueprint: This belongs to the AI Edge blueprint family and focuses on establishing an open source MEC platform combined with AI capabilities at the edge. In this blueprint, the latest technologies and frameworks, such as a microservice framework, Kata Containers, 5G acceleration, and open APIs, have been integrated to build an industry-leading edge cloud architecture that provides comprehensive computing acceleration support at the edge. This blueprint is a life saver, and hence my favorite, since it improves safety and engagement in places such as factories, industrial parks, catering services, and classrooms that rely on AI-assisted surveillance.

IIoT at the Smart Device Edge Blueprint family: This blueprint family targets devices that live at the Smart Device Edge, characterized by a small footprint yet powerful enough to compute tasks at the edge. They tend to have a minimum of 256 MB for a single node and can grow to the size of a small cluster. These resources could be an accessible router, hub, server, or gateway. Since these device types vary heavily by form factor and use case, security and device standards for how the OS and firmware boot are highly fragmented. This is where Arm-backed initiatives like Project Cassini and PARSEC help standardize device booting and platform security. I’m excited for this blueprint to see successful deployments of edge-based compute.

[Oleg Berzin] One thing that surprised me when I joined Akraino was the diversity of use cases (IoT, AI, Private 5G, Radio Edge Cloud) that reinforce the notion that edge is everywhere and that it is very complex. I am honored to work with the Akraino community and to contribute to the development of blueprints. At this point in time my goal is to make progress on the PCEI blueprint.

Is there a Blueprint that you are looking forward to seeing develop?

[Tina Tsou] I prefer IEC Type 3: Android cloud native applications on Arm servers at the edge, part of the Integrated Edge Cloud (IEC) blueprint family. It evolved from Anbox-based single instances in R3 to Robox-based multiple instances in R4, and my hope is to have support for vGPU in the near future.

[Oleg Berzin] I think the Radio Edge Cloud is a very interesting blueprint. If developed, it has the potential to revolutionize how radio infrastructure is managed and adapted to diverse use cases.

What is the biggest misconception that people have about Akraino?

[Tina Tsou] As with many open source projects and technologies, the common perception that users have is that these projects serve a very narrow use case. I believe Akraino suffers from a similar misconception that it is a Telco-oriented project only. The reality is quite different. Akraino project blueprints can be applied in many facets of the industry verticals from Edge, Cloud, Enterprises, and IoT.

[Oleg Berzin] I am relatively new to Akraino, and my own misconception was that it only focused on small edge devices and IoT. As a Co-Chair and now having been exposed to the breadth and depth of use cases, I can now see that Akraino is involved in a very diverse set of blueprints targeting enterprise, telco and clouds while also interworking with other organizations and communities, such as ORAN, 3GPP, CNCF, LF Networking, TIP.

What are your and Akraino’s priorities for 2021?

[Tina Tsou & Oleg Berzin] There are many we could list, but we would point to these top three priorities for 2021.

  1. Akraino Blueprints for O-RAN specifications (e.g., REC integration with RIC)
  2. Akraino Blueprint to support Public Cloud Edge interface
  3. Akraino Edge APIs

What do you like to do in your free time?

[Tina Tsou] I live in the sunny California Bay Area and I love fishing on weekends. Anyone who is interested in joining me and having a chat about the Akraino project can contact me. 🙂

[Oleg Berzin] Apart from being involved in the technology and networking industry for many years, I enjoy learning new languages and finding common roots in different cultures. I sometimes find inspiration and time to translate children’s fairy tales from Russian into English – you can find the tales I translated on Amazon.

Anything that you want people to know about Akraino?

[Tina Tsou] Ever since its launch in 2018, Akraino has found great community support for the innovative creation of deployable edge solutions, with work going on in more than 30 blueprints. These Akraino blueprints are now globally deployed to address several edge use cases. It is a vast community with many active users and contributors, and here are a few things to know:

  • Akraino hosts sophisticated communities and multiple user labs to speed edge innovation.
  • Akraino delivered fully functional new blueprints for deployment in R3 to address edge use cases such as 5G MEC, AI Edge, Cloud Gaming at the Edge, Android in Cloud, Micro-MEC, and hardware acceleration at the edge.
  • Akraino created a framework for defining and standardizing APIs across stacks, via upstream/downstream collaboration, and published a whitepaper.
  • Akraino introduced tools for automated blueprint validation, security tools for blueprint hardening, and edge APIs in collaboration with LF Edge projects.
  • The Akraino community has participated in several industry outreach events to foster collaboration and engagement on edge projects across the entire ecosystem.

[Oleg Berzin] The most important fact I want people to know about Akraino is the dedication and professionalism of the individuals who make up our community. The work they do on creating and proving the blueprints is done on a volunteer basis in addition to their primary jobs. It takes long hours, patience, respect for others and true trust to work together and move the edge technology forward.

XGVela: Bring More Power to 5G with Cloud Native Telco PaaS

By Akraino, Blog

Written by Sagar Nangare, an Akraino community member and technology blogger who writes about data center technologies that are driving digital transformation. He is also an Assistant Manager of Marketing at Calsoft.

A Platform-as-a-Service (PaaS) model of cloud computing brings lots of power to the end-users. It provides a platform to the end customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an application. In the roadmap to build a 5G network, telecom networks need such breakthrough models that help them focus more on the design and development of network services and applications.

XGVela is a 5G cloud-native open-source framework that introduces Platform-as-a-Service (PaaS) capabilities for telecom operators. With this PaaS platform, telecom operators can quickly design, develop, and innovate telco network functions and services. This lets telecom companies focus more on seizing business opportunities rather than diving into the complexities of telecom infrastructure.

China Mobile initiated this project internally, and it was later hosted and announced by the Linux Foundation as part of an initiative to accelerate telco cloud adoption. Apart from China Mobile, other supporters of XGVela are China Unicom, China Telecom, ZTE, Red Hat, Ericsson, Nokia, H3C, CICT, and Intel.

To build XGVela, PaaS functionality was initially drawn from existing open-source projects like Grafana, Envoy, and ZooKeeper, then further enhanced to meet telco requirements.

Why does the 5G network need XGVela’s PaaS capabilities?

  • XGVela helps telecom operators keep up with fast-changing requirements for building a 5G network with a cloud-native backhaul. They need faster upgrades to network functions and services, along with agility in new service deployments.
  • Telecom operators need to focus more on microservices-driven, container-based network functions (CNFs) rather than VM-based network functions (VNFs), but they need to continue using both concurrently. XGVela supports both CNFs and VNFs on the underlying platform.
  • Telecom operators want to reduce network construction costs by adopting an open and healthy technology ecosystem, including ONAP, Kubernetes, OpenDaylight, Akraino, OPNFV, etc. XGVela adopts this ecosystem to lower the barriers to end-to-end orchestrated 5G telecom networks.
  • XGVela simplifies the design and innovation of 5G network functions by allowing developers to focus on application development and service logic rather than dealing with complex underlying infrastructure. XGVela provides standard APIs to tie together many internal projects.

XGVela integrates CNCF projects based on telco requirements to form a General PaaS. It is architected with the microservices design method for network function development in mind, and is further integrated into a telco PaaS.

XGVela is integrated with OPNFV and Akraino for integration and testing. In a recent presentation, Srinivasa Addepalli and Kuralamudhan Ramakrishnan showed that the feature set of the Akraino ICN blueprints is similar to XGVela’s. Many users want to deploy CNFs and applications on the same cluster. In those environments, there is a need to figure out what additional capabilities must be added on top of Kubernetes to enable CNF deployment (especially telco CNFs, ordinary CNFs, and application deployments) and then to evaluate how they work together. This is where XGVela and the Akraino ICN blueprint family align with each other.

You can subscribe to the XGVela mailing list here to track the project’s progress. For more information about Akraino blueprints, visit https://wiki.akraino.org/, or join the conversation on the LF Edge Slack channels #akraino-blueprints, #akraino-help, and #akraino-tsc.