
LF Edge Releases Industry-Defining Edge Computing White Paper to Accelerate Edge/IoT Deployments

By Akraino, Announcement, Baetyl, EdgeX Foundry, eKuiper, Fledge, Home Edge, LF Edge, Open Horizon, Project EVE, Secure Device Onboard, State of the Edge

Collaborative community white paper refines the definitions and nuances of open source edge computing across telecom, industrial, cloud, enterprise and consumer markets

SAN FRANCISCO – June 24, 2022 – LF Edge, an umbrella organization under the Linux Foundation that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced continued ecosystem collaboration via a new collaborative white paper, “Sharpening the Edge II: Diving Deeper into the LF Edge Taxonomy & Projects.”

A follow-up to the LF Edge community’s original, collaborative 2020 paper, which provided an overview of the organization and detailed the LF Edge taxonomy, high-level considerations for developing edge solutions, and key use cases, the new publication dives deeper into the key areas of edge manageability, security, connectivity and analytics, and highlights how each project addresses these areas. The paper demonstrates the maturation of the edge ecosystem and how the rapidly growing LF Edge community has made great progress over the past two years toward building an open, modular framework for edge computing. As with the first publication, the paper addresses a balance of interests spanning the cloud, telco, IT, OT, IoT, mobile, and consumer markets.

“With the growing edge computing infrastructure market set to be worth up to $800B by 2028, our LF Edge project communities are evolving,” said Jason Shepherd, VP Ecosystem at ZEDEDA and former LF Edge Governing Board Chair. “This paper outlines industry direction through an LF Edge community lens. With such a diverse set of knowledgeable stakeholders, the report is an accurate reflection of a unified approach to defining open edge computing.”

“I’m eager to continue to champion and spearhead the great work of the LF Edge community as the new board chair,” said Tina Tsou, new Governing Board chair, LF Edge. “The taxonomy white paper, which demonstrates the accelerated momentum of the open source edge communities, is really exciting and speaks to the power of open source.”

The white paper, which is now available for download, was put together as the result of broad community collaboration, spanning insights and expertise from subject matter experts across LF Edge project communities: Akraino, EdgeX Foundry, EVE, Fledge, Open Horizon, State of the Edge, Alvarium, Baetyl, eKuiper, and FIDO Device Onboard.

ONE Summit North America 2022

Join the broader open source ecosystem spanning Networking, Edge, Access, Cloud and Core at ONE Summit North America, November 15-16 in Seattle, Wash. ONE Summit is the one industry event focused on best practices, technical challenges, and business opportunities facing decision makers across integrated verticals such as 5G, Cloud, Telco, and Enterprise Networking, as well as Edge, Access, IoT, and Core. The Call for Proposals is now open through July 8, 2022. Sponsorship opportunities are also available. 

 

About The Linux Foundation 

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more. Its methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

 

###

Calling all Edge App Developers! Join the ETSI/LF Edge Hackathon

By Akraino, Blog, EdgeX Foundry, LF Edge

We’re partnering with ETSI ISG MEC to host a Hackathon encouraging open edge developers to build solutions that leverage ETSI MEC Service APIs and LF Edge (Akraino) blueprints. Vertical use cases could include Automotive, Mixed and Augmented Reality, Edge Computing and 5G, and more. Teams are highly encouraged to be creative and propose solutions in additional application verticals.

Here’s how it works:

  • Participants will be asked to develop an innovative Edge Application or Solution, utilizing ETSI MEC Service APIs and LF Edge Akraino Blueprints.
  • Solutions may include any combination of client apps, edge apps or services, and cloud components.
  • The Edge Hackathon will run remotely from June to September, with a short-list of the best teams invited to compete with demonstrations and a Hackathon “pitch-off” at the Edge Computing World Global event in Santa Clara, California (Silicon Valley), October 10th-12th.

Submissions are due 29th June 2022.  More details, including how to register, are available here.

Additional background information:

Edge Computing provides developers with localized, low-latency resources that can be utilized to create new and innovative solutions, which are essential to many application vertical markets in the 5G era.

  • ETSI’s ISG MEC is standardizing an open environment that enables the integration of applications from infrastructure and edge service providers across MEC (Multi-access Edge Computing) platforms and systems. It offers a set of open, standardized Edge Service APIs to enable the development and deployment of edge applications at scale.
    • Teams will be provided with a dedicated instance of the ETSI MEC Sandbox for their work over the duration of the Hackathon. The MEC Sandbox is an online environment where developers can access and interact with live ETSI MEC Service APIs in an emulated edge network set in Monaco, which includes 4G, 5G, and WiFi network configurations, single and dual MEC platform deployments, etc. (see the sketch after this list).
  • LF Edge’s Akraino project offers a set of open infrastructure and application Blueprints for Edge, spanning a broad variety of application vertical use-cases. Each blueprint offers a high-quality full-stack solution that is tested and validated by the community.
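
To give a feel for the sandbox APIs, here is a minimal sketch of querying a MEC Location API instance such as the one the sandbox exposes. The base URL is a placeholder (teams receive a dedicated instance on registration), and the response shape shown follows the OMA Zonal Presence conventions the API is based on; treat both as assumptions.

```python
"""Illustrative sketch: list zones from a MEC Location API instance.
The sandbox host/path below is a placeholder, not a working endpoint."""
import requests

API_ROOT = "https://try-mec.etsi.org/<your-sandbox>"  # placeholder base URL

resp = requests.get(f"{API_ROOT}/location/v2/zones", timeout=10)
resp.raise_for_status()

# Response shape assumed from the OMA Zonal Presence data model.
for zone in resp.json()["zoneList"]["zone"]:
    print(zone["zoneId"], zone["numberOfAccessPoints"])
```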

However, teams are not required to use a specific edge hosting platform or MEC lifecycle management APIs. Developers may use any virtualization environment of their choosing. 


Overview of Akraino R5’s Project Cassini: IoT and Infrastructure Edge Blueprint Family

By Akraino, Blog

By Olivier Bernard and Parag Beeraka 

This post provides an overview of the Smart Cities Akraino blueprint, as well as the key features and implementations of PARSEC in Akraino’s Release 5.

Overview – Akraino Blueprint: Smart Cities

The purpose of the Smart Cities blueprint family is to provide an edge computing platform based on Arm SoCs and to improve deployment flexibility and security within edge computing. The high-level relationships between the functional domains are shown in the figure below:

(For the full description of the Smart Cities Reference Architecture please refer to the Smart Cities Documents.)

 

Smart Cities in Akraino R5 – key features and implementations in Akraino Release 5:

The Smart Cities blueprint’s security component is PARSEC, first available in Akraino’s Release 5. The following is a brief introduction to PARSEC.

Parsec is the Platform AbstRaction for SECurity, a new open-source initiative to provide a common API to secure services in a platform-agnostic way.

Parsec aims to define a universal software standard for interacting with secure object storage and cryptography services, creating a common way to interface with functions that would traditionally have been accessed by more specialised APIs. Parsec establishes an ecosystem of developer-friendly libraries in a variety of popular programming languages. Each library is designed to be highly ergonomic and simple to consume. This growing ecosystem will put secure facilities at the fingertips of developers across a broad range of use cases in infrastructure computing, edge computing and the secure Internet of Things.
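
To illustrate the idea of a platform-agnostic security API, here is a toy sketch in Python. The interface, class names, and in-memory backend are invented for illustration and are not the actual Parsec API (Parsec’s real client libraries target languages such as Rust, Go and Java, and keep keys in hardware-backed providers):

```python
"""Illustrative sketch only: a made-up interface modeling the idea of a
common security API over interchangeable backends (TPM, HSM, TrustZone).
None of these names come from the real Parsec client libraries."""
from abc import ABC, abstractmethod
import hashlib
import hmac
import os

class SecurityService(ABC):
    """The platform-agnostic surface a Parsec-style client presents."""
    @abstractmethod
    def generate_key(self, name: str) -> None: ...
    @abstractmethod
    def sign(self, name: str, data: bytes) -> bytes: ...

class InMemoryBackend(SecurityService):
    """Stand-in for a hardware-backed provider; real providers never
    expose key material to the application."""
    def __init__(self):
        self._keys = {}
    def generate_key(self, name):
        self._keys[name] = os.urandom(32)
    def sign(self, name, data):
        return hmac.new(self._keys[name], data, hashlib.sha256).digest()

svc: SecurityService = InMemoryBackend()
svc.generate_key("edge-device-identity")
print(svc.sign("edge-device-identity", b"firmware-manifest").hex())
```

The point of the pattern is that application code depends only on the common interface, so the same code runs unchanged whether the backend is a TPM, an HSM, or a software provider.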

 

 

(For more information on Parsec, see https://parallaxsecond.github.io/parsec-book/.)

 

Software Defined Camera (SDC) blueprint

Security cameras are seeing increasing growth in many subsegments, such as commercial surveillance cameras, consumer surveillance cameras, and other devices such as dashcams, baby monitors and other IoT vision devices. The total surveillance market is expected to be around $44B in the year 2025, growing at a CAGR of 13%. The other exciting development in this market, alongside the increase in unit shipments of surveillance cameras, is the increasing use of artificial intelligence and machine learning (AI/ML) in these cameras and IoT devices. It is estimated that there are already a billion cameras installed in the world, and that this number will reach 2 billion by the year 2024. However, security cameras are very different from many other devices: once installed, they stay in service for more than five years. So it is critical to ensure that these devices continue to provide enhanced functionality over time.

In today’s world, many technologies are “software defined”. This has been made possible by a series of technological advances. At the bottom of the stack sits the operating system, then the virtualization layer, which kicked off the original software defined compute with virtual machines. Then came containers, and then container orchestrators such as k8s and k3s; these advances paved the path to the software defined data center. In the data center, after software defined compute, software defined networking and software defined storage became the norm.

Now the time is ripe to bring software defined trends to edge devices as well. This transformation has already started across many segments, such as automotive (cars, trucks, etc.). We see the trend applying to many other edge devices, one of which is the camera, or, as we and many others call it, the smart camera. The idea is that once you buy a camera, the hardware is advanced enough that you keep receiving continuous software updates, which can be neural network model updates catered to the specific data captured by that camera, or other updates related to the functionality and security of the device.

 

By designing future camera products with cloud native capabilities in mind, one can scale camera and vision products to unprecedented levels. If all applications are deployed using a service oriented architecture with containers, a device can be continuously updated. One key advantage of this architecture is enabling new use case scenarios that were not envisioned at design time, through simple on-demand service deployment after the sale via an app store. A simple example is deploying a new and updated license plate recognition app a year after the cameras were purchased. Another key advantage is enabling continuous machine learning model updates based on the data pertinent to the camera installation, i.e., using the data the camera captures to train the models. The model then stays relevant to the specific use case, increasing the value of the camera itself. All of this is possible by enabling a service oriented architecture model using containers and thinking cloud native first.

The other important aspect that will bring security camera and vision products together is adherence to common standards at the silicon and platform level. LF Edge member company Arm has started an initiative called Project Cassini, and Arm SystemReady® is a key component of it. As part of Arm SystemReady®, Arm is working with many ecosystem partners (OEMs, ODMs, silicon vendors) to drive a standards-based approach, scaling deployments by replacing custom solutions with standards-based ones and becoming the platform of choice for future camera deployments.

To address and simplify these challenges, Arm is designing and offering an open source reference solution to be leveraged collaboratively for future software defined cameras, based on established standards and focused on security, ML, imaging and cloud-native technologies. The basis of the reference solution is a SystemReady®-certified platform. SystemReady® platforms implement UEFI, U-Boot and trusted, standards-based firmware.

  • With the base software layers compliant to SystemReady®, standard Linux distributions such as Fedora or openSUSE, or Yocto-based builds, can run on these platforms.
  • The container orchestration service layer can then run containerized applications, using k3s.
  • The next software layer holds the ML and computer vision libraries. Arm supports ML functions through Arm NN and the Arm Compute Library, so that users can develop ML applications using standard open-source frameworks such as PyTorch and TensorFlow.
  • Similarly, for computer vision, it is critical to support open standards such as Video4Linux, OpenCV and BLAS libraries.
  • Both complementary and necessary is security – this is where orchestration and security microservices such as PARSEC, an open-source platform abstraction layer for security, along with the PSA certification process, are critical, so that end users gain confidence that the product will be secure when deployed in the field. These secure services are the foundation for supporting containerized applications for inferencing, analytics and storage.
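
To make the stack concrete, here is a minimal sketch of the kind of containerized inference microservice these layers enable: OpenCV (Video4Linux) capture feeding a TensorFlow Lite model. The model file and uint8 input handling are illustrative assumptions, not part of the blueprint; on Arm platforms the same code can be accelerated via delegates backed by Arm NN.

```python
"""Illustrative sketch of an edge inference loop on a SystemReady-style
camera platform: V4L2 capture via OpenCV, inference via TFLite."""
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="detector.tflite")  # model is OTA-updatable
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

cap = cv2.VideoCapture(0)  # V4L2 camera device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize to the model's expected input and add a batch dimension
    # (uint8 assumes a quantized model).
    h, w = int(inp["shape"][1]), int(inp["shape"][2])
    blob = cv2.resize(frame, (w, h))[np.newaxis, ...].astype(np.uint8)
    interpreter.set_tensor(inp["index"], blob)
    interpreter.invoke()
    detections = interpreter.get_tensor(out["index"])
    # ...publish detections to an analytics/storage microservice...
```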

Learn more about Project Cassini here: https://wiki.akraino.org/display/AK/Project+Cassini+-+IoT+and+Infrastructure+Edge+Blueprint+Family.

Introduction: Akraino’s Federated Multi-Access Edge Cloud Platform Blueprint in R5

By Akraino, Blog

Introduction

This blog provides an overview of the “Federated Multi-Access Edge Cloud Platform” blueprint, part of the Akraino Public Cloud Edge Interface (PCEI) blueprint family, and specifically of the key features and components implemented in Akraino Release 5. The key idea of this blueprint is a federated Multi-Access Edge Cloud (MEC) platform that showcases the interplay between the telco side and the cloud/edge side.

Prior to discussing the specifics of the blueprint implementation, the following subsection briefly describes what a Multi-Access Edge Cloud (MEC) is and how it is emerging as an enabler for the 5G/AI application landscape.

What is MEC and what are its challenges?

MEC is a network architecture concept that enables cloud computing and IT capabilities to run at the edge of the network. Applications or services running on edge nodes – which are closer to end users – instead of in the cloud enjoy lower latency and an enhanced end-user experience. MEC essentially moves intelligence and service control from centralized data centers to the edge of the network, closer to the users. Instead of backhauling all data to a central site for processing, data can be analyzed, processed and stored locally, and shared upstream when needed. MEC solutions are closely integrated with access networks, which often include WiFi and mobile access protocols such as 4G-LTE/5G.

MEC opens up a plethora of potential vertical and horizontal use cases, such as Autonomous Vehicles (AV), Augmented Reality (AR) and Virtual Reality (VR), gaming, and applications enabled by Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL), like autonomous navigation, remote monitoring using Natural Language Processing (NLP) or facial recognition, video analysis, and more. These emerging 5G/AI applications typically exhibit characteristics such as the following:

  • Low Latency requirements
  • Mobility
  • Location Awareness
  • Privacy/Security etc.

All of these characteristics pose unique challenges when developing emerging 5G/AI applications at the network edge. Supporting such an extensive feature set with the required flexibility, dynamicity, performance and efficiency takes careful and expensive engineering effort, and requires adopting new ways of architecting the enabling technology landscape.

To this end, our proposed “Federated Multi-Access Edge Cloud Platform” blueprint provides the abstractions needed to address these challenges and, as a result, ushers in an application development environment that eases the development and deployment of these emerging applications. Subsequent sections delve into the details of the proposed blueprint.

Blueprint Overview: Federated Multi-Access Edge Cloud Platform

The purpose of the “Federated Multi-Access Edge Cloud Platform” blueprint is an end-to-end technology solution for a mobile game deployed across multiple heterogeneous edge nodes using various network access protocols, such as mobile, WiFi and others. This blueprint demonstrates how an application can leverage a distributed, multi-access network edge environment in order to get all the benefits of edge computing.

The diagram above highlights the use case scenario. On the left-hand side is the device; as can be seen, the device is moving from location x to y.

The use case goes through four distinct steps. The first step is the service discovery flow, followed by the game service flow. Once the device moves, it triggers an additional session migration flow, which includes a subsequent service discovery. Finally, in step four, once this migration happens, the UE attaches to the new edge node.
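
As an illustration of the discovery step, here is a hypothetical client-side sketch. The endpoint URL, query parameters and response shape are all invented for illustration; the blueprint does not define this REST API in this blog.

```python
"""Hypothetical sketch of the service discovery step; every name here is
an illustrative assumption, not an API defined by the blueprint."""
import requests

DISCOVERY_URL = "http://mec-platform.example.com/discovery/v1/edges"  # hypothetical

def discover_edge(lat: float, lon: float) -> str:
    """Ask the platform for the best edge node for the UE's location."""
    resp = requests.get(DISCOVERY_URL, params={"lat": lat, "lon": lon}, timeout=5)
    resp.raise_for_status()
    edges = resp.json()["edges"]  # e.g. [{"endpoint": ..., "rtt_ms": ...}]
    best = min(edges, key=lambda e: e["rtt_ms"])
    return best["endpoint"]

# After the UE moves from x to y, the client repeats discovery and the
# platform migrates the game session state to the newly selected edge.
game_server = discover_edge(37.40, -122.06)
```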

In order to support all of this, the platform provides two key abstractions:

  • Multi-Access/Mobile Operator Network Abstraction: multi-access means the access network may be mobile, WiFi, or anything else; multi-operator means that even for the same 4G/5G access there can be different operators (Verizon, AT&T, etc.). The various MEC edge nodes may be WiFi-based and may belong to different operators.
  • Cloud-Side Abstraction: Cloud side abstraction includes key architectural components to be described in the subsequent sections.

Functional Diagram: Federated Multi-Access Edge Cloud Platform

The key component is the federated multi-access edge platform. The platform sits between applications and the underlying heterogeneous edge infrastructure; it abstracts the multi-access interface and exposes developer-friendly APIs. This blueprint leverages the upstream KubeEdge project as the baseline platform, including the enhanced federation function (Karmada).

Telco/GSMA-side complexities (5GC/NEF, etc.) need to be thought through and designed appropriately in order to realize the extremely low latency (10 ms) requirements of typical MEC use cases. For multi-access, we may initially use a simulated mobile access environment to mimic real-time device access protocol conditions in the initial releases.

Key Enabling Architectural Components

Federation Scheduler (Included in Release-5)

The Federation Scheduler acts as a “Global Scheduler”, responsible for application QoS-oriented global scheduling in accordance with placement policies. Essentially, it is a decision-making capability that can decide how workloads should be spread across different clusters, similar to how a human operator would. It maintains resource utilization information for all the MEC edge cloud sites. Cloud federation functionality in our blueprint is enabled using the open source Karmada project. The following is an architecture diagram for Karmada.

Karmada (Kubernetes® Armada) is a Kubernetes® management system that enables cloud-native applications to run across multiple Kubernetes® clusters and clouds with no changes to the underlying applications. By using Kubernetes®-native APIs and providing advanced scheduling capabilities, Karmada enables a truly multi-cloud Kubernetes® environment. It aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios, with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling. More details on the Karmada project can be found here.
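
As a flavor of how federation is expressed, the sketch below creates a Karmada PropagationPolicy through the Kubernetes Python client, propagating a Deployment to two member clusters. The cluster and application names are illustrative assumptions, and this is a sketch rather than code from the blueprint.

```python
"""Illustrative sketch: propagate a Deployment to two member clusters
with a Karmada PropagationPolicy. Names are assumptions."""
from kubernetes import client, config

config.load_kube_config()  # kubeconfig pointing at the Karmada API server

policy = {
    "apiVersion": "policy.karmada.io/v1alpha1",
    "kind": "PropagationPolicy",
    "metadata": {"name": "game-propagation"},
    "spec": {
        "resourceSelectors": [
            {"apiVersion": "apps/v1", "kind": "Deployment", "name": "game-server"}
        ],
        # Place the workload on both edge clusters.
        "placement": {"clusterAffinity": {"clusterNames": ["edge-1", "edge-2"]}},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="policy.karmada.io", version="v1alpha1",
    namespace="default", plural="propagationpolicies", body=policy)
```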

EdgeMesh (Included in Release-5)

EdgeMesh provides service mesh capabilities for edge clouds in support of microservice communication across clouds and edges. It offers a simple network solution for inter-communication between services in edge scenarios (east-west communication).

The network topology of an edge cloud computing scenario is quite complex. Edge nodes are often not interconnected, yet direct inter-communication between applications on these nodes is a highly desirable requirement for businesses. EdgeMesh addresses these challenges by shielding applications from the complex network topology at the edge. More details on the EdgeMesh project can be found here.

Service Discovery (Not included in Release-5)

Service Discovery retrieves the endpoint address of the edge cloud service instance depending on the UE location, network conditions, signal strength, delay, App QoS requirements etc.

Mobility Management (Not included in Release-5)

The Cloud Core side mobility service subscribes to UE location tracking events and resource rebalancing scenarios. Upon UE mobility or resource rebalancing, the mobility service uses the Cloud Core side Service Discovery interface to retrieve the address of the new, appropriate location-aware edge node, and subsequently initiates the UE application state migration between edge nodes. A simple CRIU container migration strategy may not be enough; this is much more complex than typical VM migration.
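
For context, CRIU checkpoints a running process tree to disk so that it can be restored elsewhere. The sketch below wraps the stock CRIU CLI and is purely illustrative; as noted above, a real migration must also move images, network state and storage between nodes.

```python
"""Illustrative checkpoint/restore sketch using the CRIU CLI."""
import subprocess

def checkpoint(pid: int, image_dir: str):
    """Dump the process tree rooted at pid into image_dir."""
    subprocess.run(["criu", "dump", "-t", str(pid), "-D", image_dir,
                    "--shell-job", "--leave-running"], check=True)

def restore(image_dir: str):
    """Recreate the process tree from a previous dump."""
    subprocess.run(["criu", "restore", "-D", image_dir, "--shell-job"],
                   check=True)

# On the source edge node:  checkpoint(1234, "/tmp/ckpt")
# Copy /tmp/ckpt to the target node, then:  restore("/tmp/ckpt")
```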

Multi-Access Gateway (Not included in Release-5)

The multi-access gateway controller manages the edge data gateway and the access APIG of edge nodes. The edge data gateway connects with the edge gateway (UPF) of the 5G network system and routes traffic to containers on the edge nodes. The access APIG connects with the management plane of the 5G network system (such as CAPIF) and pulls QoS, RNIS, location and other capabilities into the edge platform.

AutoScaling (Not included in Release-5)

Autoscaling provides the capability to automatically scale the number of Pods (workloads) based on observed CPU utilization (or on other application-provided metrics). The autoscaler also provides vertical Pod autoscaling by adjusting a container’s CPU and memory limits in accordance with the autoscaling policies.
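
For illustration, the horizontal part of this can be expressed with a standard Kubernetes HorizontalPodAutoscaler; the sketch below creates one with the Kubernetes Python client (autoscaling/v1). The deployment name and thresholds are assumptions, and this is not the blueprint’s own code.

```python
"""Illustrative sketch: create an HPA scaling a Deployment on CPU usage."""
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="game-server-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="game-server"),
        min_replicas=1,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above 70% CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```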

Service Catalog (Not included in Release-5)

Service Catalog provides a way to list, provision, and bind with services without needing detailed knowledge about how those services are created or managed.

Detailed Flow of the Various Architectural Components

What is included in Release-5

As mentioned earlier, the purpose of this blueprint is an end-to-end technology solution for a mobile game deployed across multiple heterogeneous edge nodes using various network access protocols, such as mobile, WiFi and others. The blueprint demonstrates how an application leverages a distributed, multi-access network edge environment to realize all the benefits of edge computing.

This is the very first release of this new blueprint as part of the Akraino PCEI family. Current focus for this release is to enable only the following two key architectural components:

  1. Open source Karmada based Cloud Federation
  2. EdgeMesh functionality

This blueprint will evolve as we incorporate remaining architectural components as part of the subsequent Akraino releases. More information on this blueprint can be found here.

Acknowledgements

Project Technical Lead: Deepak Vij, KubeEdge MEC-SIG member – Principal Cloud Technology Strategist at Futurewei Cloud Lab.

Contributors:

  • Peng Du, Futurewei Cloud Lab.
  • Hao Xu, Futurewei Cloud Lab.
  • Qi Fei, KubeEdge MEC-SIG member – Huawei Technologies Co., Ltd.
  • Xue Bai, KubeEdge MEC-SIG member – Huawei Technologies Co., Ltd.
  • Gao Chen, KubeEdge MEC-SIG member – China Unicom Research Institute
  • Jiawei Zhang, KubeEdge MEC-SIG member – Shanghai Jiao Tong University
  • Ruolin Xing, KubeEdge MEC-SIG member – State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications
  • Shangguang Wang, KubeEdge MEC-SIG member – State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications
  • Ao Zhou, KubeEdge MEC-SIG member – State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications
  • Jiahong Ning, KubeEdge MEC-SIG member – Southeastern University
  • Tina Tsou, Arm

Where the Edges Meet, Apps Land and Infra Forms: Akraino Release 5 Public Cloud Edge Interface

By Akraino, Blog

Written by Oleg Berzin, Ph.D., Co-chair of the Akraino Technical Steering Committee and Fellow, Technology and Architecture at Equinix

Introduction

In the PCEI R4 blog we described the initial implementation of the blueprint. This blog focuses on new features and capabilities implemented in the PCEI in Akraino Release 5. Before discussing the specifics of the implementation, it is useful to go over the motivation for PCEI. Among the main drivers behind PCEI are the following:

  • Public Cloud Driven Edge Computing. Edge computing infrastructure and resources are increasingly provided by public clouds (e.g., AWS Outposts, IBM Cloud Satellite, Google Anthos). In the PCEI R4 blog we described  various relationships between PCC (Public Cloud Core) and PCE (Public Cloud Edge), ranging from PCE being Fully-Coupled to PCC at hardware, virtualization, application and services layers to PCE being Fully-Decoupled from PCC at all these layers. This “degree of coupling” between PCE and PCC dictates the choice of orchestration entry points as well as the behavior of the edge infrastructure and applications running on it.
  • Hybrid infrastructure. Most practical deployments of edge infrastructure and applications are hybrid in nature, where an application deployed at the edge needs services residing in the core cloud to function (coupled model). In addition, a PCE application deployed at the edge, may need to communicate, and consume resources from multiple PCC environments.
  • Multi-Domain Interworking. Individual infrastructure domains (e.g., edge, cloud, network fabric) present their own APIs and/or other provisioning methods (e.g., CLI), thus making end-to-end deployment challenging both in complexity and in time. A Multi-domain orchestration solution is needed to handle edge, cloud, and interconnection in a uniform and consistent manner.
  • Interconnection and Federation. Need for efficient and performant interconnection and resource distribution between edge and cloud as well as between distributed edges proximal to end users. We would like to point out that a common assumption in many infrastructure orchestration solutions is that the fundamental L1/L2/L3 interconnection between edge clouds and core clouds as well as between the edges is available for overlay technologies such as SD-WAN or Service Mesh. We specifically see the need for the orchestration solution to be able to enable L2/L3 connectivity between the domains that are being orchestrated.
  • Bare Metal orchestration. As with the interconnection, many orchestration solutions assume that the bare metal compute/storage hardware and basic operating system resources are available for the deployment of virtualization and application/services layers. We would like to point out that in many scenarios this is not the case. 
  • Developer-centric capabilities. Capabilities such as Infrastructure-as-Code are becoming critical for activation and configuration of public cloud infrastructure components and interconnection, as well as for end-to-end application deployment integrated with CI/CD environments.

PCEI in Akraino R5

Public Cloud Edge Interface (PCEI) is a set of open APIs, orchestration functionalities and edge capabilities for enabling Multi-Domain Interworking across the Operator Network Edge, the Public Cloud Core and Edge, the 3rd-Party Edge as well as the underlying infrastructure such as Data Centers, Compute Hardware and Networks. 

Terraform-based Orchestration

One of the biggest challenges with multi-domain infrastructure orchestration is finding a common and uniform method of describing the required resources and parameters in different domains, especially in public clouds (PCC). Every public cloud provides a range of service categories with a variety of different services, with each service having several different components, and each component having multiple features, with different parameters.

Terraform emerged as a common Infrastructure-as-Code tool that makes it possible to abstract the diverse provisioning methods (API, CLI, etc.) used in the individual domains and to provision infrastructure components using a high-level language, provided a Terraform Provider is available for the domain.

The notable innovation in PCEI R5 is the integration of Terraform as a microservice within the PCEI orchestrator (CDS, see below). This enables important orchestration properties:

  • Uniformity – use of the same infrastructure orchestration methods across public clouds, edge clouds and interconnection domains.
  • Transparency (model-free) – the orchestrator does not need to understand the details of the individual infrastructure domains (i.e., implement their models). It only needs to know where to retrieve the Terraform plans (programs) for the domain in question and execute the plans using the specified provider.
  • DevOps driven – the Terraform plans can be developed and evolved using DevOps tools and processes.

Examples of Terraform plans are shown below.
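
The plans themselves are ordinary Terraform (HCL); what PCEI R5 adds is executing them programmatically. As a rough sketch of that workflow (our illustration, not the actual CDS executor code), a service that has fetched a plan directory from GitLab could drive the standard Terraform CLI cycle:

```python
"""Illustrative sketch of programmatic Terraform execution; the plan
directory and variable file names are assumptions."""
import subprocess

def run(cmd, cwd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def execute_plan(plan_dir: str, var_file: str = "edge.tfvars"):
    """Run the standard init/plan/apply cycle non-interactively."""
    run(["terraform", "init", "-input=false"], plan_dir)
    run(["terraform", "plan", "-input=false",
         f"-var-file={var_file}", "-out=tfplan"], plan_dir)
    run(["terraform", "apply", "-input=false", "tfplan"], plan_dir)

# e.g. a plan retrieved from GitLab that creates an Equinix Metal server
execute_plan("./plans/equinix-metal")
```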

Open-Source Technologies in PCEI 

The PCEI blueprint makes use of the following open-source technologies and tools:

  • EMCO – Edge Multi-Cluster Orchestrator. EMCO is used as a multi-tenant service and application deployment orchestrator.
  • CDS – Controller Design Studio. CDS is used as the API Handler, Terraform Executor, Helm Chart Processor, Ansible Executor, GitLab Interface Handler.
  • Terraform. In PCEI R5, Terraform has been integrated with CDS to enable programmatic execution of Terraform plans by the PCEI Enabler to orchestrate PCC, PCE and interconnection infrastructure.
  • Kubernetes. Kubernetes is the underlying software stack for EMCO/CDS. Kubernetes is also used as the virtualization layer for PCE, on which edge applications are deployed using Helm.
  • Helm. Helm is used by EMCO for deployment of applications across multiple Kubernetes edge clusters.
  • GitLab. GitLab is used to store Terraform plans and state files, Helm charts, Ansible playbooks, Cluster configs for retrieval and processing by CDS using API calls.
  • Ansible. Ansible can be used by CDS to deploy Kubernetes clusters on top of bare metal and Linux.
  • OpenStack. PCEI R5 can use Terraform to deploy IaaS infrastructure and apps on OpenStack edge clouds.

Functional Roles and Components in the PCEI R5 Architecture

Key features and implementations in Akraino Release 5

  • Software Architecture Components
      • Edge Multi-Cloud Orchestrator (EMCO) 
      • Controller Design Studio (CDS) and Controller Blueprint Adapters (CBA)
      • Helm
      • Kubernetes
      • Terraform
  • Features and capabilities
    • NBI APIs
      • GitLab Integration
      • Dynamic Edge Cluster Registration
      • Dynamic App Helm Chart Onboarding
      • Automatic creation of Service Instances in EMCO and deployment of Apps
      • Automatic Terraform Plan Execution
    • Integrated Terraform Plan Executor
      • Azure (PCC), AWS (PCC)
      • Equinix Fabric (Interconnect)
      • Equinix Metal (Bare Metal Cloud for PCE)
      • Openstack (3PE)
    • Equinix Fabric Interconnect
    • Equinix Bare Metal orchestration
    • Multi-Public Cloud Core (PCC) Orchestration (Azure, AWS)
    • Kubernetes Edge
    • Openstack Edge
    • Cloud Native 5G UPF Deployment
    • Deployment of Azure IoT Edge PCE App
    • Deployment of PCEI Location API App 
    • Simulated IoT Client Code for end-to-end validation of Azure IoT Edge 
    • Azure IoT Edge Custom Software Module Code for end-to-end validation of Azure IoT Edge

DevOps driven Multi-domain INfrastructure Orchestration (DOMINO) 

In PCEI R5 we demonstrated the use of PCEI Enabler based on EMCO/CDS with integrated programmatic Terraform executor to orchestrate infrastructure across multiple domains and deploy an edge application. The DevOps driven Multi-domain Infrastructure Orchestration demo consisted of the following:

  • Deploy EMCO 2.0, CDS and CBAs.
  • Design Infrastructure using a SaaS Infrastructure Design Studio.
      1. Edge Cloud (Equinix Metal in Dallas, TX)
      2. Public Cloud (Azure West US)
      3. Interconnect (Equinix Fabric)
  • Push to GitLab.
      1. Cluster Info
      2. Application Helm Charts (Azure IoT Edge, kube-router)
      3. Terraform Plans
        1. Azure Cloud
        2. Equinix Interconnect
        3. Equinix Metal
  • Provision Infrastructure using CDS/Terraform.
      1. Bare Metal server in Equinix Metal Cloud in Dallas, TX
      2. Deploy K8S on Bare Metal
      3. Azure Cloud in West US (Express Route, Private BGP Peering, Express Route GW, VNET, VM, IoT Hub)
      4. Interconnect Edge Cloud with Public Cloud using Equinix Fabric L2
  • Deploy Edge Application (PCE).
      1. Dynamic K8S Cluster Registration to EMCO
      2. Dynamic onboarding of App Helm Charts to EMCO
      3. Composite cloud native app deployment and end-to-end operation
        1. Azure IoT Edge
        2. Custom Resource Definition for Azure IoT Edge
        3. Kube-router for BGP peering with Azure over ExpressRoute
  • Verify end-to-end IoT traffic flow.

The video recording of the PCEI R5 presentation and demonstration can be found at this link.

For more information on PCEI R5: 

Acknowledgements

Project Technical Lead:
Oleg Berzin, Equinix

Committers: 
Kavitha Papanna, Aarna Networks
Vivek Muthukrishnan, Aarna Networks
Jian Li, China Mobile
Oleg Berzin, Equinix
Tina Tsou, Arm

Contributors: 
Mehmet Toy, Verizon
Tina Tsou, Arm
Gao Chen, China Unicom
Deepak Vij, Futurewei
Vivek Muthukrishnan, Aarna Networks
Kavitha Papanna, Aarna Networks
Amar Kapadia, Aarna Networks

 

Blogs of The AI Edge: I-VICS

By Akraino, Blog

By Zhuming Zhange & Hao Zhongwang

The trends, key technologies and scenarios of VICS

Intelligent networking promotes the evolution of electric vehicles and their architecture

Infrastructure required for Intelligent Vehicle-Infrastructure Cooperation Systems (I-VICS):

  • Road: Informatization, intelligence and standardization
  • Communication: Unified communication interface and protocol, coordinated vehicle-road interconnection
  • Network: Car wireless communication network, narrowband Internet of Things
  • Services: High-precision time-space reference services, vehicle emergency systems, rapid assisted positioning services
  • Maps: basic maps and geographic information systems
  • Data: big data cloud platform, software

Use case 1: Safety Of The Intended Functionality (SOTIF)  and I-VICS

SOTIF (ISO/PAS 21448) emphasizes avoiding unreasonable risks due to performance limitations of the intended functionality.

The background to the birth of SOTIF is the development of intelligent driving.

Classified according to the functional chain of intelligent driving – perception, decision, execution – the “functional performance limitations” show up in three aspects:

  • Sensor perception limitations lead to scene recognition errors (including missed recognition of driver mis-operation)
  • Insufficient deep learning causes the decision algorithm to judge the scene incorrectly (including the wrong response to the driver’s mis-operation)
  • Actuator function limitations lead to deviation from the ideal target

  • For Area 2 (known unsafe scenarios), the basic idea of SOTIF is to identify risk scenarios through safety analysis, and to develop countermeasures against them.
  • For Area 3 (unknown unsafe scenarios), the various scenarios a car may encounter under various road conditions need to be identified (in theory) in the early stage of development.

Use case 2: Autonomous Valet Parking

Functions:

  • Automatically drive a car from a pre-defined drop-off zone (e.g. the entrance to a carpark) to a parking spot indicated by an external system.
  • Park a car in a parking spot, starting in a lane near that parking spot.
  • Drive out of a parking spot.
  • Drive to a pre-defined pick-up zone (e.g. the exit from a carpark).
  • Automatically stop for obstacles while achieving the above.

AVP’s New features / benefits based on I-VICS:

  • Expand the perception range of car
  • Improve the ability of perception and realize swarm intelligence
  • Solve the problem of automatic driving safety

-Convert unsafe scenarios to safe scenarios

-Convert unknown scenes to known scenes

Akraino R5: Spotlight on Integrated Edge Cloud, Type2 — LightWeight Multi-Access Edge Cloud (MEC) on AWS

By Akraino, Blog

By Vinothini Raju

Integrated Edge Cloud (IEC) is a family of blueprints within the Akraino project, which is developing a fully integrated edge infrastructure solution. This open source software stack provides critical infrastructure to enable high performance, reduce latency, improve availability, lower operational overhead, provide scalability, address security needs, and improve fault management within edge computing. The IEC project addresses multiple edge use cases and industries (not just telco); the group is developing solutions and support for carriers, providers, and IoT networks. The IEC Type2 blueprint, specifically, focuses on medium deployments of edge clouds.

Meanwhile, Multi-access Edge Cloud (MEC) offers cloud computing capabilities at the edge of the network. Collecting and processing data closer to subscribers reduces latency and data congestion and improves subscriber experience through real-time updates. A cloud native implementation at the edge realises the full potential of cloud, letting developers focus on writing scalable and highly reliable applications instead of worrying about operational constraints. Developing applications at the edge goes beyond scalability requirements: these edge native applications are latency-sensitive, need real-time processing, and are hungry for bandwidth.

  • Modern developer platforms should offer a unified experience for building both cloud and edge native applications seamlessly. Using these cloud and edge native sandbox environments, developers should be able to simulate a real-time environment to test applications for mobility, caching, performance, etc. Out-of-the-box integrations – AI/ML frameworks like Kubeflow, data processing frameworks like EdgeX Foundry to synthesize data at the edge, and monitoring and logging frameworks like Prometheus, Grafana, and EFK/ELK stacks – along with a no code/low code experience, can accelerate time-to-delivery and promote developer innovation.

Apart from providing a rich developer experience, these sandbox environments should be lightweight, quick to provision, and extensible later into a production-ready environment.

Additionally, leveraging public cloud providers like AWS for a distributed cloud at the edge can greatly reduce the CAPEX/OPEX to set up a MEC Cloud.

Considering these requirements, the gopaddle team has proposed and developed an Akraino blueprint that provisions a lightweight Multi-Access Edge Cloud on AWS leveraging microk8s.

A word about microk8s

Microk8s is a lightweight Kubernetes distribution from Canonical. It uses the snap package manager to spin up a Kubernetes cluster in less than a minute. The snap installer consumes as little as 192 MB of RAM, and the K8s distribution as little as 540 MB. It is fairly simple to spin up a single node cluster, which can later be extended to a multi-node cluster. Once there are 3+ nodes, a High Availability mode can be enabled, making the cluster production-ready. Microk8s offers out-of-the-box tool chains such as Kubeflow for machine learning workloads, Prometheus for monitoring, a built-in image registry, etc.

While there are other lightweight Kubernetes distributions, microk8s offers better CPU/memory/disk utilization. Thus, microk8s stands out as a good candidate for a developer sandbox environment that can be scaled to a production environment.

Reference: Sebastian Böhm and Guido Wirtz, Distributed Systems Group, University of Bamberg, Bamberg, Germany. http://ceur-ws.org/Vol-2839/paper11.pdf
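
To make the bootstrap concrete, here is a minimal sketch of provisioning microk8s on an Ubuntu host (for example an EC2 instance), of the kind the blueprint’s Terraform templates automate. The add-on selection is an illustrative assumption.

```python
"""Illustrative microk8s bootstrap; the blueprint automates the
equivalent steps through centrally managed Terraform templates."""
import subprocess

def sh(cmd: str):
    """Run a shell command, failing loudly like a provisioning script should."""
    subprocess.run(cmd, shell=True, check=True)

sh("sudo snap install microk8s --classic")         # cluster up in under a minute
sh("sudo microk8s status --wait-ready")            # block until the node is ready
sh("sudo microk8s enable dns storage prometheus")  # illustrative add-on selection
# For HA, run the same install on two more nodes and join them using the
# command and token printed by:
sh("sudo microk8s add-node")
```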

AWS Wavelength – Bringing Cloud Experience to the Edge

AWS has partnered with a few telecom carrier providers like Verizon, Vodafone, etc. to offer edge environments called AWS Wavelength zones in a few selected regions/zones. AWS VPCs can be extended to AWS Wavelength zones, bringing the AWS experience to the edge environment. 

Using the AWS carrier gateway, developers can connect to the carrier network and use its 5G network for developing AI/ML or embedded applications. This gives a real-time development experience for validating application performance, latency and caching, and it provides a unified experience across enterprise cloud development and edge development.

gopaddle – No Code for Cloud & Edge Native Development

gopaddle is a no code platform to build, deploy and maintain cloud and edge native applications across hybrid environments spanning cloud and edge. The platform eases the onboarding of applications using intelligent scaffolding, provides out-of-the-box DevSecOps automation by integrating with 30+ third-party tools, offers pre-built, ready-to-use application templates like EdgeX Foundry, and provides centralized provisioning and governance of multi-cloud and hybrid Kubernetes environments.

LightWeight MEC Blueprint

The blueprint leverages the three main building blocks – microk8s, AWS and gopaddle – to provision a lightweight MEC that acts as a developer sandbox and can be extended for production deployments. The provisioning of the microk8s-based MEC environment on AWS is automated using Terraform templates. These templates can be uploaded and centrally managed through gopaddle, and multiple environments can be provisioned and centrally managed from them.

For more information on the IEC Type 2 Akraino blueprint available with the R5 release, visit this page: https://wiki.akraino.org/display/AK/IEC+Type+2+Architecture+Document+for+R5

Akraino R5: Spotlight on the AI Edge — Federated Machine Learning at the Edge

By Akraino, Blog

This is the first in a series of brief posts focused on Akraino use cases and blueprints. Thanks to the many community members for creating the blueprints and providing these helpful summaries! 

By Enzo Zhang, Zifan Wu, Haihui Wang, and Ryan Anderson

In Federated Machine Learning (ML), the attributes of each data row come from more than one edge provider. No single side can hold all of the data required for the model to function as designed, and specific edges can be quite limited in edge computing capability for a variety of reasons and conditions.

The application must cover more than one edge while also collecting and exchanging data between edges. Meanwhile, a method must be applied to make sure the communication between these edges is secure enough for all participants.

Here are some characteristics of such AI edge applications, by way of illustration:

  • Deployment across more than one edge
  • No extra storage space
  • Few computing units
  • Local data processing
  • Network capability
  • Data sharing between edges
  • Collaborative ML
  • Data security

Applications group data from the edges using security methods, while data privacy remains managed by the Federated Machine Learning framework. Data used as ML model input sits at a central site, data exchange occurs after the ML processes are done, and the required data is sent back to each edge for its next processing step.

Several key components are required to make this use case work well and machine learning run successfully:

  1. Data transfer and grouping
  2. Data collection and distribution
  3. A Federated Machine Learning framework
  4. A security encryption algorithm

This functionality, integrated with the framework, enables edge applications to run correctly and securely.
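
To illustrate the overall pattern, here is a toy federated averaging sketch: each edge computes a local update on data that never leaves it, and only the model updates are exchanged and combined. This is our simplification; the blueprint’s framework additionally applies the security and encryption components listed above.

```python
"""Toy federated averaging sketch; data, model, and sizes are synthetic."""
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on data that stays at the edge."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(updates, sizes):
    """Combine edge updates, weighting each edge by its sample count."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
weights = np.zeros(3)
edges = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

for _ in range(20):  # communication rounds
    updates = [local_update(weights, X, y) for X, y in edges]
    weights = federated_average(updates, [len(y) for _, y in edges])

print(weights)
```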

For more information on  the Federated Machine Learning blueprint, please visit this section of the Akraino wiki.

Akraino Project Holds Inaugural Akraino Community Awards

By Akraino, Blog

The Akraino community recently held its first community awards program to highlight and recognize some of the outstanding work happening all across the project. 

The awards – which comprise four categories of three winners each – were presented virtually via a private session, but you can read below to learn more about the award categories, winners, and work involved in earning these awards.

 

 

Award Categories & 2021 Winners:

Women of the year

This award recognizes outstanding women within the (mostly-male) Akraino community. This category identifies the individual(s) who has been a champion for the Akraino community and has provided significant contribution as a committer, contributor, sub-committee, and/or TSC member.

Winners: Jane Shen, Tina Tsou (Arm), and Yolanda Robla Mota (RedHat)

Top Blueprints of the year

This is an exclusive award category recognizing Blueprint teams that have generated a huge impact within the Akraino community. This award recognizes Blueprints that showcase significant and rich technical features; successful collaborations (within the Akraino community or with universities and NGOs); and landed applications whose features were successfully showcased through PoCs, field trials and production.

Winners:  ICN, PCEI, and Private LTE/5G 

Top Akrainoians

Champion of Akraino! This individual is a great help to the entire Akraino community in building and spreading awareness of Akraino blueprints within the industry. This person has really made an impact in the community and helped to drive additional interest and excitement, strengthen the existing community, and promote the value of becoming part of Akraino. 

Winners:  Tina Tsou (Arm),  Deepak Kataria (IP Junction),  and Randy Stricklin (Trane)

Top Committers

This award recognizes those who excel in their technical contributions to Akraino projects. The Top Committers have made significant contributions to key features benefitting the Akraino project as a whole. These contributions are so important that they have both uplifted the Akraino community and created an external impact as well.

Winners: James Li (China Mobile), Deepak Kataria (IP Junction), and Ricardo Noriega De Soto (Red Hat)

Congratulations to the entire Akraino community for all of the hard work, with an extra shout-out to this year’s winners!

 

Where the Edges Meet and Apps Land: Akraino Release 4 Public Cloud Edge Interface

By Akraino, Blog

Written by Oleg Berzin, Ph.D., Co-chair of the Akraino Technical Steering Committee and Fellow, Technology and Architecture at Equinix

Introduction

The Public Cloud Edge Interface (PCEI) blueprint was introduced in our last blog. This blog describes the initial implementation of PCEI in Akraino Release 4. In summary, PCEI R4 is implemented based on the Edge Multi-Cluster Orchestrator (EMCO, a.k.a. ONAP4K8S) and demonstrates:

  • Deployment of Public Cloud Edge (PCE) Apps from two clouds: Azure (IoT Edge) and AWS (Greengrass Core).
  • Deployment of a 3rd-Party Edge (3PE) App: an implementation of the ETSI MEC Location API Server.
  • End-to-end operation of the deployed Apps using a simulated Low Power Wide Area (LPWA) IoT client and Edge software.

Before describing the specifics of the implementation, it is useful to revisit PCEI architectural concepts.

The purpose of Public Cloud Edge Interface (PCEI) is to implement a set of open APIs and orchestration functionalities for enabling Multi-Domain interworking across Mobile Edge, Public Cloud Core and Edge, the 3rd-Party Edge functions as well as the underlying infrastructure such as Data Centers, Compute Hardware and Networks. Interfaces between the functional domains are shown in the figure below:

The detailed PCEI Reference Architecture is shown in the figure below. For the full description of the PCEI Reference Architecture please refer to the PCEI Architecture Document.

There are some important points to highlight regarding the relationships between the Public Cloud Core (PCC) and the Public Cloud Edge (PCE) domains. These relationships influence where the orchestration tasks apply and how the edge infrastructure is controlled. Both domains have some fundamental functions such as the underlying platform hardware, virtualization environment as well as the applications and services. In the Public Cloud context, we distinguish two main relationship types between the PCC and PCE functions: Coupled and Decoupled. The table below shows this classification and provides examples.

The PCC-PCE relationships also involve interactions such as Orchestration, Control and Data transactions, messaging and flows. As a general framework, we use the following definitions:

  • Orchestration: Automation and sequencing of deployment and/or provisioning steps. Orchestration may take place between the PCC service and PCE components and/or between an Orchestrator such as the PCEI Enabler and PCC or PCE.
  • Control: Control Plane messaging and/or management interactions between the PCC service and PCE components.
  • Data: Data Plane messaging/traffic between the PCC service and the PCE application.

The figure below illustrates PCC-PCE relationships and interactions. Note that the label “O” designates Orchestration, “C” designates Control and “D” designates Data as defined above.

The PCEI implementation in Akraino Release 4 shows examples of two App-Coupled PCC-PCE relationships: Azure IoT Edge and AWS Greengrass Core, and one Fully Decoupled PCE-PCC relationship: an implementation of the ETSI MEC Location API Server.

In Release 4, PCEI Enabler does not support Hardware (bare metal) or Kubernetes orchestration capabilities.

PCEI in Akraino R4

Public Cloud Edge Interface (PCEI) is implemented based on the Edge Multi-Cluster Orchestrator (EMCO, a.k.a. ONAP4K8S). PCEI Release 4 (R4) supports deployment of Public Cloud Edge (PCE) Apps from two public clouds (Azure and AWS), deployment of a 3rd-Party Edge (3PE) App (an implementation of the ETSI MEC Location API App), as well as the end-to-end operation of the deployed PCE Apps using a simulated Low Power Wide Area (LPWA) IoT client and Edge software.

Functional Roles in PCEI Architecture

Key features and implementations in Akraino Release 4:

  • Edge Multi-Cloud Orchestrator (EMCO) Deployment
    • Using ONAP4K8S upstream code
  • Deployment of Azure IoT Edge PCE App
    • Using Azure IoT Edge Helm Charts provided by Microsoft
  • Deployment of AWS Greengrass Core PCE App
    • Using AWS GGC Helm Charts provided by Akraino PCEI BP
  • Deployment of PCEI Location API App
    • Using PCEI Location API Helm Charts provided by Akraino PCEI BP
  • PCEI Location API App implementation based on the ETSI MEC Location API Spec (see the sketch after this list)
    • The implementation is based on the ETSI MEC ISG MEC013 Location API described using OpenAPI.
    • The API is based on the Open Mobile Alliance’s specification RESTful Network API for Zonal Presence.
  • Simulated IoT Client Code for end-to-end validation of Azure IoT Edge
  • Azure IoT Edge Custom Software Module Code for end-to-end validation of Azure IoT Edge
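
To give a sense of the Location API’s shape, below is a small illustrative sketch of a Zonal Presence-style endpoint in Flask. It is not the blueprint’s actual 3PE code; the paths follow the general conventions of the OMA Zonal Presence API, and the zone data is hard-coded for illustration.

```python
"""Illustrative Zonal Presence-style endpoint; paths and data are assumptions."""
from flask import Flask, jsonify

app = Flask(__name__)

ZONES = {"zone01": {"zoneId": "zone01",
                    "numberOfAccessPoints": 3,
                    "numberOfUsers": 7}}

@app.route("/location/v1/zones")
def list_zones():
    """Return all zones, wrapped the way Zonal Presence responses are."""
    return jsonify({"zoneList": {"zone": list(ZONES.values())}})

@app.route("/location/v1/zones/<zone_id>")
def get_zone(zone_id):
    """Return a single zone, or 404 if unknown."""
    zone = ZONES.get(zone_id)
    return (jsonify({"zoneInfo": zone}), 200) if zone else ("", 404)

if __name__ == "__main__":
    app.run(port=8080)
```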

 

End-to-End Validation

The description below shows an example of the end-to-end deployment and validation of operation for Azure IoT Edge. The PCEI R4 End-to-End Validation Guide provides examples for AWS Greengrass Core and the ETSI MEC Location API App.

 

Description of components of the end-to-end validation environment:

  • EMCO – Edge Multi-Cloud Orchestrator deployed in the K8S cluster.
  • Edge K8S Clusters – Kubernetes clusters on which the PCE Apps (Azure IoT Edge, AWS GGC) and the 3PE App (ETSI Location API Handler) are deployed.
  • Public Cloud – IaaS/SaaS (Azure, AWS).
  • PCE – Public Cloud Edge App (Azure IoT Edge, AWS GGC)
  • 3PE – 3rd-Party Edge App (PCEI Location API App)
  • Private Interconnect / Internet – Networking between IoT Device/Client and PCE/3PE as well as connectivity between PCE and PCC.
  • IoT Device – Simulated Low Power Wide Area (LPWA) IoT Client.
  • IP Network (vEPC/UPF) – Network providing connectivity between the IoT Device and PCE/3PE.
  • MNO DC – Mobile Network Operator Data Center.
  • Edge DC – Edge Data Center.
  • Core DC – Public Cloud.
  • Developer – an individual/entity providing PCE/3PE App.
  • Operator – an individual/entity operating PCEI functions.

The end-to-end PCEI validation steps are described below (the step numbering refers to Figure 5):

  1. Deploy EMCO on K8S
  2. Deploy Edge K8S clusters
  3. Onboard Edge K8S clusters onto EMCO
  4. Provision Public Cloud Core Service and Push Custom Module for IoT Edge
  5. Package Azure IoT Edge and AWS GGC Helm Charts into EMCO application tar files
  6. Onboard Azure IoT Edge and AWS GGC as a service/application into EMCO
  7. Deploy Azure IoT Edge and AWS GGC onto the Edge K8S clusters
  8. Verify that all pods come up and register with the Azure IoT Hub and AWS IoT Core
  9. Deploy a custom LPWA IoT module into Azure IoT Edge on the worker cluster
  10. Successfully pass LPWA IoT messages from a simulated IoT device to Azure IoT Edge, decode the messages, and send them to the Azure IoT Hub (see the sketch below)
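
As a flavor of step 10, here is a minimal sketch of a simulated device using the azure-iot-device Python SDK. The connection string and payload fields are placeholders, and this is not the blueprint’s actual validation code.

```python
"""Illustrative simulated IoT client sending telemetry to Azure IoT Hub."""
import json
import time
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string; obtained from the IoT Hub device registry.
CONN_STR = "HostName=<hub>.azure-devices.net;DeviceId=<id>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
client.connect()

for i in range(5):
    payload = {"seq": i, "temperature": 21.5}  # stand-in for an LPWA reading
    msg = Message(json.dumps(payload))
    msg.content_type = "application/json"
    msg.content_encoding = "utf-8"
    client.send_message(msg)  # decoded edge-side by the custom module
    time.sleep(2)

client.shutdown()
```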

For more information on PCEI R4: https://wiki.akraino.org/x/ECW6AQ

PCEI Release 5 Preview

PCEI Release 5 will feature expanded capabilities and introduce new features as shown below:

For a short video demonstration of PCEI Enabler Release 5 functionality based on ONAP/CDS with GIT integration, NBI API Handler, Helm Chart Processor, Terraform Plan Executor and EWBI Interconnect please refer to this link:

https://wiki.akraino.org/download/attachments/28968608/PCEI-ONAP-DDF-Terraform-Demo-Audio-v5.mp4?api=v2

Acknowledgements

Project Technical Lead: 

Oleg Berzin, Equinix

Committers:

Jian Li, China Mobile
Oleg Berzin, Equinix
Tina Tsou, Arm

Contributors: 

Mehmet Toy, Verizon
Tina Tsou, Arm
Gao Chen, China Unicom
Deepak Vij, Futurewei
Vivek Muthukrishnan, Aarna Networks
Kavitha Papanna, Aarna Networks
Amar Kapadia, Aarna Networks