Akraino and CAMARA Communities Join Forces to Boost API Integration in Edge Computing

By Akraino, Blog

At the Hefei High-tech Integrated Circuit Incubation Center in Anhui Province, a panel discussion moderated by Guanyu Zhu from Huawei’s Cloud Network OSDT team brought together experts from the Akraino and CAMARA communities, along with industry professionals, to explore the significance of APIs in edge computing and the potential for collaboration between the two communities. Prominent panelists included Tina Tsou, LF Edge Board Chair; Leo Li from the Akraino TSC; Gao Chen, senior engineer at China Unicom; and Shuting Qing, open-source ecosystem expert at Huawei’s Cloud Network OSDT.

Shuting Qing provided insights into the origins of the CAMARA project, explaining that “Capability exposure aims to encapsulate these capabilities into a simple interface and expose it to developers for everyone to use.” CAMARA is a vital initiative designed to generate new revenue streams for European operators that have made substantial investments in 4G/5G networks. Qing emphasized the need to clarify the value scenarios and focus on monetization.

Leo Li discussed Akraino’s eagerness to collaborate with emerging communities like CAMARA, emphasizing the need for standardized edge computing hardware modules and the adoption of novel hardware interface technologies such as Ethernet, PCIe, and UCIe. Li highlighted the importance of integrating standardized hardware with an API-based business development model, stating, “We hope to further enhance the business flexibility of operators while promoting hardware standardization, and reduce costs and power consumption through standardization in conjunction with the trend of service APIization.”

The panelists also shared their views on the commercial realization of telecom edge cloud. Gao Chen emphasized the standardization of east-west interfaces, highlighting the importance of providing a unified and user-friendly API to access network capabilities, while Tina Tsou pointed out the potential benefits for various stakeholders, noting that “Edge computing can provide better performance and services, thereby increasing customer loyalty and revenue.”

Addressing potential challenges in implementing commercialization ideas, Leo Li indicated that “We believe that cost will be a crucial factor in the monetization process of edge cloud.” He advocated for standardizing hardware specifications and implementation methods and adopting an API-based business development model to minimize costs.

Tina Tsou expressed her expectations for the CAMARA community, emphasizing the need for more flexible and customizable edge computing capabilities, stronger equipment and data management capabilities, and enhanced security and credibility guarantees. She also called for increased cooperation with other open-source projects.

Shuting Qing and Tina Tsou shared their thoughts on how Akraino and CAMARA could work together to create a more open, collaborative, and innovative edge computing ecosystem. Qing suggested that since CAMARA is discussing capability exposure, it would be worth exploring the integration of edge computing open capabilities, such as ETSI MP1, into CAMARA. Meanwhile, Tsou proposed collaborating on the edge computing framework, openness, security and privacy protection, and the expansion of application scenarios.

The panel discussion offered valuable insights into the role of APIs in edge computing and the potential collaboration between Akraino and CAMARA. As the industry continues to develop, the joint efforts of these two communities could result in a more open, collaborative, and innovative edge computing ecosystem that benefits all stakeholders.

Akraino Displays Robotics Blueprint at ONE Summit 2022

By Akraino, Blog, Event

By Jeff Brower, CEO at Signalogic & LF Edge community member

At the ONE Summit (ONES) in Seattle in November, the Linux Foundation Edge community (LF Edge) presented state-of-the-art edge computing in areas of Telco, Oil & Gas, Manufacturing, and Retail. The Akraino blueprint “Robot Architecture Based on SSES” was selected by the LF Edge conference showcase committee to exhibit in the Manufacturing category. This blueprint is architected by Fujitsu and focuses on two key areas in robotics:

  • manipulating elastic and non-uniform objects with variable shapes and surfaces, under variable environmental conditions
  • safe and reliable robot-human interaction


Fujitsu and Signalogic personnel manned the Manufacturing kiosk, and we’re happy to report the exhibit was well-attended and effective. While we didn’t see a pre-pandemic level of attendance (maybe half compared to 2019), that was made up for by the enthusiasm and energy of attendees and exhibitors. It was a great feeling driving into Seattle on a cold but sunny day, negotiating the city’s notoriously gnarly freeway design and traffic, arriving at the Sheraton, and then focusing 100% on presenting LF Edge, Akraino, and robotics to technical and business developers! We can confidently report that in-person conferences are back and thriving.

As it turns out, the conference format was effective for promoting robotics as well as Akraino and LF Edge. As one example, our blueprint’s project team leader, Fukano Haruhisa-san from Fujitsu’s development labs, gave a technical presentation in the early afternoon of Day 1.
Then later in the day, while dinner was served near the exhibit area, attendees who had attended Fukano-san’s presentation zeroed in on our kiosk. They had been busy juggling their conference schedule, but now they had questions and were ready to dig deeper. That was also our chance to promote LF Edge and Akraino. Naturally we took full advantage 😀

Customer discussions at the kiosk were both wide-ranging and in-depth. Of particular concern is how to merge requirements for compute-intensive onboard processing (i.e., on the robot) with cloud processing. There is a mix of needs, including mapping, handling unusual and as-yet-unknown objects, failure prediction, real-time speech recognition, background noise removal, natural language processing, and more. Some needs can be met in the cloud, and some demand “never down, never out” capability. The former can be met with containerization, Kubernetes, and other CI/CD and automation tools, while the latter requires intensive onboard computation. Of course, any onboard computation faces severe limitations in size (form factor), weight, and available power. It’s a fascinating problem in edge computing and engineering tradeoffs.

Given the effectiveness of the ONE Summit format, I strongly urge the LF Edge board to organize and promote more combined technical + exhibit events. The exhibit component does two important things: (i) encourages effort and progress among blueprint participants, and (ii) provides feedback to shape and guide blueprints moving forward. A little pressure to meet conference schedule deadlines and get demos working is a good thing, and the payoff for LF Edge is increased industry exposure for its member projects.

Congratulations to “Team DOMINO,” Winner of the 2022 ETSI & LF Edge Hackathon

By Akraino, Blog

The 2022 Edge Hackathon, supported by ETSI and LF Edge, concluded last month with 15 teams worldwide building Edge applications or solutions with ETSI Multi-access Edge Computing (MEC) APIs and LF Edge Akraino Blueprints. A team from LF Edge member organizations Equinix and Aarna Networks, codenamed Team DOMINO, won first place in the Edge Hackathon for their innovative edge application in 5G scenarios using the Akraino Public Cloud Edge Interface (PCEI) blueprint.

The Akraino PCEI blueprint enables multi-domain infrastructure orchestration and cloud native application deployment across public clouds (core and edge), edge clouds, interconnection providers and network operators.

By using the PCEI blueprint, Team DOMINO’s solution demonstrates orchestration of federated MEC infrastructure and services, including 5G Control and User Plane Functions, MEC, and Public Cloud IaaS/SaaS, across two operators/providers (a 5G operator and a MEC provider). It also demonstrates deployment and operation of end-to-end cloud native IoT applications that make use of 5G access and are distributed both across geographic locations and across hybrid MEC (edge cloud) and Public Cloud (SaaS) infrastructure.

Team DOMINO showed how telco providers can enable sharing of their services in a MEC Federation environment by orchestrating bare metal servers and their software stack, along with 5G control plane and user plane functions. In addition, by orchestrating the interconnection between the 5G provider and the MEC provider, connectivity to a public cloud, and the IoT application together with the MEC Location API service, Team DOMINO demonstrated MEC Service Federation for location-aware IoT.

Learn more about this solution and the Akraino PCEI blueprint on the Akraino wiki.

Congratulations to team DOMINO & everyone involved in this use case and blueprint!

Your Guide to LF Edge (+ Related) Sessions at ONE Summit

By Akraino, Blog, EdgeX Foundry, Event, Home Edge, LF Edge, Open Horizon, Project EVE

In case you missed it, the ONE Summit agenda is now live! With 70+ sessions delivered by speakers from over 50 organizations, ONE Summit lets you meet industry experts who will share their edge computing knowledge across 5G, factory floor, Smart Home, Robotics, government, Metaverse, and VR use cases, using LF Edge projects including Akraino, EdgeX Foundry, EVE, and more.

Save your seat for the ONE Summit today and add these edge sessions to your schedule. We hope to see you in Seattle, WA November 15-16!

Tuesday, November 15:

  • Proliferation of Edge Computing in Smart Home
    • Speakers:
      • Suresh Lalapet Chakravarthy, Staff Engineer, Samsung R&D Institute India – Bangalore
      • Nitu Sajjanlal Gupta, Lead Engineer, Samsung R&D Institute India – Bangalore
    • Featured LF project: Home Edge

  • 4:20pm – 5:30pm
    • Featured LF project: Project EVE
Wednesday, November 16

  • Deploying and Automating at the Edge
    • Speakers:
      • William Brooke Frischemeier, Sr. Director, Head of Product Management, Unified Cloud BU, Rakuten Symphony
      • Mehran Hadipour, VP, BD & Tech Alliances, Rakuten Symphony

Hurry! Early Bird (Corporate) registration closes September 9! Bookmark the ONE Summit website to easily find updates as more event news is announced, and follow LF Edge on Twitter to hear more about the event. We hope to see you in Seattle soon!


LF Edge Releases Industry-Defining Edge Computing White Paper to Accelerate Edge/ IoT Deployments

By Akraino, Announcement, Baetyl, EdgeX Foundry, eKuiper, Fledge, Home Edge, LF Edge, Open Horizon, Project EVE, Secure Device Onboard, State of the Edge

Collaborative community white paper refines the definitions and nuances of open source edge computing across telecom, industrial, cloud, enterprise and consumer markets

SAN FRANCISCO – June 24, 2022 – LF Edge, an umbrella organization under the Linux Foundation that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced continued ecosystem collaboration via a new collaborative white paper, “Sharpening the Edge II: Diving Deeper into the LF Edge Taxonomy & Projects.”

A follow-up to the LF Edge community’s original, collaborative 2020 paper, which provided an overview of the organization and detailed the LF Edge taxonomy, high-level considerations for developing edge solutions, and key use cases, the new publication dives deeper into key areas of edge manageability, security, connectivity, and analytics, and highlights how each project addresses these areas. The paper demonstrates the maturation of the edge ecosystem and how the rapidly growing LF Edge community has made great progress over the past two years towards building an open, modular framework for edge computing. As with the first publication, the paper addresses a balance of interests spanning the cloud, telco, IT, OT, IoT, mobile, and consumer markets.

“With the growing edge computing infrastructure market set to be worth up to $800B by 2028, our LF Edge project communities are evolving,” said Jason Shepherd, VP Ecosystem, ZEDEDA, and former LF Edge Governing Board Chair. “This paper outlines industry direction through an LF Edge community lens. With such a diverse set of knowledgeable stakeholders, the report is an accurate reflection of a unified approach to defining open edge computing.”

“I’m eager to continue to champion and spearhead the great work of the LF Edge community as the new board chair,” said Tina Tsou, new Governing Board chair, LF Edge.  “The Taxonomy white paper that demonstrates the accelerated community momentum seen by open source edge communities is really exciting and speaks to the power of open source.” 

The white paper, which is now available for download, was put together as the result of broad community collaboration, spanning insights and expertise from subject matter experts across LF Edge project communities: Akraino, EdgeX Foundry, EVE, Fledge, Open Horizon, State of the Edge, Alvarium, Baetyl, eKuiper, and FIDO Device Onboard.

ONE Summit North America 2022

Join the broader open source ecosystem spanning Networking, Edge, Access, Cloud and Core at ONE Summit North America, November 15-16 in Seattle, Wash. ONE Summit is the one industry event focused on best practices, technical challenges, and business opportunities facing decision makers across integrated verticals such as 5G, Cloud, Telco, and Enterprise Networking, as well as Edge, Access, IoT, and Core. The Call for Proposals is now open through July 8, 2022. Sponsorship opportunities are also available. 


About The Linux Foundation 

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page. Linux is a registered trademark of Linus Torvalds.



Calling all Edge App Developers! Join the ETSI/LF Edge Hackathon

By Akraino, Blog, EdgeX Foundry, LF Edge

We’re partnering with ETSI ISG MEC to host a Hackathon to encourage open edge developers to build solutions that leverage ETSI MEC Service APIs and LF Edge (Akraino) blueprints. Vertical use cases could include Automotive, Mixed and Augmented Reality, Edge Computing and 5G, and more. Teams are highly encouraged to be creative and to propose solutions in additional application verticals.

Here’s how it works:

  • Participants will be asked to develop an innovative Edge Application or Solution, utilizing ETSI MEC Service APIs and LF Edge Akraino Blueprints.
  • Solutions may include any combination of client apps, edge apps or services, and cloud components.
  • The Edge Hackathon will run remotely from June to September, with a short-list of the best teams invited to compete with demonstrations and a Hackathon “pitch-off” at the Edge Computing World Global event in Santa Clara, California (Silicon Valley), October 10th-12th.

Submissions are due 29th June 2022.  More details, including how to register, are available here.

Additional background information:

Edge Computing provides developers with localized, low-latency resources that can be utilized to create new and innovative solutions, which are essential to many application vertical markets in the 5G era.

  • ETSI’s ISG MEC is standardizing an open environment that enables the integration of applications from infrastructure and edge service providers across MEC (Multi-access Edge Computing) platforms and systems. It offers a set of open, standardized Edge Service APIs to enable the development and deployment of edge applications at scale.
    • Teams will be provided with a dedicated instance of the ETSI MEC Sandbox for their work over the duration of the Hackathon. The MEC Sandbox is an online environment where developers can access and interact with live ETSI MEC Service APIs in an emulated edge network set in Monaco, which includes 4G, 5G, and WiFi network configurations, single and dual MEC platform deployments, and more.
  • LF Edge’s Akraino project offers a set of open infrastructure and application Blueprints for Edge, spanning a broad variety of application vertical use-cases. Each blueprint offers a high-quality full-stack solution that is tested and validated by the community.

However, teams are not required to use a specific edge hosting platform or MEC lifecycle management APIs. Developers may use any virtualization environment of their choosing. 


Overview of Akraino R5’s Project Cassini: IoT and Infrastructure Edge Blueprint Family

By Akraino, Blog

By Olivier Bernard and Parag Beeraka 

This post provides an overview of the Smart Cities Akraino blueprint as well as an overview of key features and implementations of PARSEC in Akraino’s Release 5.

Overview – Akraino Blueprint: Smart Cities

The purpose of the Smart Cities blueprint family is to provide an edge computing platform based on Arm SoCs and to improve deployment flexibility and security within edge computing. The high-level relationships between functional domains are shown in the figure below:

(For the full description of the Smart Cities Reference Architecture please refer to the Smart Cities Documents.)


Smart Cities in Akraino R5 – Key Features and Implementations in Akraino Release 5:

The Smart Cities blueprint’s security component is Parsec, first available in Akraino’s Release 5. The following is a brief introduction to Parsec.

Parsec is the Platform AbstRaction for SECurity, a new open-source initiative to provide a common API to secure services in a platform-agnostic way.

Parsec aims to define a universal software standard for interacting with secure object storage and cryptography services, creating a common way to interface with functions that would traditionally have been accessed by more specialised APIs. Parsec establishes an ecosystem of developer-friendly libraries in a variety of popular programming languages. Each library is designed to be highly ergonomic and simple to consume. This growing ecosystem will put secure facilities at the fingertips of developers across a broad range of use cases in infrastructure computing, edge computing and the secure Internet of Things.
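To make the abstraction idea concrete, here is a minimal, runnable Python sketch of the pattern Parsec embodies: one client-facing API fronting interchangeable secure backends. This is not the real Parsec API; every class and method name here is invented for illustration, and the software keystore stands in for what would be a TPM or HSM in practice.

```python
import hashlib
import hmac
import os
from abc import ABC, abstractmethod

class SecureBackend(ABC):
    """Illustrative stand-in for a platform-specific secure service
    (e.g. a TPM, an HSM, or a software keystore)."""
    @abstractmethod
    def generate_key(self, name: str) -> None: ...
    @abstractmethod
    def sign(self, name: str, data: bytes) -> bytes: ...

class SoftKeystore(SecureBackend):
    """Software-only backend: keys live in process memory."""
    def __init__(self):
        self._keys = {}
    def generate_key(self, name):
        self._keys[name] = os.urandom(32)
    def sign(self, name, data):
        return hmac.new(self._keys[name], data, hashlib.sha256).digest()

class SecurityClient:
    """One client-facing API regardless of which backend is present,
    mirroring Parsec's idea of a common interface over diverse hardware."""
    def __init__(self, backend: SecureBackend):
        self._backend = backend
    def generate_key(self, name):
        self._backend.generate_key(name)
    def sign(self, name, data):
        return self._backend.sign(name, data)

# Application code talks only to SecurityClient; swapping SoftKeystore
# for an HSM-backed implementation would not change this code at all.
client = SecurityClient(SoftKeystore())
client.generate_key("device-identity")
sig = client.sign("device-identity", b"firmware-manifest")
print(len(sig))  # 32-byte HMAC-SHA256 tag
```

The point of the sketch is the seam: because the application depends only on the abstract interface, the same code runs on any platform for which a backend exists, which is exactly the portability Parsec targets.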



(For more information on Parsec, see the Parsec project documentation.)


Software Defined Camera (SDC) blueprint

Security cameras are seeing increasing growth in many subsegments, such as commercial surveillance cameras and consumer surveillance cameras, and in many other devices such as dashcams, baby monitors, and other IoT vision devices. The total surveillance market is expected to be around $44B in the year 2025, growing at a CAGR of 13%. The other exciting thing happening in this market, along with the increase in unit shipments of surveillance cameras, is the increasing use of artificial intelligence and machine learning (AI/ML) in these cameras and IoT devices. It is estimated that there are already a billion cameras installed in the world, and this number will reach 2 billion by the year 2024. However, security cameras are very different from many other devices because once installed, they stay installed for more than five years. So it is critical to ensure that these devices continue to provide enhanced functionality over time.

In today’s world, many technologies are “software defined”. This has been possible because of many advancements in technology. At the bottom of the stack is the operating system, then the virtualization layer, which kicked off the original software defined compute with virtual machines. Then came containers, and then container orchestrators such as k8s and k3s. These technological advancements paved the path to the software defined data center. In the data center, after software defined compute, software defined networking and software defined storage started to become the norm.

Now the time is ripe to move software defined trends to edge devices as well. This transformation has already started across many segments, such as automotive (cars, trucks, etc.). We see this trend applying to many other edge devices, and one of them is the camera, or as we and many others call it, the smart camera. The idea is that once you buy a camera, the hardware is advanced enough that you will receive continuous software updates, which can be neural network model updates catered to the specific data you might have captured using the camera, or other updates related to the functionality and security of the device.


By designing future camera products with cloud native capabilities in mind, one will be able to scale camera and vision products to unprecedented levels. If all the applications are deployed using a service oriented architecture with containers, a device can be continuously updated. One of the key advantages of this architecture is enabling new use case scenarios that one might not have envisioned in the past, with simple on-demand service deployment post-sale through an app store. A simple example is deploying a new and updated license plate recognition app a year after the purchase of the cameras. Another key advantage of this architecture is enabling continuous machine learning model updates based on the data pertinent to the camera installation, i.e., being able to use the data the camera is capturing to train the models. With this, the model remains relevant to the specific use case, thus increasing the value of the camera itself. All of this is possible by enabling a service oriented architecture model using containers and thinking cloud native first.

The other important aspect that will really bring together security camera and vision products is adherence to common standards at the silicon and platform level. LF Edge member company Arm has started an initiative called Project Cassini, and Arm SystemReady® is a key component of this initiative. As part of Arm SystemReady®, Arm is working with many ecosystem partners (OEMs, ODMs, silicon vendors) to drive a standards-based approach in order to scale deployments by replacing custom solutions with standards-based solutions and becoming the platform of choice for future camera deployments.

To address and simplify these challenges, Arm is designing and offering an open source reference solution to be leveraged collaboratively for future software defined cameras, based on established standards and focused on security, ML, imaging, and cloud native. The basis of the reference solution is a SystemReady® certified platform. SystemReady® platforms implement UEFI, U-Boot, and standards-based Trusted Firmware.

  • With the base software layers compliant with SystemReady®, standard Linux distributions such as Fedora and openSUSE, or Yocto-based builds, can run on these platforms. 
  • The container orchestration service layer can then run containerized applications using k3s. 
  • The next software layer is ML and computer vision libraries. Arm supports ML functions through Arm NN and the Arm Compute Library, so that users can develop their ML applications using standard open-source ML frameworks such as PyTorch, TensorFlow, etc. 
  • Similarly, for computer vision, it is critical to support open standards such as Video4Linux, OpenCV, and BLAS libraries. 
  • Both complementary and necessary is security. This is where orchestration and security microservices such as Parsec, an open-source platform abstraction layer for security, along with the PSA certification process, are critical, so that end users build confidence that the product will be secure when deployed in the field. These secure services are the foundation to support and enable containerized applications for inferencing, analytics, and storage.

Learn more about Project Cassini here.

Introduction: Akraino’s Federated Multi-Access Edge Cloud Platform Blueprint in R5

By Akraino, Blog


This blog provides an overview of the “Federated Multi-Access Edge Cloud Platform” blueprint, part of the Akraino Public Cloud Edge Interface (PCEI) blueprint family, focusing on the key features and components implemented in Akraino Release 5. The key idea of this blueprint is a federated Multi-Access Edge Cloud (MEC) platform that showcases the interplay between the telco side and the cloud/edge side.

Before discussing the specifics of the blueprint implementation, the following subsection provides a brief description of what Multi-Access Edge Cloud (MEC) is and how it is emerging as an enabler for the 5G/AI-based application landscape.

What is MEC and what are its challenges?

MEC is a network architecture concept that enables cloud computing and IT capabilities to run at the edge of the network. Applications or services running on edge nodes, which are closer to end users, instead of in the cloud can enjoy the benefits of lower latency and an enhanced end-user experience. MEC essentially moves intelligence and service control from centralized data centers to the edge of the network, closer to the users. Instead of backhauling all the data to a central site for processing, it can be analyzed, processed, and stored locally, and shared upstream when needed. MEC solutions are closely integrated with the access network(s); such environments often include WiFi and mobile access protocols such as 4G LTE and 5G.

MEC opens up a plethora of potential vertical and horizontal use cases, such as Autonomous Vehicles (AV), Augmented Reality (AR) and Virtual Reality (VR), gaming, and applications enabled by Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), including autonomous navigation, remote monitoring using Natural Language Processing (NLP) or facial recognition, video analysis, and more. These emerging 5G/AI-based applications typically exhibit characteristics such as the following:

  • Low Latency requirements
  • Mobility
  • Location Awareness
  • Privacy/Security etc.

All of these characteristics pose unique challenges when developing emerging 5G/AI applications at the network edge. Supporting such an extensive feature set with the required flexibility, dynamicity, performance, and efficiency requires careful and expensive engineering effort, and needs the adoption of new ways of architecting the enabling technology landscape.

To this end, the proposed “Federated Multi-Access Edge Cloud Platform” blueprint provides the abstractions needed to address these challenges and, as a result, ushers in an application development environment that eases the development and deployment of these emerging applications. Subsequent sections delve into the blueprint details.

Blueprint Overview: Federated Multi-Access Edge Cloud Platform

The purpose of the “Federated Multi-Access Edge Cloud Platform” blueprint is to provide an end-to-end technology solution for a mobile game deployed across multiple heterogeneous edge nodes using various network access protocols, such as mobile, WiFi, and others. This blueprint demonstrates how an application leverages a distributed, multi-access network edge environment in order to get all the benefits of edge computing.

The diagram above highlights the use case scenario. On the left-hand side is the device, which is moving from location x to location y.

The use case goes through four distinct steps. The first step is the service discovery flow, followed by the game service flow. Once the device actually moves, it triggers an additional session migration flow, which includes a subsequent service discovery. Finally, once this migration happens, the UE attaches to the new edge node.
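The four steps above can be sketched as a small simulation. This is purely illustrative: the site names and the location-to-node mapping are invented, and the real platform performs discovery and migration through its APIs rather than a lookup table.

```python
# Hypothetical edge sites keyed by coverage area; names are illustrative only.
EDGE_SITES = {"area-x": "edge-node-1", "area-y": "edge-node-2"}

def discover(ue_location: str) -> str:
    """Service discovery: return the edge node serving the UE's current area."""
    return EDGE_SITES[ue_location]

def run_session(path):
    """Walk a UE through the four steps: discovery, game service,
    session migration on movement, and attachment to the new edge node."""
    node = discover(path[0])                        # step 1: service discovery
    events = [f"serving {path[0]} from {node}"]     # step 2: game service flow
    for loc in path[1:]:
        new_node = discover(loc)                    # step 3: rediscovery on movement
        if new_node != node:
            events.append(f"migrate session {node} -> {new_node}")
            node = new_node                         # step 4: UE attaches to new node
        events.append(f"serving {loc} from {node}")
    return events

# UE moves from location x to location y, as in the diagram above.
print(run_session(["area-x", "area-y"]))
```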

In order to support all of this, the platform provides two key abstractions:

  • Multi-Access/Mobile Operator Network Abstraction: Multi-access means the access network may be mobile, WiFi, or anything else; multi-operator means that even for the same 4G/5G access there can be different operators (Verizon, AT&T, etc.). There are various MEC edge nodes; they can be WiFi-based and they can come from different operators.
  • Cloud-Side Abstraction: Cloud side abstraction includes key architectural components to be described in the subsequent sections.

Functional Diagram: Federated Multi-Access Edge Cloud Platform

The key component is the federated multi-access edge platform itself. The platform sits between applications and the underlying heterogeneous edge infrastructure; it abstracts the multi-access interface and exposes developer-friendly APIs. This blueprint leverages the upstream KubeEdge project as the baseline platform, including the enhanced federation function (Karmada).

Telco/GSMA-side complexities (5GC, NEF, etc.) need to be thought through and designed appropriately in order to realize the extremely low latency requirements (10 ms) desired by typical MEC use cases. For multi-access, we may initially use a simulated mobile access environment to mimic real-time device access protocol conditions as part of the initial release(s).

Key Enabling Architectural Components

Federation Scheduler (Included in Release-5)

The Federation Scheduler acts as a “global scheduler”, responsible for application-QoS-oriented global scheduling in accordance with the placement policies. Essentially, it is a decision-making capability that can decide how workloads should be spread across different clusters, similar to how a human operator would. It maintains resource utilization information for all the MEC edge cloud sites. Cloud federation functionality in this blueprint is enabled using the open source Karmada project. The following is an architecture diagram for Karmada.

Karmada (Kubernetes® Armada) is a Kubernetes® management system that enables cloud-native applications to run across multiple Kubernetes® clusters and clouds with no changes to the underlying applications. By using Kubernetes®-native APIs and providing advanced scheduling capabilities, Karmada enables a truly multi-cloud Kubernetes® environment. It aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios, with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling. More details related to the Karmada project can be found here.

EdgeMesh (Included in Release-5)

EdgeMesh provides service mesh capabilities for edge clouds, supporting microservice communication across clouds and edges. EdgeMesh provides a simple network solution for inter-communication between services in edge scenarios (east-west communication).

The network topology for the edge cloud computing scenario is quite complex. Edge nodes are often not interconnected, and direct inter-communication of traffic between applications on these edge nodes is a highly desirable requirement for businesses. EdgeMesh addresses these challenges by shielding applications from the complex network topology at the edge. More details related to the EdgeMesh project can be found here.

Service Discovery (Not included in Release-5)

Service Discovery retrieves the endpoint address of the edge cloud service instance based on UE location, network conditions, signal strength, delay, application QoS requirements, and so on.
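
The selection logic can be sketched as a filter on QoS followed by a proximity choice. The instance records and thresholds below are hypothetical; a real implementation would draw on live network measurements rather than static data:

```python
# Hedged sketch of location/QoS-aware service discovery (illustrative fields only).
import math

INSTANCES = {
    "game-backend": [
        {"addr": "10.0.1.5:8080", "site": (31.2, 121.5), "rtt_ms": 9.0},
        {"addr": "10.0.2.5:8080", "site": (39.9, 116.4), "rtt_ms": 35.0},
    ]
}

def discover(service: str, ue_location: tuple[float, float], max_rtt_ms: float) -> str:
    """Return the instance closest to the UE among those meeting the QoS bound."""
    candidates = [i for i in INSTANCES[service] if i["rtt_ms"] <= max_rtt_ms]
    best = min(candidates, key=lambda i: math.dist(ue_location, i["site"]))
    return best["addr"]

# A UE near the first site resolves to that site's instance.
print(discover("game-backend", (31.0, 121.0), max_rtt_ms=50.0))
```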

Mobility Management (Not included in Release-5)

The cloud-core-side mobility service subscribes to UE location tracking events and resource rebalancing events. Upon UE mobility or resource rebalancing, the mobility service uses the cloud-core-side Service Discovery interface to retrieve the address of the new, appropriate location-aware edge node. It then initiates the UE application state migration process between edge nodes. A simple CRIU container migration strategy may not be enough; this is much more complex than typical VM migration.
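
The event-driven flow described above can be outlined as follows. Every name and step here is illustrative; in particular, real CRIU-based state migration involves checkpoint consistency, rollback, and traffic-steering concerns that this sketch deliberately omits:

```python
# Hedged outline of the mobility flow: on a UE mobility event, look up the new
# best edge node, then orchestrate application-state migration to it.

def find_edge_for(ue_id: str, new_cell: str) -> str:
    # Stand-in for the cloud-core Service Discovery call (hypothetical mapping).
    return {"cell-17": "edge-dallas", "cell-42": "edge-austin"}[new_cell]

def migrate(ue_id: str, src: str, dst: str) -> list[str]:
    # Checkpoint/transfer/restore outline (CRIU-style); each step can fail and
    # would need rollback handling in a real system.
    return [
        f"checkpoint app state for {ue_id} on {src}",
        f"transfer state {src} -> {dst}",
        f"restore app for {ue_id} on {dst}",
        f"update traffic rules: steer {ue_id} to {dst}",
    ]

def on_ue_moved(ue_id: str, old_edge: str, new_cell: str) -> list[str]:
    dst = find_edge_for(ue_id, new_cell)
    return migrate(ue_id, old_edge, dst) if dst != old_edge else []

for step in on_ue_moved("ue-1", "edge-dallas", "cell-42"):
    print(step)
```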

Multi-Access Gateway (Not included in Release-5)

The multi-access gateway controller manages the Edge Data Gateway and the Access APIG of edge nodes. The Edge Data Gateway connects with the edge gateway (UPF) of the 5G network system and routes traffic to containers on edge nodes. The Access APIG connects with the management plane of the 5G network system (such as CAPIF) and pulls QoS, RNIS, location, and other capabilities into the edge platform.

AutoScaling (Not included in Release-5)

Autoscaling provides the capability to automatically scale the number of Pods (workloads) based on observed CPU utilization (or other application-provided metrics). The autoscaler also provides vertical Pod autoscaling by adjusting a container's CPU and memory limits in accordance with the autoscaling policies.
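
For the horizontal case, the standard Kubernetes Horizontal Pod Autoscaler computes the target replica count with a documented formula, shown here as a short worked example:

```python
# The Kubernetes HPA computes the target replica count as:
#   desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)
import math

def desired_replicas(current: int, current_metric: float, target_metric: float) -> int:
    return math.ceil(current * current_metric / target_metric)

# 3 Pods averaging 90% CPU against a 60% target scale out to ceil(3 * 90/60) = 5 Pods.
print(desired_replicas(3, 90.0, 60.0))  # 5
```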

Service Catalog (Not included in Release-5)

Service Catalog provides a way to list, provision, and bind with services without needing detailed knowledge about how those services are created or managed.

Detailed Flow of the Various Architectural Components

What is included in Release-5

As mentioned earlier, the purpose of this blueprint is to provide an end-to-end technology solution for a mobile game deployed across multiple heterogeneous edge nodes using various network access protocols such as mobile, WiFi, and others. This blueprint demonstrates how an application can leverage a distributed, multi-access network edge environment to realize all the benefits of edge computing.

This is the very first release of this new blueprint as part of the Akraino PCEI family. The current focus for this release is to enable only the following two key architectural components:

  1. Open source Karmada based Cloud Federation
  2. EdgeMesh functionality

This blueprint will evolve as we incorporate remaining architectural components as part of the subsequent Akraino releases. More information on this blueprint can be found here.


Project Technical Lead: Deepak Vij, KubeEdge MEC-SIG member – Principal Cloud Technology Strategist at Futurewei Cloud Lab.


  • Peng Du, Futurewei Cloud Lab.
  • Hao Xu, Futurewei Cloud Lab.
  • Qi Fei, KubeEdge MEC-SIG member – Huawei Technologies Co., Ltd.
  • Xue Bai, KubeEdge MEC-SIG member – Huawei Technologies Co., Ltd.
  • Gao Chen, KubeEdge MEC-SIG member – China Unicom Research Institute
  • Jiawei Zhang, KubeEdge MEC-SIG member – Shanghai Jiao Tong University
  • Ruolin Xing, KubeEdge MEC-SIG member – State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications
  • Shangguang Wang, KubeEdge MEC-SIG member – State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications
  • Ao Zhou, KubeEdge MEC-SIG member – State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications
  • Jiahong Ning, KubeEdge MEC-SIG member – Southeastern University
  • Tina Tsou, Arm

Where the Edges Meet, Apps Land and Infra Forms: Akraino Release 5 Public Cloud Edge Interface

By Akraino, Blog

Written by Oleg Berzin, Ph.D., Co-chair of the Akraino Technical Steering Committee and Fellow, Technology and Architecture at Equinix


In the PCEI R4 blog we described the initial implementation of the blueprint. This blog focuses on new features and capabilities implemented in the PCEI in Akraino Release 5. Before discussing the specifics of the implementation, it is useful to go over the motivation for PCEI. Among the main drivers behind PCEI are the following:

  • Public Cloud Driven Edge Computing. Edge computing infrastructure and resources are increasingly provided by public clouds (e.g., AWS Outposts, IBM Cloud Satellite, Google Anthos). In the PCEI R4 blog we described various relationships between PCC (Public Cloud Core) and PCE (Public Cloud Edge), ranging from PCE being Fully-Coupled to PCC at the hardware, virtualization, application, and services layers to PCE being Fully-Decoupled from PCC at all these layers. This “degree of coupling” between PCE and PCC dictates the choice of orchestration entry points as well as the behavior of the edge infrastructure and applications running on it.
  • Hybrid infrastructure. Most practical deployments of edge infrastructure and applications are hybrid in nature, where an application deployed at the edge needs services residing in the core cloud to function (coupled model). In addition, a PCE application deployed at the edge may need to communicate with, and consume resources from, multiple PCC environments.
  • Multi-Domain Interworking. Individual infrastructure domains (e.g., edge, cloud, network fabric) present their own APIs and/or other provisioning methods (e.g., CLI), thus making end-to-end deployment challenging both in complexity and in time. A Multi-domain orchestration solution is needed to handle edge, cloud, and interconnection in a uniform and consistent manner.
  • Interconnection and Federation. There is a need for efficient and performant interconnection and resource distribution between edge and cloud, as well as between distributed edges proximal to end users. We would like to point out that a common assumption in many infrastructure orchestration solutions is that the fundamental L1/L2/L3 interconnection between edge clouds and core clouds, as well as between the edges, is available for overlay technologies such as SD-WAN or Service Mesh. We specifically see the need for the orchestration solution to be able to enable L2/L3 connectivity between the domains being orchestrated.
  • Bare Metal orchestration. As with the interconnection, many orchestration solutions assume that the bare metal compute/storage hardware and basic operating system resources are available for the deployment of virtualization and application/services layers. We would like to point out that in many scenarios this is not the case. 
  • Developer-centric capabilities. Capabilities such as Infrastructure-as-Code are becoming critical for activation and configuration of public cloud infrastructure components and interconnection, as well as for end-to-end application deployment integrated with CI/CD environments.

PCEI in Akraino R5

Public Cloud Edge Interface (PCEI) is a set of open APIs, orchestration functionalities and edge capabilities for enabling Multi-Domain Interworking across the Operator Network Edge, the Public Cloud Core and Edge, the 3rd-Party Edge as well as the underlying infrastructure such as Data Centers, Compute Hardware and Networks. 

Terraform-based Orchestration

One of the biggest challenges with multi-domain infrastructure orchestration is finding a common and uniform method of describing the required resources and parameters in different domains, especially in public clouds (PCC). Every public cloud offers a range of service categories, each containing a variety of services; each service has several components, and each component has multiple features with different parameters.

Terraform has emerged as a common Infrastructure-as-Code tool that makes it possible to abstract the diverse provisioning methods (API, CLI, etc.) used in the individual domains and to provision infrastructure components using a high-level language, provided a Terraform Provider is available for the domain.

The notable innovation in PCEI R5 is the integration of Terraform as a microservice within the PCEI orchestrator (CDS, see below). This enables several important orchestration properties:

  • Uniformity – use of the same infrastructure orchestration methods across public clouds, edge clouds and interconnection domains.
  • Transparency (model-free) – the orchestrator does not need to understand the details of the individual infrastructure domains (i.e., implement their models). It only needs to know where to retrieve the Terraform plans (programs) for the domain in question and execute the plans using the specified provider.
  • DevOps driven – the Terraform plans can be developed and evolved using DevOps tools and processes.

Examples of Terraform plans are shown below.
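
While the full plans are not reproduced here, the programmatic execution path can be pictured as a thin wrapper around the standard Terraform CLI verbs (init/plan/apply). The directory layout and variable names in this sketch are hypothetical, and the actual CDS executor and the GitLab retrieval step are not shown; the sketch only assembles the command lines, leaving actual invocation to the executor:

```python
# Minimal sketch of driving a retrieved Terraform plan programmatically.
# Only the standard CLI verbs (init, plan, apply) are assumed; paths/variables
# are illustrative.

def terraform_commands(plan_dir: str, variables: dict[str, str]) -> list[list[str]]:
    # Flatten {"k": "v"} into ["-var", "k=v", ...] for the CLI.
    var_args = [a for k, v in variables.items() for a in ("-var", f"{k}={v}")]
    return [
        ["terraform", f"-chdir={plan_dir}", "init", "-input=false"],
        ["terraform", f"-chdir={plan_dir}", "plan", "-input=false", *var_args],
        ["terraform", f"-chdir={plan_dir}", "apply", "-auto-approve", "-input=false", *var_args],
    ]

cmds = terraform_commands("plans/equinix-metal", {"metro": "da", "plan": "c3.small.x86"})
for c in cmds:
    print(" ".join(c))
```

This model-free shape is what gives the orchestrator its transparency: it only needs the plan's location and provider, not a model of the target domain.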

Open-Source Technologies in PCEI 

The PCEI blueprint makes use of the following open-source technologies and tools:

  • EMCO – Edge Multi-Cluster Orchestrator. EMCO is used as a multi-tenant service and application deployment orchestrator.
  • CDS – Controller Design Studio. CDS is used as the API Handler, Terraform Executor, Helm Chart Processor, Ansible Executor, GitLab Interface Handler.
  • Terraform. In PCEI R5, Terraform has been integrated with CDS to enable programmatic execution of Terraform plans by the PCEI Enabler to orchestrate PCC, PCE and interconnection infrastructure.
  • Kubernetes. Kubernetes is the underlying software stack for EMCO/CDS. Kubernetes is also used as the virtualization layer for PCE, on which edge applications are deployed using Helm.
  • Helm. Helm is used by EMCO for deployment of applications across multiple Kubernetes edge clusters.
  • GitLab. GitLab is used to store Terraform plans and state files, Helm charts, Ansible playbooks, Cluster configs for retrieval and processing by CDS using API calls.
  • Ansible. Ansible can be used by CDS to deploy Kubernetes clusters on top of bare metal and Linux.
  • Openstack. PCEI R5 can use Terraform to deploy IaaS infrastructure and apps on Openstack edge clouds.

Functional Roles and Components in the PCEI R5 Architecture

Key features and implementations in Akraino Release 5

  • Software Architecture Components
      • Edge Multi-Cluster Orchestrator (EMCO) 
      • Controller Design Studio (CDS) and Controller Blueprint Adapters (CBA)
      • Helm
      • Kubernetes
      • Terraform
  • Features and capabilities
    • NBI APIs
      • GitLab Integration
      • Dynamic Edge Cluster Registration
      • Dynamic App Helm Chart Onboarding
      • Automatic creation of Service Instances in EMCO and deployment of Apps
      • Automatic Terraform Plan Execution
    • Integrated Terraform Plan Executor
      • Azure (PCC), AWS (PCC)
      • Equinix Fabric (Interconnect)
      • Equinix Metal (Bare Metal Cloud for PCE)
      • Openstack (3PE)
    • Equinix Fabric Interconnect
    • Equinix Bare Metal orchestration
    • Multi-Public Cloud Core (PCC) Orchestration (Azure, AWS)
    • Kubernetes Edge
    • Openstack Edge
    • Cloud Native 5G UPF Deployment
    • Deployment of Azure IoT Edge PCE App
    • Deployment of PCEI Location API App 
    • Simulated IoT Client Code for end-to-end validation of Azure IoT Edge 
    • Azure IoT Edge Custom Software Module Code for end-to-end validation of Azure IoT Edge

DevOps driven Multi-domain INfrastructure Orchestration (DOMINO) 

In PCEI R5 we demonstrated the use of PCEI Enabler based on EMCO/CDS with integrated programmatic Terraform executor to orchestrate infrastructure across multiple domains and deploy an edge application. The DevOps driven Multi-domain Infrastructure Orchestration demo consisted of the following:

  • Deploy EMCO 2.0, CDS and CBAs.
  • Design Infrastructure using a SaaS Infrastructure Design Studio.
      1. Edge Cloud (Equinix Metal in Dallas, TX)
      2. Public Cloud (Azure West US)
      3. Interconnect (Equinix Fabric)
  • Push to GitLab.
      1. Cluster Info
      2. Application Helm Charts (Azure IoT Edge, kube-router)
      3. Terraform Plans
        1. Azure Cloud
        2. Equinix Interconnect
        3. Equinix Metal
  • Provision Infrastructure using CDS/Terraform.
      1. Bare Metal server in Equinix Metal Cloud in Dallas, TX
      2. Deploy K8S on Bare Metal
      3. Azure Cloud in West US (Express Route, Private BGP Peering, Express Route GW, VNET, VM, IoT Hub)
      4. Interconnect Edge Cloud with Public Cloud using Equinix Fabric L2
  • Deploy Edge Application (PCE).
      1. Dynamic K8S Cluster Registration to EMCO
      2. Dynamic onboarding of App Helm Charts to EMCO
      3. Composite cloud native app deployment and end-to-end operation
        1. Azure IoT Edge
        2. Custom Resource Definition for Azure IoT Edge
        3. Kube-router for BGP peering with Azure over ExpressRoute
  • Verify end-to-end IoT traffic flow.

The video recording of the PCEI R5 presentation and demonstration can be found at this link.

For more information on PCEI R5: 


Project Technical Lead:
Oleg Berzin, Equinix

Kavitha Papanna, Aarna Networks
Vivek Muthukrishnan, Aarna Networks
Jian Li, China Mobile
Oleg Berzin, Equinix
Tina Tsou, Arm

Mehmet Toy, Verizon
Tina Tsou, Arm
Gao Chen, China Unicom
Deepak Vij, Futurewei
Vivek Muthukrishnan, Aarna Networks
Kavitha Papanna, Aarna Networks
Amar Kapadia, Aarna Networks


Blogs of The AI Edge: I-VICS

By Akraino, Blog

By Zhuming Zhange & Hao Zhongwang

The trends, key technologies, and scenarios of I-VICS

Intelligent networking promotes the evolution of electric vehicles and their architecture

Infrastructure required for Intelligent Vehicle-Infrastructure Cooperation Systems (I-VICS)

  • Road: informatization, intelligence, and standardization
  • Communication: unified communication interfaces and protocols, coordinated vehicle-road interconnection
  • Network: vehicular wireless communication networks, narrowband Internet of Things
  • Services: high-precision time-space reference services, vehicle emergency systems, rapid assisted positioning services
  • Maps: basic maps and geographic information systems
  • Data: big data cloud platform, software

Use case 1: Safety Of The Intended Functionality (SOTIF)  and I-VICS

SOTIF (ISO/PAS 21448) emphasizes avoiding unreasonable risks caused by performance limitations of the intended functionality.

SOTIF emerged against the background of the development of intelligent driving.

Classified according to the functional chain of intelligent driving (perception, decision, execution), “functional performance limitations” manifest in three aspects:

  • Sensor perception limitations lead to scene recognition errors (including missed recognition of driver mis-operation)
  • Insufficient deep learning causes the decision algorithm to judge the scene incorrectly (including the wrong response to the driver’s mis-operation)
  • Actuator function limitations lead to deviation from the ideal target

  • For Area 2 (known unsafe scenarios), the basic idea of SOTIF is to identify risk scenarios through safety analysis and to develop countermeasures against them.
  • For Area 3 (unknown unsafe scenarios), the various scenarios that a car may encounter under various road conditions need to be identified (in theory) in the early stage of development.

Use case 2: Autonomous Valet Parking


  • Automatically drive a car from a pre-defined drop-off zone (e.g. the entrance to a carpark) to a parking spot indicated by an external system.
  • Park a car in a parking spot, starting in a lane near that parking spot.
  • Drive out of a parking spot.
  • Drive to a pre-defined pick-up zone (e.g. the exit from a carpark).
  • Automatically stop for obstacles while achieving the above.

AVP’s New features / benefits based on I-VICS:

  • Expand the perception range of the car
  • Improve perception capability and realize swarm intelligence
  • Help solve the problem of autonomous driving safety:

  – Convert unsafe scenarios into safe scenarios
  – Convert unknown scenes into known scenes