
Deep Learning Model Deployment with Edge Computing

By Blog

By Utpal Mangla (IBM), Luca Marchi (IBM), Shikhar Kwatra (AWS)

Smart edge devices across the world continuously interact and exchange information through machine-to-machine communication protocols. A multitude of sensors and endpoints generate enormous volumes of data that must be analyzed in real time and used to train complex deep learning models. From autonomous vehicles running computer vision algorithms to smart devices like Alexa running natural language processing models, deep learning has spread to a plethora of applications. 

Using deep learning, such edge devices can aggregate and interpret unstructured pieces of information (audio, video, text) and take ameliorative action, although this comes at the cost of increased power and performance demands. Deep learning requires substantial computation time and resources to process the streams of information generated by edge devices and sensors, and that volume of information can scale very quickly. Hence, bandwidth and network requirements need to be handled carefully to ensure that such deep learning models scale smoothly. 

If machine learning and deep learning models are not deployed directly on the edge devices, these smart devices have to offload the intelligence to the cloud. In the absence of a good network connection and sufficient bandwidth, that can be quite a costly affair. 

However, since these edge and IoT devices are expected to run on low power, deep learning models need to be tweaked and packaged efficiently in order to run smoothly on them. Furthermore, many existing deep learning models and complex ML use cases rely on third-party libraries, which are difficult to port to these low-power devices. 

The global edge computing in manufacturing market is projected to reach USD 12,460 million by 2028, up from USD 1,531.4 million in 2021, at a CAGR of 34.5% during 2022-2028. [1]

Simultaneously, the global edge artificial intelligence chips market size was valued at USD 1.8 billion in 2019 and is expected to grow at a compound annual growth rate (CAGR) of 21.3% from 2020 to 2027. [2]

Recent statistical results and experiments have shown that offloading complex models to edge devices can effectively reduce power consumption. However, pushing such deep learning models to the edge has not always met latency requirements. Hence, offloading to the edge remains use-case dependent and may not work for every task at this point. 

We now have to build deep learning model compilers capable of optimizing these models into executable code, with versioning, for the target platform. Such compilers also need to be compatible with existing deep learning frameworks, including but not limited to MXNet, Caffe, TensorFlow and PyTorch. Lightweight, low-power messaging and communication protocols will further enable swarms of such devices to collaborate continuously and solve multiple tasks in parallel at an accelerated pace. 

For instance, TensorFlow Lite can be used for on-device inferencing. First, we pick an existing versioned model or develop a new one. Then, in the conversion stage, the TensorFlow model is converted into a compressed FlatBuffer. The resulting .tflite file is loaded onto an edge device during the deployment stage. In this manner, various audio classification, object detection and gesture recognition models have been successfully deployed on edge devices. 
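As a minimal sketch of that convert-then-deploy workflow (the SavedModel path and input values are placeholders, and quantization settings will vary by use case):

    import numpy as np
    import tensorflow as tf

    # Conversion stage: compress a trained SavedModel into a FlatBuffer.
    converter = tf.lite.TFLiteConverter.from_saved_model("my_model/")  # placeholder path
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize to shrink the model
    with open("model.tflite", "wb") as f:
        f.write(converter.convert())

    # Deployment stage, on the edge device: load the FlatBuffer and run inference.
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    sample = np.zeros(inp["shape"], dtype=np.float32)  # stand-in for real sensor input
    interpreter.set_tensor(inp["index"], sample)
    interpreter.invoke()
    print(interpreter.get_tensor(out["index"]))

On the device itself, the lightweight tflite-runtime package can stand in for the full TensorFlow dependency in the inference half of this sketch.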

Hence, deep learning models ported to purpose-built libraries and frameworks (such as TensorFlow Lite or the Arm Compute Library for embedded systems) can be optimized to run on IoT devices. To scale further with efficiency, we first need to compile and optimize deep learning models directly into executable code for the target device, and we need an extremely lightweight OS to enable multitasking and efficient communication between tasks.

References:

[1] https://www.prnewswire.com/news-releases/edge-computing-market-size-to-reach-usd-55930-million-by-2028-at-cagr-31-1—valuates-reports-301463725.html
[2] https://www.grandviewresearch.com/industry-analysis/edge-artificial-intelligence-chips-market

 

IoT Enabled Edge Infusion

By Blog

By Shikhar Kwatra, Utpal Mangla, Luca Marchi

The sheer volume of information processed and generated by Internet of Things-enabled sensors has been burgeoning in the past few years. As Moore's law still holds, with the number of transistors in an integrated circuit doubling roughly every two years, the ability of edge devices to handle vast amounts of data with complex designs is expanding on a similarly large scale. 

Most current IT architectures cannot keep up with this growing data volume while maintaining the processing power to understand, analyze, process and generate outcomes from the data in a quick and transparent fashion. That is where IoT solutions meet edge computing. 

Transmitting data to a centralized location across multiple hops, for instance from remote sensors to the cloud, introduces additional transport delays, security concerns and processing latency. The ability of edge architectures to move the necessary centralized framework directly to the edge in a distributed fashion, so that data is processed at its originating source, has been a great improvement in the IoT-edge industry.

As IoT projects scale with edge computing, it becomes possible to deploy IoT projects efficiently, reduce security risks to the IoT network, and add complex processing to the IoT network through machine learning and deep learning frameworks. 

International Data Corporation’s (IDC) Worldwide Edge Spending Guide estimates that spending on edge computing will reach $24 billion in 2021 in Europe. It expects spending to continue to experience solid growth through 2025, driven by its role in bringing computing resources closer to where the data is created, dramatically reducing time to value, and enabling business processes, decisions, and intelligence outside of the core IT environment. [1]

Sixty percent of organizations surveyed in a recent study by RightScale agree that the holy grail of cost savings lies in cloud computing initiatives. Edge AI, in contrast, eliminates the exorbitant expenses incurred by AI and machine learning processes carried out in cloud-based data centers. [2]

Edge is now widely being applied in a cross-engagement matrix across various sectors:

  1. Industrial – IoT devices and sensors where data loss is unacceptable
  2. Telecommunication – Virtual and Software Defined solutions involving ease of virtualization, network security, network management etc.
  3. Network Cloud – Hybrid and Multi-cloud frameworks with peering endpoint connections, provisioning of public and private services requiring uptime, security and scalability
  4. Consumer & Retail framework – Personal engagement devices, for instance wearables, speakers, AR/VR, TVs, radios and smartphones, wherein data security is essential for handling sensitive information 

Since not all edge devices are powerful enough to handle complex processing tasks while also running effective security software, such tasks are usually pushed to an edge server or gateway. In that sense, the gateway becomes the communications hub, performing critical network functions including but not limited to data accumulation, sensor data processing and sensor protocol translation, before forwarding the data to the on-premises network or cloud. 

From autonomous vehicles running different edge AI solutions to actuators connecting to the network over Wi-Fi or Zigbee, and with the growing complexity of fusing different protocols, new edge capabilities will continue to surface in the coming years.

References: 

[1] https://www.idc.com/getdoc.jsp?containerId=prEUR148081021

[2] https://www.itbusinessedge.com/data-center/developments-edge-ai/

 

Overview of Akraino R5’s Project Cassini: IoT and Infrastructure Edge Blueprint Family

By Akraino, Blog

By Olivier Bernard and Parag Beeraka 

This post provides an overview of the Smart Cities Akraino blueprint as well as an overview of key features and implementations of PARSEC in Akraino’s Release 5.

Overview – Akraino Blueprint: Smart Cities

The purpose of the Smart Cities blueprint family is to provide an edge computing platform based on Arm SoCs and to improve deployment flexibility and security within edge computing. The high-level relationships between functional domains are shown in the figure below:

(For the full description of the Smart Cities Reference Architecture please refer to the Smart Cities Documents.)

 

Smart Cities in Akraino R5 – Key Features and Implementations

The Smart Cities blueprint's security component is PARSEC, first available in Akraino's Release 5. The following is a brief introduction to PARSEC.

Parsec is the Platform AbstRaction for SECurity, a new open-source initiative to provide a common API to secure services in a platform-agnostic way.

PARSEC aims to define a universal software standard for interacting with secure object storage and cryptography services, creating a common way to interface with functions that would traditionally have been accessed by more specialised APIs. Parsec establishes an ecosystem of developer-friendly libraries in a variety of popular programming languages. Each library is designed to be highly ergonomic and simple to consume. This growing ecosystem will put secure facilities at the fingertips of developers across a broad range of use cases in infrastructure computing, edge computing and the secure Internet of Things.


(For more information on PARSEC, see https://parallaxsecond.github.io/parsec-book/.)

 

Software Defined Camera (SDC) blueprint

Security cameras are seeing increasing growth in many subsegments, such as commercial surveillance cameras and consumer surveillance cameras, and in many other devices such as dashcams, baby monitors and other IoT vision devices. The total surveillance market is expected to be around $44B in 2025, growing at a CAGR of 13%. The other exciting thing happening in this market, along with the increase in unit shipments of surveillance cameras, is the growing use of artificial intelligence and machine learning (AI/ML) in these cameras and IoT devices. It is estimated that there are already a billion cameras installed in the world, and this number will reach 2 billion by 2024. However, security cameras are very different from many other devices: once installed, they stay installed for more than five years. So it is critical to ensure that these devices continue to provide enhanced functionality over time. 

In today's world, many technologies are "software defined." This has been made possible by many advancements in technology. At the bottom of the stack you have the operating system, then the virtualization layer, which kicked off the original software-defined compute with virtual machines. Then came containers, and then orchestrators for those containers such as k8s and k3s. These advancements paved the path to the software-defined data center, where, after software-defined compute, software-defined networking and software-defined storage started to become the norm. 

Now the time is ripe to bring software-defined trends to edge devices as well. This transformation has already started across many segments, such as automotive (cars, trucks, etc.). We see the trend applying to many other edge devices, one of which is the camera, or as we and many others call it, the smart camera. The idea is that once you buy a camera, the hardware is advanced enough that you keep receiving continuous software updates, which can be neural network model updates catered to the specific data you have captured with the camera, or other updates related to the functionality and security of the device.  

 

By designing future camera products with cloud-native capabilities in mind, one will be able to scale camera and vision products to unprecedented levels. If all the applications are deployed using a service-oriented architecture with containers, a device can be continuously updated. One key advantage of this architecture is enabling new use-case scenarios that one might not have envisioned in the past, through simple on-demand service deployment after the sale via an app store. A simple example would be deploying a new and improved license plate recognition app a year after the cameras were purchased. Another key advantage is enabling continuous machine learning model updates based on the data pertinent to the camera installation, i.e., using the data the camera is capturing to train the models. With this, the model stays relevant to the specific use case, increasing the value of the camera itself. All of this is possible by enabling a service-oriented architecture model using containers and thinking cloud-native first. 

The other important aspect that will really bring together security camera and vision products is adherence to common standards at the silicon and platform level. LF Edge member company Arm has started an initiative called Project Cassini, and Arm SystemReady® is a key component of it. As part of Arm SystemReady®, Arm is working with many ecosystem partners (OEMs, ODMs, silicon vendors) to drive a standards-based approach, scaling deployments by replacing custom solutions with standards-based ones and becoming the platform of choice for future camera deployments.  

To address and simplify these challenges, Arm is designing and offering an open source reference solution, to be leveraged collaboratively for future software defined cameras, based on established standards and focused on security, ML, imaging and cloud-native. The basis of the reference solution is a SystemReady®-certified platform. SystemReady® platforms implement UEFI, U-Boot and standards-based trusted firmware. 

  • With the base software layers compliant with SystemReady®, standard Linux distributions such as Fedora or openSUSE, as well as Yocto-based builds, can run on these platforms. 
  • The container orchestration service layer can then run containerized applications, using k3s. 
  • The next software layer provides ML and computer vision libraries. Arm supports ML functions through Arm NN and the Arm Compute Library, so that users can develop ML applications using standard open-source frameworks such as PyTorch and TensorFlow. 
  • Similarly, for computer vision, it is critical to support open standards such as Video4Linux, OpenCV and the BLAS libraries. 
  • Both complementary and necessary is security. This is where orchestration and security microservices such as PARSEC, an open-source platform abstraction layer for security, along with the PSA certification process, are critical, so that end users gain confidence that the product will remain secure when deployed in the field. These secure services are the foundation for supporting and enabling containerized applications for inferencing, analytics and storage.

Learn more about Project CASSINI here: https://wiki.akraino.org/display/AK/Project+Cassini+-+IoT+and+Infrastructure+Edge+Blueprint+Family.

Infusion of Machine Learning Operations with Internet of Things

By Blog

By: Shikhar Kwatra, Utpal Mangla, Luca Marchi

With the advancement of deep tech, the operationalization of machine learning and deep learning models has been burgeoning. In a typical machine learning or deep learning business case within an organization, the data science and IT teams need to collaborate extensively in order to scale and push multiple machine learning models to production through continuous training, continuous validation, continuous deployment and continuous integration, with governance. Machine Learning Operations (MLOps) has carved out a new era of the DevOps paradigm in the machine learning/artificial intelligence realm by automating end-to-end workflows.

As we optimize models and bring data processing and analysis closer to the edge, data scientists and ML engineers are continuously finding new ways to handle the complications involved in operationalizing models on IoT (Internet of Things) edge devices. 

A LinkedIn publication revealed that global AI spend will reach $232 billion by 2025 and $5 trillion by 2050. According to Cognilytica, the global MLOps market will be worth $4 billion by 2025; the industry was worth $350 million in 2019. [1]

Models running on IoT edge devices need to be retrained frequently because of variable environmental parameters: continuous data drift and limited access to such IoT edge solutions may degrade model performance over time. The target platforms on which ML models are deployed can also vary, from IoT edge devices to specialized hardware such as FPGAs, which leads to a high level of complexity and customization in MLOps on such platforms. 

Models can be packaged into a Docker image for deployment after profiling them to determine the core, CPU and memory settings for the target IoT platform. Such IoT devices also carry multiple dependencies for packaging and deploying models that must execute seamlessly on the platform. Hence, model packaging is most easily implemented through containers, as they can span both cloud and IoT edge platforms.
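As an illustrative sketch (file names, versions and the serving script are assumptions, not a prescribed layout), such a container image could be defined along these lines:

    # Dockerfile: package a TensorFlow Lite model with a small scoring service.
    FROM python:3.10-slim
    WORKDIR /app
    # tflite-runtime is the lightweight interpreter suited to edge hardware.
    RUN pip install --no-cache-dir flask tflite-runtime numpy
    COPY model.tflite score.py ./
    EXPOSE 8080
    CMD ["python", "score.py"]

Resource limits matching the profiled CPU and memory settings would then be applied at deployment time by the container runtime or orchestrator.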

When running on IoT edge platforms with device-specific dependencies, a decision needs to be made as to which containerized machine learning models must be available offline because of limited connectivity. An access script that loads the model, invokes the endpoint and scores requests arriving at the edge device needs to be operational in order to return the respective probabilistic output. 
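A minimal version of such an access script might look like the following sketch, which wraps a locally stored TensorFlow Lite model in a small REST endpoint (the endpoint path, port and input format are assumptions):

    import numpy as np
    from flask import Flask, jsonify, request
    from tflite_runtime.interpreter import Interpreter  # no full TensorFlow needed

    app = Flask(__name__)
    interpreter = Interpreter(model_path="model.tflite")  # model shipped in the image
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    @app.route("/score", methods=["POST"])
    def score():
        # The incoming request carries the sensor readings to be scored.
        features = np.array(request.get_json()["features"], dtype=np.float32)
        interpreter.set_tensor(inp["index"], features.reshape(inp["shape"]))
        interpreter.invoke()
        return jsonify({"probabilities": interpreter.get_tensor(out["index"]).tolist()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)  # reachable by co-located edge processes

Because the model file lives on the device, the endpoint keeps working even when the uplink to the cloud is down.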

Continuous monitoring and retraining of models deployed on IoT devices need to be handled properly using the model artifact repository and model versioning features of the MLOps framework. The different images of the deployed models are stored in a shared device repository so that the right image can be fetched quickly at the right time for deployment to the IoT device. 

Model retraining can be triggered by a job scheduler running on the edge device, or when newly arriving data invokes the REST endpoint of the machine learning model. Continuous model retraining, versioning and model evaluation thus become an integral part of the pipeline. 
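A bare-bones trigger combining both conditions, the schedule and the arrival of new data, might look like this sketch (the interval, threshold and data-polling stub are assumptions; the retraining itself would be handed off to the MLOps pipeline):

    import time

    RETRAIN_INTERVAL_S = 24 * 3600   # assumed daily schedule
    NEW_SAMPLE_THRESHOLD = 500       # assumed data-driven trigger

    def poll_new_sensor_data():
        # Stub standing in for the device's real data feed.
        return []

    def retrain(samples):
        # Placeholder: hand the batch to the MLOps pipeline, which retrains,
        # evaluates and registers a new model version for this device.
        pass

    buffer, last_run = [], time.time()
    while True:
        buffer.extend(poll_new_sensor_data())
        if len(buffer) >= NEW_SAMPLE_THRESHOLD or time.time() - last_run >= RETRAIN_INTERVAL_S:
            retrain(buffer)
            buffer, last_run = [], time.time()
        time.sleep(60)  # check once a minute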

When the data changes frequently, as is often the case with such IoT edge devices and platforms, versioning and refreshing the model in response to data drift enables the MLOps engineer to automate the model retraining process, freeing data scientists to focus on other aspects of feature engineering and model development. 

Over time, the number of such devices continuously collaborating over the internet and integrating with MLOps capabilities is poised to grow. Multi-factored optimizations will continue to be made so that, through an end-to-end automated approach, data scientists and ML engineers can stay focused and operationalize models more easily.

Reference: 

[1] https://askwonder.com/research/historical-global-ai-ml-ops-spend-ulshsljuf

 

American Tower Joins LF Edge as Premiere Member, Community Adds EdgeGallery to Project Roster

By Announcement

LF Edge furthers innovation at the open source edge across a unified ecosystem with the induction of Edge Gallery, an open-source MEC edge computing project, and adds leading innovator American Tower as a Premier member and Ritsumeikan University as a new Associate member

SAN FRANCISCO – February 28, 2022 – LF Edge, an umbrella organization within the Linux Foundation that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced American Tower has joined the project as a Premier member. Additionally, the project announced Edge Gallery has joined the umbrella as a Stage 1 project, RITSUMEIKAN University has joined as an Associate member, and the community issued its 2021 Annual Report.

American Tower, a leading global infrastructure provider of wireless, data center, and interconnect solutions to enable a connected world, joins the existing LF Edge Premier members: Altran, Arm, AT&T, AVEVA, Baidu, Charter Communications, Dell Technologies, Dianomic, Equinix, Ericsson, F5, Fujitsu, Futurewei, HP, Huawei, Intel, IBM, NTT, Radisys, RedHat, Samsung, Tencent, VMware, Western Digital, and ZEDEDA.

“We are pleased to see even more leading technology innovators joining as LF Edge members,” said Arpit Joshipura, general manager, Networking, Edge and IoT, the Linux Foundation. “The proliferation of new technologies joining collaborative innovation at the open source edge means scalability, interoperability, and market innovation is happening across the ecosystem.”

About American Tower

American Tower, one of the largest global REITs, is a leading independent owner, operator and developer of multitenant communications real estate with a portfolio of approximately 219,000 communications sites. For more information about American Tower, please visit americantower.com.

“We are excited to join LF Edge and their members to accelerate innovation, enabled by edge network architecture. A distributed model, positioning critical data closer to the user, provides the low-latency infrastructure to deliver the automation, performance, and cognitive insight required by manufacturing, healthcare, transportation, and more.” – Eric Watko, Vice President, Product Line Management, American Tower.

American Tower is joined by new Associate member RITSUMEIKAN University, a private university in Kyoto, Japan, that traces its origin to 1869. In addition to the Kinugasa Campus in Kyoto, the university also has the Biwako-Kusatsu Campus and the Osaka-Ibaraki Campus. Ritsumeikan University is known as one of western Japan's four leading private universities. 

EdgeGallery Joins LF Edge Umbrella

Celebrating its two-year mark as an umbrella project, LF Edge welcomes its tenth project, Edge Gallery. Edge Gallery is an open-source MEC edge computing project initiated by Huawei, carriers, and vertical industry partners that joined the Linux Foundation in late 2021. Its purpose is to build a common edge computing platform that meets the “connection + computing” characteristics of the telecom industry, standardize the openness of network capabilities (especially 5G network capabilities), and simplify lifecycle processes such as MEC application development, test, migration, and running. 

EdgeGallery joins the nine existing projects – Akraino, Baetyl, Fledge, EdgeX Foundry, Home Edge, Open Horizon, Project EVE, Secure Device Onboard (SDO) and State of the Edge – that support emerging edge applications across areas such as non-traditional video and connected things that require lower latency, faster processing and mobility. LF Edge helps unify a fragmented edge market around a common, open vision for the future of the industry.

LF Edge 2021 Annual Report

The LF Edge community also issued a report of its progress and results from the past year. The LF Edge 2021 Annual Report summarizes key highlights (including blueprints, deployments and momentum) from the Governing Board, Technical Advisory Board, Outreach Committee and General Manager. To download the report, visit: https://www.lfedge.org/resources/publications/

More details on LF Edge, including how to join as a member, details on specific projects and other resources, are available here: www.lfedge.org.

About The Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Open Networking & Edge Executive Forum (ONEEF) Returns Virtually, April 12-14, 2022

By Announcement, LF Edge

Global industry executives across telco, cloud and enterprise to share thought-leading visions with global open source networking and edge communities – across alternating time zones 

SAN FRANCISCO, February 22, 2022 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, in partnership with LF Networking and LF Edge, today announced the Open Networking & Edge Executive Forum (ONEEF) will take place virtually April 12-14, 2022. The Open Networking & Edge Executive Forum (spring) and Summit (fall) are the industry's premier open networking and edge computing events focused on end-to-end solutions powered by open source. 

Building on the successful inaugural ONEEF event last spring, the Linux Foundation, LF Networking, and LF Edge are pleased to announce the 2022 Executive Forum, where leading industry executives will again share their visions from the Telco, Cloud, and Enterprise verticals. Attendees will learn how to leverage open source ecosystems and gain new insights for digital transformation. Presented in a virtual format across three days, this is a one-track event starting in a different time zone each day to better reach our global audience. 

“We are pleased to welcome thought leaders from across the globe to the virtual stage for ONEEF 2022,” said Arpit Joshipura, general manager, Networking, Edge & IoT, at the Linux Foundation. “This curated experience is designed to complement the Open Networking & Edge Summit with thought leaders and collaborators from around the globe coming together to share insights, best practices, and new ideas that enhance the vertical space across open source networking, edge, cloud and enterprise stacks.”

Details on executive speakers and the session agenda will be available soon, but attendees can expect to hear industry insights from Analysys Mason analyst Caroline Chappell, as well as updates on the direction of major initiatives like the 5G Super Blueprint. Stay tuned for more details.

Speakers and content from ONEEF 2021, including session videos, are available online.

Registration & Sponsorships

Presented in a virtual format across three days, this is a one-track event that will be held in a different time zone each day to reach our global audience in the Americas, EMEA, and APAC. There is no cost to attend, but participants must be registered in order to access the sessions: https://events.linuxfoundation.org/open-networking-and-edge-exec-forum/

Sponsorship opportunities are available for this special executive edition of Open Networking & Edge Summit. For more information on sponsoring this event, contact us at events@lfnetworking.org.

The LF Networking developer community will also host the LFN Developer & Testing Forum this summer, taking place June 13-16 in Porto, Portugal. Registration for that event is also open, with more details to come. 

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 2,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit linuxfoundation.org.

The Linux Foundation Events are where the world’s leading technologists meet, collaborate, learn and network in order to advance innovations that support the world’s largest shared technologies.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds. 

###

eKuiper Issues 1.4 Release & Discusses 2022 Roadmap

By Blog, eKuiper

Written by Jiyong Huang, Chair of the eKuiper Technical Steering Committee and Senior Software Engineer of EMQ

eKuiper, a Stage 1 (at-large) project under the LF Edge umbrella, is a Go implementation of a lightweight stream processing engine that can run in a variety of IoT edge scenarios for real-time data analysis. By running near the data source at the edge, eKuiper can improve system response speed and security while saving network bandwidth and storage costs.

The eKuiper project usually makes one release per quarter with new features, followed by several patch versions. The current active release is v1.4, the biggest update since joining LF Edge! We had solid community contributions for this release, including requirement collection, code contributions, testing and trials, so we are grateful to share the updates and hope they benefit our users.

Rule Pipeline: Building Complex Business Logic with Flexibility

eKuiper uses SQL to define business logic, which lowers the threshold of development. Simple SQL statements can efficiently express practical business requirements such as filtering, aggregation and data conversion. However, some complex scenarios are difficult to address with a single SQL statement; even when it is possible, the statement itself becomes too complex and hard to maintain.

Based on the new in-memory source and sink, a rule pipeline can connect multiple SQL rules easily, efficiently and flexibly, improving the readability and maintainability of the SQL statements that implement complex business scenarios. The rules are connected through in-memory topics, similar to MQTT topics, with support for wildcard subscriptions, enabling an exceptionally flexible and efficient rule pipeline. Besides improving business expressiveness, rule pipelining can also improve runtime performance in certain complex scenarios. For example, when multiple rules need to process data filtered by the same condition, that condition can be extracted into a predecessor rule so the filter is computed only once, significantly reducing computation when there are many rules.
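To give a flavor of how such a pipeline is wired up (a sketch only: the stream, topic and rule names are made up, it assumes a pre-existing sensor_stream and eKuiper's default REST port 9081, and the exact payloads should be checked against the eKuiper documentation):

    import requests

    EKUIPER = "http://127.0.0.1:9081"

    # Predecessor rule: evaluate the shared filter once, publish to an in-memory topic.
    requests.post(f"{EKUIPER}/rules", json={
        "id": "filter_hot",
        "sql": "SELECT * FROM sensor_stream WHERE temperature > 30",
        "actions": [{"memory": {"topic": "channel/hot"}}],
    })

    # A stream backed by that in-memory topic, for downstream rules to consume.
    requests.post(f"{EKUIPER}/streams", json={
        "sql": 'CREATE STREAM hot_readings() WITH (DATASOURCE="channel/hot", FORMAT="JSON", TYPE="memory")'
    })

    # Downstream rule: aggregate the pre-filtered data without re-running the filter.
    requests.post(f"{EKUIPER}/rules", json={
        "id": "avg_hot",
        "sql": "SELECT avg(temperature) FROM hot_readings GROUP BY TUMBLINGWINDOW(ss, 10)",
        "actions": [{"mqtt": {"server": "tcp://127.0.0.1:1883", "topic": "alerts/hot"}}],
    })

Any number of additional rules can subscribe to channel/hot (or a wildcard pattern over such topics) without paying for the filter again.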

Portable Plugin: Making Extensions Easier

The original version of eKuiper supported an extension scheme based on Go's native plugin system, allowing individual extensions for sources, sinks and functions (UDFs). However, due to the limitations of the Go plugin system, writing and using plugins is not easy for users familiar with Go, let alone users of other languages. eKuiper has received a lot of feedback from community users about plugin development, operation and deployment, and various operational issues.

To balance development efficiency and runtime efficiency, v1.4.0 adds a new portable plugin system that lowers the threshold of plugin development. The new portable plugin system is based on the nng protocol for inter-process communication and supports multiple languages, currently providing Go and Python SDKs, with more SDKs to be added in subsequent versions according to user requirements. It simplifies the compilation and deployment process: plugins run like normal programs written in the respective languages, without additional restrictions. Because of the different operating mechanism, a portable plugin crash will not affect eKuiper itself.

Native plugins and portable plugins can coexist. Users can choose either plugin implementation, or mix them, according to their needs.

Shared Connections: Source/Sink Multiplexed Connections

eKuiper provides a rich set of sources and sinks to access external systems and send results to them. Many of these sources and sinks are input/output pairs for the same external system type; for example, MQTT and EdgeX each have a corresponding source and sink. In the new version, users can put all connection-related configuration in the connection.yaml file, and the source/sink configuration simply specifies which connection to use, without repeating the details. Shared connection instances reduce the overhead of maintaining multiple connections. In some scenarios, users may be limited in the number of connections or ports they can open to external systems, and shared connections help stay within those limits. Also, based on shared connections, eKuiper can support connections to the EdgeX secure data bus.

Other enhancements

  • Support for configuration via environment variables
  • By default, eKuiper uses SQLite, an embedded database, to store metadata such as streams and rules, so there are no external dependencies at runtime. In the new version, users can choose Redis as the metadata store instead
  • Rule status now returns the error reason
  • Optimized SQL runtime, reducing CPU usage by up to 70% in a shared-source user scenario
  • Sink dynamic parameter support: for example, the MQTT sink can set its topic from a field value in the result, so that incoming data can be forwarded to a dynamic topic (see the sketch after this list)
  • Authentication support: user-configurable JWT-based authentication for the REST API
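As a sketch of the dynamic sink parameter (the rule and field names are made up; eKuiper resolves template-style property values such as {{.device_id}} from each result, per its dynamic property syntax):

    import requests

    # Each result row is published to a topic derived from its own device_id field.
    requests.post("http://127.0.0.1:9081/rules", json={
        "id": "route_by_device",
        "sql": "SELECT device_id, temperature FROM sensor_stream",
        "actions": [{"mqtt": {
            "server": "tcp://127.0.0.1:1883",
            "topic": "readings/{{.device_id}}",
        }}],
    })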

2022 Roadmap

As a young project, eKuiper is still far from perfect. There is a long list of feature requests from users and the community. eKuiper will continue to grow in the new year and remains open to anyone interested in making edge computing powerful and, at the same time, easier to use. The team will focus on four themes:

  • Enrich API and syntax
    • more SQL clauses (IN, BETWEEN, LIKE, etc.)
    • streaming features (dynamic table, etc.)
    • more handy functions (changed, unnest, statistical functions)
  • Stability & Performance
    • Allow features to be selected by build tag and provide a minimal core version
    • Incremental window
    • Continuous memory footprint and CPU optimization
  • Extension
    • Stabilize the portable plugin runtime and support more languages
    • More HTTP source types
  • Build & Deployment
    • Cluster and HA
    • Android support

Please check the 2022 Roadmap in our Github project for details, and stay tuned in with the eKuiper community.

Cloud Services at the Edge

By Blog

This post first published on the IBM blog at this link; it has been reposted here with permission. Some content has been redacted so as not to be seen as an endorsement by LF Edge. 

By Ashok Iyengar, Executive Cloud Architect, and Gerald Coon, Architect & Dev Leader, IBM Cloud Satellite

Where the enterprise edge ends, where the far edge begins and what, if any, are the various points of intersection?

What do AWS Outpost, Azure Stack, Google Anthos and IBM Cloud Satellite have in common? Each one of them is essentially an extension of their public cloud offering into an enterprise’s on-premises location or edge facility. This is, in fact, the hybrid cloud platform paradigm.

Each vendor has their offering nuances. They even support different hardware for building the on-premises components of a hybrid cloud infrastructure. But the end goal is to combine the compute and storage of public cloud services into an enterprise’s data center — what some might call Enterprise Edge. It is worth pointing out that IBM Cloud Satellite is built on the value of Red Hat OpenShift Container Platform (RHOCP). This blog post will discuss where the enterprise edge ends, where the far edge begins and what, if any, are the various points of intersection.

To reiterate from previous blogs in this series, the edge encompasses everything from far edge devices all the way to the public cloud, with the enterprise edge and network edge along the way. The various edges (network, enterprise, far edge) are shown on the left side of Figure 1, along with the major components of a platform product: the cloud region, the tunnel link, a control plane, and different remote Satellite locations:

Figure 1. Different edges and IBM Cloud Satellite components.

 

Note that one might need more than one control plane, for example, a telco location for the network team and a development location for deploying edge services.

Please make sure to check out all the installments in this series of blog posts on edge computing.

Edge components

As we have mentioned in our other blogs in this series, there are three main components in an edge topology, no matter which edge we are talking about:

  • A central hub that orchestrates and manages edge services deployment.
  • Containerized services or applications that can run on edge devices.
  • Edge nodes or devices where the applications run, and data is generated.

Some edge solutions do not use agents on edge devices, while others like IBM Edge Application Manager require an agent installed on each device. An agent is a small piece of code running on edge nodes or devices to facilitate the deployment and monitoring of applications. Refer to “Architecting at the Edge” for more information.

Which cloud?

In most cases, these platform products that bring public cloud services to an on-premises location work with one cloud provider. AWS Outpost, for example, is a hardware solution only meant to work with AWS. IBM Cloud Satellite, on the other hand, has certain connectivity and resource requirements (CPU/memory) but is agnostic to the hardware. The requirements generally begin at the operating system level (Red Hat) and leave the hardware purchasing to the customer. The Red Hat hosts provided can even be EC2 instances in AWS or other cloud providers. This means IBM Cloud Satellite can bring IBM Cloud services to remote locations as well as services from AWS, Azure, Google Cloud and more that are planned.

[…]

Overlapping or complementary technologies?

We hear the phrase “cloud-out” when describing the compute moving out toward the edge. But what we see from Figure 1 is that the services brought on-premises from the public cloud cannot quite be extended out to the far edge devices. That is where one would require a product like the IBM Edge Application Manager to deploy and manage services at scale.

A common challenge of edge workloads is training the artificial intelligence (AI) and machine learning (ML) models and using predictive model inferencing. An IBM Cloud Satellite location can act as the platform in close proximity where data can be stored and accessed, and AI/ML models can be trained and retrained before they are deployed on to edge devices. Or the apps running on the edge nodes could access a suite of AI/ML services via the Satellite location. Thus, low latency and data sovereignty are two major reasons why enterprises would want to deploy such solutions. Compliance and other security requirements are easier to implement when the cloud object storage or database is physically located on-premises.

It is easy to envision a use case where a retail chain would use a product like AWS Outpost or IBM Cloud Satellite to establish a satellite location in a city. That satellite location could then provide the required cloud-based services to all its stores in that city. These could be a common set of services like AI/ML analytics, access policies, security controls, databases, etc. — providing consistency across all environments. Consistency and access to a large set of powerful processing services are additional advantages of such deployments.

Another common example is with telecommunication service providers that are looking to monetize 5G technology by offering cloud services to their customers. Figure 3 shows a Telco MEC (Mobile Edge Computing) topology making use of IBM Cloud Satellite, IBM Edge Application Manager (IEAM) and Red Hat OpenShift Container Platform (RHOCP):

Figure 3. Telco MEC topology using IBM Cloud Satellite and IEAM services.

 

To provide a bit more context, MEC effectively offers localized cloud servers and services rather than depending on a larger, centralized cloud. This basically means the edge/IoT devices will communicate with more, smaller data hubs that are physically closer to them (i.e., on the “edge” of the network). Rather than online games having to send data to a distant central server, process it and send back a response — all of which slows down overall communication speeds — they will be able to access more power, closer to the gamers.

Wrap-up

In addition to the millions of devices, IoT and edge computing have the challenge of accessing and storing data in the “last mile.” Products like AWS Outpost, Azure Stack, Google Anthos and IBM Cloud Satellite complement IoT and Edge topologies. In fact, the IBM Edge Application Manager Hub is often deployed in a Satellite location or resides in the cloud. The combination of the two technologies provides a compelling solution that companies in healthcare, telecommunications and banking can use. The agnostic nature of IBM Cloud Satellite even allows it to not only bring IBM Cloud services to remote locations but also services from AWS, Azure and Google Cloud.

The IBM Cloud architecture center offers up many hybrid and multicloud reference architectures including AI frameworks. Look for the IBM Edge Computing reference architecture here.

This blog post talked about bringing cloud services to the edge in what is commonly called distributed cloud or “cloud out.” It offers the best of both worlds — public cloud services and secure on-premises infrastructure. The folks at mimik have a very interesting notion of “edge in,” wherein they describe a world of microservices, edge-device-to-edge-device communication and creating a sort of service mesh that expands the power of the edge devices toward the cloud.

Let us know what you think.

Special thanks to Joe Pearson, David Booz, Jeff Sloyer and Bill Lambertson for reviewing the article.

 

Introduction: Akraino’s Federated Multi-Access Edge Cloud Platform Blueprint in R5

By Akraino, Blog

Introduction

This blog focuses on providing an overview of the “Federated Multi-Access Edge Cloud Platform” blueprint, part of the Akraino Public Cloud Edge Interface (PCEI) blueprint family. It specifically covers the key features and components implemented in Akraino Release 5. The key idea of this blueprint is a federated Multi-Access Edge Cloud (MEC) platform that showcases the interplay between the telco side and the cloud/edge side.

Before discussing the specifics of this blueprint implementation, the following subsection briefly describes what a Multi-Access Edge Cloud (MEC) is and how it acts as an enabler for the emerging landscape of 5G/AI-based applications.

What is MEC and what are its challenges?

MEC is a network architecture concept that enables cloud computing and IT capabilities to run at the edge of the network. Applications or services running on edge nodes, which are closer to end users, instead of in the cloud, enjoy the benefits of lower latency and an enhanced end-user experience. MEC essentially moves intelligence and service control from centralized data centers to the edge of the network, closer to the users. Instead of backhauling all data to a central site for processing, the data can be analyzed, processed and stored locally, and shared upstream when needed. MEC solutions are closely integrated with the access network(s); such environments often include WiFi and mobile access protocols such as 4G-LTE/5G.

MEC opens up a plethora of potential vertical and horizontal use cases, such as Autonomous Vehicles (AV), Augmented Reality (AR) and Virtual Reality (VR), gaming, and applications enabled by Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL), like autonomous navigation, remote monitoring using Natural Language Processing (NLP) or facial recognition, video analysis, and more. These emerging 5G/AI-based applications typically exhibit characteristics such as the following:

  • Low Latency requirements
  • Mobility
  • Location Awareness
  • Privacy/Security etc.

All of these characteristics pose unique challenges for developing emerging 5G/AI applications at the network edge. Supporting such an extensive feature set with the required flexibility, dynamicity, performance and efficiency takes careful and expensive engineering effort, and it requires adopting new ways of architecting the enabling technology landscape.

To this end, our proposed “Federated Multi-Access Edge Cloud Platform” blueprint provides the abstractions needed to address these challenges and, as a result, ushers in an application development environment that eases the development and deployment of these emerging applications. The following sections delve into the details of the blueprint.

Blueprint Overview: Federated Multi-Access Edge Cloud Platform

The purpose of the “Federated Multi-Access Edge Cloud Platform” blueprint is to provide an end-to-end technology solution for a mobile game deployed across multiple heterogeneous edge nodes using various network access protocols, such as mobile, WiFi and others. The blueprint demonstrates how an application can leverage a distributed, multi-access network edge environment in order to get all the benefits of edge computing.

The use case diagram highlights the scenario. On the left-hand side is the device; as can be seen, it moves from location x to location y.

The whole use case goes through four distinct steps. The first step is the service discovery flow, followed by the game service flow. Once the device actually moves, it triggers an additional session migration flow, which includes a subsequent service discovery to go along with the migration. Finally, in step four, once the migration has happened, the UE connects to the new edge node.

To support all of this, the platform provides two key abstractions:

  • Multi-Access/Mobile Operator Network Abstraction: Multi-access means mobile, WiFi or whatever access technology it takes; multi-operator means that even for the same 4G/5G access there can be different operators (Verizon, AT&T, etc.). There are various MEC edge nodes; they could be WiFi-based edge nodes, and they can belong to different operators.
  • Cloud-Side Abstraction: Cloud-side abstraction includes the key architectural components described in the subsequent sections.

Functional Diagram: Federated Multi-Access Edge Cloud Platform

The key component is the federated multi-access edge platform itself. The platform sits between the applications and the underlying heterogeneous edge infrastructure; it abstracts the multi-access interface and exposes developer-friendly APIs. The blueprint leverages the upstream KubeEdge project as the baseline platform, including the enhanced federation function (Karmada).

Telco/GSMA-side complexities (5GC, NEF, etc.) need to be thought through and designed appropriately in order to realize the extremely low latency (10 ms) requirements of typical MEC use cases. For multi-access, we may initially use a simulated mobile access environment to mimic real-time device access protocol conditions in the initial release(s).

Key Enabling Architectural Components

Federation Scheduler (Included in Release-5)

As a “global scheduler”, the Federation Scheduler is responsible for application QoS-oriented global scheduling in accordance with placement policies. Essentially, it is a decision-making capability that can decide how workloads should be spread across different clusters, similar to how a human operator would. It maintains resource utilization information for all the MEC edge cloud sites. Cloud federation functionality in our blueprint is enabled using the open source Karmada project.

Karmada (Kubernetes® Armada) is a Kubernetes® management system that enables cloud-native applications to run across multiple Kubernetes® clusters and clouds with no changes to the underlying applications. By using Kubernetes®-native APIs and providing advanced scheduling capabilities, Karmada truly enables a multi-cloud Kubernetes® environment. It aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios, with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling. More details on the Karmada project can be found here.
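To give a flavor of the Kubernetes®-native approach, the sketch below shows an illustrative Karmada PropagationPolicy (the deployment and cluster names are placeholders, not blueprint deliverables) that spreads an unmodified Deployment across two member edge clusters:

    apiVersion: policy.karmada.io/v1alpha1
    kind: PropagationPolicy
    metadata:
      name: game-server-policy
    spec:
      resourceSelectors:
        - apiVersion: apps/v1
          kind: Deployment
          name: game-server        # an ordinary, unmodified Deployment
      placement:
        clusterAffinity:
          clusterNames:            # placeholder member-cluster names
            - edge-cluster-1
            - edge-cluster-2
        replicaScheduling:
          replicaSchedulingType: Divided   # split replicas across the clusters

Because the policy is a separate object, the application manifests themselves stay untouched, which is exactly the “no changes to the underlying applications” property mentioned above.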

EdgeMesh (Included in Release-5)

EdgeMesh provides service mesh capabilities for edge clouds, supporting microservice communication across clouds and edges. EdgeMesh offers a simple network solution for inter-communication between services in edge scenarios (east-west communication).

The network topology of an edge cloud computing scenario is quite complex: edge nodes are often not interconnected, while direct inter-communication of traffic between applications on those edge nodes is a highly desirable requirement for businesses. EdgeMesh addresses these challenges by shielding applications from the complex network topology at the edge. More details on the EdgeMesh project can be found here.

Service Discovery (Not included in Release-5)

Service Discovery retrieves the endpoint address of the edge cloud service instance depending on the UE location, network conditions, signal strength, delay, App QoS requirements etc.

Mobility Management (Not included in Release-5)

The cloud core side mobility service subscribes to UE location tracking events or resource rebalancing scenarios. Upon UE mobility or resource rebalancing, the mobility service uses the cloud core side Service Discovery interface to retrieve the address of the new, appropriate location-aware edge node, and subsequently initiates the UE application state migration process between edge nodes. A simple CRIU container migration strategy may not be enough; this is much more complex than a typical VM migration.

Multi-Access Gateway (Not included in Release-5)

The multi-access gateway controller manages the edge data gateway and the access APIG of the edge nodes. The edge data gateway connects with the edge gateway (UPF) of the 5G network system and routes traffic to containers on the edge nodes. The access APIG connects with the management plane of the 5G network system (such as CAPIF) and pulls QoS, RNIS, location and other capabilities into the edge platform.

AutoScaling (Not included in Release-5)

Autoscaling provides the capability to automatically scale the number of Pods (workloads) based on observed CPU utilization (or other application-provided metrics). The autoscaler also provides vertical Pod autoscaling by adjusting a container's CPU limits and memory limits in accordance with the autoscaling policies.
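Although this component is not part of Release-5, the horizontal behavior described maps directly onto a standard Kubernetes HorizontalPodAutoscaler; an illustrative manifest (the workload name and thresholds are placeholders) would look like this:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: game-server-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: game-server        # the workload to scale
      minReplicas: 1
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # scale out when average CPU passes 70%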

Service Catalog (Not included in Release-5)

Service Catalog provides a way to list, provision, and bind with services without needing detailed knowledge about how those services are created or managed.

Detailed Flow of the Various Architectural Components

What is included in Release-5

As mentioned earlier, the purpose of this blueprint is to provide an end-to-end technology solution for a mobile game deployed across multiple heterogeneous edge nodes using various network access protocols, such as mobile, WiFi and others. The blueprint demonstrates how an application can leverage a distributed, multi-access network edge environment to realize all the benefits of edge computing.

This is the very first release of this new blueprint in the Akraino PCEI family. The focus of this release is to enable only the following two key architectural components:

  1. Open source Karmada based Cloud Federation
  2. EdgeMesh functionality

This blueprint will evolve as we incorporate remaining architectural components as part of the subsequent Akraino releases. More information on this blueprint can be found here.

Acknowledgements

Project Technical Lead: Deepak Vij, KubeEdge MEC-SIG member – Principal Cloud Technology Strategist at Futurewei Cloud Lab.

Contributors:

  • Peng Du, Futurewei Cloud Lab.
  • Hao Xu, Futurewei Cloud Lab.
  • Qi Fei, KubeEdge MEC-SIG member – Huawei Technologies Co., Ltd.
  • Xue Bai, KubeEdge MEC-SIG member – Huawei Technologies Co., Ltd.
  • Gao Chen, KubeEdge MEC-SIG member – China Unicom Research Institute
  • Jiawei Zhang, KubeEdge MEC-SIG member – Shanghai Jiao Tong University
  • Ruolin Xing, KubeEdge MEC-SIG member – State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications
  • Shangguang Wang, KubeEdge MEC-SIG member – State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications
  • Ao Zhou, KubeEdge MEC-SIG member – State Key Laboratory of Network and Switching Technology, Beijing University of Posts and Telecommunications
  • Jiahong Ning, KubeEdge MEC-SIG member – Southeastern University
  • Tina Tsou, Arm

Vision 2022: Open Networking & Edge Predictions

By Blog

By: Arpit Joshipura, GM, Networking, Edge & IoT

As we wrap up the second year of living through a global pandemic, I wanted to take a moment to both look ahead to next year, as well as recognize how the open networking and edge industry has shifted over the past year. Read below for a list of what we can expect in 2022, as well as a brief “report card” on where my industry predictions from last year landed.

  1. Dis-aggregation will enter the “re-aggregation” phase (in terms of software, organizations, and industries). This will be enabled by Super Blueprints (which bring end-to-end open source projects together), and we’ll see more multi-org collaboration (e.g., standards bodies, alliances and foundations) re-aggregating to solve common problems. Edge computing will serve as the glue that binds common IoT frameworks together across vertical industries.
  2. Realists and visionaries will fight it out for dollars and productivity. Given that what started as a pandemic could become endemic, there will be an internal tussle between realists (making money off of 4G), engineers currently coding 5G, and visionaries looking to 6G and beyond. (In other words, the cycle continues.)
  3. Security will emerge as the key differentiator in open source. Collaboration among governments and other global organizations against “bad actors” will penetrate geopolitical walls to bring a global ecosystem together, via open source.
  4. Market analysts will reinvent themselves. There is no longer a clear way to track cloud, telecom, enterprise and other markets individually. A big market realignment is in progress, with new killer use cases (such as X and Y).
  5. Seamless vertical industries will emerge, enabled by open source software. Many vertical industries will not even know (or care) how the pipe traverses their last mile to the central cloud and edges (led by manufacturing, retail, energy, healthcare and automotive).

What did I miss? I would love to have your comments on LinkedIn.

Now let’s take a look at where my predictions from last year actually landed…

See my 2021 predictions from last year: https://www.lfnetworking.org/blog/2020/12/15/predictions-2021-networking-edge/

Hindsight 2021

Prediction 1: Telecom & cloud “plumbing” based on 5G open source will drive accelerated investments from top markets (government, manufacturing, and enterprises) 

Grade: A. Where we netted out: Great stories on end-user deployment and momentum (Deutsche Telekom, AT&T, Orange, Bell, China Mobile, Verizon, DARPA, World Bank, Walmart… plus cloud players like Google and Microsoft, and top global network vendors). 

Prediction 2: The Last piece of the “open” puzzle will fall in place: Radio Access Network (RAN)

Grade: B. Where we netted out: The puzzle has fallen into place, but with many pieces (e.g., ORAN SC, OpenRAN, SD-RAN, “Open” in the name of RAN). RAN and packet core continue to be the focus in an open world.

Prediction 3: “Remote Work” will continue to be the greatest positive distraction, especially within the open source community

Grade: A+. Where we netted out: Spot on! >200% growth in commits across LF Networking and LF Edge. 

Prediction 4: “Futures” (aka bells and whistle features & future-looking capabilities) will give way to “functioning blueprints”  

Grade: A. Where we netted out: The US DoD is now banking on open source for security reasons; the 5G Super Blueprint is the fastest-growing initiative in open source; Akraino has 25+ deployed blueprints; and more. At the end of the day, open source has moved from classroom theory to in-field practical code.

Prediction 5: AI/ML technologies become mainstream 

Grade: B. Where we netted out: Still not there. While intent-based approaches have been incorporated into open source, common frameworks are still fragmented. Deployments are specific to carriers, countries, and enterprises.

About the Author: Arpit Joshipura is General Manager, Networking, Edge & IoT, the Linux Foundation.