Calling all Edge App Developers! Join the ETSI/LF Edge Hackathon


We’re partnering with ETSI ISG MEC to host a Hackathon to encourage open edge developers to build solutions that leverage ETSI MEC Services APIs and LF Edge (Akraino) blueprints. Vertical use cases could include Automotive, Mixed and Augmented Reality, Edge Computing and 5G, and more. Teams are highly encouraged to be creative and to propose solutions in additional application verticals.

Here’s how it works:

  • Participants will be asked to develop an innovative Edge Application or Solution, utilizing ETSI MEC Service APIs and LF Edge Akraino Blueprints.
  • Solutions may include any combination of client apps, edge apps or services, and cloud components.
  • The Edge Hackathon will run remotely from June to September, with a short-list of the best teams invited to compete with demonstrations and a Hackathon “pitch-off” at the Edge Computing World Global event in Santa Clara, California (Silicon Valley) on October 10th-12th.

Submissions are due 29th June 2022.  More details, including how to register, are available here.

Additional background information:

Edge Computing provides developers with localized, low-latency resources that can be utilized to create new and innovative solutions, which are essential to many application vertical markets in the 5G era.

  • ETSI’s ISG MEC is standardizing an open environment that enables the integration of applications from infrastructure and edge service providers across MEC (Multi-access Edge Computing) platforms and systems, which offers a set of open standardized Edge Service APIs to enable the development and deployment of edge applications at scale.
    • Teams will be provided with a dedicated instance of the ETSI MEC Sandbox for their work over the duration of the Hackathon. The MEC Sandbox is an online environment where developers can access and interact with live ETSI MEC Service APIs in an emulated edge network set in Monaco, which includes 4G, 5G, and WiFi network configurations, single and dual MEC platform deployments, etc.
  • LF Edge’s Akraino project offers a set of open infrastructure and application Blueprints for Edge, spanning a broad variety of application vertical use-cases. Each blueprint offers a high-quality full-stack solution that is tested and validated by the community.

However, teams are not required to use a specific edge hosting platform or MEC lifecycle management APIs. Developers may use any virtualization environment of their choosing. 


What’s Next for EdgeX: Onto Levski


By Jim White, EdgeX Foundry TSC chair

May was a busy month at EdgeX Foundry. EdgeX released version 2.2 (Kamakura) in the middle of May (details here) and went straight into planning our fall release – code named Levski. We also selected our 2022 EdgeX Award winners, and I’ll be posting a follow-up to congratulate them and speak to some of their efforts.

Levski

First, what is the next release of EdgeX going to be about? It is slated to be released in November of 2022. It will, in all likelihood, be another minor dot release (version 2.3, to be precise) that is backward compatible with all EdgeX 2.x releases. The Levski release will also not be an LTS release. Jakarta remains the first and only LTS for EdgeX for now (more on that below). By the way, in case you are wondering where the name Levski (or the name of any EdgeX release) comes from: a top contributor to the project is selected each release cycle to name an upcoming release. Levski is named after a favorite mountain peak in the home country of one of our CLI developers (she is from Bulgaria).

You will see the words “likely” and “anticipate” show up a lot in this post because a lot can happen in the span of six months when building open-source software.  These caveats are there to remind the reader that what is planned is not set in stone – so if you are an adopter/user of EdgeX, design accordingly.

Major Features

Two of the most important features that we are looking to land in the Levski release are:

  • North-to-south message bus communications
  • Control plane (aka system management) events

North-to-south messaging will set up the means for a 3rd party application or cloud system to use MQTT to communicate with EdgeX’s command service and trigger an actuation or data fetch command all the way through to the devices/sensors.  

Today, communications going from north to south in EdgeX are accomplished via REST. Therefore, interactions are not really asynchronous, and there is no opportunity to set the quality of service (QoS) of the communications. Levski will allow adopters of EdgeX to use messaging to trigger actions and communicate data needs. This complements what is already available in EdgeX: the ability to use messaging from south to north. The design for this feature was finalized in the past Kamakura release cycle. You can see the design document for more details.
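To make the new flow concrete, here is a minimal sketch of how a 3rd party application might trigger an EdgeX command over MQTT using Python and paho-mqtt. The broker address, topic layout, and payload fields are illustrative assumptions based on the design described above – check the design document for the exact contract that ships in Levski:

```python
# Sketch: triggering an EdgeX command from a 3rd party app over MQTT.
# The topic layout and payload fields are assumptions for illustration,
# not the final published contract.
import json
import uuid

import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883)  # assumed external MQTT broker
client.loop_start()

# Hypothetical request topic: <prefix>/<device-name>/<command-name>/<method>
topic = "edgex/command/request/my-device/my-command/get"
envelope = {"apiVersion": "v2", "requestId": str(uuid.uuid4())}

# QoS is in the adopter's hands -- something the REST path could not offer
client.publish(topic, json.dumps(envelope), qos=1).wait_for_publish()
client.loop_stop()
client.disconnect()
```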

The second major feature to highlight in Levski is control plane events (CPE), also known as system management events. This feature will allow EdgeX to send important events to would-be listeners (which could be other EdgeX services or 3rd party subscribers) when something (an event) has happened in EdgeX. Examples of control plane events are a new device/sensor being provisioned or a new profile being uploaded. Control plane events could also report on micro service issues – such as a service not having access to the database it needs. Each EdgeX micro service will define its own control plane events; however, in this release, it is anticipated that control plane events will first be tried with core metadata.

Other Features and Fixes

More Metrics

During the Kamakura release cycle, we implemented a new beta metrics/telemetry feature. This feature allows any service to report (via message bus) telemetry that enables better monitoring and management of EdgeX. Telemetry such as how many events have been persisted, or how long it takes for a message to be processed in an app service, can now be collected and published by EdgeX Metrics. In this release cycle, we plan to proliferate metrics across all services (they were only in a few services for Kamakura) and take the feature out of beta status.
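As a rough illustration of how an adopter might consume this telemetry, the sketch below subscribes to metrics messages on the default Redis Pub/Sub message bus. The host and topic pattern are assumptions – consult your EdgeX configuration for the actual telemetry topic prefix and message envelope:

```python
# Sketch: a minimal monitoring client for EdgeX service telemetry.
# Assumes the default Redis Pub/Sub message bus on localhost and an
# "edgex/telemetry/..." topic prefix -- both are configurable in EdgeX.
import json

import redis

r = redis.Redis(host="localhost", port=6379)
p = r.pubsub()
p.psubscribe("edgex/telemetry/*")  # assumed telemetry topic pattern

for message in p.listen():
    if message["type"] != "pmessage":
        continue  # skip subscription acknowledgements
    # Each payload is a JSON message envelope carrying one metric reading
    print(message["channel"], json.loads(message["data"]))
```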

Experiment with MQTT authentication in service-to-service communications

Today, EdgeX has an API gateway to protect the REST APIs.  We also secure external message bus communications, but there is nothing that secures message bus communications internally.  In Levski, the community is going to be exploring possible solutions for securing MQTT message traffic.  We plan to explore and prototype with OpenZiti – a zero trust open-source project that provides an overlay network for secure communications between services and may even allow us to provide an alternative to our API gateway (Kong today).

Units of Measure

In this past release cycle, we spent some time designing a solution that allows units of measure to be associated with all of the edge sensor data collected. There are many “standard” units of measure in the world. We did not pick one, but instead allow adopters to pick the units of measure they want associated with their data. In the Levski release, we hope to implement the design we formalized in the last release. See our design documentation for more details.

Miscellaneous

The community is seeing more efforts by users/adopters to deploy EdgeX in non-containerized environments and to build EdgeX for other environments (ARM32, RISC-V, etc.). There is also a growing desire on the part of adopters to have versions of our micro services that don’t include dependencies or modules that aren’t being used (for example, not including 0MQ when it isn’t used for communications, or not including security modules in a deployment that doesn’t use EdgeX security features). Therefore, the project is looking to provide more build-from-scratch options that remove unnecessary libraries, modules, etc. in environments where these features are not needed. We call them “lite builds” and they should be more prevalent in Levski.

A growing demand for NATS as a message bus implementation in EdgeX will have the community doing some research and experimentation with NATS in this release cycle. It is unlikely that full-fledged NATS message bus support comes out of this release, but we should begin to understand whether NATS can be used as an implementation of our messaging abstraction, and what the benefits of NATS are over Redis Pub/Sub or MQTT for our use cases.

As you hopefully can see, Levski, while considered a minor release, will likely include a number of useful new features.

EdgeX 3.0

As part of the planning cycle, the EdgeX community leadership also considered the question of the next major release (EdgeX 3.0) and next LTS release.  By project definition, major releases are reserved for significant new features and inclusion of non-backward compatible changes.  We have tentatively slated the spring of 2023 as the target time for EdgeX 3.0.  We have been collecting a number of fixes and features that will require non-backward compatible changes.  We think the time will be right to work on these in the Minnesota release (code name for the spring 2023 release).  If all goes to plan, an LTS release will follow that in the fall of 2023.  The Napa release (code name for the fall 2023 release) would be the community’s second LTS release.  The 3rd major release and 2nd LTS release would fall exactly two years from our 2nd major release (Ireland in spring 2021) and first LTS (Jakarta in fall 2021).

Adopters should take some comfort in the fact that the project is healthy, stable, and looking out to the future. As always, we welcome input and feedback on how EdgeX is supporting your edge/IoT solutions and what we can do better.

EDGE in the Healthcare Nutrition Industry


By LF Edge Community members Utpal Mangla (Industry EDGE Cloud – IBM Cloud Platform), Atul Gupta (Lead Data Architect – IBM), and Luca Marchi (Industry EDGE and Distributed Cloud – IBM Cloud Platform)

Diet and nutrition play a vital role in healthcare, yet they are often left behind by medical practitioners, doctors, nurses, and alternative medical care professionals. Whether a patient is recovering from a regular sickness, injuries, surgery, or chronic illness, all patients need special care in the diet and nutrition provided to them. The need is to provide enough nutrition at pre-determined frequencies for faster recovery, making sure normalcy returns for most patients in moderate time and at optimal expense. But the diet and nutrition guidance provided is either very generic, or it is not monitored and adjusted as the patient’s vitals and recovery progress.

These traditional mechanisms for feeding health, diet, and nutrition tracking data to medical practitioners are one-way, and the siloed decision-making based on this disparate data is not tied together for the common benefit of patients.

With the ongoing digital transformation of healthcare, there is organic growth of edge devices in the hands of patients and healthcare staff: fitness trackers (such as Fitbit), cellphones, nutrition scales, smart kitchen and bed-care devices, etc. The questions are: Where do we take it from here? What do we do with this explosion of edge data? Are we creating another set of data silos?

Edge computing has solutions for most of these problems, and it can provide the connectivity to use this data effectively and efficiently. It can converge all the data silos from traditional devices, healthcare facilities, and human diet and nutrition data into a common repository of “data producers at the edge,” which can feed the data into cloud computing environments. Specialized healthcare cloud computing environments are now available, but they lack the data ingestion components.

The “data producers at the edge” ingest healthcare data into cloud computing environments from edge devices, healthcare facilities, et al., completing the full cycle of data usage. This complete data cycle can now benefit patients and their families while also gaining efficiency for healthcare staff and facilities. Hospitals and alternative health professionals can start treating patients remotely using this new paraphernalia of devices, solutions, and inter-connectivity.

These data producers have rich data about patients’ daily health: vitals, physical vicinity, living conditions, walk score, mobility, assistance provided, etc. The net impact of using this data in a combined and constructive way can not only open new horizons for healthcare but can also start a new journey of edge transformation that is yet to be realized. There is potential for a new industry disruptor in this area, leveraging the benefits of edge and cloud computing, IoT devices, patient literacy, et al. The combination is self-serviced components and interconnectivity that can expand and scale on demand, as needed, and provide solutions to areas of high demand and growth. It is now understood that many healthcare areas can only be scaled, and gain service efficiency, through technology solutions, not by adding more healthcare facilities and professionals. Also, traditional diet and nutrition methods have always played an effective role in patient recovery, and they can yield accurate, efficient, and robust healthcare solutions if used in conjunction with edge and IoT advancements.

This full-cycle approach will tie the diet and nutrition aspects to other healthcare areas, components, and capabilities. The basics of a patient’s recovery progression can ultimately start to play their role in a more connected manner, and predictive care can replace the reactive care approach.

The edge data can be used to generate patient analytics and align them with diet and nutrition data to produce new predictive care capabilities. Healthcare staff can adjust and re-calibrate exercises based on the existing and proposed diet and nutrition of patients. This can even extend to improving the supply chain of the products and services needed by the healthcare industry.

The diet and nutrition data, combined with sophisticated cloud compute, can yield better and improved predictive care for patients and move healthcare into a new era of Digital Edge Transformation.

EdgeX Foundry Turns 5 Years Old & Issues its 10th Release


By Jim White, EdgeX Foundry Technical Steering Committee Chair

EdgeX Foundry, an LF Edge project, turned five years old in April and celebrated with its 10th release!

Today, the project announced the release of Kamakura – version 2.2 of the edge/IoT platform. EdgeX has consistently released two versions a year (spring and fall) since its founding in 2017. Looking back to the founding and announcement of the project at Hannover Messe 2017, I would be lying if I said I expected the project to be where it is today.

Many new startup businesses don’t last five years. For EdgeX to celebrate five years, steadily provide new releases of the platform each year, and continue to grow its community of adopters is quite an achievement.  It speaks both to the need for an open edge / IoT platform and the commitment and hard work of the developers on the project.  I am very proud of what this community has accomplished.

We’ll take a closer look at EdgeX highlights over the past five years in an upcoming blog post, so for now, let’s dive into the Kamakura release. 

 

Here’s an overview of what’s new:

Kamakura Features

While this “dot” release follows on the heels of the project’s first ever long-term support release (Jakarta in the fall of 2021), the Kamakura release still contains a number of new features while maintaining backward compatibility with all 2.x releases. Notably, with this release, EdgeX has added:

  • Metrics Telemetry collection: EdgeX is about collecting sensor/device information and making that data available to analytics packages and enterprise systems in a consistent way so it can be acted on. However, EdgeX had only minimal means to report on its own health or status. An adopter could watch EdgeX service container memory or CPU usage, or use the “ping” facility to make sure a service was still responsive, but that was about it. In the Kamakura release, new service-level metrics can be collected and sent through the EdgeX message bus to be consumed and monitored. Some initial metrics collection is provided in the core data service (e.g., number of events and readings persisted) for this release, but the framework for doing this system-wide is now in place to offer more in the next release, or to allow an adopter to instrument other services to report their metrics telemetry as they see fit. The metrics information can easily be subscribed to by a time series database, dashboard, or custom monitoring application.
  • Delayed Start Services: When EdgeX is running with security enabled, an initialization service (called security-secretstore-setup) provides each EdgeX micro service with a token which is used to access the EdgeX secrets store (provided by Vault). These tokens have a time to live (TTL), meaning that if they are not used or renewed, they expire. This created problems for services that need to start later – especially those that connect to future sensors/devices. Furthermore, not all EdgeX services are known when the platform initializes. An adopter may choose to add new connectors (both north and south) at any time to address new needs. These types of issues often meant that adopters had to shut down and restart all of EdgeX in order to provide new security tokens to all services. In Kamakura, this issue has been addressed with a new service that uses mutually authenticated TLS and exchanges a SPIFFE X.509 SVID for secret store tokens. This new service allows a service to get or renew a token used to access the EdgeX secrets store at any time – not just at the bootstrapping/initialization of EdgeX as a whole.
  • New Camera Device Services: While EdgeX somewhat supported connectivity to ONVIF cameras through a camera device service, the service was incomplete (for example, it did not allow the camera’s PTZ capability to be accessed). With Kamakura, EdgeX now has two camera device services. A new ONVIF camera device service is more complete and tested against several popular camera devices. It allows for PTZ and even the discovery of ONVIF-compliant cameras. A second camera service – a USB camera service – will help manage and monitor simple webcams. Importantly, through the USB camera service, EdgeX will be able to feed a USB camera’s video stream to other packages (such as an AI/ML engine) to incorporate visual inference results into the EdgeX data sensor fusion capability.
  • Dynamic Device Profiles: In prior releases, device profiles were considered mostly static. That is, once a device profile was used and associated with a device or any other EdgeX object (like an event/reading), it was not allowed to change. One could not, for example, start with an empty device profile and add device resources (the sensor points collected by a device service) later as more details of the sensor or use case emerged. With the Kamakura release, device profiles are much more dynamic in nature. New resources can be added at any time, and elements of the device profile can change over time. This allows device profiles to be more flexible and adaptive to the sensors and use case needs, without having to remove and re-add devices all the time.
  • V2 of the CLI: EdgeX has had a command line interface tool for several releases, but the tool was not completely updated to EdgeX 2.0 until this latest release. For that matter, the CLI did not previously offer the ability to call all of the EdgeX APIs. With Kamakura, the CLI now provides 100% coverage of the REST APIs (any call that can be done via REST can be done through the CLI), in addition to being compatible with EdgeX 2.0 instances. Several new features were added as well, including tab completion, use of the registry to provide configuration, and improved result outputs.

The EdgeX community worked on many other features/fixes and behind-the-scenes improvements, including:

  • Updated LLRP/RFID device service and application services
  • Added or updated GPIO and CoAP services 
  • Ability to build EdgeX services on Windows platform (providing optional inclusion of ZMQ libraries as needed)
  • CORS enablement
  • Added additional testing – especially around the optional use of services
  • Added testing to the GUI
  • Optimized the dev ops CI/CD pipelines
  • Kubernetes Helm Chart example for EdgeX
  • Added linting (the automated checking of source code for programmatic, stylistic and security issues) as part of the regular build checks and tests on all service pull requests
  • Formalized release and planning meeting schedule
  • Better tracking of project decisions through new GitHub project boards

Tech Talks

When EdgeX first got started, we provided a series of webinars called Tech Talks to help educate people on what EdgeX is and how it works.  We are announcing that a new Tech Talk webinar series will begin May 24 and run weekly through June.  The topic list includes:

  •   May 24 – Getting Started with EdgeX
  •   June 7 – Build an EdgeX Application Service
  •   June 14 – Build an EdgeX Device Service
  •   June 21 – Creating a Device Profile
  •   June 28 – Getting started with EdgeX Snaps

These talks will be presented by distinguished members of our community. We are going to livestream them through YouTube at 9am PDT on the appointed days. These new talks will help provide instruction on the new EdgeX 2.0 APIs introduced with the Ireland release. You can register for the talks here.

What’s Next: Fall Release – “Levski”

Planning for the fall 2022 release of EdgeX – codenamed Levski – is underway and will be reported on this site soon.  As part of the Kamakura release cycle, several architectural decision records were created which set the foundation for many new features in the fall release.  This includes a north-south messaging subsystem and standardization of units of measure in event/readings. 

 EdgeX is 5, but its future still looks long and bright.

 

Deep Learning Models Deployment with Edge Computing


By Utpal Mangla (IBM), Luca Marchi (IBM), Shikhar Kwatra (AWS)

Smart edge devices across the world are continuously interacting and exchanging information through machine-to-machine communication protocols. Multitudes of similar sensors and endpoints are generating tons of data that needs to be analyzed in real time and used to train complex deep learning models. From autonomous vehicles running computer vision algorithms to smart devices like Alexa running natural language processing models, the era of deep learning has widely extended to a plethora of applications.

Using deep learning, one can enable such edge devices to aggregate and interpret unstructured pieces of information (audio, video, textual) and take ameliorative actions, although this comes at the expense of amplified power and performance demands. Deep learning requires huge computation time and resources to process the streams of information generated by such edge devices and sensors, and that information can scale very quickly. Hence, bandwidth and network requirements need to be tackled tactfully to ensure smooth scalability of such deep learning models.

If machine and deep learning models are not deployed directly at the edge devices, these smart devices have to offload the intelligence to the cloud. In the absence of a good network connection and sufficient bandwidth, that can be quite a costly affair.

However, since these Edge or IoT devices are supposed to run on low power, deep learning models need to be tweaked and packaged efficiently in order to smoothly run on such devices. Furthermore, many existing deep learning models and complex ML use cases leverage third party libraries, which are difficult to port to these low power devices. 

The global Edge Computing in Manufacturing market size is projected to reach USD 12,460 million by 2028, from USD 1,531.4 million in 2021, at a CAGR of 34.5% during 2022-2028. [1]

Simultaneously, the global edge artificial intelligence chips market size was valued at USD 1.8 billion in 2019 and is expected to grow at a compound annual growth rate (CAGR) of 21.3% from 2020 to 2027. [2]

Recent statistical results and experiments have shown that offloading complex models to edge devices can effectively reduce power consumption. However, pushing such deep learning models to the edge has not always met latency requirements. Hence, offloading to the edge may still be use-case dependent and may not work for all tasks at this point.

We now have to build deep learning model compilers capable of optimizing these models into executable code, with versioning, on the target platform. Furthermore, they need to be compatible with existing deep learning frameworks, including but not limited to MXNet, Caffe, TensorFlow, PyTorch, etc. Lightweight, low-power messaging and communication protocols will also enable swarms of such devices to continuously collaborate with each other and solve multiple tasks via parallelization at an accelerated pace.

For instance, TensorFlow Lite can be used for on-device inferencing. First, we pick an existing versioned model or develop a new model. Then, in the conversion stage, the TensorFlow model is compressed into a flat buffer. The compressed .tflite file is loaded onto an edge device during the deployment stage. In this manner, various audio classification, object detection, and gesture recognition models have been successfully deployed on edge devices.
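A minimal sketch of that two-stage workflow, using the standard TensorFlow Lite APIs, is shown below; the model path and the zeroed test input are placeholders:

```python
# Sketch: convert a TensorFlow SavedModel into a compressed .tflite flat
# buffer, then run on-device inference with the lightweight interpreter.
# "my_saved_model" and the float32 zero input are placeholders.
import numpy as np
import tensorflow as tf

# Conversion stage: SavedModel -> flat buffer
converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
with open("model.tflite", "wb") as f:
    f.write(converter.convert())

# Deployment stage: load the .tflite file on the edge device and infer
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```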

Hence, deep learning models migrated to specific libraries and frameworks (like TensorFlow Lite or the Arm Compute Library for embedded systems) can be optimized to run on IoT devices. To scale further with efficiency, we first need to compile and optimize deep learning models directly into executable code for the target device, and we need an extremely lightweight OS to enable multitasking and efficient communication between tasks.

References:

  1. https://www.prnewswire.com/news-releases/edge-computing-market-size-to-reach-usd-55930-million-by-2028-at-cagr-31-1—valuates-reports-301463725.html
  2. https://www.grandviewresearch.com/industry-analysis/edge-artificial-intelligence-chips-market

 

IoT Enabled Edge Infusion


By Shikhar Kwatra, Utpal Mangla, Luca Marchi

The sheer volume of information processed and generated by Internet of Things-enabled sensors has been burgeoning tremendously in the past few years. With Moore’s law still holding – the number of transistors in an integrated circuit doubling roughly every two years – the ability of edge devices to handle vast amounts of data with complex designs is also expanding on a large scale.

Most current IT architectures are unable to keep up with the growing data volume and maintain the processing power to understand, analyze, process, and generate outcomes from the data in a quick and transparent fashion. That is where IoT solutions meet edge computing.

Transmitting data to a centralized location across multiple hops – for instance, from remote sensors to the cloud – involves additional transport delays, security concerns, and processing latency. The ability of edge architectures to move the necessary centralized framework directly to the edge, in a distributed fashion, so that data is processed at its originating source, has been a great improvement in the IoT-edge industry.

As IoT projects scale with edge computing, it becomes possible to deploy IoT projects efficiently, reduce security risks to the IoT network, and add complex processing – including machine learning and deep learning frameworks – to the IoT network.

International Data Corporation’s (IDC) Worldwide Edge Spending Guide estimates that spending on edge computing will reach $24 billion in 2021 in Europe. It expects spending to continue to experience solid growth through 2025, driven by its role in bringing computing resources closer to where the data is created, dramatically reducing time to value, and enabling business processes, decisions, and intelligence outside of the core IT environment. [1]

Sixty percent of organizations surveyed in a recent study conducted by RightScale agree that the holy grail of cost-saving hides in cloud computing initiatives. Edge AI, in contrast, eliminates the exorbitant expenses incurred on the AI or machine learning processes carried out on cloud-based data centers. [2]

Edge is now widely being applied in a cross-engagement matrix across various sectors:

  1. Industrial – IoT devices and sensors where data loss is unacceptable
  2. Telecommunication – Virtual and Software Defined solutions involving ease of virtualization, network security, network management etc.
  3. Network Cloud – Hybrid and Multi-cloud frameworks with peering endpoint connections, provisioning of public and private services requiring uptime, security and scalability
  4. Consumer & Retail framework – Inclusion of Personal Engagement devices, for instance, wearables, speakers, AR/VR, TV, radios, smart phones etc. wherein data security is essential for sensitive information handling 

Since not all edge devices are powerful enough to handle complex processing tasks while maintaining effective security software, such tasks are usually pushed to an edge server or gateway. In that sense, the gateway becomes the communications hub, performing critical network functions including but not limited to data accumulation, sensor data processing, and sensor protocol translation, before forwarding the data to the on-premise network or cloud.
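As a purely conceptual illustration of that hub role (not drawn from any specific product), the sketch below accumulates readings from low-power sensors over a simple UDP text protocol, translates them into structured JSON, and forwards batches upstream over MQTT. The ports, topic, and wire format are invented for the example:

```python
# Sketch of a gateway loop: accumulate sensor readings, translate a toy
# "sensor-id,value" text protocol into JSON, and forward batches upstream.
# The UDP port, broker host, and topic are hypothetical.
import json
import socket

import paho.mqtt.client as mqtt

UPSTREAM_TOPIC = "site/gateway1/readings"  # hypothetical cloud topic
BATCH_SIZE = 10

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))  # sensors push UDP datagrams to the gateway

client = mqtt.Client()
client.connect("cloud-broker.example.com", 1883)
client.loop_start()

batch = []
while True:
    data, addr = sock.recvfrom(1024)
    # Protocol translation: "sensor-id,value" text -> structured JSON
    sensor_id, value = data.decode().strip().split(",")
    batch.append({"sensor": sensor_id, "value": float(value)})
    if len(batch) >= BATCH_SIZE:  # data accumulation before forwarding
        client.publish(UPSTREAM_TOPIC, json.dumps(batch), qos=1)
        batch.clear()
```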

From autonomous vehicles running different edge AI solutions, to actuators with WiFi or Zigbee connectivity to the network, and with the increased complexity of fusing different protocols, different edge capabilities will continue to surface in the coming years.

References: 

[1] https://www.idc.com/getdoc.jsp?containerId=prEUR148081021

[2] https://www.itbusinessedge.com/data-center/developments-edge-ai/

 

Overview of Akraino R5’s Project Cassini: IoT and Infrastructure Edge Blueprint Family


By Olivier Bernard and Parag Beeraka 

This post provides an overview of the Smart Cities Akraino blueprint as well as an overview of key features and implementations of PARSEC in Akraino’s Release 5.

Overview – Akraino Blueprint: Smart Cities

The purpose of the Smart Cities blueprint family is to provide an edge computing platform based on Arm SoCs and to improve deployment flexibility and security within edge computing. The high-level relationships between functional domains are shown in the figure below:

(For the full description of the Smart Cities Reference Architecture please refer to the Smart Cities Documents.)

 

Smart Cities in Akraino R5 – Key Features and Implementations

The Smart Cities blueprint’s security component is PARSEC, first available in Akraino’s Release 5. The following is a brief introduction to PARSEC.

Parsec is the Platform AbstRaction for SECurity, a new open-source initiative to provide a common API to secure services in a platform-agnostic way.

PARSEC aims to define a universal software standard for interacting with secure object storage and cryptography services, creating a common way to interface with functions that would traditionally have been accessed by more specialised APIs. Parsec establishes an ecosystem of developer-friendly libraries in a variety of popular programming languages. Each library is designed to be highly ergonomic and simple to consume. This growing ecosystem will put secure facilities at the fingertips of developers across a broad range of use cases in infrastructure computing, edge computing and the secure Internet of Things.


(For more information on PARSEC, see https://parallaxsecond.github.io/parsec-book/)

 

Software Defined Camera (SDC) blueprint

Security cameras are seeing increasing growth in many subsegments, such as commercial surveillance cameras and consumer surveillance cameras, and in many other devices such as dashcams, baby monitors, and other IoT vision devices. The total surveillance market is expected to be around $44B in the year 2025, growing at a CAGR of 13%. The other exciting thing happening in this market, along with the increase in unit shipments of surveillance cameras, is the increasing use of artificial intelligence and machine learning (AI/ML) in these cameras and IoT devices. It is estimated that there are already a billion cameras installed in the world, and this number will reach 2 billion by the year 2024. However, security cameras are very different from many other devices because, once installed, they stay installed for 5+ years. So it is critical to ensure that these devices continue to provide enhanced functionality over time.

In today’s world, there are many technologies that are “software defined.” This has been possible because of many advancements in technology. At the bottom of the stack you have the operating system, then the virtualization layer, which kicked off the original software-defined compute with virtual machines. Then came containers, and then orchestrators for the containers such as k8s and k3s. These technological advancements paved the path to the software-defined data center. In the data center, after software-defined compute, software-defined networking and software-defined storage started to become the norm.

Now the time is ripe to move software-defined trends to edge devices as well. This is a transformation that has started across many segments, such as automotive (cars, trucks, etc.). We see this trend applying to many other edge devices, one of which is cameras – or, as we and many others call them, smart cameras. The idea is that once you buy a camera, the hardware is advanced enough that you will receive continuous software updates, which can be neural network model updates catered to the specific data you might have captured using the camera, or other updates related to the functionality and security of the device.

 

By designing future camera products with cloud-native capabilities in mind, one will be able to scale camera and vision products to unprecedented levels. If all applications are deployed using a service-oriented architecture with containers, a device can be continuously updated. One key advantage of this architecture is enabling new use-case scenarios that one might not have envisioned in the past, through simple on-demand service deployment post-sale via an app store. A simple example is deploying a new and updated license plate recognition app a year after the purchase of the cameras. Another key advantage is enabling continuous machine learning model updates based on the data pertinent to the camera installation, i.e., being able to use the data the camera is capturing to train the models. With this, the model stays relevant to the specific use case, increasing the value of the camera itself. All of this is possible by enabling a service-oriented architecture model using containers and thinking cloud-native first.

The other important aspect that will really bring together security camera and vision products is adherence to common standards at the silicon and platform level. LF Edge member company Arm has started an initiative called Project Cassini, and Arm SystemReady® is a key component of it. As part of Arm SystemReady®, Arm is working with many ecosystem partners (OEMs, ODMs, silicon vendors) to drive a standards-based approach, scaling deployments by replacing custom solutions with standards-based solutions and becoming the platform of choice for future camera deployments.

To address and simplify these challenges, Arm is designing and offering an open-source reference solution to be leveraged collaboratively for future software-defined cameras, based on established standards and focused on security, ML, imaging, and cloud-native. The basis of the reference solution is a SystemReady®-certified platform. SystemReady® platforms implement UEFI, U-Boot, and standards-based trusted firmware.

  • With the base software layers compliant to SystemReady®, standard Linux distros such as Fedora, openSUSE, or Yocto-based builds can run on these platforms. 
  • Then the container orchestration service layer can run containerized applications, using k3s. 
  • The next software layer is the ML and computer vision libraries. Arm supports ML functions through Arm NN and the Arm Compute Library, so that users can develop their ML applications using standard open-source ML frameworks such as PyTorch, TensorFlow, etc. 
  • Similarly, for computer vision, it is critical to support open standards such as Video4Linux, OpenCV, and BLAS libraries. 
  • Both complementary and necessary is security – this is where orchestration and security microservices such as PARSEC (an open-source platform abstraction layer for security), along with the PSA certification process, are critical, so that end users build confidence that the product will be secure when deployed in the field. These secure services are the foundation to support and enable containerized applications for inferencing, analytics, and storage.

Learn more about Project CASSINI here: https://wiki.akraino.org/display/AK/Project+Cassini+-+IoT+and+Infrastructure+Edge+Blueprint+Family.

Infusion of Machine Learning Operations with Internet of Things


By: Shikhar Kwatra, Utpal Mangla, Luca Marchi

With the advancement in deep tech, the operationalization of machine learning and deep learning models has been burgeoning. In a typical organizational scenario involving a machine learning or deep learning business case, the data science and IT teams need to collaborate extensively in order to increase the pace of scaling and pushing multiple machine learning models to production, through continuous training, continuous validation, continuous deployment, and continuous integration with governance. Machine Learning Operations (MLOps) has carved a new era of the DevOps paradigm in the machine learning/artificial intelligence realm by automating end-to-end workflows.

As we optimize models and bring data processing and analysis closer to the edge, data scientists and ML engineers are continuously finding new ways to handle the complications involved with operationalizing models on such IoT (Internet of Things) edge devices.

A LinkedIn publication revealed that global AI spend will reach $232 billion by 2025 and $5 trillion by 2050. According to Cognilytica, the global MLOps market will be worth $4 billion by 2025; the industry was worth $350 million in 2019. [1]

Models running on IoT edge devices need to be trained very frequently due to variable environmental parameters; continuous data drift and limited access to such IoT edge solutions may degrade model performance over time. The target platforms on which ML models are deployed can also vary, from IoT edge devices to specialized hardware such as FPGAs, which leads to a high level of complexity and customization with regards to MLOps on such platforms.

Models can be packaged into Docker images for deployment, after profiling the models to determine the core, CPU, and memory settings on the target IoT platforms. Such IoT devices also have multiple dependencies for packaging and deploying models that can be executed seamlessly on the platform. Hence, model packaging is easily implemented with containers, as they can span both cloud and IoT edge platforms.

When running on IoT edge platforms with certain device dependencies, a decision needs to be made about which containerized machine learning models must be made available offline due to limited connectivity. An access script that loads the model, invokes the endpoint, and scores requests incoming to the edge device needs to be operational in order to provide the respective probabilistic output.
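A minimal sketch of such an access script is shown below, reusing the TensorFlow Lite interpreter for local, offline scoring; the model directory, file-naming scheme, and input features are assumptions for illustration:

```python
# Sketch of an edge "access script": load the newest locally cached model
# version and score incoming requests offline. Paths and names are
# illustrative, not from a specific MLOps product.
import pathlib

import numpy as np
import tensorflow as tf

MODEL_DIR = pathlib.Path("/var/models")  # assumed local model repository

def load_latest(model_name: str) -> tf.lite.Interpreter:
    """Pick the newest cached model version so scoring works offline."""
    latest = sorted(MODEL_DIR.glob(f"{model_name}-*.tflite"))[-1]
    interpreter = tf.lite.Interpreter(model_path=str(latest))
    interpreter.allocate_tensors()
    return interpreter

def score(interpreter: tf.lite.Interpreter, features) -> float:
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    x = np.asarray([features], dtype=np.float32)
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    return float(interpreter.get_tensor(out["index"])[0])  # probability

if __name__ == "__main__":
    model = load_latest("anomaly-detector")  # hypothetical model name
    print(score(model, [0.1, 0.7, 0.3]))
```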

Continuous monitoring and retraining of models deployed on IoT devices needs to be handled properly, using the model artifact repository and model versioning features of an MLOps framework. Different images of the deployed models are stored in a shared device repository in order to quickly fetch the right image at the right time for deployment to the IoT device.

Model retraining can be triggered by a job scheduler running on the edge device, or when new data arrives and invokes the REST endpoint of the machine learning model. Continuous model retraining, versioning, and model evaluation become an integral part of the pipeline.

When data changes frequently, as can be the case with such IoT edge devices or platforms, versioning and refreshing the model in response to variable data drift enables the MLOps engineer persona to automate the model retraining process, saving the data scientists time to deal with other aspects of feature engineering and model development.

In time, the rise of such devices continuously collaborating over the internet and integrating with MLOps capabilities is poised to grow. Multi-factored optimizations will continue, making the lives of data scientists and ML engineers more focused and easier from a model operationalization standpoint, via an end-to-end automated approach.

Reference: 

[1] https://askwonder.com/research/historical-global-ai-ml-ops-spend-ulshsljuf

 

eKuiper Issues 1.4 Release & Discusses 2022 Roadmap


Written by Jiyong Huang, Chair of the eKuiper Technical Steering Committee and Senior Software Engineer of EMQ

eKuiper, a Stage 1 (at-large) project under the LF Edge umbrella, is a Go language implementation of a lightweight stream processing engine that can run in various IoT edge usage scenarios for real-time data analysis. By running near the data source at the edge, eKuiper can improve system response speed and security, saving network bandwidth and storage costs.

The eKuiper project usually has one release per quarter with new features, followed by several patch versions. The current active release is v1.4, which is the biggest update since joining LF Edge! We had solid community contributions for this release, including requirements collection, code contributions, testing, and trials. We are grateful to share the updates and hope they benefit our users.

Rule Pipeline: Building Complex Business Logic with Flexibility

eKuiper uses SQL to define business logic, which lowers the threshold of development. Simple SQL statements can efficiently define business requirements such as filtering, aggregation, and data conversion in practical usage scenarios. However, some complex scenarios are difficult to address with a single SQL statement; even when it is possible, the SQL statement itself becomes too complex and difficult to maintain.

Based on the new in-memory source and sink, the rule pipeline can connect multiple SQL rules easily, efficiently, and flexibly. The readability and maintainability of SQL statements improve when implementing complex business scenarios. Rules are connected through in-memory topics, similar to MQTT topics, and support wildcard subscriptions, enabling an exceptionally flexible and efficient rule pipeline. While improving business expressiveness, rule pipelining can also improve runtime performance in certain complex scenarios. For example, when multiple rules need to process data filtered by the same condition, that filtering condition can be extracted as a predecessor rule, so the filter is calculated only once, significantly reducing computation when there are many rules.
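As a rough sketch of what such a pipeline could look like, the snippet below creates a predecessor filter rule and a successor aggregation rule chained through an in-memory topic, using eKuiper’s REST API (listening on port 9081 by default). The stream name, SQL, and topic are illustrative – consult the eKuiper documentation for the exact rule and stream schemas:

```python
# Sketch: chaining two eKuiper rules through an in-memory topic via the
# REST API. SQL, topic, and names are illustrative examples.
import requests

BASE = "http://localhost:9081"  # default eKuiper REST port

# Predecessor rule: filter once, publish results to an in-memory topic
requests.post(f"{BASE}/rules", json={
    "id": "filter_rule",
    "sql": "SELECT * FROM sensor_stream WHERE temperature > 30",
    "actions": [{"memory": {"topic": "ch/filtered"}}],
}).raise_for_status()

# A memory-backed stream subscribed to that topic (wildcards supported)
requests.post(f"{BASE}/streams", json={
    "sql": 'CREATE STREAM filtered() WITH (DATASOURCE="ch/filtered", TYPE="memory", FORMAT="JSON")',
}).raise_for_status()

# Successor rule: aggregate the pre-filtered data
requests.post(f"{BASE}/rules", json={
    "id": "agg_rule",
    "sql": "SELECT avg(temperature) FROM filtered GROUP BY TUMBLINGWINDOW(ss, 10)",
    "actions": [{"log": {}}],
}).raise_for_status()
```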

Portable Plugin: Making Extensions Easier

The original version of eKuiper supported an extension scheme based on the native Go plugin system, supporting individual extensions for source, sink, and function (UDF) plugins. However, due to the limitations of the Go plugin system, writing and using plugins is not easy even for users familiar with Go, let alone users of other languages. eKuiper has received a lot of feedback from community users about plugin development, operation, and deployment, and various operational issues.

To balance development efficiency and runtime efficiency, v1.4.0 adds a new portable plugin system to lower the threshold of plugin development. The new portable plugin system is based on the nng protocol for inter-process communication and supports multiple languages, currently providing Go and Python SDKs, with more SDKs to be added in subsequent versions according to user requirements. It simplifies the compilation/deployment process, and a portable plugin runs like a normal program written in its language, without additional restrictions. Due to the different operating mechanism, a portable plugin crash will not affect eKuiper itself.

Native plugins and portable plugins can coexist. Users can choose either plugin implementation, or mix them, according to their needs.

Shared Connections: Source/Sink Multiplexed Connections

eKuiper provides a rich set of sources and sinks to access external systems and send results to them. Many of these sources and sinks are input/output pairs for the same external system type. For example, MQTT and EdgeX both have a corresponding source and sink. In the new version, users can do all connection-related configuration in the connection.yaml file; in the source/sink configuration, you specify which connection to use, without repeating the configuration. Shared connection instances reduce the overhead of multiple connections. In some scenarios, users may be limited in the number of connections or ports to external systems, and using shared connections can satisfy that limit. Also, based on shared connections, eKuiper can support connections to the EdgeX secure data bus.

Other enhancements

  • Support for configuration via environment variables
  • By default, eKuiper uses SQLite, an embedded database, to store metadata such as streams and rules, allowing for no external dependencies at runtime. In the new version, users can choose Redis as the metadata storage solution
  • Rule status returns error reason
  • Optimized SQL runtime, reducing CPU usage by up to 70% in a shared-source user scenario
  • Sink dynamic parameter support: e.g., the MQTT sink can set the topic to a field value in the result, so that the data received can be sent to a dynamic topic
  • Authentication support: user-configurable JWT-based authentication for REST API

2022 Roadmap

As a young project, eKuiper is still far from perfect. There is a long list of feature requests from users and the community. eKuiper will continue to grow in the new year, and it is open to anyone interested in making edge computing powerful and easier to use at the same time. The team will focus on four themes:

  • Enrich API and syntax
    • More SQL clauses (IN, BETWEEN, LIKE, etc.)
    • Streaming features (dynamic table, etc.)
    • More handy functions (changed, unnest, statistical functions)
  • Stability & Performance
    • Allow features to be selected by build tag and provide a minimal core version
    • Incremental window
    • Continuous memory footprint and CPU optimization
  • Extension
    • Stabilize the portable plugin runtime and support more languages
    • More HTTP source types
  • Build & Deployment
    • Cluster and HA
    • Android support

Please check the 2022 Roadmap in our Github project for details, and stay tuned in with the eKuiper community.

Cloud Services at the Edge


This post was first published on the IBM blog at this link; it has been reposted here with permission. Some content has been redacted so as not to be seen as an endorsement by LF Edge.

By Ashok Iyengar, Executive Cloud Architect & Gerald Coon, Architect & Dev Leader, IBM Cloud Satellite

Where does the enterprise edge end, where does the far edge begin, and what, if any, are the various points of intersection?

What do AWS Outposts, Azure Stack, Google Anthos, and IBM Cloud Satellite have in common? Each one of them is essentially an extension of a public cloud offering into an enterprise’s on-premises location or edge facility. This is, in fact, the hybrid cloud platform paradigm.

Each vendor has their offering nuances. They even support different hardware for building the on-premises components of a hybrid cloud infrastructure. But the end goal is to combine the compute and storage of public cloud services into an enterprise’s data center — what some might call Enterprise Edge. It is worth pointing out that IBM Cloud Satellite is built on the value of Red Hat OpenShift Container Platform (RHOCP). This blog post will discuss where the enterprise edge ends, where the far edge begins and what, if any, are the various points of intersection.

To reiterate from previous blogs in this series, the edge encompasses everything from far edge devices all the way to the public cloud, with the enterprise edge and network edge along the way. The various edges (network, enterprise, far edge) are shown on the left side of Figure 1, along with the major components of a platform product: the cloud region, the tunnel link, a control plane, and different remote Satellite locations:

Figure 1. Different edges and IBM Cloud Satellite components.

 

Note that one may need more than one control plane – for example, a telco location for the network team and a development location for deploying edge services.

Please make sure to check out all the installments in this series of blog posts on edge computing:

Edge components

As we have mentioned in our other blogs in this series, there are three main components in an edge topology, no matter which edge we are talking about:

  • A central hub that orchestrates and manages edge services deployment.
  • Containerized services or applications that can run on edge devices.
  • Edge nodes or devices where the applications run, and data is generated.

Some edge solutions do not use agents on edge devices, while others like IBM Edge Application Manager require an agent installed on each device. An agent is a small piece of code running on edge nodes or devices to facilitate the deployment and monitoring of applications. Refer to “Architecting at the Edge” for more information.

Which cloud?

In most cases, these platform products that bring public cloud services to an on-premises location work with one cloud provider. AWS Outposts, for example, is a hardware solution only meant to work with AWS. IBM Cloud Satellite, on the other hand, has certain connectivity and resource requirements (CPU/memory) but is agnostic about the hardware. The requirements generally begin at the operating system level (Red Hat), leaving the hardware purchasing to the customer. The Red Hat hosts provided can even be EC2 instances in AWS or other cloud providers. This means IBM Cloud Satellite can bring IBM Cloud services to remote locations, as well as services from AWS, Azure, Google Cloud, and more that are planned.

[…]

Overlapping or complementary technologies?

We hear the phrase “cloud-out” when describing compute moving out toward the edge. But what we see from Figure 1 is that the services brought on-premises from the public cloud cannot quite be extended out to the far edge devices. That is where one requires a product like IBM Edge Application Manager to deploy and manage services at scale.

A common challenge of edge workloads is training the artificial intelligence (AI) and machine learning (ML) models and using predictive model inferencing. An IBM Cloud Satellite location can act as the platform in close proximity where data can be stored and accessed, and AI/ML models can be trained and retrained before they are deployed on to edge devices. Or the apps running on the edge nodes could access a suite of AI/ML services via the Satellite location. Thus, low latency and data sovereignty are two major reasons why enterprises would want to deploy such solutions. Compliance and other security requirements are easier to implement when the cloud object storage or database is physically located on-premises.

It is easy to envision a use case where a retail chain would use a product like AWS Outposts or IBM Cloud Satellite to establish a satellite location in a city. That satellite location could then provide the required cloud-based services to all its stores in that city. These could be a common set of services like AI/ML analytics, access policies, security controls, databases, etc. – providing consistency across all environments. Consistency and access to a large set of powerful processing services are additional advantages of such deployments.

Another common example is with telecommunication service providers that are looking to monetize 5G technology by offering cloud services to their customers. Figure 3 shows a Telco MEC (Mobile Edge Computing) topology making use of IBM Cloud Satellite, IBM Edge Application Manager (IEAM) and Red Hat OpenShift Container Platform (RHOCP):

Figure 3. Telco MEC topology using IBM Cloud Satellite and IEAM services.

 

To provide a bit more context, MEC effectively offers localized cloud servers and services rather than depending on a larger, centralized cloud. This basically means the edge/IoT devices will communicate with more, smaller data hubs that are physically closer to them (i.e., on the “edge” of the network). Rather than online games having to send data to a distant central server, have it processed, and get a response sent back – all of which slows down overall communication speed – gamers will be able to access more power, closer to where they are.

Wrap-up

In addition to the millions of devices, IoT and edge computing have the challenge of accessing and storing data in the “last mile.” Products like AWS Outposts, Azure Stack, Google Anthos, and IBM Cloud Satellite complement IoT and edge topologies. In fact, the IBM Edge Application Manager hub is often deployed in a Satellite location or resides in the cloud. The combination of the two technologies provides a compelling solution that companies in healthcare, telecommunications, and banking can use. The agnostic nature of IBM Cloud Satellite allows it to bring not only IBM Cloud services to remote locations but also services from AWS, Azure, and Google Cloud.

The IBM Cloud architecture center offers many hybrid and multicloud reference architectures, including AI frameworks. Look for the IBM Edge Computing reference architecture here.

This blog post talked about bringing cloud services to the edge in what is commonly called distributed cloud or “cloud out.” It offers the best of both worlds — public cloud services and secure on-premises infrastructure. The folks at mimik have a very interesting notion of “edge in,” wherein they describe a world of microservices, edge-device-to-edge-device communication and creating a sort of service mesh that expands the power of the edge devices toward the cloud.

Let us know what you think.

Special thanks to Joe Pearson, David Booz, Jeff Sloyer and Bill Lambertson for reviewing the article.