
Congratulations to the Recipients of the 2022 EdgeX Awards!

By Jim White, outgoing EdgeX Foundry TSC Chair

Each year since its founding, EdgeX Foundry awards at least four members of its contributing community with Contribution and Innovation awards.  On this fifth anniversary of the project’s founding, we honored seven distinguished engineers for their efforts on the project.

Innovation Award

Presented to individuals who have provided the most innovative solution to the open-source project over the past year

Bryon Nevis and Jim Wang (both from Intel) designed and implemented the new “Delayed Start Services” security capability, an innovative way to provide security tokens to services that start after the EdgeX security framework has started and already handed out tokens to all the known services.  Bryon and Jim have led most EdgeX security efforts over the past two years, but this year they provided some real innovation around securely storing secrets, distributing secrets, and making secrets available to other services.  Their innovation allows adopters to add new device services and application services to EdgeX over time, as use cases demand, without requiring a restart of the system.

Anthony Casagrande and Marc Fuller (both from Intel) have been instrumental in Intel’s use of EdgeX to show how it can be a platform in support of retail edge use cases.  Their work has provided both a device service and an application service to ingest and use RFID information via LLRP (Low Level Reader Protocol), which is used in RFID reading equipment found in many retail store locations.  In addition to creating these services, Anthony and Marc have found (and in some cases fixed) a number of bugs and have identified many needed features (submitting more than 25 GitHub issues) that real-world adopters of the platform need.

Emilio Reyes (again from Intel – there seems to be a theme here!) has been a contributor to the EdgeX DevOps team for the last few years and has been a vital part of making the EdgeX community a better, more streamlined community.  Emilio has a passion for quality and, with his experience in unit testing, has contributed hundreds of tests for our Jenkins global pipeline libraries, giving us continuous confidence in our ability to deliver quality code.  Emilio also has a passion for automation and has been a driving force behind much of our GitHub automation for EdgeX.  For instance, he created automated management of GitHub labels and milestones from a central repository, greatly reducing the number of repetitive tasks needed to manage labels/milestones across 20+ repositories.

Contribution Award

Recognizes individuals who have contributed leadership and helped EdgeX Foundry advance and continue momentum in the past year.

 

Iain Anderson (from IOTech Systems) – has been a solid contributor, working group leader, and architect/thought leader since the founding of EdgeX in 2017.  As Device Service chairman, he has overseen several releases (major and minor) and is currently responsible for 11 active EdgeX device services, while helping to usher in 4 more that are in active development.  He is the project’s first and foremost authority on C development and personally handles most of the C device service and SDK development.  Iain has over 530 commits to the project (almost 50K lines of code), which puts him #6 all time on the project; he is also #7 in all-time pull request submissions.  He is a quiet, steadfast, never-a-complaint contributor who has championed many of the project’s advances, such as device service and device profile simplifications, message bus communications from device services, device filtering, and the future device service record and replay features.

 

Siggi Skulason (from Canonical) – updated and significantly improved the EdgeX CLI tool over the course of the last two release cycles.  Importantly, Siggi upgraded the tool from the V1 APIs to the V2 APIs, added support for all the service endpoints (versus a small subset of APIs), made the CLI available as a snap, removed a significant amount of technical debt, and improved the product’s help and documentation.

 

I am happy to honor and call out these gentlemen for their efforts.  I don’t know of many open-source projects that go out of their way to honor and thank their contributors with awards like this.  To be nominated and then selected by your peers – especially peers of the caliber of the EdgeX engineering community – is a great recognition of their work.  Congratulations Bryon, Jim, Anthony, Marc, Emilio, Iain and Siggi.  Jobs well done.

You can view the Awards presentation here:

 

Thank You EdgeX

It’s been a productive few months – one release out, another planned and being worked on, and a time to hand out some well-earned “kudos”.  Before I go, I want to let the community know that I am stepping aside from my role as TSC chair, and a new set of leaders is stepping up to take EdgeX into its future – and a bright future it has.

About a month after I joined Dell Technologies in 2015, I was handed the task of finding an IoT software platform to put on our new brand of gateways.  EdgeX began life on my kitchen island with an idea, an architecture and a small bit of code.  With the support of the company, some great leaders, and a collection of some of the brightest engineers I have ever worked with, my work was expanded on, productized, and launched as the edge software you know today.  For the past seven years (almost to the day), EdgeX has been at the center of my professional working life (and my wife would probably add that it included much of my personal life).  Creating it, taking it into open source, working with the Linux Foundation to make it available and known, working at IOTech to commercialize the idea, and leading this wonderful community has been the highlight of my career.  It has allowed me to travel the world, meet so many amazing people, be a part of an incredible creative process, and watch something I started get used to create solutions that help people all over the globe.  EdgeX has exceeded even my wildest dreams.  It’s just hard to wrap my brain around it even today.  I am no Einstein.  But if you have an idea (even a moderately good idea) and find some amazing people around you who can help turn the idea into reality, you can catch lightning in a bottle.

I would need a separate blog post to thank everyone I owe for this experience and the success of EdgeX.  That is not an exaggeration.  “Thank you” is not enough, but to the EdgeX community, the people I worked with at Dell, and those in my current home of IOTech Systems: I hope you take it as a small down payment on all I owe you.

I’ll close by suggesting that if anyone ever offers you the chance to work on, let alone start, an open-source project – jump at the opportunity.  You will be better for the experience.  I’ll paraphrase Winston Churchill – many forms of software development have been tried and will be tried in this world.  No one pretends that open-source development is perfect or all-wise.  Indeed, it has been said that open-source development is the worst form of software development except for all other forms that have been tried.  

EdgeX – small at the edge, but forever big in my heart.

EdgeX Foundry Issues Bug Fix Release (v2.1.1) of its EdgeX Jakarta Release

The EdgeX Foundry project, a highly scalable and flexible open source framework that facilitates interoperability between devices and applications at the IoT edge, issued a bug fix release (version 2.1.1) of the EdgeX Jakarta release.  Importantly, as Jakarta is under long-term support (LTS), this new bug fix release addresses a critical Common Vulnerabilities and Exposures (CVE) issue.  Adopters using the Jakarta LTS release should upgrade immediately to this bug fix release.  The CVE is documented on the project’s GitHub site.  In addition, this bug fix release also addresses some less critical bugs that were identified and fixed through the Kamakura release cycle.  Details on what is in version 2.1.1 can be found on the project’s Wiki site.
All of the fixes in this Jakarta dot release are already available in the Kamakura release (the latest, but non-LTS, release from EdgeX).  Adopters of Kamakura do not need to make any changes as the CVE and other bug fixes are already part of the latest EdgeX release.

Calling all Edge App Developers! Join the ETSI/LF Edge Hackathon

We’re partnering with ETSI ISG MEC to host a Hackathon to encourage open edge developers to build solutions that leverage ETSI MEC Service APIs and LF Edge (Akraino) blueprints. Vertical use cases could include Automotive, Mixed and Augmented Reality, Edge Computing and 5G, and more. Teams are highly encouraged to be creative and to propose solutions in additional application verticals.

Here’s how it works:

  • Participants will be asked to develop an innovative Edge Application or Solution, utilizing ETSI MEC Service APIs and LF Edge Akraino Blueprints.
  • Solutions may include any combination of client apps, edge apps or services, and cloud components.
  • The Edge Hackathon will run remotely from June to September, with a short-list of the best teams invited to compete with demonstrations and a Hackathon “pitch-off” at the Edge Computing World Global event in Santa Clara, California (Silicon Valley) on October 10th-12th.

Submissions are due 29th June 2022.  More details, including how to register, are available here.

Additional background information:

Edge Computing provides developers with localized, low-latency resources that can be utilized to create new and innovative solutions, which are essential to many application vertical markets in the 5G era.

  • ETSI’s ISG MEC is standardizing an open environment that enables the integration of applications from infrastructure and edge service providers across MEC (Multi-access Edge Computing) platforms and systems. It offers a set of open, standardized Edge Service APIs to enable the development and deployment of edge applications at scale.
    • Teams will be provided with a dedicated instance of the ETSI MEC Sandbox for their work over the duration of the Hackathon. The MEC Sandbox is an online environment where developers can access and interact with live ETSI MEC Service APIs in an emulated edge network set in Monaco, which includes 4G, 5G, & WiFi networks configurations, single & dual MEC platform deployments, etc.
  • LF Edge’s Akraino project offers a set of open infrastructure and application Blueprints for Edge, spanning a broad variety of application vertical use-cases. Each blueprint offers a high-quality full-stack solution that is tested and validated by the community.

However, teams are not required to use a specific edge hosting platform or MEC lifecycle management APIs. Developers may use any virtualization environment of their choosing. 


What’s Next for EdgeX: Onto Levski

By Jim White, EdgeX Foundry TSC chair

May was a busy month at EdgeX Foundry.  EdgeX released version 2.2, Kamakura, in the middle of May (details here) and went straight into planning our fall release – code named Levski.  We also selected our 2022 EdgeX Award winners, and I’ll be posting a follow-up to congratulate them and speak to some of their efforts.

Levski

First, what is the next release of EdgeX going to be about?  It is slated for release in November 2022.  It will, in all likelihood, be another minor dot release (version 2.3, to be precise) that is backward compatible with all EdgeX 2.x releases.  The Levski release will also not be an LTS release; Jakarta remains the first and only LTS for EdgeX for now (more on that below).  By the way, in case you are wondering where the name Levski (or the name of any EdgeX release) comes from: a top contributor to the project is selected each release cycle to name an upcoming release.  Levski is named after a favorite mountain peak in the home country of one of our CLI developers (she is from Bulgaria).

You will see the words “likely” and “anticipate” show up a lot in this post because a lot can happen in the span of six months when building open-source software.  These caveats are there to remind the reader that what is planned is not set in stone – so if you are an adopter/user of EdgeX, design accordingly.

Major Features

Two of the most important features that we are looking to land in the Levski release are:

  • North-to-south message bus communications
  • Control plane (aka system management) events

North-to-south messaging will set up the means for a 3rd party application or cloud system to use MQTT to communicate with EdgeX’s command service and trigger an actuation or data fetch command all the way through to the devices/sensors.  

Today, communications going from north to south in EdgeX are accomplished via REST.  Therefore, the exchanges are not really asynchronous, and there is no opportunity to set the quality of service (QoS) of the communications.  Levski will allow adopters of EdgeX to use messaging to trigger actions and communicate data needs.  This complements what is already available in EdgeX: the ability to use messaging from south to north.  The design for this feature was finalized in the past Kamakura release cycle; see the design document for more details.
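As a rough sketch of the idea (the topic structure and field names below are illustrative assumptions, not the finalized EdgeX API), a 3rd-party application might publish a request like this over MQTT and then wait for the asynchronous response:

```python
# Sketch of a north-to-south command request envelope. Topic layout and
# field names are hypothetical; they only illustrate the messaging pattern.
import json
import uuid

def build_command_request(device: str, command: str, method: str):
    """Build an MQTT topic and JSON payload for a command request."""
    topic = f"edgex/command/request/{device}/{command}/{method}"
    envelope = {
        "apiVersion": "v2",
        "requestId": str(uuid.uuid4()),   # lets the caller match the async response
        "contentType": "application/json",
        "payload": {},                    # set-command parameters would go here
    }
    return topic, json.dumps(envelope)

topic, body = build_command_request("thermostat-01", "temperature", "get")
# The response arrives asynchronously on a corresponding response topic --
# which is what makes QoS settings possible in a way REST cannot offer.
```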

The second major feature to highlight in Levski is control plane events (CPE), also known as system management events.  This feature will allow EdgeX to send important events to would-be listeners (other EdgeX services or 3rd-party subscribers) announcing that something (an event) has happened in EdgeX.  Examples of control plane events are that a new device/sensor was provisioned or a new profile was uploaded.  Control plane events could also report on micro service issues – such as a service not having access to a database it needs.  Each EdgeX micro service will define its own control plane events; however, in this release, it is anticipated that control plane events will first be tried with core metadata.
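A hypothetical CPE listener might look like the sketch below; the event shape and type names are assumptions for illustration, since each service will define its own control plane events:

```python
# Illustrative sketch of a control plane event (CPE) consumer.
# Event structure and type names here are hypothetical.
import json

def handle_cpe(raw: str) -> str:
    """Dispatch a control plane event received from the message bus."""
    event = json.loads(raw)
    kind = event.get("type")
    if kind == "DeviceAdded":
        return f"provisioned new device: {event['deviceName']}"
    if kind == "ProfileAdded":
        return f"uploaded new profile: {event['profileName']}"
    return f"unhandled control plane event: {kind}"

msg = json.dumps({"type": "DeviceAdded", "deviceName": "camera-07"})
print(handle_cpe(msg))  # -> provisioned new device: camera-07
```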

Other Features and Fixes

More Metrics

During the Kamakura release cycle, we implemented a new beta metrics/telemetry feature.  This feature allows any service to report (via the message bus) telemetry that enables better monitoring and management of EdgeX.  Telemetry such as how many events have been persisted, or how long it takes for a message to be processed in an app service, can now be collected and published by EdgeX metrics.  In this release cycle, we plan to proliferate metrics across all services (it was only in a few services for Kamakura) and take the feature out of beta status.
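To illustrate the idea (with assumed field names, not the final telemetry schema), a metric message a service publishes to the message bus might be shaped like this:

```python
# Sketch of a service-side telemetry message. Field names are assumptions
# for illustration; a monitoring app or time-series DB would subscribe to these.
import json
import time

def make_metric(service: str, name: str, value: float, unit: str) -> str:
    """Serialize one telemetry sample for publication on the message bus."""
    return json.dumps({
        "serviceName": service,
        "metricName": name,
        "value": value,
        "unit": unit,
        "timestamp": int(time.time() * 1e9),  # nanosecond timestamp, assumed
    })

msg = make_metric("core-data", "EventsPersisted", 1042, "count")
```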

Experiment with MQTT authentication in service-to-service communications

Today, EdgeX has an API gateway to protect the REST APIs.  We also secure external message bus communications, but there is nothing that secures message bus communications internally.  In Levski, the community is going to be exploring possible solutions for securing MQTT message traffic.  We plan to explore and prototype with OpenZiti – a zero trust open-source project that provides an overlay network for secure communications between services and may even allow us to provide an alternative to our API gateway (Kong today).

Units of Measure

In this past release cycle, we spent some time designing a solution that would allow units of measure to be associated with all of the edge sensor data collected.  There are many “standard” units of measure in the world.  We did not pick one, but instead allow the adopter to pick and use the units of measure they want associated with their data.  In the Levski release, we hope to implement the design we formalized in the last release.  See our design documentation for more details.
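A minimal sketch of the adopter-defined approach: the adopter supplies the set of units they consider valid, and incoming readings are checked against it. The resource and unit names below are hypothetical:

```python
# Sketch of adopter-chosen units of measure: EdgeX does not mandate a
# standard, so the adopter supplies their own allowed units per resource.
# All names and values here are illustrative.
ALLOWED_UNITS = {
    "temperature": {"degC", "degF"},
    "pressure": {"kPa", "psi"},
}

def validate_units(resource: str, units: str) -> bool:
    """Accept a reading only if its units are in the adopter's allowed set."""
    return units in ALLOWED_UNITS.get(resource, set())

print(validate_units("temperature", "degC"))    # valid per this adopter's list
print(validate_units("temperature", "kelvin"))  # rejected: not configured
```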

Miscellaneous

The community is seeing more efforts by users/adopters to deploy EdgeX in non-containerized environments and to build EdgeX for other environments (ARM32, RISC-V, etc.).  There is also a growing desire on the part of adopters to have versions of our micro services that don’t include dependencies or modules that aren’t being used (for example, not including 0MQ when not using 0MQ for communications, or not including security modules in a deployment that is not using EdgeX security features).  Therefore, the project is looking to provide more “build from scratch” options that remove unnecessary libraries, modules, etc. in environments where these features are not needed.  We call them “lite builds” and they should be more prevalent in Levski.

A growing demand for NATS as a message bus implementation in EdgeX will have the community doing some research and experimentation with NATS in this release cycle.  It is unlikely that full-fledged NATS message bus support comes out of this release, but we should begin to understand whether NATS can be used as a message bus implementation for our messaging abstraction and learn the benefits of NATS over Redis Pub/Sub or MQTT for our use cases.

As you hopefully can see, Levski, while considered a minor release, will likely include a number of useful new features.

EdgeX 3.0

As part of the planning cycle, the EdgeX community leadership also considered the question of the next major release (EdgeX 3.0) and next LTS release.  By project definition, major releases are reserved for significant new features and inclusion of non-backward compatible changes.  We have tentatively slated the spring of 2023 as the target time for EdgeX 3.0.  We have been collecting a number of fixes and features that will require non-backward compatible changes.  We think the time will be right to work on these in the Minnesota release (code name for the spring 2023 release).  If all goes to plan, an LTS release will follow that in the fall of 2023.  The Napa release (code name for the fall 2023 release) would be the community’s second LTS release.  The 3rd major release and 2nd LTS release would fall exactly two years from our 2nd major release (Ireland in spring 2021) and first LTS (Jakarta in fall 2021).

Adopters should take some comfort in the fact that the project is healthy, stable and is looking out to the future.  As always, we welcome input and feedback on how EdgeX is supporting your edge/IOT solutions and what we can do better.

EDGE in the Healthcare Nutrition Industry

By LF Edge Community members Utpal Mangla (Industry EDGE Cloud – IBM Cloud Platform); Atul Gupta (Lead Data Architect – IBM); and Luca Marchi (Industry EDGE and Distributed Cloud – IBM Cloud Platform)

Diet and nutrition play a vital role in healthcare, yet they are often left behind by medical practitioners, doctors, nurses, and alternative medical care professionals. Whether a patient is recovering from routine sickness, injuries, surgeries, or chronic illness, all need special care in the diet and nutrition they are provided. The need is to provide enough nutrition at pre-determined frequencies for faster recovery, making sure normalcy returns for most patients in moderate time and at optimal expense. But the diet and nutrition guidance provided is either very generic or not monitored and adjusted according to patients’ vitals and recovery progression.

These traditional mechanisms of feeding health, diet, and nutrition tracking data to medical practitioners are one-way, and the siloed decision making based on this disparate data is not tied to the common benefit of the patients.

With the ongoing digital transformation of healthcare, there is organic growth of edge devices in the hands of patients and healthcare staff, such as Fitbits, cellphones, nutrition scales, smart kitchen, and bed-care devices. The questions are: Where do we take it from here? What do we do with this explosion of edge data? Are we creating another set of data silos?

Edge computing has solutions for most of these problems and can provide the connectivity to use this data effectively and efficiently. It can converge all the data silos from traditional devices, healthcare facilities, and human diet and nutrition data into a common repository of “data producers at the edge”, which can feed the data into cloud computing environments. Specialized healthcare cloud computing environments are now available, but they lack the data ingestion components.

The “data producers at the edge” ingest healthcare data into cloud computing environments from edge devices, healthcare facilities, et al., completing the full cycle of data usage. This complete data cycle can now benefit patients and their families while gaining efficiency for healthcare staff and facilities. Hospitals and alternative health professionals can start treating patients remotely using this new paraphernalia of devices, solutions, and interconnectivity.

These data producers hold rich data about patients’ daily health, vitals, physical vicinity, living conditions, walk score, mobility, assistance provided, and more. The net impact of using this data in a combined and constructive way can not only open new horizons for healthcare but also start a new journey of edge transformation that is yet to be realized. There is potential for a new industry disruptor in this area by leveraging the benefits of edge, cloud computing, IoT devices, patient literacy, et al. The combination is self-serviced components and interconnectivity that can expand and scale on demand and provide solutions in areas of high demand and growth. It is now recognized that many healthcare areas can only be scaled, and gain service efficiency, through technology solutions, not by adding more healthcare facilities and professionals. Traditional diet and nutrition methods have always played an effective role in patient recovery and can yield accurate, efficient, and robust healthcare solutions when used in conjunction with edge and IoT advancements.

This full-cycle approach will tie the diet and nutrition aspects to other healthcare areas, components, and capabilities. The basics of a patient’s recovery progression can ultimately play their role in a more connected manner, and predictive care can replace the reactive care approach.

The edge data can be used to generate patient analytics and align it with diet and nutrition data to create new predictive care capabilities. Healthcare staff can adjust and re-calibrate exercises based on the existing and proposed diet and nutrition for the patients. This can even extend into improving the supply chain of the products and services needed for the healthcare industry.

The diet and nutrition data, combined with sophisticated cloud compute, can yield better and improved predictive care for patients and move us to a new era in Healthcare Digital Edge Transformation.

EdgeX Foundry Turns 5 Years Old & Issues its 10th Release

By Jim White, EdgeX Foundry Technical Steering Committee Chair

EdgeX Foundry, an LF Edge project, turned five years old in April and celebrated with its 10th release!

Today, the project announced the release of Kamakura – version 2.2 of the edge/IoT platform.  EdgeX has consistently released two versions a year (spring and fall) since its founding in 2017.  Looking back to the founding and announcement of the project at Hannover Messe 2017, I would be lying if I said I expected the project to be where it is today.

Many new startup businesses don’t last five years. For EdgeX to celebrate five years, steadily provide new releases of the platform each year, and continue to grow its community of adopters is quite an achievement.  It speaks both to the need for an open edge / IoT platform and the commitment and hard work of the developers on the project.  I am very proud of what this community has accomplished.

We’ll take a closer look at EdgeX highlights over the past five years in an upcoming blog post, so for now, let’s dive into the Kamakura release. 

 

Here’s an overview of what’s new:

Kamakura Features

While this “dot” release follows on the heels of the project’s first ever long-term support release (Jakarta in the fall of 2021), the Kamakura release contains a number of new features while still maintaining backward compatibility with all 2.x releases.  Notably, with this release, EdgeX has added:

  • Metrics Telemetry collection: EdgeX is about collecting sensor/device information and making that data available to analytics packages and enterprise systems in a consistent way so it can be acted on.  However, EdgeX had only minimal means to report on its own health or status.  An adopter could watch EdgeX service container memory or CPU usage, or use the “ping” facility to make sure a service was still responsive, but that was about it.  In the Kamakura release, new service-level metrics can be collected and sent through the EdgeX message bus to be consumed and monitored.  Some initial metrics collection is provided in the core data service (e.g., number of events and readings persisted) for this release, but the framework for doing this system-wide is now in place to offer more in the next release, or to allow an adopter to instrument other services to report their metrics telemetry as they see fit.  The metrics information can be easily subscribed to by a time series database, dashboard, or custom monitoring application.

 

  • Delayed Start Services:  When EdgeX is running with security enabled, an initialization service (called security-secretstore-setup) provides each EdgeX micro service with a token which is used to access the EdgeX secrets store (provided by Vault).  These tokens have a time to live (TTL), meaning that if they were not used or renewed, they expired.  This created problems for services that need to start later – especially those that are going to connect to future sensors/devices.  Furthermore, not all EdgeX services are known when the platform initializes.  An adopter may choose to add new connectors (both north and south) at any time to address new needs.  These types of issues often meant that adopters had to shut down and restart all of EdgeX in order to provide new security tokens to all services.  In Kamakura, this issue has been addressed with a new service that uses mutually authenticated TLS and exchanges a SPIFFE X.509 SVID for secret store tokens.  This new service allows a service to get or renew a token used to access the EdgeX secrets store at any time, not just at the bootstrapping/initialization of EdgeX as a whole.

 

  • New Camera Device Services:  While EdgeX somewhat supported connectivity to ONVIF cameras through a camera device service, the service was incomplete (for example, not allowing the camera’s PTZ capability to be accessed).  With Kamakura, EdgeX will now have two camera device services.  A new ONVIF camera device service is more complete and tested against several popular camera devices.  It allows for PTZ and even the discovery of ONVIF-compliant cameras.  A second camera service – a USB camera service – will help manage and monitor simple webcams.  Importantly, EdgeX will, through the USB camera service, be able to feed a USB camera’s video stream to other packages (such as an AI/ML engine) to incorporate visual inference results into the EdgeX data sensor fusion capability.

 

  • Dynamic Device Profiles: In prior releases, device profiles were considered mostly static.  That is, once a device profile was used and associated with a device or any other EdgeX object (like an event/reading), it was not allowed to change.  One could not, for example, start with an empty device profile and add device resources (the sensor points collected by a device service) later as more details of the sensor or use case emerged.  With the Kamakura release, device profiles are much more dynamic in nature.  New resources can be added at any time, and elements of the device profile can change over time.  This allows device profiles to be more flexible and adaptive to the sensors and use case needs, without having to remove and re-add devices all the time.
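For illustration, here is a minimal device profile in the general shape of the EdgeX V2 YAML layout (the names and values are hypothetical), to which new device resources could later be appended without touching the devices that reference it:

```yaml
# Hypothetical minimal device profile; field names follow the general
# EdgeX V2 profile layout, values are illustrative only.
apiVersion: "v2"
name: "Room-Sensor-Profile"
manufacturer: "ExampleCorp"
model: "RS-100"
deviceResources:
  - name: "Temperature"
    properties:
      valueType: "Float32"
      readWrite: "R"
      units: "degC"
  # With dynamic profiles, additional resources (e.g. Humidity) can be
  # appended here later, without removing and re-adding devices.
```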

 

  • V2 of the CLI:  EdgeX has had a command line interface tool for several releases, but the tool was not completely updated to EdgeX 2.0 until this latest release.  For that matter, the CLI did not offer the ability to call all of the EdgeX APIs.  With Kamakura, the CLI now provides 100% coverage of the REST APIs (any call that can be done via REST can be done through the CLI), in addition to being compatible with EdgeX 2.0 instances.  Several new features were added as well, including tab completion, use of the registry to provide configuration, and improved result outputs.

The EdgeX community worked on many other features, fixes, and behind-the-scenes improvements, including:

  • Updated LLRP/RFID device service and application services
  • Added or updated GPIO and CoAP services 
  • Ability to build EdgeX services on Windows platform (providing optional inclusion of ZMQ libraries as needed)
  • CORS enablement
  • Added additional testing – especially around the optional use of services
  • Added testing to the GUI
  • Optimized the dev ops CI/CD pipelines
  • Kubernetes Helm Chart example for EdgeX
  • Added linting (the automated checking of source code for programmatic, stylistic, and security issues) as part of the regular build checks and tests on all service pull requests
  • Formalized release and planning meeting schedule
  • Better tracking of project decisions through new GitHub project boards

Tech Talks

When EdgeX first got started, we provided a series of webinars called Tech Talks to help educate people on what EdgeX is and how it works.  We are announcing that a new Tech Talk webinar series will begin May 24 and run weekly through June.  The topic list includes:

  •   May 24 – Getting Started with EdgeX
  •   June 7 – Build an EdgeX Application Service
  •   June 14 – Build an EdgeX Device Service
  •   June 21 – Creating a Device Profile
  •   June 28 – Getting started with EdgeX Snaps

These talks will be presented by distinguished members of our community.  We are going to livestream them through YouTube at 9am PDT on the appointed days.  These new talks will help provide instruction in the new EdgeX 2.0 APIs introduced with the Ireland release.  You can register for the talks here:

What’s Next: Fall Release – “Levski”

Planning for the fall 2022 release of EdgeX – codenamed Levski – is underway and will be reported on this site soon.  As part of the Kamakura release cycle, several architectural decision records were created which set the foundation for many new features in the fall release.  This includes a north-south messaging subsystem and standardization of units of measure in event/readings. 

EdgeX is 5, but its future still looks long and bright.

 

Deep Learning Models Deployment with Edge Computing

By Utpal Mangla (IBM), Luca Marchi (IBM), Shikhar Kwatra (AWS)

Smart edge devices across the world are continuously interacting and exchanging information through machine-to-machine communication protocols. A multitude of similar sensors and endpoints are generating tons of data that needs to be analyzed in real time and used to train complex deep learning models. From autonomous vehicles running computer vision algorithms to smart devices like Alexa running natural language processing models, the era of deep learning has extended widely into a plethora of applications.

Using deep learning, one can enable such edge devices to aggregate and interpret unstructured pieces of information (audio, video, textual) and take ameliorative actions, although this comes at the expense of amplified power and performance demands. Deep learning requires huge computation time and resources to process the streams of information generated by such edge devices and sensors, and that information can scale very quickly. Hence, bandwidth and network requirements need to be handled tactfully to ensure smooth scalability of such deep learning models.

If machine and deep learning models are not deployed directly on the edge devices, these smart devices have to offload the intelligence to the cloud. In the absence of a good network connection and sufficient bandwidth, that can be quite a costly affair. 

However, since these edge and IoT devices are expected to run on low power, deep learning models need to be tweaked and packaged efficiently to run smoothly on them. Furthermore, many existing deep learning models and complex ML use cases rely on third-party libraries that are difficult to port to such low-power devices. 

The global edge computing in manufacturing market is projected to reach USD 12,460 million by 2028, up from USD 1,531.4 million in 2021, at a CAGR of 34.5% during 2022-2028. [1]

Simultaneously, the global edge artificial intelligence chips market size was valued at USD 1.8 billion in 2019 and is expected to grow at a compound annual growth rate (CAGR) of 21.3% from 2020 to 2027. [2]
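These growth figures can be sanity-checked with the standard compound annual growth rate formula; a quick Python check using the numbers quoted above:

```python
# Sanity-check the quoted market figures from references [1] and [2].
def cagr(start, end, years):
    """Compound annual growth rate implied by a start and end value."""
    return (end / start) ** (1 / years) - 1

# Edge computing in manufacturing: USD 1531.4M (2021) -> USD 12460M (2028).
manufacturing = cagr(1531.4, 12460, 7)

# Edge AI chips: USD 1.8B (2019) growing at 21.3% per year through 2027.
chips_2027 = 1.8 * (1 + 0.213) ** 8

print(f"Manufacturing CAGR: {manufacturing:.1%}")  # close to the quoted 34.5%
print(f"Edge AI chips by 2027: ~${chips_2027:.1f}B")
```

The first figure comes out at roughly 34.9%, consistent with the quoted 34.5% (the small gap is due to rounding in the reported market sizes).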

Recent statistical results and experiments have shown that offloading complex models to edge devices can effectively reduce power consumption. However, pushing such deep learning models to the edge has not always met latency requirements. Hence, offloading to the edge remains use-case dependent and may not work for all tasks at this point. 

We now have to build deep learning model compilers capable of optimizing these models into versioned executable code for the target platform. Such compilers also need to be compatible with existing deep learning frameworks, including but not limited to MXNet, Caffe, TensorFlow, and PyTorch. Lightweight, low-power messaging and communication protocols will also enable swarms of such devices to collaborate continuously and solve multiple tasks in parallel at an accelerated pace. 

For instance, TensorFlow Lite can be used for on-device inference. First, we pick an existing versioned model or develop a new one. The TensorFlow model is then compressed into a flat buffer during the conversion stage, and the resulting .tflite file is loaded onto an edge device during the deployment stage. In this manner, various audio classification, object detection, and gesture recognition models have been successfully deployed on edge devices. 
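The conversion and deployment flow described above can be sketched with the TensorFlow Lite converter API. This is a minimal sketch, assuming TensorFlow is installed and using a tiny stand-in Keras model in place of a real trained one:

```python
import tensorflow as tf

# A tiny stand-in for an existing trained model (an assumption for this sketch;
# in practice this would be a versioned model pulled from a model store).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Conversion stage: compress the model into a TFLite flat buffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g. weight quantization
tflite_model = converter.convert()

# Deployment stage: the .tflite bytes are what gets loaded onto the device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# On-device inference then uses the lightweight TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
```

On a real device the interpreter would typically come from the much smaller `tflite-runtime` package rather than full TensorFlow.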

Hence, deep learning models built with frameworks and libraries targeted at embedded systems (such as TensorFlow Lite or the Arm Compute Library) can be optimized to run on IoT devices. To scale further with efficiency, we first need to compile and optimize deep learning models directly into executable code for the target device, and we need an extremely lightweight OS to enable multitasking and efficient communication between tasks.

References:

  1. https://www.prnewswire.com/news-releases/edge-computing-market-size-to-reach-usd-55930-million-by-2028-at-cagr-31-1—valuates-reports-301463725.html
  2. https://www.grandviewresearch.com/industry-analysis/edge-artificial-intelligence-chips-market

 

IoT Enabled Edge Infusion

By Blog

By Shikhar Kwatra, Utpal Mangla, Luca Marchi

The sheer volume of information processed and generated by Internet of Things sensors has burgeoned tremendously in the past few years. As Moore's law still holds, with the number of transistors in an integrated circuit doubling roughly every two years, the ability of edge devices to handle vast amounts of data with complex designs is also expanding on a large scale. 

Most current IT architectures are unable to keep up with the growing data volume and maintain the processing power to understand, analyze, process, and generate outcomes from the data in a quick and transparent fashion. That is where IoT solutions meet edge computing. 

Transmitting data across multiple hops to a centralized location, for instance from remote sensors to the cloud, introduces additional transport delays, security concerns, and processing latency. The ability of edge architectures to move the necessary centralized framework directly to the edge in a distributed fashion, so that data is processed at its originating source, has been a great improvement in the IoT-edge industry.

As IoT projects scale with edge computing, it has become possible to deploy IoT projects efficiently, reduce security risks to the IoT network, and add complex processing to it through machine learning and deep learning frameworks. 

International Data Corporation’s (IDC) Worldwide Edge Spending Guide estimates that spending on edge computing will reach $24 billion in 2021 in Europe. It expects spending to continue to experience solid growth through 2025, driven by its role in bringing computing resources closer to where the data is created, dramatically reducing time to value, and enabling business processes, decisions, and intelligence outside of the core IT environment. [1]

Sixty percent of organizations surveyed in a recent study conducted by RightScale agree that the holy grail of cost-saving hides in cloud computing initiatives. Edge AI, in contrast, eliminates the exorbitant expenses incurred on the AI or machine learning processes carried out on cloud-based data centers. [2]

Edge is now widely being applied in a cross-engagement matrix across various sectors:

  1. Industrial – IoT devices and sensors where data loss is unacceptable
  2. Telecommunication – Virtual and Software Defined solutions involving ease of virtualization, network security, network management etc.
  3. Network Cloud – Hybrid and Multi-cloud frameworks with peering endpoint connections, provisioning of public and private services requiring uptime, security and scalability
  4. Consumer & Retail framework – Inclusion of Personal Engagement devices, for instance, wearables, speakers, AR/VR, TV, radios, smart phones etc. wherein data security is essential for sensitive information handling 

Since not all edge devices are powerful enough to handle complex processing tasks while also running effective security software, such tasks are usually pushed to an edge server or gateway. The gateway then becomes the communications hub, performing critical network functions including data accumulation, sensor data processing, and sensor protocol translation, before forwarding the data to the on-premise network or cloud. 
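As a toy illustration of the protocol-translation role described above (the frame layout and field names here are invented for the sketch), a gateway might unpack a compact binary sensor frame and re-emit it as JSON for the upstream network:

```python
import json
import struct

# Hypothetical 8-byte sensor frame: sensor id (uint16), temperature in
# centi-degrees Celsius (int16), humidity in tenths of a percent (uint16),
# and a checksum field (uint16). Little-endian, as many MCUs emit.
FRAME_FORMAT = "<HhHH"

def translate_frame(frame: bytes) -> str:
    """Translate one binary sensor frame into JSON for the cloud/on-prem side."""
    sensor_id, temp_centi, hum_tenths, _crc = struct.unpack(FRAME_FORMAT, frame)
    return json.dumps({
        "sensor_id": sensor_id,
        "temperature_c": temp_centi / 100,
        "humidity_pct": hum_tenths / 10,
    })

# A gateway would read frames off the sensor network and forward the JSON.
frame = struct.pack("<HhHH", 42, 2315, 487, 0)
print(translate_frame(frame))
```

A real gateway would add checksum validation, batching, and a secure uplink; the point is simply that the translation step lets constrained sensors speak a compact wire format while the cloud sees a standard one.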

From autonomous vehicles running different edge AI solutions to actuators connecting to the network over Wi-Fi or Zigbee, and with the increasing complexity of fusing different protocols, new edge capabilities will continue to surface in the coming years.

References: 

[1] https://www.idc.com/getdoc.jsp?containerId=prEUR148081021

[2] https://www.itbusinessedge.com/data-center/developments-edge-ai/

 

Overview of Akraino R5’s Project Cassini: IoT and Infrastructure Edge Blueprint Family

By Akraino, Blog

By Olivier Bernard and Parag Beeraka 

This post provides an overview of the Smart Cities Akraino blueprint as well as an overview of key features and implementations of PARSEC in Akraino’s Release 5.

Overview – Akraino Blueprint: Smart Cities

The purpose of the Smart Cities blueprint family is to provide an edge computing platform based on Arm SoCs and to improve deployment flexibility and security within edge computing. The high-level relationships between functional domains are shown in the figure below:

(For the full description of the Smart Cities Reference Architecture please refer to the Smart Cities Documents.)

 

Smart Cities in Akraino R5 – key features and implementations:

The Smart Cities blueprint’s security component is PARSEC, first available in Akraino’s Release 5. The following is a brief introduction to PARSEC.

Parsec is the Platform AbstRaction for SECurity, a new open-source initiative to provide a common API to secure services in a platform-agnostic way.

PARSEC aims to define a universal software standard for interacting with secure object storage and cryptography services, creating a common way to interface with functions that would traditionally have been accessed by more specialised APIs. Parsec establishes an ecosystem of developer-friendly libraries in a variety of popular programming languages. Each library is designed to be highly ergonomic and simple to consume. This growing ecosystem will put secure facilities at the fingertips of developers across a broad range of use cases in infrastructure computing, edge computing and the secure Internet of Things.
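As a conceptual illustration only (this is not the actual PARSEC API, and the class and key names are invented), the idea of a platform-agnostic security service boils down to one common interface with swappable per-platform backends:

```python
import hashlib
import hmac
from abc import ABC, abstractmethod

class SecurityProvider(ABC):
    """Common API; concrete backends would wrap a TPM, HSM, or TrustZone."""
    @abstractmethod
    def sign(self, key_name: str, data: bytes) -> bytes: ...
    @abstractmethod
    def verify(self, key_name: str, data: bytes, signature: bytes) -> bool: ...

class SoftHmacProvider(SecurityProvider):
    """Toy software backend. Real deployments would keep keys in hardware;
    deriving a key from its name, as done here, is for illustration only."""
    def __init__(self):
        self._keys = {}

    def _key(self, name: str) -> bytes:
        return self._keys.setdefault(name, hashlib.sha256(name.encode()).digest())

    def sign(self, key_name, data):
        return hmac.new(self._key(key_name), data, hashlib.sha256).digest()

    def verify(self, key_name, data, signature):
        return hmac.compare_digest(self.sign(key_name, data), signature)

# Application code programs against the abstract interface, so the same code
# runs unchanged whether the backend is software, a TPM, or a secure enclave.
provider: SecurityProvider = SoftHmacProvider()
sig = provider.sign("device-identity", b"hello edge")
assert provider.verify("device-identity", b"hello edge", sig)
```

This is the design pattern PARSEC generalizes: client libraries talk to one abstraction, and the platform supplies the concrete secure backend.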

 

 

(For more information on PARSEC, see https://parallaxsecond.github.io/parsec-book/)

 

Software Defined Camera (SDC) blueprint

Security cameras are seeing increasing growth in many subsegments, such as commercial surveillance cameras, consumer surveillance cameras, and other devices such as dashcams, baby monitors, and other IoT vision devices. The total surveillance market is expected to reach around $44B in 2025, growing at a CAGR of 13%. The other exciting development in this market, alongside the increase in unit shipments of surveillance cameras, is the growing use of artificial intelligence and machine learning (AI/ML) in these cameras and IoT devices. It is estimated that a billion cameras are already installed in the world, and that this number will reach 2 billion by 2024. However, security cameras are very different from many other devices because once installed, they stay installed for five or more years. So it is critical to ensure that these devices continue to provide enhanced functionality over time. 

In today’s world, many technologies are “software defined”. This has been possible because of many advancements in technology: at the bottom of the stack sits the operating system, then the virtualization layer, which kicked off the original software-defined compute with virtual machines. Then came containers and container orchestrators such as k8s and k3s. These advancements paved the path to the software-defined data center, where, after software-defined compute, software-defined networking and software-defined storage became the norm. 

Now the time is ripe to bring software-defined trends to edge devices as well. This transformation has already started across many segments, such as automotive (cars, trucks, etc.). We see this trend applying to many other edge devices, one of which is the camera, or as we and many others call it, the smart camera. The idea is that once you buy a camera, the hardware is advanced enough that you will receive continuous software updates, which can be neural network model updates catered to the specific data you have captured with the camera, or other updates related to the functionality and security of the device.  

 

By designing future camera products with cloud-native capabilities in mind, one will be able to scale camera and vision products to unprecedented levels. If all applications are deployed using a service-oriented architecture with containers, a device can be continuously updated. One key advantage of this architecture is enabling new use-case scenarios that one might not have envisioned in the past, through simple on-demand service deployment post-sale via an app store. A simple example is deploying a new and improved license plate recognition app a year after the cameras were purchased. Another key advantage is enabling continuous machine learning model updates based on the data pertinent to the camera installation, i.e., using the data the camera captures to train the models. The model then stays relevant to the specific use case, increasing the value of the camera itself. All of this is possible by adopting a service-oriented architecture model using containers and thinking cloud-native first. 

The other important aspect that will really bring together security camera and vision products is adherence to common standards at the silicon and platform level. LF Edge member company Arm has started an initiative called Project Cassini, and Arm SystemReady® is a key component of it. As part of Arm SystemReady®, Arm is working with many ecosystem partners (OEMs, ODMs, silicon vendors) to drive a standards-based approach, scaling deployments by replacing custom solutions with standards-based ones and becoming the platform of choice for future camera deployments.  

To address and simplify these challenges, Arm is designing and offering an open-source reference solution, to be leveraged collaboratively for future software-defined cameras, based on established standards and focused on security, ML, imaging, and cloud-native. The basis of the reference solution is a SystemReady®-certified platform. SystemReady® platforms implement UEFI, U-Boot, and trusted standards-based firmware. 

  • With the base software layers compliant with SystemReady®, standard Linux distros such as Fedora, openSUSE, or Yocto-based builds can run on these platforms. 
  • The container orchestration service layer can then run containerized applications using k3s. 
  • The next software layer provides ML and computer vision libraries. Arm supports ML functions through Arm NN and the Arm Compute Library, so that users can develop ML applications with standard open-source frameworks such as PyTorch and TensorFlow. 
  • Similarly, for computer vision, it is critical to support open standards such as Video4Linux, OpenCV, and BLAS libraries. 
  • Both complementary and necessary is security: this is where orchestration and security microservices such as PARSEC (an open-source platform abstraction layer for security), along with the PSA certification process, are critical, so that end users gain confidence that the product will be secure when deployed in the field. These secure services are the foundation for containerized applications for inferencing, analytics, and storage.

Learn more about Project CASSINI here: https://wiki.akraino.org/display/AK/Project+Cassini+-+IoT+and+Infrastructure+Edge+Blueprint+Family.

Infusion of Machine Learning Operations with Internet of Things

By Blog

By: Shikhar Kwatra, Utpal Mangla, Luca Marchi

With the advancement of deep tech, the operationalization of machine learning and deep learning models has been burgeoning in the machine learning space. In a typical machine learning or deep learning business case within an organization, the data science and IT teams need to collaborate extensively to increase the pace of scaling and pushing multiple machine learning models to production through continuous training, continuous validation, continuous deployment, and continuous integration with governance. Machine Learning Operations (MLOps) has carved out a new era of the DevOps paradigm in the machine learning/artificial intelligence realm by automating end-to-end workflows.

As we optimize models and bring data processing and analysis closer to the edge, data scientists and ML engineers are continuously finding new ways to manage the complications involved in operationalizing models on such IoT (Internet of Things) edge devices. 

A LinkedIn publication projected that global AI spend will reach $232 billion by 2025 and $5 trillion by 2050. According to Cognilytica, the global MLOps market will be worth $4 billion by 2025, up from $350 million in 2019. [1]

Models running on IoT edge devices need to be retrained frequently due to variable environmental parameters; continuous data drift and limited access to such IoT edge solutions can degrade model performance over time. The target platforms on which ML models are deployed can also vary, from IoT edge devices to specialized hardware such as FPGAs, which leads to a high level of complexity and customization in MLOps on these platforms. 

Models can be packaged into a Docker image for deployment after profiling them to determine the core, CPU, and memory settings on the target IoT platforms. Such IoT devices also carry multiple dependencies for packaging and deploying models that must execute seamlessly on the platform. Hence, model packaging is most easily implemented with containers, since they span both cloud and IoT edge platforms.

When running on IoT edge platforms with device-specific dependencies, a decision needs to be made about which containerized machine learning models must be available offline due to limited connectivity. An access script that loads the model, invokes its endpoint, and scores each request arriving at the edge device needs to be operational in order to produce the respective probabilistic output. 
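Such an access script can be very small. The following is a minimal sketch in which a stand-in logistic model (the weights and field names are invented; a real script would load the cached model from the device's local store) scores one incoming JSON request offline:

```python
import json
import math

# Stand-in for a locally cached offline model: logistic regression weights.
# In practice these would be loaded from the device's model store.
MODEL = {"weights": [0.8, -0.4, 1.2], "bias": -0.1}

def score(request_json: str) -> float:
    """Score one incoming request against the locally cached model."""
    features = json.loads(request_json)["features"]
    z = MODEL["bias"] + sum(w * x for w, x in zip(MODEL["weights"], features))
    return 1 / (1 + math.exp(-z))  # sigmoid -> probabilistic output

# The edge endpoint would call this for each incoming request.
prob = score('{"features": [0.5, 1.0, 0.25]}')
print(f"probability: {prob:.3f}")
```

The same pattern applies unchanged when the stand-in model is replaced by a TFLite interpreter call; only the body of `score` changes.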

Continuous monitoring and retraining of models deployed on IoT devices need to be handled properly, using a model artifact repository and model versioning features as part of the MLOps framework. Different images of the deployed models are stored in the shared device repository so that the right image can be quickly fetched and deployed to the IoT device at the right time. 

Model retraining can be triggered by a job scheduler running on the edge device, or when new data arrives and invokes the REST endpoint of the machine learning model. Continuous model retraining, versioning, and model evaluation become an integral part of the pipeline. 
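The two trigger conditions just described can be sketched as a single policy function. This is a hypothetical sketch; the interval and sample threshold are invented placeholders that a real deployment would tune:

```python
from datetime import datetime, timedelta

# Hypothetical trigger policy: retrain on a fixed schedule OR when enough new
# samples have arrived since the last training run.
RETRAIN_INTERVAL = timedelta(days=7)
NEW_SAMPLE_THRESHOLD = 500

def should_retrain(last_trained: datetime, now: datetime, new_samples: int) -> bool:
    """True if either the scheduled or the data-driven trigger fires."""
    return (now - last_trained >= RETRAIN_INTERVAL
            or new_samples >= NEW_SAMPLE_THRESHOLD)

# Scheduled trigger fires even with little new data:
print(should_retrain(datetime(2022, 6, 1), datetime(2022, 6, 9), 20))   # True
# Data-driven trigger fires early when fresh data floods in:
print(should_retrain(datetime(2022, 6, 1), datetime(2022, 6, 2), 800))  # True
# Otherwise no retraining yet:
print(should_retrain(datetime(2022, 6, 1), datetime(2022, 6, 3), 20))   # False
```

A drift-detection signal could be added as a third condition without changing the shape of the policy.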

When data changes frequently, as is often the case on such IoT edge devices and platforms, frequent model versioning and refreshing in response to variable data drift enables the MLOps engineer to automate the model retraining process, saving the data scientists time to focus on other aspects of feature engineering and model development. 

Over time, the number of such devices continuously collaborating over the internet and integrating with MLOps capabilities is poised to grow. Multi-factored optimizations will continue, making the lives of data scientists and ML engineers easier and more focused from a model operationalization standpoint via an end-to-end automated approach.

Reference: 

[1] https://askwonder.com/research/historical-global-ai-ml-ops-spend-ulshsljuf