EDGE in the Healthcare Nutrition Industry


By LF Edge Community members Utpal Mangla (Industry EDGE Cloud – IBM Cloud Platform); Atul Gupta (Lead Data Architect – IBM); and Luca Marchi (Industry EDGE and Distributed Cloud – IBM Cloud Platform)

Diet and nutrition play a vital role in healthcare, yet they are often overlooked by medical practitioners, doctors, nurses, and alternative medical care professionals. Whether a patient is recovering from routine sickness, injuries, surgeries, or chronic illness, all need special attention to the diet and nutrition they receive. The goal is to provide adequate nutrition at pre-determined frequencies for faster recovery, returning most patients to normalcy in moderate time and at optimal expense. Yet the diet and nutrition guidance provided today is either very generic or not monitored and adjusted according to patients’ vitals and recovery progression.

Traditional mechanisms of feeding health, diet, and nutrition tracking data to medical practitioners are one-way: decision making remains siloed, based on disparate data that is never tied together for the common benefit of the patient.

With the ongoing digital transformation of healthcare, there is organic growth of edge devices in the hands of patients and healthcare staff: fitness trackers such as Fitbit, cellphones, nutrition scales, smart kitchens, bed-care devices, and more. The questions are: Where do we take it from here? What do we do with this explosion of edge data? Are we creating another set of data silos?

Edge computing offers solutions to most of these problems, providing the connectivity needed to use this data effectively and efficiently. It can converge the data silos from traditional devices, healthcare facilities, and human diet and nutrition data into a common repository of ‘data producers at the edge’, which can feed the data into cloud computing environments. Specialized healthcare cloud computing environments are now available, but they lack the data ingestion components.

These ‘data producers at the edge’ ingest healthcare data into cloud computing environments from edge devices, healthcare facilities, and other sources, completing the full cycle of data usage. This complete data cycle can now benefit patients and their families while improving efficiency for healthcare staff and facilities. Hospitals and alternative health professionals can start treating patients remotely using this new array of devices, solutions, and interconnectivity.

These data producers hold rich data about patients’ daily health, vitals, physical vicinity, living conditions, walk score, mobility, assistance provided, and more. Used in a combined and constructive way, this data can not only open new horizons for healthcare but also start an edge transformation journey that is yet to be realized. There is potential for a new industry disruptor here, leveraging the combined benefits of edge computing, cloud computing, IoT devices, and patient literacy. The combination is a set of self-serviced, interconnected components that can expand and scale on demand and provide solutions in areas of high demand and growth. It is now understood that many healthcare areas can only be scaled, and gain service efficiency, through technology solutions rather than by adding more healthcare facilities and professionals. Traditional diet and nutrition methods have always played an effective role in patient recovery, and they can yield accurate, efficient, and robust healthcare solutions when used in conjunction with edge and IoT advancements.

This full-cycle approach ties diet and nutrition to other healthcare areas, components, and capabilities. A patient’s recovery progression can ultimately play its role in a more connected manner, and predictive care can replace the reactive care approach.

Edge data can be used to generate patient analytics and align them with diet and nutrition data to deliver new predictive care capabilities. Healthcare staff can adjust and recalibrate exercises based on a patient’s existing and proposed diet and nutrition. This can even extend to improving the supply chain of products and services needed by the healthcare industry.
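As a toy illustration of aligning edge vitals with nutrition data, the sketch below flags patients whose recent readings suggest a diet review. All patient IDs, readings, field names, and thresholds are invented for illustration; they are not clinical guidance.

```python
from statistics import mean

# Hypothetical daily records collected by edge devices (wearables, smart scales).
resting_hr = {"p1": [72, 75, 88, 91], "p2": [68, 70, 69, 71]}
calorie_intake = {"p1": [1200, 1150, 1100, 1000], "p2": [1800, 1850, 1900, 1800]}

def needs_diet_review(hr, kcal, hr_limit=85, kcal_floor=1400):
    """Flag a patient whose recent heart rate trends high or intake trends low."""
    return mean(hr[-3:]) > hr_limit or mean(kcal[-3:]) < kcal_floor

flagged = [p for p in resting_hr
           if needs_diet_review(resting_hr[p], calorie_intake[p])]
# flagged -> patients whose diet plan should be recalibrated
```

In a real deployment the thresholds would come from clinician-defined care plans, and the flag would feed back into the diet adjustment loop described above.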

Diet and nutrition data, combined with sophisticated cloud compute, can yield better predictive care for patients and usher in a new era of healthcare digital edge transformation.

EdgeX Foundry Celebrates 5 Years, Issues Kamakura Release


Community’s 10th release comes at the project’s fifth anniversary and adds new features related to metrics collection, security, and configurability

SAN FRANCISCO – May 12, 2022 – EdgeX Foundry, a highly scalable and flexible open source framework that facilitates interoperability between devices and applications at the IoT edge, and a Linux Foundation project under the LF Edge umbrella, today announced the release of version 2.2 of EdgeX, codenamed ‘Kamakura.’ It is the project’s tenth release and coincides with the celebration of EdgeX Foundry’s fifth anniversary.

“Many new startup businesses don’t last 5 years, and for EdgeX to reach its fifth birthday while consistently releasing twice a year since its inception is quite the achievement,” said Jim White, chairman of the EdgeX Foundry Technical Steering Committee and CTO of IOTech Systems. “The project continues to see global adoption growth, especially in places like China, and its success is a testament to both the need for an open edge/IoT platform, as well as the dedication and support of a fantastic development community.”

“Having set itself apart as the de facto open source IoT framework for edge computing in just five years is no small feat,” said Arpit Joshipura, general manager, Networking, Edge, and IoT at the Linux Foundation. “The continued growth and success of EdgeX Foundry is a testament to its strong community and ability to innovate in real time. Congratulations to the entire ecosystem on this  milestone,  as well as the release of Kamakura.”  

While a ‘dot’ release, EdgeX Kamakura contains a number of new features while maintaining backward compatibility with all 2.x releases.

With the Kamakura release, EdgeX has added: 

  • Micro service metrics/telemetry collection (beta) – making it easier for adopters to monitor the health and status of the EdgeX services
  • Delayed start services – allowing services to be added and started anytime and still receive security tokens without requiring a restart of the platform
  • New camera device services – providing improved capabilities to command, control, and interface with ONVIF and USB cameras
  • Dynamic device profiles – allowing device profiles to be modified over time without needing to remove and re-add devices/sensors
  • V2 of the CLI – an update of the command line interface making it compatible with EdgeX 2.x releases while also adding 100% coverage of the REST APIs

Learn more about the Kamakura release on the EdgeX Foundry wiki or in the blog post.

EdgeX Foundry Tech Talks

The EdgeX community will be presenting a new series of “EdgeX Foundry Tech Talks,” designed to help people get started with EdgeX. The series begins May 24  and will run weekly through June.  Developers and adopters can register for the series at the following links:

 

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open-source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

 

EdgeX Foundry Turns 5 Years Old & Issues its 10th Release


By Jim White, EdgeX Foundry Technical Steering Committee Chair

EdgeX Foundry, an LF Edge project, turned five years old in April and celebrated with its 10th release!

Today, the project announced the release of Kamakura – version 2.2 of the edge/IoT platform. EdgeX has consistently released two versions a year (spring and fall) since its founding in 2017. Looking back to the founding and announcement of the project at Hannover Messe 2017, I would be lying if I said I expected the project to be where it is today.

Many new startup businesses don’t last five years. For EdgeX to celebrate five years, steadily provide new releases of the platform each year, and continue to grow its community of adopters is quite an achievement.  It speaks both to the need for an open edge / IoT platform and the commitment and hard work of the developers on the project.  I am very proud of what this community has accomplished.

We’ll take a closer look at EdgeX highlights over the past five years in an upcoming blog post, so for now, let’s dive into the Kamakura release. 

 

Here’s an overview of what’s new:

Kamakura Features

While this “dot” release follows on the heels of the project’s first-ever long-term support release (Jakarta, in the fall of 2021), the Kamakura release contains a number of new features while maintaining backward compatibility with all 2.x releases.  Notably, with this release, EdgeX has added:

  • Metrics Telemetry collection: EdgeX is about collecting sensor/device information and making that data available to analytics packages and enterprise systems in a consistent way so it can be acted on.  However, EdgeX had only minimal means to report on its own health or status.  An adopter could watch EdgeX service container memory or CPU usage, or use the “ping” facility to make sure a service was still responsive, but that was about it.  In the Kamakura release, new service-level metrics can be collected and sent through the EdgeX message bus to be consumed and monitored.  Some initial metrics collection is provided in the core data service (e.g., number of events and readings persisted) for this release, but the framework for doing this system-wide is now in place, both to offer more in the next release and to allow adopters to instrument other services to report their metrics telemetry as they see fit.  The metrics information can easily be subscribed to by a time series database, dashboard, or custom monitoring application.

 

  • Delayed Start Services:  When EdgeX is running with security enabled, an initialization service (called security-secretstore-setup) provides each EdgeX micro service with a token used to access the EdgeX secrets store (provided by Vault).  These tokens have a time to live (TTL), meaning that if they are not used or renewed, they expire.  This created problems for services that need to start later – especially those that will connect to future sensors/devices.  Furthermore, not all EdgeX services are known when the platform initializes: an adopter may add new connectors (both north and south) at any time to address new needs.  Issues like these often meant that adopters had to shut down and restart all of EdgeX in order to provide new security tokens to all services.  In Kamakura, this issue has been addressed with a new service that uses mutually authenticated TLS and exchanges a SPIFFE X.509 SVID for secret store tokens.  This new service allows a service to get or renew a token used to access the EdgeX secrets store at any time, not just at the bootstrapping/initialization of EdgeX as a whole.

 

  • New Camera Device Services:  While EdgeX somewhat supported connectivity to ONVIF cameras through a camera device service, the service was incomplete (for example, not allowing the camera’s PTZ capability to be accessed).  With Kamakura, EdgeX now has two camera device services.  A new ONVIF camera device service is more complete and has been tested against several popular camera devices; it allows for PTZ and even the discovery of ONVIF-compliant cameras.  A second camera service – a USB camera service – helps manage and monitor simple webcams.  Importantly, through the USB camera service, EdgeX will be able to feed a USB camera’s video stream to other packages (such as an AI/ML engine) to incorporate visual inference results into the EdgeX data sensor fusion capability.

 

  • Dynamic Device Profiles: In prior releases, device profiles were considered mostly static.  That is, once a device profile was used and associated with a device or any other EdgeX object, such as an event/reading, it was not allowed to change.  One could not, for example, start with an empty device profile and add device resources (the sensor points collected by a device service) later as more details of the sensor or use case emerged.  With the Kamakura release, device profiles are much more dynamic: new resources can be added and elements of the device profile can change over time.  This allows device profiles to be more flexible and adaptive to sensor and use case needs without having to remove and re-add devices.

 

  • V2 of the CLI:  EdgeX has had a command line interface tool for several releases, but the tool was not completely updated to EdgeX 2.0 until this latest release; nor did the CLI offer the ability to call all of the EdgeX APIs.  With Kamakura, the CLI now provides 100% coverage of the REST APIs (any call that can be made via REST can be made through the CLI) in addition to being compatible with EdgeX 2.0 instances.  Several new features were added as well, including tab completion, use of the registry to provide configuration, and improved result outputs.
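The metrics/telemetry feature above makes service metrics available on the message bus for a time series database or dashboard to consume. The sketch below shows what decoding one such message might look like in a custom monitoring application; the payload fields and output format are illustrative assumptions, not the exact EdgeX schema.

```python
import json

# One hypothetical telemetry message, as a service might publish it on the
# EdgeX message bus (field names are invented for illustration).
raw = json.dumps({
    "serviceName": "core-data",
    "metricName": "EventsPersisted",
    "value": 1523,
    "timestamp": 1652342400,
})

def handle_metric(message: str) -> str:
    """Decode one telemetry message into a line a dashboard/TSDB could ingest."""
    m = json.loads(message)
    return f'{m["serviceName"]}.{m["metricName"]}={m["value"]} @{m["timestamp"]}'

line = handle_metric(raw)
```

A real consumer would subscribe to the relevant message bus topics and forward each decoded line to its monitoring back end.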

The EdgeX community worked on many other features, fixes, and behind-the-scenes improvements, including:

  • Updated LLRP/RFID device service and application services
  • Added or updated GPIO and CoAP services
  • Ability to build EdgeX services on the Windows platform (with optional inclusion of ZMQ libraries as needed)
  • CORS enablement
  • Additional testing – especially around the optional use of services
  • Added testing for the GUI
  • Optimized DevOps CI/CD pipelines
  • A Kubernetes Helm chart example for EdgeX
  • Linting (the automated checking of source code for programmatic, stylistic, and security issues) as part of the regular build checks and tests on all service pull requests
  • A formalized release and planning meeting schedule
  • Better tracking of project decisions through new GitHub project boards

Tech Talks

When EdgeX first got started, we provided a series of webinars called Tech Talks to help educate people on what EdgeX is and how it works.  We are announcing that a new Tech Talk webinar series will begin May 24 and run weekly through June.  The topic list includes:

  •   May 24 – Getting Started with EdgeX
  •   June 7 – Build an EdgeX Application Service
  •   June 14 – Build an EdgeX Device Service
  •   June 21 – Creating a Device Profile
  •   June 28 – Getting started with EdgeX Snaps

These talks will be presented by distinguished members of our community.  We are going to livestream them through YouTube at 9am PDT on the appointed days.  These new talks will help provide instruction in the new EdgeX 2.0 APIs introduced with the Ireland release.  You can register for the talks here:

What’s Next: Fall Release – “Levski”

Planning for the fall 2022 release of EdgeX – codenamed Levski – is underway and will be reported on this site soon.  As part of the Kamakura release cycle, several architectural decision records were created which set the foundation for many new features in the fall release.  This includes a north-south messaging subsystem and standardization of units of measure in event/readings. 

 EdgeX is 5, but its future still looks long and bright.

 

Deep Learning Models Deployment with Edge Computing


By Utpal Mangla (IBM), Luca Marchi (IBM), Shikhar Kwatra (AWS)

Smart edge devices across the world are continuously interacting and exchanging information through machine-to-machine communication protocols. A multitude of similar sensors and endpoints are generating tons of data that needs to be analyzed in real time and used to train complex deep learning models. From autonomous vehicles running computer vision algorithms to smart devices like Alexa running natural language processing models, the era of deep learning has extended to a plethora of applications.

Using deep learning, such edge devices can aggregate and interpret unstructured pieces of information (audio, video, text) and take ameliorative actions, although this comes at the expense of increased power and performance demands. Deep learning requires enormous computation time and resources to process the streams of information generated by such edge devices and sensors, and that information can scale very quickly. Hence, bandwidth and network requirements need to be managed carefully to ensure smooth scalability of such deep learning models.

If machine and deep learning models are not deployed directly on the edge devices, these smart devices have to offload the intelligence to the cloud. In the absence of a good network connection and bandwidth, that can be quite a costly affair.

However, since these edge and IoT devices are supposed to run on low power, deep learning models need to be tweaked and packaged efficiently in order to run smoothly on such devices. Furthermore, many existing deep learning models and complex ML use cases leverage third-party libraries, which are difficult to port to these low-power devices.

The global Edge Computing in Manufacturing market size is projected to reach USD 12460 Million by 2028, from USD 1531.4 Million in 2021, at a CAGR of 34.5% during 2022-2028. [1]

Simultaneously, the global edge artificial intelligence chips market size was valued at USD 1.8 billion in 2019 and is expected to grow at a compound annual growth rate (CAGR) of 21.3% from 2020 to 2027. [2]

Recent statistical results and experiments have shown that offloading complex models to edge devices can effectively reduce power consumption. However, pushing such deep learning models to the edge has not always met latency requirements. Hence, offloading to the edge is still use case dependent and may not work for every task at this point.

We now have to build deep learning model compilers capable of optimizing these models into versioned executable code for the target platform. Such compilers must also be compatible with existing deep learning frameworks, including but not limited to MXNet, Caffe, TensorFlow, and PyTorch. Lightweight, low-power messaging and communication protocols will also enable swarms of such devices to continuously collaborate with each other and solve multiple tasks at an accelerated pace via parallelization.

For instance, TensorFlow Lite can be used for on-device inferencing. First, we pick an existing versioned model or develop a new model. Then, in the conversion stage, the TensorFlow model is converted into a compressed FlatBuffer. The compressed .tflite file is loaded onto an edge device during the deployment stage. In this manner, various audio classification, object detection, and gesture recognition models have been successfully deployed on edge devices.
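A minimal sketch of that convert-then-deploy flow using the TensorFlow Lite converter API; the tiny Keras model here is a stand-in for a real one, and the converter options you need will depend on the target device.

```python
import tensorflow as tf

# Stand-in model; a real workflow would load an existing trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Conversion stage: produce a compressed FlatBuffer (.tflite).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables quantization
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)

# Deployment stage (on the device): load the FlatBuffer for inference.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
```

On a constrained device, the smaller `tflite-runtime` package can replace the full TensorFlow dependency for the inference step.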

Hence, deep learning models migrated to specific libraries and frameworks (like TensorFlow Lite, or the Arm Compute Library for embedded systems) can be optimized to run on IoT devices. To scale further with efficiency, we first need to compile and optimize deep learning models directly into executable code for the target device, and we need an extremely lightweight OS to enable multitasking and efficient communication between tasks.

References:

  1. https://www.prnewswire.com/news-releases/edge-computing-market-size-to-reach-usd-55930-million-by-2028-at-cagr-31-1—valuates-reports-301463725.html
  2. https://www.grandviewresearch.com/industry-analysis/edge-artificial-intelligence-chips-market

 

IoT Enabled Edge Infusion


By Shikhar Kwatra, Utpal Mangla, Luca Marchi

The sheer volume of information processed and generated by Internet of Things sensors has burgeoned tremendously in the past few years. As Moore’s law still holds, with the number of transistors in an integrated circuit doubling roughly every two years, the ability of edge devices to handle vast amounts of data with complex designs is also expanding on a large scale.

Most current IT architectures are unable to keep up with the growing data volume and maintain the processing power to understand, analyze, process, and generate outcomes from the data in a quick and transparent fashion. That is where IoT solutions meet edge computing.

Transmitting data to a centralized location across multiple hops – for instance, from remote sensors to the cloud – involves additional transport delays, security concerns, and processing latency. The ability of edge architectures to move the necessary centralized framework directly to the edge, in a distributed fashion, so that data is processed at its originating source, has been a great improvement in the IoT-edge industry.

As IoT projects scale with edge computing, it becomes possible to deploy IoT projects efficiently, reduce security risks to the IoT network, and add complex processing to the IoT network through the inclusion of machine learning and deep learning frameworks.

International Data Corporation’s (IDC) Worldwide Edge Spending Guide estimates that spending on edge computing will reach $24 billion in 2021 in Europe. It expects spending to continue to experience solid growth through 2025, driven by its role in bringing computing resources closer to where the data is created, dramatically reducing time to value, and enabling business processes, decisions, and intelligence outside of the core IT environment. [1]

Sixty percent of organizations surveyed in a recent study conducted by RightScale agree that the holy grail of cost-saving hides in cloud computing initiatives. Edge AI, in contrast, eliminates the exorbitant expenses incurred on the AI or machine learning processes carried out on cloud-based data centers. [2]

Edge is now widely being applied in a cross-engagement matrix across various sectors:

  1. Industrial – IoT devices and sensors where data loss is unacceptable
  2. Telecommunication – Virtual and Software Defined solutions involving ease of virtualization, network security, network management etc.
  3. Network Cloud – Hybrid and Multi-cloud frameworks with peering endpoint connections, provisioning of public and private services requiring uptime, security and scalability
  4. Consumer & Retail framework – Inclusion of Personal Engagement devices, for instance, wearables, speakers, AR/VR, TV, radios, smart phones etc. wherein data security is essential for sensitive information handling 

Since not all edge devices are powerful enough to handle complex processing tasks while maintaining effective security software, such tasks are usually pushed to an edge server or gateway. The gateway then becomes the communications hub, performing critical network functions including but not limited to data accumulation, sensor data processing, and sensor protocol translation, before forwarding the data to the on-premises network or cloud.
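The gateway’s protocol translation role can be sketched as follows: a compact binary sensor frame is decoded and re-encoded as JSON before being forwarded northbound. The frame layout here (little-endian uint16 sensor ID followed by two floats) is an invented example format, not a standard.

```python
import json
import struct

def translate_frame(frame: bytes) -> str:
    """Translate a binary sensor frame into JSON for the northbound link."""
    sensor_id, temp_c, humidity = struct.unpack("<Hff", frame)
    return json.dumps({"sensorId": sensor_id,
                       "tempC": round(temp_c, 2),
                       "humidity": round(humidity, 2)})

# A sensor would emit frames like this over its low-power link:
frame = struct.pack("<Hff", 7, 21.5, 40.0)
payload = translate_frame(frame)
```

In practice the gateway would also accumulate readings and apply security policy before forwarding the JSON payload to the on-premises network or cloud.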

From autonomous vehicles running different edge AI solutions to actuators with Wi-Fi or Zigbee connectivity, and with the increasing complexity of fusing different protocols, new edge capabilities will continue to surface in the coming years.

References: 

[1] https://www.idc.com/getdoc.jsp?containerId=prEUR148081021

[2] https://www.itbusinessedge.com/data-center/developments-edge-ai/

 

Overview of Akraino R5’s Project Cassini: IoT and Infrastructure Edge Blueprint Family


By Olivier Bernard and Parag Beeraka 

This post provides an overview of the Smart Cities Akraino blueprint as well as an overview of key features and implementations of PARSEC in Akraino’s Release 5.

Overview – Akraino Blueprint: Smart Cities

The purpose of the Smart Cities blueprint family is to provide an edge computing platform based on Arm SoCs and to improve deployment flexibility and security within edge computing. The high-level relationships between functional domains are shown in the figure below:

(For the full description of the Smart Cities Reference Architecture please refer to the Smart Cities Documents.)

 

Smart Cities in Akraino R5 – Key features and implementations in Akraino Release 5

The Smart Cities blueprint’s security component is PARSEC, first available in Akraino’s Release 5. The following is a brief introduction to PARSEC.

Parsec is the Platform AbstRaction for SECurity, a new open-source initiative to provide a common API to secure services in a platform-agnostic way.

PARSEC aims to define a universal software standard for interacting with secure object storage and cryptography services, creating a common way to interface with functions that would traditionally have been accessed by more specialised APIs. Parsec establishes an ecosystem of developer-friendly libraries in a variety of popular programming languages. Each library is designed to be highly ergonomic and simple to consume. This growing ecosystem will put secure facilities at the fingertips of developers across a broad range of use cases in infrastructure computing, edge computing and the secure Internet of Things.
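The idea of one common API over specialised back ends can be illustrated with the conceptual sketch below. Note that this is not Parsec’s actual client API; it only mirrors the pattern Parsec applies to secure services: the caller uses the same operations whether keys live in a TPM, an HSM, or software.

```python
from abc import ABC, abstractmethod
import hashlib
import hmac
import os

class SecurityProvider(ABC):
    """Common interface: the application never sees where keys are stored."""
    @abstractmethod
    def generate_key(self, name: str) -> None: ...
    @abstractmethod
    def sign(self, name: str, data: bytes) -> bytes: ...

class SoftwareProvider(SecurityProvider):
    """Software fallback; a hardware-backed provider would talk to a TPM/HSM."""
    def __init__(self):
        self._keys = {}
    def generate_key(self, name):
        self._keys[name] = os.urandom(32)
    def sign(self, name, data):
        return hmac.new(self._keys[name], data, hashlib.sha256).digest()

provider: SecurityProvider = SoftwareProvider()  # selected per platform
provider.generate_key("device-identity")
sig = provider.sign("device-identity", b"attestation-evidence")
```

The value of the abstraction is that application code written against `SecurityProvider` is portable across platforms whose secure hardware differs, which is exactly the platform-agnostic goal Parsec pursues.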

(For more information on PARSEC, see: https://parallaxsecond.github.io/parsec-book/)

 

Software Defined Camera (SDC) blueprint

Security cameras are seeing increasing growth in many subsegments, such as commercial surveillance cameras and consumer surveillance cameras, and in many other devices such as dashcams, baby monitors, and other IoT vision devices. The total surveillance market is expected to reach around $44B in 2025, growing at a CAGR of 13%. The other exciting development in this market, alongside the increase in unit shipments of surveillance cameras, is the increasing use of artificial intelligence and machine learning (AI/ML) in these cameras and IoT devices. It is estimated that a billion cameras are already installed in the world, and that this number will reach 2 billion by 2024. However, security cameras are very different from many other devices: once installed, they stay installed for 5+ years. So it is critical to ensure that these devices continue to provide enhanced functionality over time.

In today’s world, many technologies are “software defined.” This has been made possible by many advancements in technology: at the bottom of the stack you have the operating system, then the virtualization layer, which kicked off the original software defined compute with virtual machines. Then came containers, and then orchestrators for those containers such as k8s and k3s. These technological advancements paved the path to the software defined data center. In the data center, after software defined compute, software defined networking and software defined storage became the norm.

Now the time is ripe to bring software defined trends to edge devices as well. This transformation has already started across many segments, such as automotive (cars, trucks, etc.). We see this trend applying to many other edge devices, one of which is cameras – or, as we and many others call them, smart cameras. The idea is that once you buy a camera, the hardware is advanced enough that you will receive continuous software updates, which can be neural network model updates catered to the specific data you have captured using the camera, or other updates related to the functionality and security of the device.

 

By designing future camera products with cloud native capabilities in mind, one will be able to scale camera and vision products to unprecedented levels. If all the applications are deployed using a service-oriented architecture with containers, a device can be continuously updated. One key advantage of this architecture is enabling new use case scenarios that one might not have envisioned in the past, through simple on-demand service deployment after sale via an app store. A simple example is deploying a new and updated license plate recognition app a year after the purchase of the cameras. Another key advantage is enabling continuous machine learning model updates based on data pertinent to the camera installation, i.e., being able to use the data the camera is capturing to train the models. The model is then relevant to the specific use case, increasing the value of the camera itself. All of this is possible by enabling a service-oriented architecture model using containers and thinking cloud native first.

The other important aspect that will really bring together security camera and vision products is adherence to common standards at the silicon and platform level. LF Edge member company Arm has started an initiative called Project Cassini, and Arm SystemReady® is a key component of this initiative. As part of Arm SystemReady®, Arm is working with many ecosystem partners (OEMs, ODMs, silicon vendors) to drive a standards-based approach, in order to scale deployments by replacing custom solutions with standards-based solutions and to become the platform of choice for future camera deployments.

To address and simplify these challenges, Arm is designing and offering an open source reference solution to be leveraged collaboratively for future software defined cameras, based on established standards and focused on security, ML, imaging, and cloud native. The basis of the reference solution is a SystemReady®-certified platform. SystemReady® platforms implement UEFI, U-Boot, and trusted standards-based firmware.

  • With the base software layers compliant with SystemReady®, standard Linux distros such as Fedora and openSUSE, or Yocto builds, can run on these platforms.
  • The container orchestration service layer can then run containerized applications using k3s.
  • The next software layer is the ML and computer vision libraries. Arm supports ML functions through Arm NN and the Arm Compute Library, so users can develop ML applications using standard open-source ML frameworks such as PyTorch and TensorFlow.
  • Similarly, for computer vision, it is critical to support open standards such as Video4Linux, OpenCV, and BLAS libraries.
  • Both complementary and necessary is security – this is where orchestration and security microservices such as PARSEC (an open-source platform abstraction layer for security), along with the PSA certification process, are critical, so that end users gain confidence that the product will be secure when deployed in the field. These secure services are the foundation for supporting and enabling containerized applications for inferencing, analytics, and storage.

Learn more about Project Cassini here: https://wiki.akraino.org/display/AK/Project+Cassini+-+IoT+and+Infrastructure+Edge+Blueprint+Family.

Infusion of Machine Learning Operations with Internet of Things

By Blog

By: Shikhar Kwatra, Utpal Mangla, Luca Marchi

With the advancement of deep tech, the operationalization of machine learning and deep learning models has been burgeoning. In a typical machine learning or deep learning business case, the data science and IT teams need to collaborate extensively to increase the pace of scaling and pushing multiple machine learning models to production through continuous training, continuous validation, continuous deployment, and continuous integration with governance. Machine Learning Operations (MLOps) has carved out a new era of the DevOps paradigm in the machine learning/artificial intelligence realm by automating end-to-end workflows.

As models are optimized and data processing and analysis move closer to the edge, data scientists and ML engineers are continuously finding new ways to handle the complications of operationalizing models on IoT (Internet of Things) edge devices.

A LinkedIn publication projected that global AI spend will reach $232 billion by 2025 and $5 trillion by 2050. According to Cognilytica, the global MLOps market will be worth $4 billion by 2025, up from $350 million in 2019. [1]

Models running on IoT edge devices need to be retrained frequently because of variable environmental parameters; continuous data drift and limited access to such IoT edge solutions can degrade model performance over time. The target platforms on which ML models are deployed also vary, from IoT edge devices to specialized hardware such as FPGAs, which adds considerable complexity and customization to MLOps on these platforms.
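As a toy illustration of drift-triggered retraining, a simple mean-shift check over a recent data window can gate the retraining decision. The threshold, window contents, and trigger action below are illustrative assumptions, not part of any particular MLOps stack.

```python
# Minimal data-drift check that could gate retraining on an edge device.
# Thresholds and windows are illustrative assumptions.
from statistics import fmean, pstdev

def drift_score(reference: list[float], recent: list[float]) -> float:
    """Absolute shift of the recent mean, in reference standard deviations."""
    ref_mean, ref_std = fmean(reference), pstdev(reference)
    if ref_std == 0:
        return 0.0
    return abs(fmean(recent) - ref_mean) / ref_std

def needs_retraining(reference: list[float], recent: list[float],
                     threshold: float = 2.0) -> bool:
    return drift_score(reference, recent) > threshold

reference_window = [20.1, 19.8, 20.4, 20.0, 19.9]   # data seen at training time
recent_window = [24.9, 25.3, 25.1, 24.7, 25.0]      # data arriving at the edge
if needs_retraining(reference_window, recent_window):
    print("drift detected: trigger model retraining")
```

In practice the reference statistics would be shipped with the model artifact, and the trigger would enqueue a retraining job rather than print.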

Models can be packaged into a Docker image for deployment after profiling them to determine the core, CPU, and memory settings for the target IoT platform. Such IoT devices also carry multiple dependencies for packaging and deploying models so that they execute seamlessly on the platform. Hence, model packaging is most easily implemented through containers, as they span both cloud and IoT edge platforms.
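A minimal, hypothetical Dockerfile sketch for packaging a scoring service this way; the base image, file names, and port are assumptions, not a prescribed layout.

```dockerfile
# Hypothetical packaging of an edge scoring service; files and port
# are illustrative.
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY model.onnx score.py ./
# The CPU and memory settings found during profiling are applied at
# run time, e.g.: docker run --cpus=1 --memory=256m <image>
EXPOSE 8080
CMD ["python", "score.py"]
```

The same image can then be pushed to a registry and pulled by the device's container runtime, keeping cloud and edge deployment paths identical.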

When running on IoT edge platforms with device-specific dependencies, a decision needs to be made about which containerized machine learning models must be available offline due to limited connectivity. An access script that loads the model, invokes the endpoint, and scores requests arriving at the edge device needs to be operational in order to return the respective probabilistic output.
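A sketch of such an access script using only the Python standard library: it tries a remote scoring endpoint and falls back to a locally cached model when connectivity is unavailable. The endpoint URL and the stand-in local model are hypothetical.

```python
# Edge access script sketch: score locally when the cloud endpoint is
# unreachable. Endpoint URL and local model are illustrative assumptions.
import json
import urllib.request

CLOUD_ENDPOINT = "http://scoring.example.com/predict"  # hypothetical

def score_locally(features: dict) -> dict:
    # Stand-in for invoking the locally cached model container.
    risk = 1.0 if features.get("temperature", 0) > 40 else 0.1
    return {"probability": risk, "source": "local"}

def score(features: dict, timeout: float = 1.0) -> dict:
    try:
        req = urllib.request.Request(
            CLOUD_ENDPOINT,
            data=json.dumps(features).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.loads(resp.read())
    except OSError:
        # Limited connectivity: fall back to the offline model.
        return score_locally(features)

result = score({"temperature": 42})
print(result)
```

A real script would load a versioned model artifact rather than hard-code the fallback logic, but the offline/online split is the same.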

Continuous monitoring and retraining of models deployed on IoT devices need to be handled through the model artifact repository and model versioning features of the MLOps framework. Different images of the deployed models are stored in a shared device repository so that the right image can be fetched quickly and deployed to the IoT device at the right time.

Model retraining can be triggered by a job scheduler running on the edge device, or when new data arrives and invokes the REST endpoint of the machine learning model. Continuous model retraining, versioning, and evaluation thus become integral parts of the pipeline.
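The two triggers can be sketched as follows; the interval, sample threshold, and retraining stub are illustrative assumptions rather than a specific framework's API.

```python
# Sketch of the two retraining triggers: a periodic schedule and a
# new-data hook. All thresholds are illustrative.
import time

RETRAIN_INTERVAL_S = 24 * 3600
state = {"last_retrain": 0.0, "new_samples": 0}

def retrain_model() -> None:
    # Stand-in for the real pipeline: retrain, version, evaluate, deploy.
    state["last_retrain"] = time.time()
    state["new_samples"] = 0

def on_schedule(now: float) -> bool:
    """Scheduler trigger: retrain when the interval has elapsed."""
    if now - state["last_retrain"] >= RETRAIN_INTERVAL_S:
        retrain_model()
        return True
    return False

def on_new_data(batch_size: int, min_samples: int = 100) -> bool:
    """Data trigger: retrain after enough new samples arrive."""
    state["new_samples"] += batch_size
    if state["new_samples"] >= min_samples:
        retrain_model()
        return True
    return False

if on_schedule(time.time()):
    print("periodic retrain executed")
```

Either trigger would hand off to the same versioned retraining pipeline, so the evaluation and rollout steps stay identical regardless of what fired.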

Where data changes frequently, as is often the case on IoT edge devices and platforms, automating model versioning and refreshing in response to data drift lets the MLOps engineer automate the retraining process, saving data scientists' time for other aspects of feature engineering and model development.

The number of such devices continuously collaborating over the internet and integrating with MLOps capabilities is poised to grow over the coming years. Multi-factored optimizations will continue to make the lives of data scientists and ML engineers more focused and easier, with model operationalization handled by an end-to-end automated approach.

Reference: 

[1] https://askwonder.com/research/historical-global-ai-ml-ops-spend-ulshsljuf

 

American Tower Joins LF Edge as Premier Member, Community Adds EdgeGallery to Project Roster

By Announcement

LF Edge furthers innovation at the open source edge across a unified ecosystem with the induction of EdgeGallery, an open-source MEC edge computing project; the addition of leading innovator American Tower as a Premier member; and Ritsumeikan University as a new Associate member

SAN FRANCISCO, February 28, 2022 – LF Edge, an umbrella organization within the Linux Foundation that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced that American Tower has joined the project as a Premier member. Additionally, the project announced that EdgeGallery has joined the umbrella as a Stage 1 project, Ritsumeikan University has joined as an Associate member, and the community has issued its 2021 Annual Report.

American Tower, a leading global provider of wireless, data center, and interconnect infrastructure enabling a connected world, joins existing LF Edge Premier members: Altran, Arm, AT&T, AVEVA, Baidu, Charter Communications, Dell Technologies, Dianomic, Equinix, Ericsson, F5, Fujitsu, Futurewei, HP, Huawei, Intel, IBM, NTT, Radisys, Red Hat, Samsung, Tencent, VMware, Western Digital, and ZEDEDA.

“We are pleased to see even more leading technology innovators joining as LF Edge members,” said Arpit Joshipura, general manager, Networking, Edge and IOT, the Linux Foundation. “The proliferation of new technologies joining collaborative innovation at the open source edge means scalability, interoperability, and market innovation is happening across the ecosystem.”

About American Tower

American Tower, one of the largest global REITs, is a leading independent owner, operator and developer of multitenant communications real estate with a portfolio of approximately 219,000 communications sites. For more information about American Tower, please visit americantower.com.

“We are excited to join LF Edge and their members to accelerate innovation, enabled by edge network architecture. A distributed model, positioning critical data closer to the user, provides the low-latency infrastructure to deliver the automation, performance, and cognitive insight required by manufacturing, healthcare, transportation, and more.” – Eric Watko, Vice President, Product Line Management, American Tower.

American Tower is joined by new Associate member Ritsumeikan University, a private university in Kyoto, Japan, that traces its origin to 1869. In addition to the Kinugasa Campus in Kyoto, the university operates the Biwako-Kusatsu and Osaka-Ibaraki campuses. Ritsumeikan University is known as one of western Japan's four leading private universities.

EdgeGallery Joins LF Edge Umbrella

Celebrating its two-year mark as an umbrella project, LF Edge welcomes its tenth project, EdgeGallery. EdgeGallery is an open-source MEC edge computing project initiated by Huawei, carriers, and vertical industry partners that joined the Linux Foundation in late 2021. Its purpose is to build a common edge computing platform that meets the “connection + computing” characteristics of the telecom industry, standardizes the openness of network capabilities (especially 5G network capabilities), and simplifies lifecycle processes such as MEC application development, test, migration, and running.

EdgeGallery joins the nine existing projects – Akraino, Baetyl, EdgeX Foundry, Fledge, Home Edge, Open Horizon, Project EVE, Secure Device Onboard (SDO), and State of the Edge – that support emerging edge applications across areas such as non-traditional video and connected things that require lower latency, faster processing, and mobility. LF Edge helps unify a fragmented edge market around a common, open vision for the future of the industry.

LF Edge 2021 Annual Report

The LF Edge community also issued a report of its progress and results from the past year. The 2021 Annual Report summarizes key highlights (including blueprints, deployments, and momentum) from the Governing Board, Technical Advisory Board, Outreach Committee, and General Manager. To download the report, visit: https://www.lfedge.org/resources/publications/

More details on LF Edge, including how to join as a member, details on specific projects and other resources, are available here: www.lfedge.org.

About The Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Open Networking & Edge Executive Forum (ONEEF) Returns Virtually, April 12-14, 2022

By Announcement, LF Edge

Global industry executives across telco, cloud, and enterprise to share thought-leading visions with the global open source networking and edge communities, across alternating time zones 

SAN FRANCISCO, February 22, 2022 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, in partnership with LF Networking and LF Edge, today announced that the Open Networking & Edge Executive Forum (ONEEF) will take place virtually April 12-14, 2022. The Open Networking & Edge Executive Forum (spring) and Summit (fall) are the industry’s premier open networking and edge computing events focused on end-to-end solutions powered by open source. 

Building on the successful inaugural ONEEF event last spring, the Linux Foundation, LF Networking, and LF Edge are pleased to announce the 2022 Executive Forum, where leading industry executives will again share their visions from the Telco, Cloud, and Enterprise verticals. Attendees will learn how to leverage open source ecosystems and gain new insights for digital transformation. Presented in a virtual format across three days, this is a one-track event starting in a different time zone each day to better reach our global audience. 

“We are pleased to welcome thought leaders from across the globe to the virtual stage for ONEEF 2022,” said Arpit Joshipura, general manager, Networking, Edge & IoT, at the Linux Foundation. “This curated experience is designed to complement the Open Networking & Edge Summit with thought leaders and collaborators from around the globe coming together to share insights, best practices, and new ideas that enhance the vertical space across open source networking, edge, cloud and enterprise stacks.”

Details on executive speakers and the session agenda will be available soon, but attendees can expect to hear industry insights from Analysys Mason analyst Caroline Chappell, as well as updates on the direction of major initiatives like the 5G Super Blueprint. Stay tuned for more details.

Speakers and Content from ONEEF 2021, including sessions videos, are available online.

Registration & Sponsorships

Presented in a virtual format across three days, this is a one-track event that will be held in a different time zone each day to reach our global audience in the Americas, EMEA, and APAC. There is no cost to attend, but participants must be registered in order to access the sessions: https://events.linuxfoundation.org/open-networking-and-edge-exec-forum/

Sponsorship opportunities are available for this special executive edition of Open Networking & Edge Summit. For more information on sponsoring this event, contact us at events@lfnetworking.org.

The LF Networking developer community will also host the LFN Developer & Testing Forum this summer, taking place June 13-16 in Porto, Portugal. Registration for that event is also open, with more details to come. 

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 2,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit linuxfoundation.org.

The Linux Foundation Events are where the world’s leading technologists meet, collaborate, learn and network in order to advance innovations that support the world’s largest shared technologies.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds. 

###

eKuiper Issues 1.4 Release & Discusses 2022 Roadmap

By Blog, eKuiper

Written by Jiyong Huang, Chair of the eKuiper Technical Steering Committee and Senior Software Engineer of EMQ

eKuiper, a Stage 1 (at-large) project under the LF Edge umbrella, is a Go implementation of a lightweight stream processing engine that can run in various IoT edge scenarios for real-time data analysis. Running near the data source at the edge, eKuiper improves system response speed and security while saving network bandwidth and storage costs.

The eKuiper project usually makes one release per quarter with new features, followed by several patch versions. The current active release is v1.4, the biggest update since joining LF Edge. We had solid community contributions for this release, including requirements gathering, code contributions, testing, and trials, and we are grateful to share the updates and hope they benefit our users.

Rule Pipeline: Building Complex Business Logic with Flexibility

eKuiper uses SQL to define business logic, which lowers the development threshold. Simple SQL statements can efficiently express practical requirements such as filtering, aggregation, and data conversion. However, some complex scenarios are difficult to address with a single SQL statement; even where it is possible, the statement becomes too complex to maintain.

Based on the new in-memory source and sink, the rule pipeline can connect multiple SQL rules easily, efficiently, and flexibly, improving the readability and maintainability of SQL statements that implement complex business scenarios. Rules are connected through in-memory topics that resemble MQTT topics and support wildcard subscriptions, enabling an exceptionally flexible and efficient pipeline. Besides improving expressiveness, rule pipelining can also improve runtime performance in certain complex scenarios. For example, when multiple rules process data filtered by the same condition, extracting that filter into a predecessor rule means it is computed only once, significantly reducing computation when there are many rules.
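As a sketch of how such a pipeline can be wired (the stream, topic, and rule names here are illustrative, and exact property names should be checked against the eKuiper documentation for your version), a predecessor rule publishes its filtered output to an in-memory topic:

```json
{
  "id": "filterRule",
  "sql": "SELECT * FROM mqttStream WHERE temperature > 30",
  "actions": [
    { "memory": { "topic": "filtered/temp" } }
  ]
}
```

Downstream rules then read from a memory-typed stream bound to that topic, so the filter is evaluated only once no matter how many rules consume it:

```sql
CREATE STREAM filteredStream () WITH (DATASOURCE="filtered/temp", TYPE="memory", FORMAT="json");
```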

Portable Plugin: Making Extensions Easier

The original version of eKuiper supported an extension scheme based on the native Go plugin system, with individual extensions for sources, sinks, and functions (UDFs). However, due to the limitations of the Go plugin system, writing and using plugins is not easy even for users familiar with Go, let alone users of other languages. eKuiper has received a lot of community feedback about plugin development, operation, deployment, and various operational issues.

To balance development efficiency and runtime efficiency, v1.4.0 adds a new portable plugin system to lower the threshold of plugin development. The portable plugin system uses the nng protocol for inter-process communication and supports multiple languages, currently providing Go and Python SDKs, with more to be added in subsequent versions according to user demand. It also simplifies the compile/deploy process: a portable plugin runs like a normal program written in its language, without additional restrictions. Because of this separate-process model, a portable plugin crash will not affect eKuiper itself.

Native and portable plugins can coexist; users can choose either implementation, or mix them, according to their needs.

Shared Connections: Source/Sink Multiplexed Connections

eKuiper provides a rich set of sources and sinks to ingest from and send results to external systems. Many of these are input/output pairs for the same type of external system; for example, MQTT and EdgeX each have a corresponding source and sink. In the new version, users can put all connection-related configuration in the connection.yaml file and then specify, in the source/sink configuration, which connection to use without repeating it. Shared connection instances reduce the overhead of maintaining multiple connections, and in scenarios where the number of connections or ports to an external system is limited, shared connections help stay within that limit. Shared connections also allow eKuiper to connect to the EdgeX secure data bus.
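A minimal sketch of the shared-connection configuration; the selector name, broker address, and exact keys are assumptions that should be verified against the eKuiper reference for your version:

```yaml
# connection.yaml – define the MQTT connection once (address illustrative)
mqtt:
  localBroker:
    server: "tcp://127.0.0.1:1883"
```

```yaml
# MQTT source configuration – reference the shared connection instead of
# repeating the server settings (key name assumed from eKuiper conventions)
default:
  connectionSelector: mqtt.localBroker
```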

Other enhancements

  • Support for configuration via environment variables
  • By default, eKuiper uses SQLite, an embedded database, to store metadata such as streams and rules, so no external dependency is required at runtime. In the new version, users can instead choose Redis as the metadata store.
  • Rule status now returns the error reason
  • Optimized SQL runtime, reducing CPU usage by up to 70% in a shared-source scenario
  • Dynamic sink parameters, e.g., an MQTT sink can set its topic from a field value in the result so that data is published to a dynamic topic
  • Authentication support: user-configurable JWT-based authentication for the REST API
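As an illustration of the dynamic sink parameter, a rule might publish each result to a per-device topic. The rule and field names are hypothetical, and the template syntax follows eKuiper's data-template convention; treat the details as an assumption to check against the documentation:

```json
{
  "id": "dynamicTopicRule",
  "sql": "SELECT deviceId, temperature FROM mqttStream",
  "actions": [
    {
      "mqtt": {
        "server": "tcp://127.0.0.1:1883",
        "topic": "devices/{{.deviceId}}/temperature"
      }
    }
  ]
}
```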

2022 Roadmap

As a young project, eKuiper is still far from perfect. There is a long list of feature requests from users and the community. eKuiper will continue to grow in the new year and remains open to anyone interested in making edge computing powerful and easy to use at the same time. The team will focus on four themes:

  • Enrich API and syntax
    • more SQL clauses (IN, BETWEEN, LIKE, etc.)
    • streaming features (dynamic tables, etc.)
    • more handy functions (changed, unnest, statistical functions)
  • Stability & performance
    • allow selecting features by build tag and provide a minimal core version
    • incremental window
    • continuous memory footprint and CPU optimization
  • Extension
    • stabilize the portable plugin runtime and support more languages
    • more HTTP source types
  • Build & deployment
    • cluster and HA
    • Android support

Please check the 2022 roadmap in our GitHub project for details, and stay tuned to the eKuiper community.