The LF Edge Interactive Landscape

By Blog, Landscape, LF Edge, State of the Edge

New tool aims to help users understand and navigate the expansive edge computing ecosystem, requesting collaboration from the edge community.

Written by Molly Wojcik, Chair of the State of the Edge Landscape Working Group

A few years ago, the Cloud Native Computing Foundation (CNCF) introduced their CNCF Cloud Native Interactive Landscape, which quickly became a go-to resource for the cloud-native ecosystem. Using this as a guide and framework, the State of the Edge project has been building the LF Edge Interactive Landscape.

The LF Edge Interactive Landscape is dynamically generated from data maintained in a community-supported GitHub repository. Based on user inputs and overseen by the State of the Edge Landscape Working Group, the map categorizes LF Edge projects alongside edge-related organizations and technologies to provide a comprehensive overview of the edge ecosystem.

The State of the Edge Landscape Working Group needs help from the larger edge community to continue to build out and improve this resource. Pull requests and issue submissions are welcome and encouraged, whether for new additions or for edits to existing listings.

How to Add a New Listing to the LF Edge Interactive Landscape

To add a new listing to the LF Edge Interactive Landscape, follow the steps using one of the options below:

Option 1: Submit a PR

  1. Visit the community GitHub repository at
  2. Open a pull request to add your listing to landscape.yml. Follow the formatting of peer listings, making sure to include all required information and a logo file:
    1. Name of organization or technology
    2. Homepage url
    3. .svg logo (Important: Only .svg formatted logos are accepted.) – see for help converting/creating proper SVGs
    4. Twitter url (if applicable)
    5. Crunchbase url
    6. Assigned category (Descriptions for categories can be found in the
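For illustration, an entry in landscape.yml generally looks like the sketch below. The organization name, URLs, and logo filename here are hypothetical placeholders, and the exact field layout should be copied from peer listings in the actual file:

```yaml
# Hypothetical example entry - mirror the structure of existing listings.
- item:
    name: Example Edge Co
    homepage_url: https://example.com
    logo: example-edge-co.svg
    twitter: https://twitter.com/exampleedgeco
    crunchbase: https://www.crunchbase.com/organization/example-edge-co
```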

Full instructions available at

Option 2: Open an issue

  1. Visit the community Github repository at
  2. Open an issue that includes all required information and logo file (reference Option 1).

Option 3: Email

  1. Email with all of the required information and logo (reference Option 1).

How to Modify a Listing in the LF Edge Interactive Landscape

To modify or make suggestions on an existing listing in the LF Edge Interactive Landscape, open an issue in the GitHub repository and be sure to include the following information:

  • Name of organization or technology, as listed in the landscape.
  • Detailed description of the modifications that you are requesting.

For more detailed information and instructions, you may refer to the in the GitHub repository.

About State of the Edge

Founded in 2017, State of the Edge (recently acquired by LF Edge) provides a vendor-neutral, community-driven platform for open research on edge computing while also seeking to align the market on what edge computing truly is and what’s needed to implement it. State of the Edge publishes free research on edge computing, maintains the Open Glossary of Edge Computing, and oversees the LF Edge Interactive Landscape. Follow State of the Edge on Twitter via @StateoftheEdge.

Molly Wojcik was recently appointed Chair of the State of the Edge Landscape Working Group. She is the Director of Education & Awareness at Section, an edge compute platform technology provider and an LF Edge member organization. Molly has been an active contributor and facilitator within the Landscape Working Group since its beginnings with LF Edge in early 2019. If you have questions or would like to be involved in the LF Edge Landscape, feel free to email Molly at

EdgeX Foundry Use Case: Wipro’s Surface Quality Inspection for a Manufacturing Plant

By Blog, EdgeX Foundry, Use Cases

Written by LF Edge members from Wipro. For more information about Wipro, visit their website.

The use case is automated surface quality inspection for a manufacturing plant that produces piston rods used in applications such as utility, mining, construction, and earth-moving equipment.

In the manufacturing plant, the production of piston rods goes through multiple stages such as induction hardening, friction welding, threading, and polishing.  Each of these stages has a quality checkpoint to detect defects early and to ensure that the rods produced are in line with design specifications and function properly.  After a rod goes through this rigorous multi-stage process, it reaches the final station, where surface-level quality inspection happens.  At this stage, inspection is performed manually, requiring a highly skilled and experienced human inspector to look for different types of defects, such as scratches, material defects, and handling defects, on the surface of the rods.  Based on the final inspection results, each rod goes either to the packaging and shipment area or to the rework or scrap area.

Quality checks at every stage are critical to prevent quality problems down the line that could lead to recalls and reputational damage.  The problem with manual inspections is that they are costly, time consuming, and heavily dependent on human intelligence and judgment.  Automated surface quality inspection is an AI/ML-based image analytics solution that helps overcome these challenges.  The solution identifies and classifies defects through image analytics to enable quality assessment and provides real-time visibility into Key Performance Indicators through dashboards that monitor plant performance.

EdgeX’s architectural tenets, its production-grade and readily available edge software stack, the visibility of long-term support with a bi-annual release roadmap, and its user-friendly licensing for commercial deployments made us adopt EdgeX as the base software stack.

The readily available IoT gateway functionalities helped us focus more on building business-application-specific components than on the core software stack needed for an edge gateway.  This enabled rapid development of a proof of concept for the use case we envisioned.

Key components of the solution include:

  • Edge gateway hardware, an industrial-grade camera, and a sensor to detect piston rods placed in the final quality inspection station
  • Software components:
    • Gateway powered by EdgeX software stack with enhancements to support the use case
    • Deep learning model trained with previously analyzed & classified defects
    • Intel’s OpenVino toolkit for maximizing inference performance on the edge gateway
    • EdgeX Device Service Layer enhancements to enable missing southbound connectivity
    • Automated decision making in the edge by leveraging EdgeX provided local analytics capability
    • EdgeX AppSDK customizations to connect to business applications hosted on cloud
    • Manufacturing Performance Dashboard to define, create and monitor Key Performance Indicators

Once a rod reaches the inspection station, the gateway triggers the camera to take surface pictures.  Each captured image is fed into the inference engine running on the gateway, which looks for the presence of different types of surface defects.  The inference output is fed into the business application hosted in the cloud to provide actionable insights with rich visual aids.
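The inspection flow just described can be sketched in a few lines of Python. This is a simplified illustration of the concept, not Wipro’s actual code: the defect classes, the model interface, and the disposition rule are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical defect classes; the real model's label set may differ.
DEFECT_CLASSES = ["ok", "scratch", "material_defect", "handling_defect"]

@dataclass
class InspectionResult:
    rod_id: str
    label: str
    confidence: float

    @property
    def disposition(self) -> str:
        # A rod moves to packaging only when no defect is detected;
        # otherwise it is routed to the rework or scrap area.
        return "packaging" if self.label == "ok" else "rework_or_scrap"

def inspect(rod_id, image, model):
    """Run the (stand-in) inference model on a captured image and return
    a result that a cloud-hosted business application could consume."""
    scores = model(image)  # stand-in for the deep learning inference step
    best = max(range(len(scores)), key=scores.__getitem__)
    return InspectionResult(rod_id, DEFECT_CLASSES[best], scores[best])
```

In practice the `model` callable would wrap an OpenVINO-accelerated network, and the result would be forwarded northbound via the customized EdgeX application service rather than returned locally.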

Possible future enhancements include analyzing historical data for similar patterns of defects, correlating data from OT systems with inference output for a given time period, and providing feedback to OT systems for potential corrective actions.

The solution delivers several benefits:
  • Reduced inspection cost: Automated quality inspections, reduced need for manual inspection
  • Centralized management: Real time visibility to plant operations from remote locations
  • Consistent performance: Less dependency on human judgement, which varies from one person to another
  • Reduced inspection time: Consistent & reliable results in a much shorter time
  • Learning on the go: Continuous retraining of deep learning models make automated inspections accurate & closer to reality
  • Available 24 x 7

If you have questions or would like more information about this use case or LF Edge member Wipro, please email

What is Baetyl?

By Baetyl, Blog

Baetyl is an open edge computing framework of LF Edge that extends cloud computing, data, and services seamlessly to edge devices. It can provide temporary offline and low-latency computing services, including device connectivity, message routing, remote synchronization, function computing, video access pre-processing, AI inference, device resource reporting, and more.

In terms of architectural design, Baetyl adopts a modular and containerized approach. Following the modular design pattern, Baetyl splits the product into multiple modules and ensures that each one is a separate, independent module, so that users can deploy exactly the modules they need. Baetyl also uses containerization to build its images; the cross-platform nature of Docker ensures that the running environment is consistent across operating systems. In addition, Baetyl isolates and limits the resources of containers, precisely allocating the CPU, memory, and other resources of each running instance to improve the efficiency of resource utilization.

Key features of Baetyl include:
  • Shielding Computing Framework: Baetyl provides two official computing modules (the Local Function Module and the Python Runtime Module) and also supports custom modules, which can be written in any programming language or with any machine learning framework.
  • Simplify Application Production: Baetyl combines with the Cloud Management Suite of BIE and many other products of Baidu Cloud (such as CFC, Infinite, EasyEdge, TSDB, and IoT Visualization) to provide data computation, storage, visual display, model training, and many other capabilities.
  • Service Deployment on Demand: Baetyl adopts a containerized and modular design in which each module runs independently and in isolation. Developers can choose which modules to deploy based on their own needs.
  • Support for Multiple Platforms: Baetyl supports multiple hardware and software platforms, such as x86 and ARM CPUs and the Linux and Darwin operating systems.


As an edge computing platform, Baetyl provides not only features such as underlying service management but also some basic functional modules, as follows:

  • Baetyl Master is responsible for the management of service instances (start, stop, supervise, etc.) and consists of the Engine, API, and Command Line. It supports two modes of running services: native process mode and Docker container mode;
  • The official module baetyl-agent is responsible for communication with the BIE cloud management suite and can be used for application delivery, device information reporting, etc. Certificate authentication is mandatory to ensure transmission security;
  • The official module baetyl-hub provides message subscription and publishing functions based on the MQTT protocol, and supports four access methods: TCP, SSL, WS, and WSS;
  • The official module baetyl-remote-mqtt is used to bridge two MQTT servers for message synchronization and supports configuration of multiple message route rules;
  • The official module baetyl-function-manager provides computing power based on the MQTT message mechanism, with flexibility, high availability, good scalability, and fast response;
  • The official module baetyl-function-python27 provides the Python2.7 function runtime, which can be dynamically started by baetyl-function-manager;
  • The official module baetyl-function-python36 provides the Python3.6 function runtime, which can be dynamically started by baetyl-function-manager;
  • The official module baetyl-function-node85 provides the Node 8.5 function runtime, which can be dynamically started by baetyl-function-manager;
  • SDK (Golang) can be used to develop custom modules.
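Several of the modules above route messages by MQTT topic. As a rough illustration of how MQTT topic filters behave (this is the standard MQTT wildcard semantics, not Baetyl’s internal code), a matcher for the `+` (single-level) and `#` (multi-level) wildcards can be sketched as:

```python
def topic_matches(topic_filter: str, topic: str) -> bool:
    """Return True if an MQTT topic filter matches a concrete topic.
    '+' matches exactly one topic level; '#' matches the remainder."""
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            # '#' must be the last level and matches everything below it,
            # including the parent level itself.
            return True
        if i >= len(t_parts):
            return False  # filter is longer than the topic
        if part != "+" and part != t_parts[i]:
            return False  # literal level mismatch
    return len(f_parts) == len(t_parts)
```

A route rule in a hub or bridge module conceptually applies such a match to decide which subscribers receive a published message.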




If you are passionate about contributing to the open source community, Baetyl welcomes both code contributions and documentation contributions. For more details, please see: How to contribute code or documentation to Baetyl.

Contact us

As the first open edge computing framework in China, Baetyl aims to create a lightweight, secure, reliable, and scalable edge computing community with a healthy ecosystem. To help Baetyl develop even further, if you have advice or suggestions about Baetyl, please contact us:

EdgeX Foundry’s Geneva Release and Plans for Hanoi

By Blog, EdgeX Foundry

Written by Jim White, Co-Chair of the EdgeX Foundry Technical Steering Committee and CTO of IOTech

Every six months, the EdgeX Foundry technical community gets together to review its recent release (collecting lessons learned, assembling release notes, etc.) and to plan out its next release.  For the last three years, these meetings have been face-to-face events held in venues across the globe.  They have become events that many in the community have come to enjoy, as they allow us to celebrate our success, plan out the next release, and reconnect with people who have become close professional friends.

Alas, the global pandemic claimed another casualty – albeit a far less important one in relation to world events – when we had to cancel our face-to-face meeting this spring.  We held a virtual event instead and, though different, it was still fun and successful – thanks to the incredible efforts of the development community that has been the lifeblood of the project since its inception.  We had more than 35 members attending the online sessions each day and managed to accomplish everything we planned; we even had time for a mini educational conference (called the “Unconference”) that allowed community members to highlight new EdgeX-based solutions and new project ideas.

The Geneva Release

Our planning meeting always comes at the tail end of our current release efforts.  We have an EdgeX development “freeze” period, during which no more feature coding can be done while we clean up bugs and tackle documentation.

Codenamed “Geneva,” this is EdgeX Foundry’s sixth release but the first for 2020. The highlights of the Geneva release include:

  • Dynamic or automatic device on-boarding – for protocols where it makes sense, this allows EdgeX to automatically provision new sensors and have the sensor data start to flow through EdgeX
  • Alternate messaging support – where applicable, this allows adopters to use any messaging implementation under the covers of EdgeX, including MQTT, 0MQ, or Redis Streams
  • Better type information associated with sensor data – allowing analytics packages and other users of EdgeX sensor data to more easily digest the data and aiding in transformations on the data
  • REST device service
  • Batch and send export – allowing sensor data to be sent to cloud, on-prem or enterprise systems at designated times
  • Support for MQTTS and HTTPS export
  • Redis as the default DB – deprecating MongoDB for license, footprint/performance, and ARM support reasons
  • Adding the Kuiper rules engine – a new rules engine that is smaller, faster, and written in Go, replacing EdgeX’s last Java microservice
  • Improved Security
  • Interoperability testing
  • Improved DevOps CI/CD – now using Jenkins Pipelines to produce EdgeX Foundry project artifacts
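To illustrate the batch-and-send idea in the abstract, an exporter can buffer readings and flush them in groups. This is a toy sketch of the concept only, not EdgeX’s actual implementation; the class and parameter names are invented for the example:

```python
class BatchSender:
    """Toy batch-and-send export: readings are buffered locally and
    flushed to an export destination once the batch size is reached."""

    def __init__(self, export, batch_size=10):
        self.export = export          # callable taking a list of readings
        self.batch_size = batch_size  # flush threshold
        self.buffer = []

    def add(self, reading):
        """Buffer one reading; flush automatically when the batch fills."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Send everything buffered so far (e.g. at a designated time)."""
        if self.buffer:
            self.export(list(self.buffer))
            self.buffer.clear()
```

A real export service would also flush on a timer (the “designated times” mentioned above) and deliver batches over MQTTS or HTTPS to the cloud or enterprise endpoint.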

The Hanoi Release

As for our semi-annual planning meeting, we set up the scope of our next release which is codenamed “Hanoi.”

There are some notable features in this upcoming release, which is scheduled to arrive in the October/November 2020 timeframe:

  • Device service to application service message bus – allowing data to “stream” more directly (faster) through EdgeX without persistence.
  • Improve support for running device services on different hosts
  • Secure service to service communications – enabling EdgeX services to live on different hosts and communicate securely
  • Data filters in the device service – allowing repetitive or meaningless sensor data to be discarded sooner in the collection process
  • Performance guidance – giving adopters better understanding of hardware requirements and data flows which can help them architect edge solutions for their use case
  • Improved interoperability testing
  • Hardware Root of Trust abstraction – allowing EdgeX to get its secret store master key from hardware secure storage

EdgeX APIs and the EdgeX 2.0 Future

More importantly, this release will include work on a new API set for our microservices.  We call this the V2 API set.  As the name implies, it will be the EdgeX APIs that will be featured in version 2 of EdgeX in the near future (perhaps as soon as 2021).  A portion of the V2 APIs will be available for exploration with the Hanoi release – allowing developers to experiment and test the new APIs.

During the Hanoi release, as we work on the V2 API, we are making the APIs easier to use, decoupling the data elements used in message communication from the operations, and generally allowing for better security, better testing, use of different transport infrastructure, and easier extension of the APIs going forward.  So, while Hanoi is expected to be a minor version, it provides the footing for a new major release (EdgeX 2.0) next year.

We invite you to have a look at the new EdgeX release, Geneva.  Learn more here: Here’s hoping that the next six months bring health and safety back to all of our lives and that we’ll be able to reconvene the community of EdgeX Foundry experts soon.


EdgeX Foundry IoT Innovation Challenge – Phase 1 Complete

By Blog, EdgeX Foundry

Written by Clinton Bonner, VP of Marketing for Topcoder

We recently told you about an awesome challenge series on Topcoder, focused on using on-demand talent for EdgeX Foundry IoT innovation. Often a new tech philosophy needs an enabling catalyst for it to gain traction. For the Industrial IoT, EdgeX Foundry from the Linux Foundation is that catalyst. EdgeX Foundry is a vendor-neutral open source project hosted by LF Edge that is building a common open framework for IoT edge computing.

In Phase 1 of this series, EdgeX Foundry tapped the Topcoder community to conceptualize new and unique industry use cases that leveraged the EdgeX Platform. The concepts were procured through this work.

Well, the winners are in! LF Edge, EdgeX Foundry and the teams from Dell Technologies, Intel, HP, IOTech, and Wipro were impressed with the wide variety of innovative use case ideas and technical explanations as to how the EdgeX stack would be leveraged.

Here are the top 5 winning submissions:

1. cunhavictor (Brazil) – Fuel Inventory Control for Oil & Gas Downstream Industry

Synopsis: Product losses during fuel tank trucks’ loading and offloading operations, as well as theft, are common problems that translate to an estimated $133 billion in costs for players in the oil & gas downstream industry. The winning idea is a system for inventory control and loss/theft assessment that joins data from different sensors in a distributed processing architecture. Data integration is usually not handled properly by fuel distribution companies, and the system would solve that.  This integration not only has the potential to reduce losses but can also help the fuel distribution company, gas station owners (retail outlets), the end user, and even refineries understand how inventory losses occur and protect their assets.

“Excellent articulation of the business problem, instrumentation options, and how the data will be integrated. You have great domain expertise & clarity of thought.” – Challenge judge

2. wiebket (Netherlands)  – Clinic Occupancy Monitoring System

Synopsis: Traveling to a clinic for a checkup is expensive and time consuming for many people in South Africa. Clinic managers have little oversight into patient flow through the clinic. The clinic occupancy monitoring system uses video cameras and door sensors connected to the EdgeX platform to accurately monitor unique patient visits and real time clinic occupancy without sending patient image data to the cloud. The beneficiaries of such a system are patients, clinic managers and the National Department of Health.

3. vivkv (India) – Machinery and Power Consumption Management

Synopsis: Gather sensor data through EdgeX device services and create an application/algorithm for machinery and power consumption management through the use of advanced analytics and machine learning solutions. This would be done in an integrated manner to maximize ROI on the assets and remain closely bound to the overall business budget.

4. Gungz (Indonesia) – Factory Smart Bin

Synopsis: Create an intelligent smart bin that can send the information about its fill level (volume) and weight to EdgeX platform and then, based on the fill level (volume) of the bin, EdgeX platform can send commands to AGV (Assisted Guided Vehicle) to automatically come and empty the trash bin. Using this approach, a factory can automate its waste tracking and waste disposal without human intervention.

5. kavukcutolga (Turkey) – EdgeX Parking Management

Synopsis: This proposal aims to decrease the time, money, and pollution caused by drivers spending time trying to find a parking spot by using IoT Devices such as Object and Distance sensors.

Congratulations to the top five winners. This concludes Phase 1 of this challenge series, and now we’re moving on to Phase 2. The second phase will be a challenge where members pick three of the top ideas and build a Technical Design Document detailing the technical game plan for how an idea would be brought to life.

We’d like to thank LF Edge, EdgeX Foundry, Dell Technologies, Intel, HP, IOTech and Wipro for pushing innovation and collaboration that accelerates the deployment of IoT solutions. If you’d like to use on-demand talent in a similar manner to accelerate your technology goals, contact Topcoder here.

For additional information about LF Edge or EdgeX Foundry:

Winners of the Annual EdgeX Foundry Community Awards

By Blog, EdgeX Foundry

Written by Aaron Williams, LF Edge Developer Advocate

Traditionally, at our spring face-to-face meeting, LF Edge’s EdgeX Foundry honors members of the community who have helped us over the last year.  Well, there is nothing traditional about 2020.  Our face-to-face, scheduled for the end of April in Limerick, Ireland, became a virtual event.  Thus, like most of the world, we switched from meeting in person to having Zoom meetings at home, while trying to balance getting work done, keeping up with our kids’ schooling, and doing everything else in our lives without leaving the house. While the meeting was different, we stuck to tradition and presented our 3rd Annual EdgeX Awards at a ceremony yesterday with EdgeX Foundry TSC Co-Chairs Keith Steele and Jim White.

From top left to right: Keith Steele, Maemalynn Meanor, Bryon Nevis, Michael Estrin, Lisa Rashidi-Ranjbar, James Gregg, Aaron Williams, Jim White

Congratulations to our winners Michael Estrin, James Gregg, Lisa Rashidi-Ranjbar and Bryon Nevis! Learn more about the awards and their dedication below.

Innovation Awards: The Innovation Award recognizes individuals who have contributed the most innovative solutions back to the open source EdgeX Foundry project over the past year.

Winners: Michael Estrin and James Gregg

Michael was nominated for his bootstrap code effort, which reduced duplication and maintenance effort by using dependency injection.  He was also one of the main contributors to the V2 APIs. Not only did he tirelessly work on the code, but he also held one-on-one meetings with members of the community to get their buy-in and made sure that security was always planned in from the beginning.

“My involvement with EdgeX Foundry was driven by my employer — Dell Technologies.  Dell provided the opportunity to fully immerse myself in the open source project — as a committer, the current System Management Working Group Chair, and a member of the project’s Technical Steering Committee.  I’ve enjoyed working on EdgeX and I appreciate the project’s public recognition of my contributions over the past year.” – Michael Estrin

Michael Estrin, Dell Technologies, Winner of the EdgeX Innovation Award

James was nominated for his leadership of the EdgeX DevOps community; he worked tirelessly to improve our CI/CD pipelines, stabilize our development and testing environments, and greatly improve our release processes and procedures.  He and his team added Snyk integration and Nexus cleanup, introduced the GitHub Org Plugin, and improved basically all parts of our support tools.

“Thank you to the EdgeX Foundry open source community for the Innovation award.  I’m truly honored to receive it and congratulate the other recipients that won EdgeX Foundry awards in 2020. I would also like to recognize the talented Intel DevOps team that worked to transform the build automation and who all worked many hours to make it such a success.  Thanks to Ernesto Ojeda, Lisa Ranjbar, Emilio Reyes, and Bill Mahoney for their dedication and commitment.  My involvement has been largely in support of the Open Retail Initiative where Intel along with other top technology companies collaborate on open source initiatives that will accelerate iteration, flexibility and innovation at scale. This project exposes the contributors to many subject areas (IoT, microservice development, security) so there’s always an opportunity to participate in an area of interest.  Working within the EdgeX Foundry project is very rewarding and I’d encourage others to participate even if it’s the smallest contribution.” – James Gregg

James Gregg, Intel, Winner of the EdgeX Innovation Award

Contribution Awards: The Contribution Award recognizes individuals who have contributed leadership and helped EdgeX Foundry advance and continue momentum in the past year. This includes individuals who have led work groups, special projects, and/or contributed large amounts of code, bug fixes, marketing material, documentation, or otherwise significantly advanced the efforts of EdgeX with their drive and leadership.

Winners:  Lisa Rashidi-Ranjbar and Bryon Nevis

Lisa was nominated for her work on improving EdgeX Foundry’s build process.  She transformed it so that today the build process uses modern Jenkins Pipelines.  This, along with other build performance optimizations, greatly reduced time to market for all of EdgeX’s microservices. It is the DevOps backbone that allows for continuous delivery and automated release of the EdgeX artifacts.  In addition, she worked with the developer training team to improve EdgeX’s formal education project.

“EdgeX Foundry is the first open source community I’ve been involved with and it’s been an awesome experience. The community is really active and welcomes contributions of all kinds to the project. My contributions are around transforming the continuous integration infrastructure and automating the release process. My motivation is to make the development of the project go more smoothly. I’ve always felt welcome and appreciated in the EdgeX community. That feeling of appreciation really goes a long way when you are solving software problems.” – Lisa Rashidi-Ranjbar

Lisa Rashidi-Ranjbar, Intel, Winner of the EdgeX Contribution Award

Bryon lent his extensive security experience and knowledge to EdgeX and vastly improved our security infrastructure.  Notably, he developed a V2 API security implementation, which will appear in later releases, and fixed many security issues.  He also worked with the community to turn security issue management into a formal process with clear guidance.  Simply put, EdgeX Foundry is much more secure than it was before he started working with the community, and because of his hard work and leadership in developing these processes, EdgeX’s security will continue to improve.

“I am a security architect and developer and have been involved with the EdgeX Security Working Group since Fall of 2018. Security touches everything.  Raising the IoT security bar is an industry-wide challenge that touches all parts of the hardware and software stack.  Adding security to an in-flight project such as EdgeX—without breaking anything—is especially hard and requires a sustained effort of small changes over a long period of time.  For me, the work transcends EdgeX: it is my opportunity to teach my fellow tradesmen (and tradeswomen) by example how to do security.” – Bryon Nevis

Bryon Nevis, Intel, Winner of the EdgeX Contribution Award

Michael, James, Lisa, and Bryon join our previous winners (Andy Foster, Drasko Draskovic, Micael Johanson, Lenny Goodell, Tony Espy, Trevor Conn, and Cloud Tsai) in earning the heartfelt gratitude of the community.  EdgeX Foundry has had more than 100 contributors from dozens of companies working in our community, and their work has enabled EdgeX to continue to grow. In fact, EdgeX Foundry passed 1 million total downloads in October 2019, and just five months later we have more than 6 million.

To learn more about EdgeX Foundry, visit our website or join our Slack to ask questions and engage with community members. If you are not already a member of our community, it is really easy to join.  Simply visit our wiki page and/or check out our GitHub and help us get to the next 6 million downloads and beyond!





Creating the State of the Edge Report

By Blog, State of the Edge

This blog originally ran on as part of the IT Visionaries podcast, which shares conversations with Fortune 1000 tech leaders. In this post, IT Visionaries talks with Matt Trifiro, Co-Chair of State of the Edge, LF Edge member, and CMO at Vapor IO. To listen to the podcast, click here.

Do you know the current state of the edge? If you do, it’s most likely because of the work that Matthew Trifiro has done in recent years. Matthew is the CMO at Vapor IO and Co-Chair of State of the Edge, and on this episode of IT Visionaries, he tells us about how and why he wanted to create the State of the Edge report. Plus, he explains what is going on in the world of edge computing and what might come next.

Key Takeaways

  • The State of the Edge report is an authority on what is really going on in the world of edge computing
  • Moving to edge computing is going to be a necessary part of the re-architecture of the internet
  • The speed with which we need to analyze data is increasing, which means we need to use edge computing to analyze in real-time

Creating State of the Edge

Matt and one of his peers were looking for a way to cut through all the noise in the world of edge computing. There was so much going on with edge and people have always defined “edge” in different ways, so Matt wanted to bring experts together to come up with something everyone could agree on. Working with partners and funders, Matt and his team were able to create an unbiased report that many look to as the definitive authority on the true “state of the edge.”

“Edge is kind of in the eye of the beholder. Each person’s edge is different. But what was really important is that if you think of edge computing as an extension of the internet, the edge we really care about is the edge of the last mile network. And nobody was really talking about that.” 

Challenges of creating the report

Originally, the goal was to put out a State of the Edge report every year, but it took 18 months between the first report’s publication and the second. The delay is to ensure that the information provided by Matt and everyone working on the report is properly vetted, and as accurate and useful as possible. And this is all being done by Matt and now just two employees, who all also have day jobs. This is a true passion project for those involved.

“We took a lot of time to have peers in the industry review and evaluate what we were putting out before we published it. And I think the quality report is super high because of that. But it also took a lot of time.”

What’s in the most recent report?

Matt says this year’s report is twice as good as the first, in part because it answers many of the questions people ask about edge all the time: How real is edge? What is being deployed? How much, and where? By answering those questions, companies with an interest in investing capital in edge can make much more informed decisions. To answer them, Matt hired analysts and researchers who worked to produce real data and models.

The report also reflects what Matt is excited about: a new form of the internet, rearchitected around the edge to achieve things we cannot today. Matt describes the arc of the internet as a three-act play, and the act we’re in now is one in which we’re moving from humans talking to machines toward machines talking to machines.

“The transformation in the edge industry is moving from people with frothy ideas slapping edge on some old product to make it seem cool to real practitioners deploying real dollars, putting capital to work to build the platforms on which this next generation of compute is going to be built. I’m fond of saying the internet is broken. …There are lots of applications we want to consume or bring about that simply cannot be done with the internet today. And we have to rearchitect it.”

“The real transformation is we are going to have billions and billions and billions of devices that aren’t being looked at by a human that are going to be transmitting data that needs to be analyzed and acted upon in near real-time. We’re moving from a world where the internet needs to function in ones of seconds to a world that it needs to function in tens of milliseconds, ones of milliseconds, and in some cases, microseconds. And that’s just a very different world. And it’s not a billion devices. It’s tens of billions and maybe trillions of devices. It’s a huge transformation.”

State of the Edge merged with LF Edge last month and is now an integral part of the ecosystem. To learn more about the project or join the mailing list, click here.

Exploration and Practices of Edge Computing

By Blog, EdgeX Foundry, Industry Article

Written by Gavin Lu, LF Edge member, EdgeX Foundry China Project Lead and R&D Director in the VMware Office of the CTO

Chapter 1:

Cloud computing and edge computing

After nearly ten years of rapid growth, cloud computing has become a widely accepted and widely used centralized computing model across industries around the world. In recent years, with the rise of “new infrastructure” such as big data, artificial intelligence, 5G and the Industrial Internet, demand for edge computing has soared. Telecom carriers, public cloud service providers, and enterprise users across industries all hope to create, in some form, an edge computing solution best suited to their own scenario.

Cloud edge collaboration

Whether it is Mobile Edge Computing (MEC), Cloud Edge, or factory-side Device Edge, users want an edge computing platform that cooperates with the cloud platform. Cloud-edge collaboration is not only about opening a data channel at runtime; it is also overall collaboration across the entire life cycle and the full technology stack. On top of the highly fragmented edge computing field, with its many industry platforms, achieving full-stack integration with multiple cloud computing platforms is undoubtedly even more challenging.

The dilemma of edge computing

At the bottom of the edge computing platform is the infrastructure, or device management, layer. Due to the natural characteristics of the edge computing model, available device resources usually face considerable constraints. Whether in hardware specifications such as CPU, memory, storage, network and accelerators, or in software such as the OS and application frameworks, these devices are far from comparable to cloud computing platforms or even ordinary PCs. On the other hand, edge applications running on the devices often have very different CPU and OS requirements, due to historical reasons and vendor heterogeneity. Whether measured in space, time or capital, all of this adds pressure.

Containerization and virtualization

To achieve cloud-edge collaboration and solve the problems of device management and edge application management, the most common practice across industries today is to run containerized applications on devices, managed from cloud platforms. The advantage is that this makes full use of proven cloud computing technology, platforms, personnel and skills. The disadvantage is that it suits modern edge applications best; containerizing legacy systems with different OSes is much more difficult. A representative of the containerization approach is the EdgeX Foundry edge computing framework hosted under the LF Edge umbrella.

For legacy systems with different OSes, the realistic way is to package the application in a virtual machine and run it on a hypervisor platform on the device side. Virtualization platforms for edge applications may be further integrated with containerized platforms to create a unified operating mechanism across OSes. For example, the ACRN project under the Linux Foundation and the EVE project under LF Edge are virtualization platforms built specifically for devices.

Edge computing operation and monitoring

Whether through containerization, virtualization, or a combination of the two, the operation of edge computing will ultimately be carried out from the cloud side. Otherwise, in large-scale production deployments, accumulating efficiency, safety and cost issues become an inevitable nightmare. From an operations and management perspective, then, although these devices are not necessarily located in the cloud, they are all managed from the cloud and are operated and maintained in a manner similar to cloud computing: a “device cloud,” in a sense.

As an industry leader with vast experience and knowledge, Gavin has been writing a series of articles focused on edge computing. These articles are posted on his personal blog and are reposted here with his permission. To read more content from Gavin, visit his website.

Turning your Raspberry Pi 4 into an edge gateway – Part 2

By Blog, EdgeX Foundry

Written by Galem Kayo, EdgeX Foundry contributor and Product Manager for Ubuntu Core at Canonical, an LF Edge member company

This content originally ran on the Ubuntu blog. For more content like this, please visit their website.

In the first part of this tutorial, we installed and configured EdgeX Foundry on a Raspberry Pi 4, turning it into an edge gateway. In this tutorial, the gateway will be connected to dummy IoT devices, simulated in software. They will send messages carrying random measurements to an MQTT broker hosted in the cloud, which will forward them to any MQTT clients that subscribe.

Connecting the Southbound

The IoT data flow from devices to the edge gateway is referred to as the Southbound. In EdgeX, communication between IoT devices and gateways is handled by device services. A device service is a microservice that implements an interface to an IoT communication protocol (Modbus, OPC-UA, REST, BLE, Zigbee, MQTT, BACnet, SNMP, etc.).

The Virtual Device Service is convenient for configuring and testing your Raspberry Pi 4 gateway. It lets you carry out tests without any real IoT device, so microservices running on the gateway can be configured and troubleshot with no IoT device connected. The Virtual Device Service simulates dummy IoT devices that generate and transmit random values of different types: integer, unsigned integer, boolean and float.

The Virtual Device Service can be enabled very easily with the following command:

sudo snap set edgexfoundry device-virtual=on

Upon enablement, the service and the virtual devices it simulates become visible in the EdgeX web UI.
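As a further sanity check, the readings generated by the virtual devices can be fetched from EdgeX core-data’s REST API and inspected programmatically. The Python sketch below works on a sample payload; the endpoint named in the comment and the exact payload shape are assumptions based on the EdgeX v1 API and may differ in your release.

```python
import json

# Sample payload shaped like an EdgeX core-data v1 reading list (an
# assumption; check your release's API). Real data would come from e.g.
# GET http://localhost:48080/api/v1/reading/100 on the gateway.
SAMPLE = json.dumps([
    {"device": "Random-Integer-Generator01", "name": "RandomValue_Int8", "value": "42"},
    {"device": "Random-Boolean-Generator01", "name": "RandomValue_Boolean", "value": "true"},
])

def summarize_readings(payload: str) -> dict:
    """Group (reading name, value) pairs by the device that produced them."""
    readings = json.loads(payload)
    summary = {}
    for r in readings:
        summary.setdefault(r["device"], []).append((r["name"], r["value"]))
    return summary

if __name__ == "__main__":
    for device, values in summarize_readings(SAMPLE).items():
        print(device, values)
```

Seeing readings grouped per virtual device confirms the Virtual Device Service is generating data before any cloud connectivity is configured.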

Creating an MQTT server in the cloud

Edge gateways are the juncture between devices and servers. We will now configure the receiving end in the cloud: a server hosting a custom MQTT broker. The MQTT broker will receive the messages sent from the virtual devices. The messages will then be dispatched to subscribing client devices.

The MQTT broker is installed on an Ubuntu 18.04 (bionic) server instance running on AWS. Mosquitto, an MQTT broker, is easily installed on the Ubuntu cloud instance in a snap, as follows:

sudo snap install mosquitto

The status of the broker can then be inquired:

sudo systemctl status snap.mosquitto.mosquitto.service

Connecting the Northbound

With the MQTT server hosted in the cloud, a Northbound connection needs to be established between the Raspberry Pi 4 gateway and the server. An EdgeX Export Service is a microservice that moves data collected by device services to a server in the cloud. An Export Service can be created as follows:

sudo snap set edgexfoundry export-client=on
sudo snap set edgexfoundry export-distro=on

Upon activation, data transfer to the MQTT server can be configured. Parameters such as the server IP address, port, communication protocol, data format, topic and encryption must be specified.

Once saved, the Export Service will connect to the MQTT broker and create a topic. The Export Service will publish data collected from virtual devices through this topic.
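Those export parameters end up in a registration record held by the export-client. As an illustration, the sketch below builds such a registration body in Python; the field names, the `MQTT_TOPIC` destination, and the endpoint mentioned in the docstring are assumptions based on the EdgeX v1 export-client API, so verify them against your release before use.

```python
import json

def mqtt_export_registration(name: str, broker_ip: str, topic: str,
                             port: int = 1883) -> str:
    """Build a JSON registration body for the EdgeX export-client.

    Field names follow the EdgeX v1 export-client API as an assumption;
    a real registration would be POSTed to the export-client service
    (e.g. /api/v1/registration) rather than printed.
    """
    body = {
        "name": name,
        "addressable": {                 # where the MQTT broker lives
            "name": name,
            "protocol": "TCP",
            "address": broker_ip,
            "port": port,
            "topic": topic,
        },
        "format": "JSON",                # data format of exported events
        "destination": "MQTT_TOPIC",     # export target type
        "enable": True,
        "encryption": {"encryptionAlgorithm": "NONE"},  # no payload encryption
    }
    return json.dumps(body)

if __name__ == "__main__":
    print(mqtt_export_registration("CloudMQTTClient", "203.0.113.10", "edgex-events"))
```

The broker address here (`203.0.113.10`) is a documentation placeholder; substitute the public IP of your AWS instance.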

Consuming the virtual IoT data flow

At this point, you have a Raspberry Pi 4 gateway listening to virtual IoT devices, and forwarding the data collected from devices to an MQTT server hosted on a public cloud.

Now, any MQTT client running on any device can consume the IoT data flow originating from virtual devices. Clients only need to subscribe to the associated topic via our MQTT broker.

To read this data flow, we will install an MQTT client on an Ubuntu desktop and subscribe to the topic created above by the Export Service. Note that this data flow can be accessed from any MQTT client software installed on any desktop device.

sudo snap install mosquitto
mosquitto_sub -h <your_ip_address_here> -t <topic_name_here>

The virtual IoT data flow can be read in the terminal. The stream corresponds to the data created by the Virtual Devices, in the format specified by our Export Service.
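Beyond reading the stream in a terminal, each MQTT message carries an EdgeX event serialized in the format chosen in the Export Service. Here is a minimal Python sketch for decoding one such payload, assuming JSON format; the embedded sample and its field names are a hypothetical event shape, not guaranteed to match your release exactly.

```python
import json

# An MQTT message payload shaped like an EdgeX event exported in JSON
# format (an assumption; the exact schema depends on your Export Service
# settings and EdgeX release).
MESSAGE = json.dumps({
    "device": "Random-Float-Generator01",
    "readings": [
        {"name": "RandomValue_Float32", "value": "3.14"},
        {"name": "RandomValue_Float64", "value": "2.71"},
    ],
})

def extract_values(payload: str) -> dict:
    """Map reading names to numeric values from one EdgeX event payload."""
    event = json.loads(payload)
    return {r["name"]: float(r["value"]) for r in event["readings"]}

if __name__ == "__main__":
    print(extract_values(MESSAGE))
```

In a real client, a function like this would run inside the message callback of an MQTT library (for example paho-mqtt), turning the raw topic stream into values an application can act on.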


Next steps

In the third part of this tutorial, the gateway will be connected to a physical IoT device. We will also test the device management capabilities of EdgeX Foundry.

Extending Kubernetes to the IoT Edge

By Blog, Project EVE

Written by Jason Shepherd, LF Edge member, VP of Ecosystem for Zededa and active leader in Project EVE

This post originally ran on the Zededa Medium blog. Click here for more articles like this one. 

When Kubernetes was first released onto the open source scene by Google in 2014, it marked an important milestone in application development for cloud environments. Docker had already begun to show the world the value of containers as a means to create modular, nimble applications that could be built using a microservices architecture. But containers were difficult to deploy at scale — particularly at the massive size of many cloud-native applications. Kubernetes filled that need by providing a comprehensive container orchestration solution.

Now, as the edge is emerging as the next frontier of computing, many businesses and developers are looking to replicate the benefits of cloud-native development closer to the data’s source. Kubernetes will play an important role in that process, but, for reasons we’ll explain in this post, it isn’t as easy as simply copy/pasting the traditional version of Kubernetes across the board.

Which edge are we talking about?
Of course, there are many different “edges” in edge computing, and it’s been easier to extend cloud-native principles to some more than others. For instance, for a service provider that creates a micro data center comprised of a rack of servers at the base of one of their cell towers, it’s fairly straightforward to use the same version of Kubernetes that runs in the cloud. These deployments are very similar to those found in a traditional data center, and so developing and running applications is also similar, including the way containers are used and orchestrated.

At the other extreme of the edge spectrum are devices like microcontrollers, which use deeply embedded code with tight coupling between hardware and software due to resource constraints in areas like memory and processing. These classes of devices also often leverage a Real-time Operating System (RTOS) to ensure determinism in responses. Given these constraints, it’s not technically feasible for these devices to leverage container-based architectures, regardless of whether there’s a benefit for modularity and portability.

In between these two ends of the edge computing spectrum are a large number of devices that make up what we consider to be the “IoT edge”: small- and medium-sized compute nodes that can run Linux and have sufficient memory (>512MB) to be able to support the overhead of virtualization. Compared to embedded devices that tend to be fixed in function, these compute nodes are used to aggregate business-critical data for enterprises and need to be able to run an evolving set of apps that process and make decisions based on that data. As app development for edge use cases continues to evolve and grow, companies will see the most benefit from making these IoT devices as flexible as possible, meaning they’ll want the capability to update software continuously and reconfigure applications as needed. This idea of being able to evolve software on a daily basis is a central tenet of cloud-native development practices, which is why we often talk about extending cloud-native benefits to the IoT edge.

Challenges of extending Kubernetes to the IoT edge
While Docker containers have been widely used in IoT edge deployments for several years, extending Kubernetes-based cloud-native development to the IoT edge isn’t a straightforward process. One reason why is that the footprint of the traditional Kubernetes that runs on servers in the data center is too large for constrained devices. Another challenge is the fact that many industrial operators are running legacy applications that are business-critical but not compatible with containerization. Companies in this situation need a hybrid solution that can still support their legacy apps on virtual machines (VMs), even as they introduce new containerized apps in parallel. There are data center solutions that support both VMs and containers today; however, footprint is still an issue for the IoT edge. Further, IoT edge compute nodes are highly distributed, deployed at scale outside of the confines of traditional data centers and even secure micro data centers, hence they require specific security considerations.

In addition to the technical challenges, there’s also a cultural shift that comes with the move toward a more cloud-native development mindset. Many industrial operators are used to control systems that are isolated from the internet and therefore only updated if absolutely necessary, because an update could bring down an entire process. Others are still accustomed to a waterfall model of development, where apps are updated only periodically, and the entire stack is taken down to do so. In either case, switching to a system of modularized applications and continuous delivery is a big transition, if not premature. It’s not always an easy change to make, and it’s common for organizations to find themselves stuck in a transitory state — especially for those that leverage a combination of partner offerings instead of controlling all aspects of app development and technology management in-house.

Net-net, there’s a lot of buzz today around solutions that promise to make cloud-native development and orchestration at the edge as easy as flipping on a switch, but the reality is that most enterprises aren’t ready to “rip and replace,” and therefore need a solution that provides a transition path.

Enter: Project EVE and Kubernetes
Simplifying the transition to a cloud-native IoT edge is the philosophy that underpins Project EVE, an open source project originally donated by ZEDEDA to LF Edge. EVE serves as an edge computing engine that sits directly on top of resource-constrained hardware and makes it possible to run both VMs and containers on the same device — thereby supporting both legacy and cloud-native apps. Additionally, EVE creates a foundation that abstracts hardware complexities, providing a zero-trust security environment and enabling zero-touch provisioning. The goal of EVE is to serve as a comprehensive and flexible foundation for IoT edge orchestration, especially when compared to other approaches which are either too resource intensive, only support containers, or are lacking security measures necessary for distributed deployments.

With EVE as a foundation, the next step is to bridge the gap between Kubernetes from the data center and the needs of the IoT edge. Companies such as Rancher have been doing great work with K3S, taking a “top-down” approach to shrink Kubernetes and extend the benefits to more constrained compute clusters at the edge. At ZEDEDA, we’ve been taking a “bottom-up” approach, working in the open source community to meet operations customers where they’re at today with the goal of then bridging Project EVE to Kubernetes. Through open source collaboration these two approaches will meet in the middle, resulting in a version of Kubernetes that’s lightweight enough to power compute clusters at the IoT edge while still taking the unique challenges around security and scale into account. Kubernetes will then successfully span from the cloud to the IoT edge.

An exciting road ahead, but it will be a journey
Kubernetes will play an important role in the future of edge computing, including for more constrained devices at the IoT edge. As companies look to continually update and reconfigure the applications running on their edge devices, they’ll want the flexibility and portability benefits of containerized app development, scalability, clustering for availability, and so forth. But for now, many companies also need to continue supporting their legacy applications that cannot be run in containers, which is why a hybrid approach between “traditional” and cloud-native development is necessary.

We at ZEDEDA recognize this need for transition, which is why we developed EVE to ensure that enterprises can support both technology approaches. This flexibility gives companies still working through the technical and cultural shift to cloud-native development the room to manage that transition.

As cloud-native applications begin to make up a greater percentage of workloads at the edge, though, we’ll need the tools to manage and orchestrate them effectively. This is why we feel it’s important to put a strong effort now behind developing a version of Kubernetes that works for the IoT edge. With EVE, we’ve created a secure, flexible foundation from which to bridge the gap, and we’re excited to continue working with the open source community to extend Kubernetes to this space. For more information on EVE or to get involved, visit the project page on the LF Edge site.

ZEDEDA is an LF Edge member and active leader in Project EVE. For more details about LF Edge members, visit here. Join the LF Edge Slack Channel and interact with the project at #eve.