The Linux Foundation and its communities are deeply concerned about the rise in attacks against Asian Americans and condemn this violence. It is devastating to hear over and over again of the attacks and vitriol against Asian communities, which have increased substantially during the pandemic.
We stand in support of all those who have experienced this hate, and with the families of those who have been killed as a result. Racism, intolerance and inequality have no place in the world, our country, the tech industry or in open source communities.
We firmly believe that we are all at our best when we work together, treat each other with respect and equality and without hate or vitriol.
Written by Aaron Williams, LF Edge Developer Advocate
It’s our favorite time of the year – when we take a step back, reflect on what we have accomplished, and honor the members of our community who have gone above and beyond to support EdgeX Foundry. We are pleased to announce the launch of nominations for the 4th Annual EdgeX Community Awards.
Back in 2018 when we started the awards, we had about 1,000 Docker downloads and about 20 contributors. Today, we are well over 7 million downloads and have 4 times the number of contributors.
If you are already a member of the EdgeX Foundry community, please take a moment and visit our EdgeX Awards page and nominate the community member that you feel best exemplified innovation and/or contributed leadership and helped EdgeX Foundry advance and grow over the last year. Nominations are being accepted through April 7, 2021.
Innovation Award: The Innovation Award recognizes individuals who have contributed the most innovative solution to the project over the past year.
2018- Drasko Draskovic (Mainflux) and Tony Espy (Canonical)
2019- Trevor Conn (Dell) and Cloud Tsai (IOTech)
2020- Michael Estrin (Dell Technologies) and James Gregg (Intel)
Contribution Award: The Contribution Award recognizes individuals who have contributed leadership and helped EdgeX Foundry advance and continue momentum in the past year. This includes individuals who have led work groups, special projects, and/or contributed large amounts of code, bug fixes, marketing material, documentation, or otherwise significantly advanced the efforts of EdgeX with their drive and leadership.
2018- Andy Foster (IOTech) and Drasko Draskovic (Mainflux)
2019- Michael Johanson (Intel) and Lenny Goodell (Intel)
2020- Bryon Nevis (Intel) and Lisa Ranjbar (Intel)
From top left to right: Keith Steele, Maemalynn Meanor, Bryon Nevis, Michael Estrin, Lisa Rashidi-Ranjbar, James Gregg, Aaron Williams, Jim White
To learn more about EdgeX Foundry, visit our website or join our Slack to ask questions and engage with community members. If you are not already a member of our community, it is easy to join: simply visit our wiki page and/or check out our GitHub. You can also follow us on Twitter and LinkedIn.
Similar to our experience last Fall, we had a wealth of potential applicants to choose from. Thanks in large part to being featured on the LFX Mentorship home page, our applicant pool grew from 30 last term to 43 this term. This time, the range of educational experience among candidates was even wider – from sophomores to doctoral candidates. The wealth of experience was much greater this term, and the drive and determination were incredible. Our methods of choosing the top candidates last time narrowed the pool from 30 to 12, but this time only took us from 43 to 30. After conducting interviews and matching applicant experience to our mentorship opportunities, we selected the final four candidates.
Two mentors returned for the Spring term, and we added two new mentors. Returning were:
David Booz (Dave) – Chief Architect and one of the project founders, Dave is also Chair of the Agent Working Group and has been a member for the last six years. Dave understands the complete breadth and depth of the project code.
Ben Courliss – Chair of the DevOps Working Group, Ben is a multi-decade veteran of open-source projects and has a wealth of knowledge and experience. Ben is also active with the LF Edge’s Shared Community Lab.
Dave and Bruce decided to collaborate, offering the mentees a challenging opportunity in the form of a linked project. One mentee would collaborate with the Agent Working Group and one with the Management Hub Working Group. Together they would add the ability to support shared secrets. On the Agent side, Dave chose:
Debabrata Mandal (Deb) – Deb is a final year student at IIT Bombay in Mumbai. Deb described his current work, saying: “I am currently working on an issue … related to fetching the logging information from the specified service using the service url. While solving this issue I am getting a more detailed idea of the working of the agent component.”
Bruce wanted to work with:
Megha Varshney – Megha is a final year undergraduate at Indira Gandhi Delhi Technical University for Women. She is: “… currently working on [an] issue wherein I will be adding an API which would provide org specific status info.”
Los cooked up a challenge this year by adding support for the RISC-V architecture to the Open Horizon project. For that effort, he selected:
Quang Hiệp Mai (Mike) – Mike is in the second year of his master’s degree program at Soongsil University in South Korea. He said: “Since RISC-V is blooming in China, supporting RISC-V arch for the agent device would attract more users in China market. So first I will port some of the examples to RISC-V arch then I will help to port Anax to RISC-V”
And Ben wanted to work with a mentee to evaluate and choose an appropriate technology and subsequently to migrate pipelines to that choice. He wanted to work with:
Mustafa Al-tekreeti – Mustafa is a Ph.D. educator transitioning to a career in software engineering. He explained: “I am working on developing a CI/CD pipeline for Anax, one of the Open-Horizon projects, using one of the state-of-art technologies. First, we evaluate three of the available options (Circle-CI, GitHub Actions, and Jenkins Job Builder hosted by LF Edge) and summarize their pros and cons. Then, we implement the pipeline using the chosen technology.”
We’re now three weeks into the Spring 2021 term and beginning to make plans for a potential Summer 2021 term, which would open for applicants beginning April 15th. In the meantime, we recently recorded a webinar with the graduates from our Fall 2020 term. It was a great opportunity to get everyone together on a call and discuss what we accomplished and learned and to provide helpful advice to future applicants.
Written by Taras Drozdovsky and Taewan Kim, LF Edge Home Edge Committer & Staff Engineer at Samsung Research as well as Peter Moonki Hong, LF Edge Governing Board member & Principal Engineer at Samsung Research
Every year the number of Free/Libre and Open Source Software (FLOSS) projects increases and the community has a goal to maintain high quality code, documentation, test coverage and, of course, a high level of security.
The Linux Foundation, in collaboration with major IT companies through its Core Infrastructure Initiative (CII), has collected best practices for FLOSS projects and provides these criteria as the CII Best Practices.
Thank you to the committers and key contributors of Home Edge, Taras Drozdovskyi, Taewan Kim, Somang Park, MyeongGi Jeong, Suresh L C, Sunchit Sharma, Dwarkaprasad Dayama, Peter Moonki Hong. We would also like to thank Jim White with EdgeX Foundry for helping to guide us and the entire LF Edge/Linux Foundation team for their support.
Benefits achieved on the way to getting the CII Best Practices passing badge:
Security and Testing policy
A contributing guide
Descriptions of external APIs
An improved build and testing system
CI infrastructure: GitHub Actions – 20 checks
Integration of external code-analysis tools
We have reached a high level of code quality, but continue to improve our code!
Improving the Edge Orchestration project infrastructure (using many tools for analyzing code and searching for vulnerabilities) allowed us to increase not only the security but also the reliability of our product. We hope that improving the documentation will reduce the time it takes new developers to get started with the project, and therefore increase the number of external developers participating in it. Their advice and input are very important to us.
It should also be noted that there were many areas of self-development for the members of the project team: developers became testers, technical writers, security officers. This is a wonderful experience.
Next steps for the Edge Orchestration team
Further improving the Edge-Home-Orchestration project and achieving “silver” and “gold” badges. Implementation of OpenSSF protected code development practices into Edge-Home-Orchestration.
If you would like to learn more about the use cases for Home Edge or more technical details, check out the video of our October 2020 webinar. As part of the “On the Edge with LF Edge” webinar series, we shared the general overview for the project, how it fits into LF Edge, key features of the Coconut release, the roadmap, how to get involved and the landscape of the IoT Home Edge.
If you would like to contribute to Home Edge or share feedback, find the project on GitHub, the LF Edge Slack channel (#homeedge) or subscribe to our email list (email@example.com). We welcome new contributors to help make this project better and expand the LF Edge community.
Written by Jacob Smith, State of the Edge Co-Chair and VP Bare Metal Strategy at Equinix Metal
On Wednesday, LF Edge published their 2021 edition of the State of the Edge Report. As a co-chair and researcher for the report, I want to share insight into how this year’s report was created. In development, we focused on three key questions to ensure the report covered essential subject matter:
What’s new in edge computing since last year?
What are the challenges and opportunities facing the ecosystem?
How can we help?
To keep the report vendor neutral and free of bias, we had multiple authors contribute content and utilized an independent editor to balance the report and better serve the edge computing community. Several experts weighed in on what they were paying attention to in the edge space and identified four major stand-out areas:
Critical infrastructure (data centers)
Hardware and silicon
Software as a whole
Networks and networking
It’s abundantly clear there is growth in edge computing. Over the last year, COVID-19 accelerated nearly every facet of digital transformation, and this factored into the growth of the edge computing space. But it might also just be a long-overdue increase in digital innovation and transformation.
We will continue to discover more about the edge and how to use it to optimize the computing ecosystem. For now, our attention is focused on how diverse the edge is and all of the solutions that are waiting to be discovered.
Summarized from Equinix Metal’s The State of Your Edge. Read the full story here. Download the report here.
COVID-19 highlighted that expertise in legacy data centers could be obsolete in the next few years as the pandemic forced the development of new tools enabled by edge computing for remote monitoring, provisioning, repair and management.
Open source hardware and software projects are driving innovation at the edge by accelerating the adoption and deployment of cloud-native, containerized and distributed applications.
The LF Edge taxonomy, which offers terminology standardization with a balanced view of the edge landscape and is based on inherent technical and logistical tradeoffs spanning the edge-to-cloud continuum, is gaining widespread industry adoption.
Seven out of 10 areas of edge computing experienced growth in 2020 with a number of new use cases that are driven by 5G.
SAN FRANCISCO – March 10, 2021 – State of the Edge, a project under the LF Edge umbrella organization that established an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced the release of the 4th annual State of the Edge 2021 Report. The market and ecosystem report for edge computing shares insight and predictions on how the COVID-19 pandemic disrupted the status quo, how new types of critical infrastructure have emerged to service next-level requirements, and how open source collaboration is the only way to efficiently scale edge infrastructure.
Tolaga Research, which led the market forecasting research for this report, predicts that between 2019 and 2028, cumulative capital expenditures of up to $800 billion USD will be spent on new and replacement IT server equipment and edge computing facilities. These expenditures will be relatively evenly split between equipment for the device and infrastructure edges.
“Our 2021 analysis shows demand for edge infrastructure accelerating in a post COVID-19 world,” said Matt Trifiro, co-chair of State of the Edge and CMO of edge infrastructure company Vapor IO. “We’ve been observing this trend unfold in real-time as companies re-prioritize their digital transformation efforts to account for a more distributed workforce and a heightened need for automation. The new digital norms created in response to the pandemic will be permanent. This will intensify the deployment of new technologies like wireless 5G and autonomous vehicles, but will also impact nearly every sector of the economy, from industrial manufacturing to healthcare.”
The pandemic is accelerating digital transformation and service adoption.
Government lockdowns, social distancing and fragile supply chains had both consumers and enterprises using digital solutions last year that will permanently change the use cases across the spectrum. Expertise in legacy data centers could be obsolete in the next few years as the pandemic has forced the development of tools for remote monitoring, provisioning, repair and management, which will reduce the cost of edge computing. Some of the areas experiencing growth in the Global Infrastructure Edge Power are automotive, smart grid and enterprise technology. As businesses began spending more on edge computing, specific use cases increased including:
Manufacturing increased from 3.9 to 6.2 percent, as companies bolster their supply chain and inventory management capabilities and capitalize on automation technologies and autonomous systems.
Healthcare, which increased from 6.8 to 8.6 percent, was buoyed by increased expectations for remote healthcare, digital data management and assisted living.
Smart cities increased from 5.0 to 6.1 percent in anticipation of increased expenditures in digital infrastructure in the areas such as surveillance, public safety, city services and autonomous systems.
“In our individual lock-down environments, each of us is an edge node of the Internet and all our computing is, mostly, edge computing,” said Wenjing Chu, senior director of Open Source and Research at Futurewei Technologies, Inc. and LF Edge Governing Board member. “The edge is the center of everything.”
Open Source is driving innovation at the edge by accelerating the adoption and deployment of edge applications.
Open Source has always been the foundation of innovation and this became more prevalent during the pandemic as individuals continued to turn to these communities for normalcy and collaboration. LF Edge, which hosts nine projects including State of the Edge, is an important driver of standards for the telecommunications, cloud and IoT edge. Each project collaborates individually and together to create an open infrastructure that creates an ecosystem of support. LF Edge’s projects (Akraino Edge Stack, Baetyl, EdgeX Foundry, Fledge, Home Edge, Open Horizon, Project EVE, and Secure Device Onboard) support emerging edge applications across areas such as non-traditional video and connected things that require lower latency, and faster processing and mobility.
“State of the Edge is shaping the future of all facets of edge computing and the ecosystem that surrounds it,” said Arpit Joshipura, General Manager of Networking, IoT and Edge. “The insights in the report reflect the entire LF Edge community and our mission to unify edge computing and support a more robust solution at the IoT, Enterprise, Cloud and Telco edge. We look forward to sharing the ongoing work of State of the Edge as it amplifies innovations across the entire landscape.”
Other report highlights and methodology
For the report, researchers modeled the growth of edge infrastructure from the bottom up, starting with the sector-by-sector use cases likely to drive demand. The forecast considers 43 use cases spanning 11 verticals in calculating the growth, including those represented by smart grids, telecom, manufacturing, retail, healthcare, automotive and mobile consumer services. The vendor-neutral report was edited by Charlie Ashton, Senior Director of Business Development at Napatech, with contributions from Phil Marshall, Chief Research Officer at Tolaga Research; Phil Shih, Founder and Managing Director of Structure Research; Technology Journalists Mary Branscombe and Simon Bisson; and Fay Arjomandi, Founder and CEO of mimik. Other highlights from the State of the Edge 2021 Report include:
Off-the-shelf services and applications are emerging that accelerate and de-risk the rapid deployment of edge in these segments. The variety of emerging use cases is in turn driving a diversity in edge-focused processor platforms, which now include Arm-based solutions, SmartNICs with FPGA-based workload acceleration and GPUs.
Edge facilities will also create new types of interconnection. Similar to how data centers became meeting points for networks, the micro data centers at wireless towers and cable headends that will power edge computing often sit at the crossroads of terrestrial connectivity paths. These locations will become centers of gravity for local interconnection and edge exchange, creating new and newly efficient paths for data.
5G, next-generation SD-WAN and SASE have been standardized. They are well suited to address the multitude of edge computing use cases that are being adopted and are contemplated for the future. As digital services proliferate and drive demand for edge computing, the diversity of network performance requirements will continue to increase.
“The State of the Edge report is an important industry and community resource. This year’s report features the analysis of diverse experts, mirroring the collaborative approach that we see thriving in the edge computing ecosystem,” said Jacob Smith, co-chair of State of the Edge and Vice President of Bare Metal at Equinix. “The 2020 findings underscore the tremendous acceleration of digital transformation efforts in response to the pandemic, and the critical interplay of hardware, software and networks for servicing use cases at the edge.”
State of the Edge Co-Chairs Matt Trifiro and Jacob Smith, VP Bare Metal Strategy & Marketing of Equinix, will present highlights from the report in a keynote presentation at Open Networking & Edge Executive Forum, a virtual conference on March 10-12. Register here ($50 US) to watch the live presentation on March 12 at 7 am PT or access the video on-demand.
Trifiro and Smith will also host an LF Edge webinar to showcase the key findings on March 18 at 8 am PT. Register here.
About The Linux Foundation
Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.
# # #
The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page:https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.
Getting to “Advanced Class”, Including Focusing on Compounding Value with the Right Partners
By Jason Shepherd, LF Edge Governing Board Chair and VP of Ecosystem at ZEDEDA
This blog originally ran on the ZEDEDA Medium blog. Click here for more content like this.
Thanks for continuing to read this series on ecosystem development and the importance of an open edge for scaling to the true potential of digital transformation — interconnected ecosystems that drive new business models and customer outcomes. In parts one and two, I talked about various ecosystem approaches spanning open and closed philosophies, the new product mindset related to thinking about the cumulative lifetime value of an offer, and the importance of the network effect and various considerations for building ecosystems. This includes the dynamics across both technology choices and between people.
In this final installment I’ll dive into how we get to “advanced class,” including the importance of domain knowledge and thinking about cumulative value add vs. solutions looking for problems that can be solved in other, albeit brute force, ways.
Getting to advanced class
I talked about the importance of compounding value in part one of this series and will touch on this concept a little more here, in addition to the criticality of domain knowledge. Speaking of the latter, hundreds of IoT platform providers claim they can do a use case like Predictive Maintenance (PdM), but few actually have the domain knowledge to do it. A PdM solution not only requires analytics tools and data science but also an intimate knowledge of failure patterns based on various attributes (vibration, oil particulates, temperature, motor current, etc.). Often an operator on a factory floor who has “been there, done that” sits alongside a data scientist to help program the analytics algorithms based on his/her tribal knowledge. A former colleague once worked on a PdM solution with a line operator named Brad, who was the expert that helped the data scientist understand which machine and process behaviors were non-issues, despite looking like bad news, and vice versa. They jokingly called the end result “Bradalytics.”
Further, there’s a certain naivety in the industry today in terms of pushing IoT solutions when there’s already established “good enough” precedent. In the case of PdM, manual data acquisition with a handheld vibration meter once a month or so is a common practice to accurately predict impending machine failures because these failures don’t typically happen overnight. An industrial operator can justify this manual data collection as OpEx and the only thing permanently installed on their machines are brass pads that indicate where to apply the contact-based handheld sensor, from which data is manually loaded into an analytical tool.
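To make the manual workflow concrete, the monthly handheld readings described above are often just run through a simple threshold-and-trend check. The sketch below is a hypothetical illustration of that kind of rule; the threshold values and readings are made up for the example, not taken from any real PdM tool:

```python
# Hypothetical illustration: flag a machine when its monthly vibration
# readings (mm/s RMS) trend upward or cross an alarm threshold.
# Threshold values below are illustrative, not from any standard or product.

ALARM_MM_S = 7.1   # assumed alarm level for this example
WARN_MM_S = 4.5    # assumed warning level for this example

def assess(readings):
    """Return a status string for a chronological series of monthly readings."""
    latest = readings[-1]
    # A "rising trend" here means the last three readings strictly increase.
    rising = len(readings) >= 3 and readings[-3] < readings[-2] < readings[-1]
    if latest >= ALARM_MM_S:
        return "alarm"
    if latest >= WARN_MM_S and rising:
        return "warn: rising trend"
    return "ok"

print(assess([2.1, 2.3, 2.2, 2.4]))  # ok
print(assess([3.0, 4.6, 5.2]))       # warn: rising trend
print(assess([3.0, 5.0, 7.4]))       # alarm
```

Even this toy rule shows why the operator's tribal knowledge matters: choosing the thresholds, and knowing which "rising trends" are actually harmless, is exactly the part a data scientist can't supply alone.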
Similar manual procedures are all too common in other industries, such as USB data loggers used to monitor temperature in cold chain logistics operations. In another example, structural engineers have historically used the practice of attaching an expensive sensor package to monitor the structural integrity of a bridge or building for a few weeks, only to then move this equipment on to the next piece of infrastructure.
The promise of IoT is that this instrumentation is becoming so inexpensive that it can be deployed permanently everywhere for acquiring real-time data from the physical world; however, the value of deploying this infrastructure still must be greater than the cost of deploying and maintaining it through its full lifecycle in addition to the associated risk in terms of maintaining security and privacy.
Don’t get me wrong — PdM enabled by IoT is a valuable use case. Even though machines typically fail over a long period of time, it’s expensive to roll a truck to do repairs, especially if an industrial process experiences a loss of production. For example, downtime in a major jetliner factory can be upwards of $20k a minute! It’s just important to think about the big picture and whether you’re trying to solve a problem that has already been solved.
Looking at the bigger picture, a supply chain linked to a manufacturing process is a great example of an ecosystem that easily justifies an IoT solution for real-time monitoring and analytics. In a multi-stage manufacturing operation, the cost of discovering a flaw within a given part increases steadily with each process stage. The cost is even higher if that part gets into the supply chain and it’s higher yet if a defective product gets to a customer, not to mention impacting the manufacturer’s brand. Here the cumulative value of instrumenting throughout the product lifecycle is very high and definitely warrants a solution that can report issues the moment they occur.
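The escalating cost of a late-caught defect can be sketched with a toy model. All figures below are hypothetical, chosen only to illustrate the order-of-magnitude growth from stage to stage:

```python
# Toy model: the cost of first detecting one defective part grows sharply
# with each stage it passes through undetected. All figures are hypothetical.

STAGE_COST = {
    "incoming inspection": 5,      # scrap one raw part
    "machining": 50,               # scrap part plus wasted machine time
    "assembly": 400,               # rework a finished unit
    "supply chain": 3_000,         # pull stock back from a distributor
    "customer": 25_000,            # field failure plus brand damage
}

def cost_of_detection(stage):
    """Cost (USD, hypothetical) of first catching the flaw at `stage`."""
    return STAGE_COST[stage]

for stage, cost in STAGE_COST.items():
    print(f"{stage:20s} ${cost:>7,}")
```

Under numbers like these, real-time instrumentation that catches the flaw at incoming inspection rather than in the field pays for itself many times over on a single defect, which is the cumulative-value argument in a nutshell.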
Speaking of the holistic value that I touched on in part one and above, the real potential is not just the remote monitoring of a single machine, rather a whole fleet of interconnected machines. Imagine a world where you can send a tech out with work instructions to preventatively repair a fleet of machines in order to get the most out of a truck roll to a given location. This is similar to the story of “Fred and Phil” in part one, in which Phil wanted the propane tanks to be bone dry before rolling a truck. And on top of that — imagine that the algorithm could tell you that it will cost you less money in the long run to replace a specific machine altogether, rather than trying to repair it yet again.
It goes beyond IoT and AI. Last I checked, systems ranging from machines to factory floors and vehicles are composed of subsystems from various suppliers. As such, open interoperability is also critical when it comes to Digital Twins. I think this is a great topic for another blog!
Sharpening the edge
In my recent Edge AI blog I highlighted the new LF Edge taxonomy white paper and how we think this taxonomy will help put an end to the current market confusion caused by various industries (cloud, telco, IT, OT/industrial, consumer) defining the edge with a strong bias towards their interests/PoV, not to mention taxonomies that use ambiguous terms such as “thin and thick” and “near and far” that mean different things to different people. The team at ZEDEDA had a heavy hand in shaping this with the LF Edge community. In fact, a lot of the core thinking stems from my “Getting a Deeper Edge on the Edge” blog from last year.
As highlighted in the paper, the IoT component of the Smart Device Edge (e.g., compared to client devices like smartphones, PCs, etc. that also live in this sub-category) spans a single node with 256MB of memory up to a small server cluster, deployed outside of a physically-secure data center or Modular Data Center (MDC). The upper end of this spectrum is increasingly blurring into the Kubernetes paradigm thanks to efforts like K3s. However, due to resource constraints and other factors, these nodes will not have the exact same functionality as higher edge tiers leveraging full-blown Kubernetes.
Below the Smart Device Edge is the “Constrained Device Edge”. This sub-tier consists of a highly fragmented landscape of microcontroller-based devices that typically have their own custom OTA tools. Efforts like Microsoft’s Sphere OS are trying to address this space and it’s important to band together on efforts like this due to the immense fragmentation at this tier.
Ultimately it’s important to think of the edge to cloud as a continuum and that there isn’t an “industrial edge” vs. an “enterprise edge” and a “consumer edge” as some would contend. Rather, it’s about building an ecosystem of technology and services providers that create necessarily unique value-add on top of more consistent infrastructure while taking into account the necessary tradeoff across the continuum.
We build the guts so you can bring the glory
You can think of ZEDEDA’s SaaS-based orchestration solution as being like VMware Tanzu in principle (in the sense that we support both VMs and containers) but optimized for IoT-centric edge compute hardware and workloads at the “Smart Device Edge” as defined by the LF Edge taxonomy. We’re not trying to compete with great companies like VMware and Nutanix who shine at the On-prem Data Center Edge up through the Service Provider Edge and into centralized data centers in the cloud. In fact, we can help industry leaders like these, telcos and cloud service providers extend their offerings including managed services down into the lighter compute edges.
Our solution is based on the open source Project EVE within LF Edge which provides developers with maximum flexibility for evolving their ecosystem strategy with their choice of technologies and partners, regardless of whether they ultimately choose to take an open or more closed approach. EVE aims to do for IoT what Android did for the mobile component of the Smart Device Edge by simplifying the orchestration of IoT edge computing at scale, regardless of applications, hardware or cloud used.
The open EVE orchestration APIs also provide an insurance policy in terms of developers and end users not being locked into only our commercial cloud-based orchestrator. Meanwhile, it’s not easy to build this kind of controller for scale, and ours places special emphasis on ease of use in addition to security. It can be leveraged as-is by end users, or white-labeled by solution OEMs to orchestrate their own branded offers and related ecosystems. In all cases, the ZEDEDA cloud is not in the data path so all permutations of edge data sources flowing to any on-premises or cloud-based system are supported across a mix of providers.
As I like to say, we build the guts so our partners and customers can bring the glory!
I’ll close with a highlight of the classic Clayton Christensen book The Innovator’s Dilemma. In my role I talk with a lot of technology providers and end users, and I often see people stuck in this pattern, thinking “I’ll turn things around if I can just do what I’ve always done better.” This goes not just for large incumbents but also for fairly young startups!
One interaction in particular that has stuck with me over the years was an IoT strategy discussion with a large credit card payments provider. They were clearly trapped in the Innovator’s Dilemma, wondering what to do about mobile payment players like Square that had really disrupted their market. As they were talking about how they didn’t want another Square situation to happen again, I asked them, “Have you thought about when machines start making payments?” The fact that this blew their minds is exactly why another Square is going to happen to them if they don’t get outside of their own headspace.
When approaching digital transformation and ecosystem development it’s often best to start small, however equally important is to think holistically for the long term. This includes building on an open foundation that you can grow with and that enables you to focus your investments on business relationships and innovation rather than reinvention. Sure, you may be able to build your own orchestration solution like we have at ZEDEDA, or your own full turn-key stack, but if you do either in a proprietary silo then you’ve just locked yourself out of the biggest potential — creating a network effect across various markets through increasingly interconnected ecosystems! Again, imagine if the internet was owned by one company.
We invite you to get involved with Project EVE, and more broadly-speaking LF Edge as we jointly build out this foundation. You’re of course free to build your own controller with EVE’s open APIs, or reach out to us here at ZEDEDA and we can help you orchestrate your commercial IoT edge ecosystem. We’re here to help you start small and scale big and have a growing entourage of technology and service partners to drive outcomes for your business.
Written by Joe Pearson, Open Horizon Chair of the Technical Steering Committee and Edge Computing and Technology Strategist at IBM
On Thursday, March 4, 2021 the Open Horizon open source project was officially promoted to stage two maturity. This was the culmination of a process that began on January 27, when I presented the proposal to the LF Edge Technical Advisory Committee (TAC).
The Open Horizon software project was seeded by a contribution from IBM to the Linux Foundation in April 2020. The project’s mission: to develop open-source software that manages the service software lifecycle of containerized applications on edge compute nodes. Since that initial code contribution, the project and community have grown as they have attracted new project partners and contributors.
About Stage Two
Reaching stage two is a significant milestone for open-source software projects because it is a strong indicator of both the organization’s healthy growth, and potential stakeholder interest in using the software as a solution for specific use cases. In the LF Edge umbrella organization, projects have to meet the following criteria to achieve stage two:
Past community participation met previous growth plans
An ongoing flow of merged code commits and other contributions
Documented proofs of concept using the software
Collaboration with other LF Edge projects
Growth plan, including projected feature sets and community targets
The project has also been actively involved in reaching out to students and universities. Thanks to funding from the Linux Foundation and Intel, the LFX Mentorship program was able to provide stipends for mentees who complete a term with Linux Foundation projects. Open Horizon joined the LFX program and mentored four students and early-career professionals from October to December 2020. This mentorship program continues, and Open Horizon has just begun the Spring 2021 term with four new mentees.
Using the Platform
Last year, IBM created a commercially supported distribution of Open Horizon that was named IBM Edge Application Manager (IEAM). We’ve seen Open Horizon, or Open Horizon-based distributions, being used in an autonomous ship, vertical farming solutions, regional climate protection management, shipping ports, and even the International Space Station to deliver applications and related machine learning assets. And last week HP Inc, Intel, and IBM presented a webinar to invite retailers and vendors to participate in creating an Open Retail Reference Architecture based on the EdgeX Foundry, Open Horizon, and SDO projects.
Creating an Open Edge Computing Framework
The LF Edge organization provides a structure to support open edge computing projects. This should allow those projects to collectively form a framework for edge computing if they support common standards and interoperability. The Open Horizon project is working to further that goal both by collaborating with other LF Edge projects to create combined end-to-end edge computing solutions, and by supporting existing open standards and creating new standards where none exist today. An example of supporting existing standards is zero-touch device provisioning, where Open Horizon incorporates SDO services, a reference implementation of the FIDO IoT v1 specification.
Now that Open Horizon has demonstrated its value as a platform and a community, it is preparing to expand the community by adding new SIGs, Partners, and contributors to the project. To work with the project community to shape application delivery and lifecycle management within edge computing, consider attending one of the Working Group meetings, contributing code, working on standards, or even installing the software.
Written by Glen Darling, contributor to Open Horizon and Advisory Software Engineer at IBM
If your edge machines run Debian/Ubuntu-style distros with Docker installed, or Red Hat-style distros with Docker or Podman installed, Open Horizon can make managing them easy for you. Open Horizon is an LF Edge open source project, originally contributed by IBM, that is designed to manage application containers on potentially very large numbers of edge machines. IBM’s commercial distribution of Open Horizon, for example, supports 30,000 Linux hosts from a single Management Hub!
How can this scale be achieved? On each Linux host (called a “node” in Open Horizon), a small autonomous Open Horizon Agent is installed. Because these Agents act autonomously, the need for connections to the central Open Horizon Management Hub is minimized, as is the required network traffic. In fact, the local Agent can continue to perform many of its monitoring and management functions for your applications even when completely disconnected from the Management Hub! Also, the Management Hub never initiates communications with any Agents, so no firewall ports need to be opened on your edge nodes. Each Agent is responsible for its own node, and it reaches out to contact the Management Hub to receive new information as appropriate, based on its configuration.
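The pull-based pattern described above can be sketched in a few lines. This is a minimal illustration of the idea, not Open Horizon’s actual implementation: the agent always initiates contact with the hub, and when the hub is unreachable it keeps managing containers from the last desired state it cached.

```python
# Illustrative sketch of a pull-based edge agent: the agent polls the hub
# (never the reverse) and keeps enforcing its cached desired state when
# disconnected. All names here are hypothetical.

class Agent:
    def __init__(self, fetch_desired_state):
        # fetch_desired_state stands in for an outbound HTTPS call to the hub.
        self.fetch = fetch_desired_state
        self.cached_state = {}   # last desired state received from the hub
        self.running = {}        # containers this agent is currently managing

    def sync(self):
        """Poll the hub; on failure, keep working from the cached state."""
        try:
            self.cached_state = self.fetch()
        except ConnectionError:
            pass                 # disconnected: cached state still applies
        self.reconcile()

    def reconcile(self):
        """Converge local containers toward the desired state."""
        for name, image in self.cached_state.items():
            if self.running.get(name) != image:
                self.running[name] = image   # (re)deploy container
        for name in list(self.running):
            if name not in self.cached_state:
                del self.running[name]       # remove undesired container


def hub():
    """Stand-in for the Management Hub's answer to the agent's poll."""
    return {"mask-detector": "example/mask:1.0"}


def down():
    """Stand-in for an unreachable hub."""
    raise ConnectionError("hub unreachable")


agent = Agent(hub)
agent.sync()       # connected: desired state fetched and applied
agent.fetch = down
agent.sync()       # disconnected: management continues from the cache
print(agent.running)
```

Because the agent initiates every exchange, the edge node needs no inbound ports open, which is the firewall property described above.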
Open Horizon’s design also simplifies operations at scale. No longer will you need to maintain hard-coded lists of nodes and the software that’s appropriate for each of them. When a fleet of devices is diverse, maintaining complex overlapping software requirements can quickly become unmanageable even at relatively small scale. Instead, Open Horizon lets you specify your intent in “policies”, and then the Agents, in collaboration with the Agreement Robots (“AgBots” for short) in the Management Hub, work to achieve your intent across your whole fleet. You can specify policies for individual nodes, policies for your services (collections of one or more containers deployed as a unit), and policies to govern deployment of your applications. Policies also enable your large data files, like neural network models (e.g., large edge weight files), to have lifecycles independent of your service containers.
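As a rough illustration of the policy idea, a node policy in Open Horizon’s JSON style declares properties that describe the node; the property names and values below are hypothetical examples, not taken from any real deployment:

```json
{
  "properties": [
    { "name": "location", "value": "warehouse-7" },
    { "name": "hasCamera", "value": true }
  ],
  "constraints": []
}
```

A deployment policy would then carry a matching constraint such as `hasCamera == true`, and the AgBots form agreements only with nodes whose properties satisfy it, so software placement follows intent rather than hard-coded node lists.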
I recently presented a hands-on Open Horizon deployment example at Cisco Systems’ “DevNet 2020” developer conference. In this example I showed how to take an existing container, publish it to DockerHub, define Open Horizon artifacts to describe it as a service, and create a simple deployment pattern to deploy it. The example application detects the presence or absence of a facial mask when provided an image over an HTTP REST API. Cisco developers, and most developers working with Linux machines, can use Open Horizon as I did in this example to deploy their software across small or large fleets of machines.
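A client for the kind of HTTP REST API described above might look like the following. The endpoint path, port, and response fields here are hypothetical placeholders; the actual service from the talk may use different ones.

```python
# Hypothetical client for an image-in, JSON-out detection service such as the
# mask detector described above. Endpoint and field names are illustrative.

import json
import urllib.request


def detect_mask(image_bytes: bytes,
                url: str = "http://localhost:8080/detect") -> dict:
    """POST raw image bytes to the service and return its parsed JSON verdict."""
    req = urllib.request.Request(
        url,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:   # blocks until the service replies
        return json.load(resp)                  # e.g. a dict with a verdict field
```

Because the service is just a container exposing an HTTP port, deploying it with Open Horizon to every node in a fleet gives each node its own local inference endpoint.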
You can view my Cisco DevNet 2020 presentation at the link below: