
How Do We Prevent Breaches Like Verkada, Mirai, Stuxnet, and Beyond? It Starts with Zero Trust.

By Blog, Project EVE

By Jason Shepherd, LF Edge Governing Board Chair and VP of Ecosystem at ZEDEDA

This blog originally ran on the ZEDEDA Medium blog. Click here for more content like this.

News of the Verkada hack that breached 150,000 surveillance cameras sent shockwaves through the security world last week. For those of us in the edge computing industry, it simply underscored what we already knew — securing distributed devices is hard. As intelligent systems increasingly expand into the field, we’ll see more and more attacks of this sort if we continue to leverage the same security stance and tools that we’ve used for decades within perimetered locations like data centers and secure telecommunication facilities.

In this article, I will go through a few of the widely cited edge breaches in the industry and highlight how a zero trust security model optimized for the unique challenges of the edge, such as the one employed by ZEDEDA, would have helped prevent them from happening.

In a hack reminiscent of Verkada, the 2016 Mirai virus infected millions of cameras and turned them into bots that together launched a massive DDoS attack on upstream networks, briefly taking down the internet in the northeastern US. Fewer than twenty username and password combinations unlocked all of those cameras, because the developers made it too easy to skip changing these default credentials — or impossible to change them at all. Often this was due to prioritizing usability and instant gratification for users over security.

Another commonly cited example is the massive Target data breach in 2014, in which attackers accessed millions of customers’ credit card numbers by way of a networked HVAC system. The hackers stole credentials from an HVAC contractor and were able to reach the payment system because the operations network hosting the HVAC system wasn’t properly segmented from the IT network.

In a final example, the 2010 Stuxnet breach involved malware that was loaded into process control systems by using a USB flash drive to bypass the network air gap. The worm then propagated across the internal process control network, scanning for Siemens S7 software on industrial PCs. When successful, the virus would send unexpected commands to PLCs controlling industrial processes while giving the operators a view of normal operation.

Viruses like Stuxnet that focus on compromising industrial systems are especially concerning because attacks can lead to immediate loss of production, or worse, loss of life. By contrast, breaches of IT systems typically play out over long periods of time, compromising privacy, financial data, and IP.

With these examples in mind, what is unique about the proverbial edge that makes security such a challenge?

  • Part of the value of IoT and edge computing comes from having devices connected across the organization, providing a holistic view of operations. Over time we will see edge device deployments grow into the trillions, and traditional data center solutions for security and management are not designed for this kind of scale.
  •  Distributed edge computing resources rarely have the defense of four physical walls or a traditional network perimeter. This requires a security approach that assumes that these resources can be physically tampered with and doesn’t depend upon an owned network for defense.
  • The edge is at the convergence of the physical and digital worlds. In addition to a highly heterogeneous landscape of technologies, we also have to account for diverse skill sets spanning IT and OT (e.g. network and security admins, DevOps, production, quality and maintenance engineers, data scientists).
  •  As OT and IT converge at the edge, each organization’s often conflicting priorities must be considered. While OT typically cares about uptime and safety, IT prioritizes data security, privacy and governance. Security solutions must balance these priorities in order to be successful.
  • Many IoT devices are too resource-constrained to host security measures such as encryption, plus the broad install base of legacy systems in the field was never intended to be connected to broader networks, let alone the internet. Because of these limitations, these devices and systems need to rely on more capable compute nodes immediately upstream to serve as the first line of defense, providing functions such as root of trust and encryption.

The Verkada hack and its predecessors make it clear that edge computing requires a zero trust architecture that addresses the unique security requirements of the edge. Zero trust begins with a basic tenet — trust no one and verify everything.

At ZEDEDA, we have built an industry-leading zero trust security model into our orchestration solution for distributed edge computing. This robust security architecture is manifested in our easy-to-use, cloud-based UI and is rooted in the open source EVE-OS which is purpose-built for secure edge computing.

ZEDEDA contributed EVE-OS to form Project EVE in 2019 as a founding member of the Linux Foundation’s LF Edge organization, with the goal of delivering an open source, vendor-agnostic and standardized foundation for hosting distributed edge computing workloads. EVE-OS is a lightweight, secure, open, universal and Linux-based distributed edge operating system with open, vendor-neutral APIs for remote lifecycle management. The solution can run on any hardware (e.g., x86, Arm, GPU) and leverages different hypervisors and container runtimes to ensure policy-based isolation between applications, host hardware, and networks. The Project EVE community now includes over 60 unique developers, more than half of whom are not affiliated with ZEDEDA.

EVE-OS Zero Trust Components

Let’s take a look at the individual components of the EVE-OS zero trust security framework.

  • EVE-OS leverages the cryptographic identity created in the factory or supply chain in the form of a private key generated in a hardware security module (e.g., TPM chip). This identity never leaves that chip, and the root of trust is also used to store additional keys (e.g., for an application stack such as Azure IoT Edge). In turn, the public key is stored in the remote console (e.g., ZEDCloud, in the case of ZEDEDA).
  • An edge compute node running EVE-OS leverages its silicon-based trust anchor (e.g., TPM) for identity and communicates directly with the remote controller to verify itself. This eliminates the need for a username and password on each edge device in the field; instead, all access is governed through role-based access control (RBAC) in a centralized console. Hackers with physical access to an edge computing node have no way of logging into the device locally.
  • EVE-OS has granular, software-defined networking controls built in, enabling admins to govern traffic between applications, compute resources, and other network resources based on policy. The distributed firewall can be used to govern communication between applications on an edge node and on-prem and cloud systems, and detect any abnormal patterns in network traffic. As a bare metal solution, EVE-OS also provides admins with the ability to remotely block unused I/O ports on edge devices such as USB, Ethernet and serial. Combined with there being no local login credentials, this physical port blocking provides an effective measure against insider attacks leveraging USB sticks.
  • All of these tools are implemented in a curated, layered fashion to establish defense in depth with considerations for people, process, and technology.
  • All features within EVE-OS are exposed through an open, vendor-neutral API that is accessed remotely through the user’s console of choice. Edge nodes block unsolicited inbound instructions, instead reaching out to their centralized management console at scheduled intervals and establishing a secure connection before implementing any updates.
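The identity and check-in flow described above can be sketched as follows. This is a simplified illustration, not the actual EVE-OS or ZEDCloud API: the class names, the random seed standing in for a TPM-held private key, and the SHA-256 fingerprint standing in for certificate-based attestation are all assumptions made for the sketch.

```python
import hashlib
import secrets

class Controller:
    """Stands in for a centralized management console (e.g., ZEDCloud)."""

    def __init__(self):
        self.enrolled = {}   # device_id -> expected public fingerprint
        self.pending = {}    # device_id -> queued config updates

    def enroll(self, device_id, fingerprint):
        # The public identity is registered ahead of deployment
        # (factory or supply chain), enabling zero touch onboarding.
        self.enrolled[device_id] = fingerprint

    def check_in(self, device_id, fingerprint):
        # Only a device presenting its expected identity gets config back;
        # there are no usernames or passwords to steal.
        if self.enrolled.get(device_id) != fingerprint:
            raise PermissionError("unknown or tampered device")
        return self.pending.pop(device_id, [])

class EdgeNode:
    """Device side: the private identity never leaves the node."""

    def __init__(self, device_id):
        self.device_id = device_id
        self._private_seed = secrets.token_bytes(32)  # stays on "chip"
        self.fingerprint = hashlib.sha256(self._private_seed).hexdigest()
        self.applied = []

    def poll(self, controller):
        # The node reaches *out* on a schedule; it accepts no
        # unsolicited inbound instructions.
        for update in controller.check_in(self.device_id, self.fingerprint):
            self.applied.append(update)
```

In this sketch, an enrolled node receives its queued updates on the next poll, while a device claiming the same ID but holding a different identity is rejected outright.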

Returning to the above examples of security breaches, what would the impact of these attacks have looked like if the systems were running on top of EVE-OS? In short, there would have been multiple opportunities for the breaches to be prevented, or at least discovered and mitigated immediately.

  • In the Verkada and Mirai examples, the entry point would have had to be the camera operating system itself, running in isolation on top of EVE-OS. However, this would not have been possible because EVE-OS itself has no direct login capabilities. The same benefit would have applied in the Target example, and in the case of Stuxnet, admins could have locked down USB ports on local industrial PCs to prevent a physical insider attack.
  • In all of these example attacks, the distributed firewall within EVE-OS would have limited the communications of applications, intercepting any attempts of compromised devices to communicate with any systems not explicitly allowed. Further, edge computing nodes running EVE-OS deployed immediately upstream of the target devices would have provided additional segmentation and protection.
  • EVE-OS would have provided detailed logs of all of the hackers’ activities. It’s unlikely that the hackers would have realized that the operating system they breached was actually virtualized on top of EVE-OS.
  • Security policies established within the controller and enforced locally by EVE-OS would have detected unusual behavior by each of these devices at the source and immediately cordoned them off from the rest of the network, preventing hackers from inflicting further damage.
  • Centralized management from any dashboard means that updates to applications and their operating systems (e.g. a camera OS) could have been deployed to an entire fleet of edge hardware powered by EVE-OS with a single click. Meanwhile, any hacked application operating system running above EVE-OS would be preserved for subsequent forensic analysis by the developer.
  • The zero-trust approach, comprehensive security policies, and immediate notifications would have drastically limited the scope and damage of each of these breaches, preserving each company’s brand in addition to mitigating the direct effects of the attack.
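The enforcement pattern running through these examples boils down to allow-list egress filtering with logging. A minimal sketch, assuming a hypothetical rule format (this is not EVE-OS policy syntax, and the application and host names are made up):

```python
# Minimal sketch of allow-list egress enforcement with logging; the
# rule format and app names are hypothetical, not EVE-OS policy syntax.

ALLOWED = {
    # Each application may talk only to explicitly permitted endpoints.
    "camera-app": {("video.example.com", 443)},
}

def evaluate(app, dest_host, dest_port, log):
    """Allow only explicitly permitted flows; log and block the rest."""
    if (dest_host, dest_port) in ALLOWED.get(app, set()):
        return True
    log.append(f"blocked {app} -> {dest_host}:{dest_port}")
    return False

flow_log = []
# The camera's own cloud service is reachable...
assert evaluate("camera-app", "video.example.com", 443, flow_log)
# ...but a Mirai-style attempt to reach an arbitrary target is
# blocked and recorded for later forensics.
assert not evaluate("camera-app", "victim.example.net", 80, flow_log)
```

Because everything not explicitly allowed is denied and logged, a compromised application both loses its ability to reach command-and-control or DDoS targets and leaves a trail for the security policies described above to act on.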

The diverse technologies and expertise required to deploy and maintain edge computing solutions can make security especially daunting. The shared technology investment of developing EVE-OS through vendor-neutral open source collaboration is important because it provides a high level of transparency and creates an open anchor point around which to build an ecosystem of hardware, software and services experts. The open, vendor neutral API within EVE-OS prevents lock-in and enables anyone to build their own controller. In this regard, you can think of EVE-OS as the “Android of the Edge”.

ZEDEDA’s open edge ecosystem is unified by EVE-OS and enables users with choice of hardware and applications for data management and analytics, in addition to partner applications for additional security tools such as SD-WAN, virtual firewalls and protocol-specific threat detection. These additional tools augment the zero trust security features within EVE-OS and are easily deployed on edge nodes through our built-in app marketplace based on need.

Finally, in the process of addressing all potential threat vectors, it’s important not to make security procedures so cumbersome that users try to bypass key protections, or refuse to use the connected solution at all. Security usability is especially critical in IoT and edge computing due to the highly diverse skill sets in the field. For example, while the developers of many smart cameras that have since been hacked made it easy for users to bypass the password change for instant gratification, EVE-OS provides similar zero touch usability without compromising security by automating the creation of a silicon-based digital ID during onboarding.

Our solution is architected to streamline usability throughout the lifecycle of deploying and orchestrating distributed edge computing solutions so users have the confidence to connect and can focus on their business. While the attack surface for the massive 2020 SolarWinds hack was the centralized IT data center vs. the edge, it’s a clear example of the importance of having an open, transparent foundation that enables you to understand how a complex supply chain is accessing your network.

At ZEDEDA, we believe that security at the distributed edge begins with a zero-trust foundation, a high degree of usability, and open collaboration. We are working with the industry to take the guesswork out so customers can securely orchestrate their edge computing deployments with choice of hardware, applications, and clouds, with limited IT knowledge required. Our goal is to enable users to adopt distributed edge computing to drive new experiences and improve business outcomes, without taking on unnecessary risk.

To learn more about our comprehensive, zero-trust approach, download ZEDEDA’s security white paper. And to continue the conversation, join us on LinkedIn.

The Linux Foundation Supports Asian Communities

By Blog, LF Edge, Linux Foundation News

The Linux Foundation and its communities are deeply concerned about the rise in attacks against Asian Americans and condemn this violence. It is devastating to hear over and over again of the attacks and vitriol against Asian communities, which have increased substantially during the pandemic.

We stand in support with all those that have experienced this hate, and to the families of those who have been killed as a result. Racism, intolerance and inequality have no place in the world, our country, the tech industry or in open source communities.

We firmly believe that we are all at our best when we work together, treat each other with respect and equality and without hate or vitriol.

This blog originally ran on the Linux Foundation website.

Nominations are Open for the Annual EdgeX Community Awards!

By Blog, EdgeX Foundry

Written by Aaron Williams, LF Edge Developer Advocate

It’s our favorite time of the year – when we take a step back, reflect on what we have accomplished, and honor the members of our community who have gone above and beyond to support EdgeX Foundry.  We are pleased to announce the launch of nominations for the 4th Annual EdgeX Community Awards.

Milestones

Back in 2018 when we started the awards, we had about 1,000 Docker downloads and about 20 contributors.  Today, we are well over 7 million downloads and have 4 times the number of contributors.

In 2020, we expanded into Asia, especially China, as demonstrated by the amazing turnout at EdgeX Foundry China Day 2020.  We also kicked off the EdgeX Foundry Adopter Series, where the community learns more about how companies around the world use EdgeX.  In addition to two more successful releases, we also announced the creation of the Open Retail Reference Architecture (ORRA) – a project to deliver a base edge foundation to the retail industry – with a presentation from IBM, Intel, and HP.  There were many more events and milestones (like debuting our brand-new website) that we could dive into, but we want to focus on the opening of nominations for the EdgeX Awards!

How to Participate

If you are already a member of the EdgeX Foundry community, please take a moment and visit our EdgeX Awards page and nominate the community member that you feel best exemplified innovation and/or contributed leadership and helped EdgeX Foundry advance and grow over the last year.  Nominations are being accepted through April 7, 2021.

The Awards

Innovation Award:  The Innovation Award recognizes individuals who have contributed the most innovative solution to the project over the past year.

Previous Winners:

2018- Drasko Draskovic (Mainflux) and Tony Espy (Canonical)

2019- Trevor Conn (Dell) and Cloud Tsai (IOTech)

2020- Michael Estrin (Dell Technologies) and James Gregg (Intel)

Contribution Award:  The Contribution Award recognizes individuals who have contributed leadership and helped EdgeX Foundry advance and continue momentum in the past year. This includes individuals who have led work groups, special projects, and/or contributed large amounts of code, bug fixes, marketing material, documentation, or otherwise significantly advanced the efforts of EdgeX with their drive and leadership.

Previous Winners:

2018- Andy Foster (IOTech) and Drasko Draskovic (Mainflux)

2019- Michael Johanson (Intel) and Lenny Goodell (Intel)

2020- Bryon Nevis (Intel) and Lisa Ranjbar (Intel)

From top left to right: Keith Steele, Maemalynn Meanor, Bryon Nevis, Michael Estrin, Lisa Rashidi-Ranjbar, James Gregg, Aaron Williams, Jim White

To learn more about EdgeX Foundry, visit our website or join our Slack to ask questions and engage with community members. If you are not already a member of our community, it is really easy to join: simply visit our wiki page and/or check out our GitHub.  And you can follow us on Twitter and LinkedIn.

 

Open Horizon Begins Second Mentorship Term, Looks Back on the First

By Blog, Open Horizon

Written by Joe Pearson, Open Horizon Chair of the Technical Steering Committee and Edge Computing and Technology Strategist at IBM

This spring, the open-source Open Horizon project continued working with the LFX Mentorship Program.  As part of the LF Edge umbrella organization, Open Horizon had access to both the Mentorship Program and the COVID-related funding provided by the Linux Foundation and Intel, which paid stipends to participating mentees.

Similar to our previous experience last Fall, we had a wealth of potential applicants to choose from.  Thanks in large part to being featured on the LFX Mentoring home page, our applicant pool grew from 30 last term to 43 this term.  This time, the range of educational experience from candidates was even wider – from sophomores to doctoral candidates.  The wealth of experience was much greater this term, and the drive and determination was incredible.  Our methods of choosing the top candidates last time narrowed the pool from 30 to 12, but this time only took us from 43 to 30.  After conducting interviews and matching applicant experience to our mentorship opportunities, we selected the final four candidates.

Two mentors returned for the Spring term, and we added two new mentors.  Returning were:

  • David Booz (Dave) – Chief Architect and one of the project founders, Dave is also Chair of the Agent Working Group and a member over the last six years. Dave understands the complete breadth and depth of the project code.
  • Liang Wang (Los) – Senior Technical Staff Member (STSM), Architect, and Chair of the China Manufacturing Special Interest Group (SIG). Los is creating a community around manufacturing and Industry 4.0 use cases.

The two new mentors were Bruce and Ben.

Dave and Bruce decided to collaborate and provide a challenging mentorship opportunity to the mentees by providing a linked project to work on.  One mentee would collaborate with the Agent Working Group and one with the Management Hub Working Group.  Together they would add the ability to support shared secrets.  On the Agent side, Dave chose:

  • Debabrata Mandal (Deb) – Deb is a final year student at IIT Bombay in Mumbai. Deb described his current work, saying: “I am currently working on an issue … related to fetching the logging information from the specified service using the service url. While solving this issue I am getting a more detailed idea of the working of the agent component.”

Bruce wanted to work with:

  • Megha Varshney – Megha is a final year undergraduate at Indira Gandhi Delhi Technical University for Women. She is: “… currently working on [an] issue wherein I will be adding an API which would provide org specific status info.”

Los cooked up a challenge this year by adding support for the RISC-V microarchitecture to the Open Horizon project.  For that effort, he selected:

  • Quang Hiệp Mai (Mike) – Mike is in the second year of his master’s degree program at Soongsil University in South Korea. He said: “Since RISC-V is blooming in China, supporting RISC-V arch for the agent device would attract more users in China market. So first I will port some of the examples to RISC-V arch then I will help to port Anax to RISC-V”

And Ben wanted to work with a mentee to evaluate and choose an appropriate technology and subsequently to migrate pipelines to that choice.  He wanted to work with:

  • Mustafa Al-tekreeti – Mustafa is a Ph.D. educator transitioning to a career in software engineering. He explained: “I am working on developing a CI/CD pipeline for Anax, one of the Open-Horizon projects, using one of the state-of-art technologies. First, we evaluate three of the available options (Circle-CI, GitHub Actions, and Jenkins Job Builder hosted by LF Edge) and summarize their pros and cons. Then, we implement the pipeline using the chosen technology.”

We’re now three weeks into the Spring 2021 term and beginning to make plans for a potential Summer 2021 term, which would open for applicants beginning April 15th.  In the meantime, we recently recorded a webinar with the graduates from our Fall 2020 term.  It was a great opportunity to get everyone together on a call and discuss what we accomplished and learned and to provide helpful advice to future applicants.

Home Edge Earns the CII Best Practices Badge

By Blog, Home Edge


Written by Taras Drozdovsky and Taewan Kim, LF Edge Home Edge Committer & Staff Engineer at Samsung Research as well as Peter Moonki Hong, LF Edge Governing Board member & Principal Engineer at Samsung Research

Every year the number of Free/Libre and Open Source Software (FLOSS) projects increases and the community has a goal to maintain high quality code, documentation, test coverage and, of course, a high level of security.

The Linux Foundation, in collaboration with major IT companies through its Core Infrastructure Initiative (CII), has collected best practices for FLOSS projects and published these criteria as the CII Best Practices.

In December 2020, LF Edge’s Home Edge team set itself the goal of getting a CII Best Practices Badge. After working tirelessly on the requirements, we are happy to announce Home Edge has received the CII Best Practices Badge!

Thank you to the committers and key contributors of Home Edge, Taras Drozdovskyi, Taewan Kim, Somang Park, MyeongGi Jeong, Suresh L C, Sunchit Sharma, Dwarkaprasad Dayama, Peter Moonki Hong. We would also like to thank Jim White with EdgeX Foundry for helping to guide us and the entire LF Edge/Linux Foundation team for their support.

Benefits achieved on the way to getting the CII Best Practices passing badge:

  • Improved documentation:
    • Security and testing policies
    • How-to-contribute guide
    • Descriptions of external APIs
  • Improved the build and testing system:
    • CI infrastructure: GitHub Actions – 20 checks
    • Integrated external code-analysis tools:
      • gofmt – 92%
      • go vet – 100%
      • golint – 76%
      • SonarCloud: Security Hotspots – 37 -> 0; Code Smells – 253 -> 50; Duplications – 7.8% -> 3%
    • Improved security analysis:
      • Integrated CodeQL Analysis and LGTM services: 17 -> 0 security alerts

We have reached a high level of code quality, but continue to improve our code!

Improving the Edge Orchestration project infrastructure (using many tools for analyzing code and searching for vulnerabilities) allowed us to increase not only the level of security, but also the reliability of our product. We hope that improving the documentation will reduce the time it takes to get started with the project, and therefore increase the number of external developers participating in it. Their advice and input are very important to us.

It should also be noted that there were many areas of self-development for the members of the project team: developers became testers, technical writers, security officers. This is a wonderful experience.

Next step of Edge Orchestration team

Further improving the Edge-Home-Orchestration project and achieving “silver” and “gold” badges. Implementation of OpenSSF protected code development practices into Edge-Home-Orchestration.

If you would like to learn more about the use cases for Home Edge or more technical details, check out the video of our October 2020 webinar. As part of the “On the Edge with LF Edge” webinar series, we shared the general overview for the project, how it fits into LF Edge, key features of the Coconut release, the roadmap, how to get involved and the landscape of the IoT Home Edge.

If you would like to contribute to Home Edge or share feedback, find the project on GitHub, the LF Edge Slack channel (#homeedge) or subscribe to our email list (homeedge-tsc@lists.lfedge.org). We welcome new contributors to help make this project better and expand the LF Edge community.

Additional Home Edge Resources:

1. Coconut release code: https://github.com/lf-edge/edge-home-orchestration-go/releases/tag/coconut

2. Release notes

3. Home Edge Wiki: https://wiki.lfedge.org/display/HOME/Home+Edge+Project

 

State of the Edge 2021 Report

By Blog, LF Edge, State of the Edge

Written by Jacob Smith, State of the Edge Co-Chair and VP Bare Metal Strategy at Equinix Metal

On Wednesday, LF Edge published their 2021 edition of the State of the Edge Report.  As a co-chair and researcher for the report, I want to share insight into how this year’s report was created. In development, we focused on three key questions to ensure the report covered essential subject matter:

  • What’s new in edge computing since last year?
  • What are the challenges and opportunities facing the ecosystem?
  • How can we help?

Designed to be vendor-neutral and free of bias, the report drew on content from multiple authors, and an independent editor balanced the material to better serve the edge computing community. Several experts weighed in on what they were paying attention to in the edge space and identified four major stand-out areas:

  • Critical infrastructure (data centers)
  • Hardware and silicon
  • Software as a whole
  • Networks and networking

It’s abundantly clear there is growth in edge computing. Over the last year, COVID-19 accelerated nearly every facet of digital transformation, and this factored into the growth of the edge computing space. But, it also might just be a long overdue increase in digital innovation and transformation.

We will continue to discover more about the edge and how to use it to optimize the computing ecosystem. For now, our attention is focused on how diverse the edge is and all of the solutions that are waiting to be discovered.

Summarized from Equinix Metal’s The State of Your Edge. Read the full story here. Download the report here.   

 

 

Scaling Ecosystems Through an Open Edge (Part Three)

By Blog, LF Edge, Project EVE, Trend

Getting to “Advanced Class”, Including Focusing on Compounding Value with the Right Partners

By Jason Shepherd, LF Edge Governing Board Chair and VP of Ecosystem at ZEDEDA

 


This blog originally ran on the ZEDEDA Medium blog. Click here for more content like this.

Thanks for continuing to read this series on ecosystem development and the importance of an open edge for scaling to the true potential of digital transformation — interconnected ecosystems that drive new business models and customer outcomes. In parts one and two, I talked about various ecosystem approaches spanning open and closed philosophies, the new product mindset related to thinking about the cumulative lifetime value of an offer, and the importance of the network effect and various considerations for building ecosystems. This includes the dynamics across both technology choices and between people.

In this final installment I’ll dive into how we get to “advanced class,” including the importance of domain knowledge and thinking about cumulative value add vs. solutions looking for problems that can be solved in other, albeit brute force, ways.

Getting to advanced class

I talked about the importance of compounding value in part one of this series and will touch on this concept a little more here, in addition to the criticality of domain knowledge. Speaking of the latter, hundreds of IoT platform providers claim they can deliver a use case like Predictive Maintenance (PdM), but few actually have the domain knowledge to do it. A PdM solution not only requires analytics tools and data science but also an intimate knowledge of failure patterns based on various attributes (vibration, oil particulates, temperature, motor current, etc.). Often an operator on a factory floor who has “been there, done that” sits alongside a data scientist to help program the analytics algorithms based on his/her tribal knowledge. A former colleague once worked on a PdM solution with a line operator named Brad, who was the expert that helped the data scientist understand which machine and process behaviors were non-issues, despite looking like bad news, and vice versa. They jokingly called the end result “Bradalytics”.

Further, there’s a certain naivety in the industry today in terms of pushing IoT solutions when there’s already established “good enough” precedent. In the case of PdM, manual data acquisition with a handheld vibration meter once a month or so is a common practice to accurately predict impending machine failures because these failures don’t typically happen overnight. An industrial operator can justify this manual data collection as OpEx and the only thing permanently installed on their machines are brass pads that indicate where to apply the contact-based handheld sensor, from which data is manually loaded into an analytical tool.

Similar manual procedures are all too common in other industries, such as USB data loggers used to monitor temperature in cold chain logistics operations. In another example, structural engineers have historically used the practice of attaching an expensive sensor package to monitor the structural integrity of a bridge or building for a few weeks, only to then move this equipment on to the next piece of infrastructure.

The promise of IoT is that this instrumentation is becoming so inexpensive that it can be deployed permanently everywhere for acquiring real-time data from the physical world; however, the value of deploying this infrastructure still must be greater than the cost of deploying and maintaining it through its full lifecycle in addition to the associated risk in terms of maintaining security and privacy.

Don’t get me wrong — PdM enabled by IoT is a valuable use case. Even though machines typically fail over a long period of time, it’s expensive to roll a truck to do repairs, especially if an industrial process experiences a loss of production. For example, downtime in a major jetliner factory can be upwards of $20k a minute! It’s just important to think about the big picture and whether you’re trying to solve a problem that has already been solved.

Looking at the bigger picture, a supply chain linked to a manufacturing process is a great example of an ecosystem that easily justifies an IoT solution for real-time monitoring and analytics. In a multi-stage manufacturing operation, the cost of discovering a flaw within a given part increases steadily with each process stage. The cost is even higher if that part gets into the supply chain and it’s higher yet if a defective product gets to a customer, not to mention impacting the manufacturer’s brand. Here the cumulative value of instrumenting throughout the product lifecycle is very high and definitely warrants a solution that can report issues the moment they occur.

Speaking of the holistic value that I touched on in part one and above, the real potential is not just the remote monitoring of a single machine, rather a whole fleet of interconnected machines. Imagine a world where you can send a tech out with work instructions to preventatively repair a fleet of machines in order to get the most out of a truck roll to a given location. This is similar to the story of “Fred and Phil” in part one, in which Phil wanted the propane tanks to be bone dry before rolling a truck. And on top of that — imagine that the algorithm could tell you that it will cost you less money in the long run to replace a specific machine altogether, rather than trying to repair it yet again.

This goes beyond IoT and AI: last I checked, systems ranging from machines to factory floors and vehicles are composed of subsystems from various suppliers. As such, open interoperability is also critical when it comes to Digital Twins. I think this is a great topic for another blog!

Sharpening the edge

In my recent Edge AI blog I highlighted the new LF Edge taxonomy white paper and how we think this taxonomy will help put an end to the current market confusion caused by various industries (cloud, telco, IT, OT/industrial, consumer) defining the edge with a strong bias towards their own interests and point of view, not to mention taxonomies that use ambiguous terms such as “thin and thick” or “near and far” that mean different things to different people. The team at ZEDEDA had a heavy hand in shaping this with the LF Edge community. In fact, a lot of the core thinking stems from my “Getting a Deeper Edge on the Edge” blog from last year.

As highlighted in the paper, the IoT component of the Smart Device Edge (e.g., compared to client devices like smartphones, PCs, etc. that also sit within this sub-category) spans a single node with 256MB of memory up to a small server cluster, deployed outside of a physically-secure data center or Modular Data Center (MDC). The upper end of this spectrum is increasingly blurring into the Kubernetes paradigm thanks to efforts like K3s. However, due to resource constraints and other factors, these nodes will not have the exact same functionality as higher edge tiers leveraging full-blown Kubernetes.

Below the Smart Device Edge is the “Constrained Device Edge”. This sub-tier consists of a highly fragmented landscape of microcontroller-based devices that typically have their own custom OTA tools. Efforts like Microsoft’s Sphere OS are trying to address this space, and it’s important to band together on efforts like this given the immense fragmentation at this tier.

Ultimately, it’s important to think of the edge to cloud as a continuum: there isn’t an “industrial edge” vs. an “enterprise edge” vs. a “consumer edge”, as some would contend. Rather, it’s about building an ecosystem of technology and service providers that create necessarily unique value-add on top of more consistent infrastructure, while taking into account the necessary tradeoffs across the continuum.

We build the guts so you can bring the glory

You can think of ZEDEDA’s SaaS-based orchestration solution as being like VMware Tanzu in principle (in the sense that we support both VMs and containers) but optimized for IoT-centric edge compute hardware and workloads at the “Smart Device Edge” as defined by the LF Edge taxonomy. We’re not trying to compete with great companies like VMware and Nutanix who shine at the On-prem Data Center Edge up through the Service Provider Edge and into centralized data centers in the cloud. In fact, we can help industry leaders like these, as well as telcos and cloud service providers, extend their offerings, including managed services, down into the lighter compute edges.

Our solution is based on the open source Project EVE within LF Edge which provides developers with maximum flexibility for evolving their ecosystem strategy with their choice of technologies and partners, regardless of whether they ultimately choose to take an open or more closed approach. EVE aims to do for IoT what Android did for the mobile component of the Smart Device Edge by simplifying the orchestration of IoT edge computing at scale, regardless of applications, hardware or cloud used.

The open EVE orchestration APIs also provide an insurance policy in terms of developers and end users not being locked into only our commercial cloud-based orchestrator. Meanwhile, it’s not easy to build this kind of controller for scale, and ours places special emphasis on ease of use in addition to security. It can be leveraged as-is by end users, or white-labeled by solution OEMs to orchestrate their own branded offers and related ecosystems. In all cases, the ZEDEDA cloud is not in the data path so all permutations of edge data sources flowing to any on-premises or cloud-based system are supported across a mix of providers.

As I like to say, we build the guts so our partners and customers can bring the glory!

In closing

I’ll close with a highlight of Clayton Christensen’s classic book The Innovator’s Dilemma. In my role I talk with a lot of technology providers and end users, and I often see people stuck in this pattern, thinking “I’ll turn things around if I can just do what I’ve always done better”. This goes not just for large incumbents but also for fairly young startups!

One interaction in particular that has stuck with me over the years was an IoT strategy discussion with a large credit card payments provider. They were clearly trapped in the Innovator’s Dilemma, wondering what to do about mobile payment players like Square that had really disrupted their market. As they were talking about how they didn’t want another Square situation to happen again, I asked them, “Have you thought about when machines start making payments?” The fact that this blew their minds is exactly why another Square is going to happen to them if they don’t get outside of their own headspace.

When approaching digital transformation and ecosystem development it’s often best to start small; however, it’s equally important to think holistically for the long term. This includes building on an open foundation that you can grow with and that enables you to focus your investments on business relationships and innovation rather than reinvention. Sure, you may be able to build your own orchestration solution like we have at ZEDEDA, or your own full turn-key stack, but if you do either in a proprietary silo then you’ve just locked yourself out of the biggest potential — creating a network effect across various markets through increasingly interconnected ecosystems! Again, imagine if the internet was owned by one company.

We invite you to get involved with Project EVE, and more broadly speaking, LF Edge, as we jointly build out this foundation. You’re of course free to build your own controller with EVE’s open APIs, or reach out to us here at ZEDEDA and we can help you orchestrate your commercial IoT edge ecosystem. We’re here to help you start small and scale big, and we have a growing entourage of technology and service partners to drive outcomes for your business.

Open Horizon Moves to Stage 2!

By Blog, LF Edge, Open Horizon

Written by Joe Pearson, Open Horizon Chair of the Technical Steering Committee and Edge Computing and Technology Strategist at IBM

On Thursday, March 4, 2021 the Open Horizon open source project was officially promoted to stage two maturity.  This was the culmination of a process that began on January 27, when I presented the proposal to the LF Edge Technical Advisory Committee (TAC).

Introduction

The Open Horizon software project was seeded by a contribution from IBM to the Linux Foundation in April 2020.  The project’s mission: to develop open-source software that manages the service software lifecycle of containerized applications on edge compute nodes.  Since that initial code contribution, the project and community have grown as they have attracted new project partners and contributors.

About Stage Two

Reaching stage two is a significant milestone for open-source software projects because it is a strong indicator of both the organization’s healthy growth and stakeholders’ interest in using the software as a solution for specific use cases.  In the LF Edge umbrella organization, projects have to meet the following criteria to achieve stage two:

  • Past community participation met previous growth plans
  • An ongoing flow of merged code commits and other contributions
  • Documented proofs of concept using the software
  • Collaboration with other LF Edge projects
  • Growth plan, including projected feature sets and community targets

Getting Started

Since the project began last spring, it has formed a Technical Steering Committee (TSC) and began hosting public meetings last July.  The TSC then authorized the creation of seven Working Groups, which are responsible for overseeing the daily work of the project.  Last August, a Special Interest Group (SIG) proposal for the formation of a group to promote manufacturing and Industry 4.0 use cases was presented and approved.

Growing the Community

The project has also been actively involved in reaching out to students and universities.  Thanks to funding from the Linux Foundation and Intel, the LFX Mentorship program was able to provide stipends for mentees who complete a term with Linux Foundation projects.  Open Horizon joined the LFX program and was able to mentor four students and early career persons from October to December 2020.  This mentorship program continues, and Open Horizon has just begun the Spring 2021 term with four new mentees.

Using the Platform

Last year, IBM created a commercially supported distribution of Open Horizon that was named IBM Edge Application Manager (IEAM).  We’ve seen Open Horizon, or Open Horizon-based distributions, being used in an autonomous ship, vertical farming solutions, regional climate protection management, shipping ports, and even the International Space Station to deliver applications and related machine learning assets.  And last week HP Inc, Intel, and IBM presented a webinar to invite retailers and vendors to participate in creating an Open Retail Reference Architecture based on the EdgeX Foundry, Open Horizon, and SDO projects.

Creating an Open Edge Computing Framework

The LF Edge organization provides a structure to support open edge computing projects.  This should allow those projects to collectively form a framework for edge computing if they support common standards and interoperability.  The Open Horizon project is furthering that goal by working with other LF Edge projects to create combined end-to-end edge computing solutions, by supporting existing open standards, and by creating new standards where none exist today.  An example of supporting existing standards is zero-touch device provisioning, where Open Horizon incorporates SDO services, a reference implementation of the FIDO IoT v1 specification.

Join Us

Now that Open Horizon has demonstrated its value as a platform and a community, it is preparing to expand the community by adding new SIGs, Partners, and contributors to the project.  To work with the project community to shape application delivery and lifecycle management within edge computing, consider attending one of the Working Group meetings, contributing code, working on standards, or even installing the software.

Additional Resources:

How do you manage applications on thousands of Linux hosts?

By Blog, Open Horizon

Written by Glen Darling, contributor to Open Horizon and Advisory Software Engineer at IBM

If they are Debian/Ubuntu-style distros with Docker installed, or Red Hat-style distros with Docker or Podman installed, Open Horizon can make this easy for you. Open Horizon is an LF Edge open source project, originally contributed by IBM, that is designed to manage application containers on potentially very large numbers of edge machines. IBM’s commercial distribution of Open Horizon, for example, supports 30,000 Linux hosts from a single Management Hub!

How can this scale be achieved? On each Linux host (called a “node” in Open Horizon), a small autonomous Open Horizon Agent must be installed. Since these Agents act autonomously, the need for connections to the central Open Horizon Management Hub is minimized, along with the required network traffic. In fact, the local Agent can continue to perform many of its monitoring and management functions for your applications even when completely disconnected from the Management Hub! Also, the Management Hub never initiates communications with any Agents, so no firewall ports need to be opened on your edge nodes. Each Agent is responsible for its own node and reaches out to contact the Management Hub to receive new information as appropriate based on its configuration.
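The pull-based pattern described above can be sketched roughly as follows. This is a simplified illustration only, not the Agent's actual implementation; `fetch_desired_state` is a hypothetical stand-in for the real Management Hub API:

```python
import time

# Simplified sketch of a pull-based agent loop: the node always dials out,
# so no inbound firewall ports are needed on the edge node.

def fetch_desired_state() -> dict:
    """Hypothetical stand-in for an outbound call to the Management Hub."""
    return {"services": ["mask-detector:1.2"]}

def run_agent(poll_interval: float = 60.0, iterations: int = 1) -> dict:
    """Poll the hub for desired state; keep last-known state if unreachable."""
    state: dict = {}
    for _ in range(iterations):
        try:
            state = fetch_desired_state()  # outbound request only
            # ...reconcile local containers against `state` here...
        except ConnectionError:
            pass  # stay autonomous: keep managing with last-known state
        time.sleep(poll_interval)
    return state

print(run_agent(poll_interval=0.0))
```

Because every connection originates at the node, the hub scales by serving requests rather than tracking and reaching into tens of thousands of devices.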

Open Horizon’s design also simplifies operations at scale. No longer will you need to maintain hard-coded lists of nodes and the software that’s appropriate for each of them. When a fleet of devices is diverse, maintaining complex, overlapping software requirements can quickly become unmanageable even at relatively small scale. Instead, Open Horizon lets you specify your intent in “policies”, and then the Agents, in collaboration with the Agreement Robots (AgBots for short) in the Management Hub, will work to achieve your intent across your whole fleet. You can specify policies for individual nodes, and/or policies for your services (collections of one or more containers deployed as a unit), and/or policies to govern deployment of your applications. Policies also enable your large data files, like neural network models (e.g., large edge weight files), to have lifecycles independent of your service containers.
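Conceptually, this intent-based placement boils down to matching a node's advertised properties against a deployment's constraints. Here is a toy sketch of that idea; real Open Horizon policies are JSON documents with a much richer constraint language, and the property names below are made up for illustration:

```python
# Toy sketch of intent-based matching: a workload is placed on a node only
# if the node's advertised properties satisfy every deployment constraint.
# (Hypothetical property names; real policies use JSON and richer syntax.)

def satisfies(node_properties: dict, constraints: dict) -> bool:
    """True if every constraint key/value is matched by the node."""
    return all(node_properties.get(k) == v for k, v in constraints.items())

node = {"arch": "arm64", "gpu": True, "location": "store-042"}
deployment_constraints = {"arch": "arm64", "gpu": True}

print(satisfies(node, deployment_constraints))
```

The operator never lists which nodes get which software; each agent evaluates the match locally, which is what keeps the model manageable across a diverse fleet.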

I recently presented a hands-on Open Horizon deployment example at Cisco Systems’ “DevNet 2020” developer conference. In this example I showed how to take an existing container, publish it to DockerHub, define Open Horizon artifacts to describe it as a service, and a simple deployment pattern to deploy it. The example application detects the presence or absence of a facial mask when provided an image over an HTTP REST API. Cisco developers, and most developers working with Linux machines, can use Open Horizon as I did in this example to deploy their software across small or large fleets of machines.

You can view my Cisco DevNet 2020 presentation at the link below:

Additional Resources:

Decentralized ID and Access Management (DIAM) for IoT Networks

By Blog, Hyperledger, LF Edge

Written by Nima Afraz, Connect Centre for Future Networks and Communications, Trinity College Dublin

The Hyperledger Telecom Special Interest Group, in collaboration with the Linux Foundation’s LF Edge initiative, has published a solution brief addressing the issues concerning centralized ID and Access Management (IAM) in IoT networks and introducing a distributed alternative using Hyperledger Fabric.

The ever-growing number of IoT devices means data vulnerability is an ongoing risk. Existing centralized IoT ecosystems have led to concerns about security, privacy, and data use. This solution brief shows that a decentralized ID and access management (DIAM) system for IoT devices provides the best solution for those concerns, and that Hyperledger offers the best technology for such a system.

The IoT is growing quickly. IDC predicts that by 2025 there will be 55.7 billion connected devices in the world. Scaling and securing a network of billions of IoT devices starts with a robust device identity. Data security also requires a strong access management framework that can integrate and interoperate with existing legacy systems. Each IoT device should carry a unique global identifier and have a profile that governs access to the device.
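One way to picture the identifier-plus-profile idea is the purely illustrative sketch below. Real deployments would use W3C-style decentralized identifiers anchored in a ledger such as Hyperledger Indy rather than a bare hash, and the `did:example` method name, profile fields, and helper functions here are all hypothetical:

```python
import hashlib

# Illustrative only: derive a stable, globally unique identifier for a
# device and gate access with a simple profile. Real systems would anchor
# a W3C DID on a ledger (e.g., Hyperledger Indy), not use a bare hash.

def device_id(manufacturer: str, serial: str) -> str:
    """Deterministic device identifier (hypothetical 'did:example' method)."""
    digest = hashlib.sha256(f"{manufacturer}:{serial}".encode()).hexdigest()
    return f"did:example:{digest[:16]}"

def authorized(profile: dict, requester: str, action: str) -> bool:
    """Check a request against the device's access profile."""
    return requester == profile["owner"] and action in profile["allowed_actions"]

profile = {"owner": "acme-logistics", "allowed_actions": {"read-telemetry"}}
did = device_id("Acme", "SN-0001")
print(did, authorized(profile, "acme-logistics", "read-telemetry"))
```

The point of decentralizing this is that no single authority issues or revokes the identifiers, so verification can happen against the shared ledger instead of one provider's database.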

In this solution brief, we propose a decentralized approach to validate and verify the identity of IoT devices, data, and applications. In particular, we propose using two frameworks from the Linux Foundation: Hyperledger Fabric for the distributed ledger (DLT) and Hyperledger Indy for the decentralized device IDs. These two blockchain frameworks provide the core components to address end-to-end IoT device ID and access management (IAM).

The Problem: IoT Data Security

Ambitious IoT use cases, such as smart transport, imply a massive volume of vehicle-to-vehicle (V2V) and vehicle-to-road communications that must be safeguarded against malicious activity and malfunctions due to single points of failure.

The Solution: Decentralized Identity

IoT devices collect, handle, and act on data as proxies for a wide range of users, such as a human, a government agency, or a multinational enterprise. With tens of billions of IoT devices to be connected over the next few years, numerous IoT devices may represent a single person or institution in multiple roles. And IoT devices may play roles that no one has yet envisioned.

A decentralized ID management system removes the need for any central governing authority and makes way for new models of trust among organizations. All this provides more transparency, improves communications, and saves costs.

The solution is to use Hyperledger technology to create a trusted platform for a telecom ecosystem that can support IoT devices throughout their entire lifecycle and guarantee a flawless customer experience. The solution brief includes a reference architecture and a high-level architecture view of the proof of concept (PoC) that IBM is working on with some enterprise clients. This PoC uses Hyperledger Fabric as described above.

Successful Implementations of Hyperledger-based IoT Networks

IBM and its partners have successfully developed several global supply-chain ecosystems using IoT devices, IoT network services, and Hyperledger blockchain software. Two examples of these implementations are Food Trust and TradeLens.

The full paper is available to read at: https://www.hyperledger.org/wp-content/uploads/2021/02/HL_LFEdge_WhitePaper_021121_3.pdf

Get Involved with the Group

To learn more about the Hyperledger Telecom Special Interest Group, check out the group’s wiki and mailing list. If you have questions, comments or suggestions, feel free to post messages to the list.  And you’re welcome to join any of the group’s upcoming calls to meet group members and learn how to get involved.

Acknowledgements

The Hyperledger Telecom Special Interest Group would like to thank the following people who contributed to this solution brief: Nima Afraz, David Boswell, Bret Michael Carpenter, Vinay Chaudhary, Dharmen Dhulla, Charlene Fu, Gordon Graham, Saurabh Malviya, Lam Duc Nguyen, Ahmad Sghaier Omar, Vipin Rathi, Bilal Saleh, Amandeep Singh, and Mathews Thomas.