
February 2021

Decentralized ID and Access Management (DIAM) for IoT Networks


Written by Nima Afraz, Connect Centre for Future Networks and Communications, Trinity College Dublin

The Telecom Special Interest group in collaboration with the Linux Foundation’s LF Edge initiative has published a solution brief addressing the issues concerning the centralized ID and Access Management (IAM) in IoT Networks and introducing a distributed alternative using Hyperledger Fabric.

The ever-growing number of IoT devices means data vulnerability is an ongoing risk. Existing centralized IoT ecosystems have led to concerns about security, privacy, and data use. This solution brief shows that a decentralized ID and access management (DIAM) system for IoT devices provides the best solution for those concerns, and that Hyperledger offers the best technology for such a system.

The IoT is growing quickly. IDC predicts that by 2025 there will be 55.7 billion connected devices in the world. Scaling and securing a network of billions of IoT devices starts with robust device identity. Data security also requires a strong access management framework that can integrate and interoperate with existing legacy systems. Each IoT device should carry a unique global identifier and have a profile that governs access to the device.

In this solution brief, we propose a decentralized approach to validate and verify the identity of IoT devices, data, and applications. In particular, we propose using two frameworks from the Linux Foundation: Hyperledger Fabric for the distributed ledger (DLT) and Hyperledger Indy for the decentralized device IDs. These two blockchain frameworks provide the core components to address end-to-end IoT device ID and access management (IAM).
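To make the idea of a decentralized device identifier concrete, here is a minimal sketch (purely illustrative; the `did:iotdev` method name and encoding are invented for this example and are not part of Hyperledger Indy or the solution brief). Because the identifier is derived from the device's own public key, no central registry is needed to mint it, and any party holding the key can recompute and verify it:

```python
import base64
import hashlib


def make_device_did(public_key_bytes: bytes) -> str:
    """Derive a DID-style identifier from a device's public key.

    The "iotdev" method name and the truncated-hash encoding are
    illustrative assumptions, not a real DID method specification.
    """
    digest = hashlib.sha256(public_key_bytes).digest()
    # Base32 keeps the identifier compact and case-insensitive.
    suffix = base64.b32encode(digest[:16]).decode().rstrip("=").lower()
    return f"did:iotdev:{suffix}"


# Deterministic: anyone with the public key derives the same identifier.
did = make_device_did(b"example-public-key")
```

The same derivation run by a verifier and by the device yields the same ID, which is what removes the need for a central issuing authority.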

The Problem: IoT Data Security

Ambitious IoT use cases such as smart transport imply a massive volume of vehicle-to-vehicle (V2V) and vehicle-to-road communications that must be safeguarded to prevent malicious activity and malfunctions caused by single points of failure.

The Solution: Decentralized Identity

IoT devices collect, handle, and act on data as proxies for a wide range of users, such as a human, a government agency, or a multinational enterprise. With tens of billions of IoT devices to be connected over the next few years, numerous IoT devices may represent a single person or institution in multiple roles. And IoT devices may play roles that no one has yet envisioned.

A decentralized ID management system removes the need for any central governing authority and makes way for new models of trust among organizations. All this provides more transparency, improves communications, and saves costs.

The solution is to use Hyperledger technology to create a trusted platform for a telecom ecosystem that can support IoT devices throughout their entire lifecycle and guarantee a flawless customer experience. The solution brief includes a reference architecture and a high-level architectural view of the proof of concept (PoC) that IBM is working on with some enterprise clients. This PoC uses Hyperledger Fabric as described above.

Successful Implementations of Hyperledger-based IoT Networks

IBM and its partners have successfully developed several global supply-chain ecosystems using IoT devices, IoT network services, and Hyperledger blockchain software. Two examples of these implementations are Food Trust and TradeLens.

The full paper is available to read at: https://www.hyperledger.org/wp-content/uploads/2021/02/HL_LFEdge_WhitePaper_021121_3.pdf

Get Involved with the Group

To learn more about the Hyperledger Telecom Special Interest Group, check out the group’s wiki and mailing list. If you have questions, comments or suggestions, feel free to post messages to the list.  And you’re welcome to join any of the group’s upcoming calls to meet group members and learn how to get involved.

Acknowledgements

The Hyperledger Telecom Special Interest Group would like to thank the following people who contributed to this solution brief: Nima Afraz, David Boswell, Bret Michael Carpenter, Vinay Chaudhary, Dharmen Dhulla, Charlene Fu, Gordon Graham, Saurabh Malviya, Lam Duc Nguyen, Ahmad Sghaier Omar, Vipin Rathi, Bilal Saleh, Amandeep Singh, and Mathews Thomas.

Linux Foundation, LF Networking, and LF Edge Announce Speaker Line-up for Open Networking & Edge Executive Forum, March 10-12


Technology leaders, change makers and visionaries from across the global networking and edge communities will gather virtually for this one-of-a-kind executive event focusing on deployment progress, 2021 priorities, challenges and more.

SAN FRANCISCO, February 25, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, along with co-hosts LF Networking, the umbrella organization fostering collaboration and innovation across the entire open networking stack, and LF Edge, the umbrella organization building an open source framework for the edge, today announced the speaker line-up for the Open Networking & Edge Executive Forum. The schedule can be viewed here and the speaker details can be viewed here.

Open Networking & Edge Executive Forum (ONEEF) is a special edition of Open Networking & Edge Summit, the industry’s premier open networking and edge event, gathering senior technologists and executive leaders from enterprises, telecoms and cloud providers for timely discussions on the state of the industry, imminent priorities, and insights into Service Provider, Cloud, Enterprise Networking, and Edge/IoT requirements.

ONEEF will take place virtually, March 10-12. Times vary each day to best accommodate the global audience. Attendees will be able to interact with speakers and attendees directly via chat, schedule 1:1 meetings and more as they participate in this community call to action. 

“ONEEF is a great opportunity for the community to come together virtually after a very hard year,” said Arpit Joshipura, General Manager, Networking, Edge, and IoT, The Linux Foundation. “We have an impressive line-up of speakers from across a diverse set of global organizations, ready to share their knowledge and passion about what’s next for our burgeoning industry. Hope you can join us!”

Confirmed Keynote Speakers Include:

  • Madeleine Noland, President, Advanced Television Systems Committee
  • Andre Fuetsch, Executive Vice President & Chief Technology Officer, AT&T Services, Inc.
  • Steve Mullaney, Chief Executive Officer & President, Aviatrix
  • Jacob Smith, Vice President, Bare Metal Marketing & Strategy, Equinix
  • Dr. Junlan Feng, Chief Scientist & General Manager, China Mobile Research
  • Sun Quiong, SDN Research Center Director, China Telecom Research Institute
  • Dr. Jonathan Smith, Program Manager, Information Innovation Office (I2O), DARPA
  • Tom Arthur, Chief Executive Officer, Dianomic
  • Chris Bainter, Vice President, Global Business Development, FLIR Systems
  • George Nazi, Global Vice President, Telco, Media & Entertainment Industry Solutions Lead, Google Cloud
  • Amol Phadke, Managing Director: Global Telecom Industry Solutions, Google Cloud
  • Shawn Zandi, Head of Network Engineering, LinkedIn
  • Tareq Amin, Group Chief Technology Officer, Rakuten
  • Johan Krebbers, IT Chief Technology Officer & Vice President, TaCIT Architecture, Shell
  • Pablo Espinosa, Vice President, Network Engineering, Target
  • Manish Mangal, Chief Technology Officer, Network Services, Tech Mahindra
  • Matt Trifiro, Chief Marketing Officer, Vapor IO
  • Subha Tatavarti, Sr. Director Technology Commercialization, Walmart
  • Said Ouissal, Founder & CEO, ZEDEDA

Registration for the virtual event is open and is just US$50. Members of The Linux Foundation, LF Networking and LF Edge can attend for free; members can contact us to request a member discount code. The Linux Foundation provides diversity and need-based registration scholarships for this event to anyone who needs them; for information on eligibility and to apply, click here. Visit our website and follow us on Twitter, Facebook, and LinkedIn for all the latest event updates and announcements.

Members of the press who would like to request a media pass should contact Jill Lovato.

ONEEF sponsorship opportunities are available through Tuesday, March 2. All packages include a keynote speaking opportunity, prominent branding, event passes and more. View the sponsorship prospectus here or email us to learn more.  

About The Linux Foundation
The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and industry adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

The Linux Foundation Events are where the world’s leading technologists meet, collaborate, learn and network in order to advance innovations that support the world’s largest shared technologies.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage.

Linux is a registered trademark of Linus Torvalds.

Akraino Release 4 Enables Kubernetes Across Multiple Edges, Integrates across O-RAN, Magma, and More

  • 7 New Akraino R4 Blueprints (total of 25+)  
  • Akraino is Kubernetes-ready with K8s-enabled blueprints across 4 different edge segments (Industrial IoT, ML, Telco, and Public Cloud)
  • New and updated blueprints also target ML, Connected Car, Telco Edge, Enterprise, AI, and more 

SAN FRANCISCO, February 25, 2021 – LF Edge, an umbrella organization within the Linux Foundation that creates an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced the availability of Akraino Release 4 (“Akraino R4”). Akraino’s fourth release enables additional blueprints that support various deployments of Kubernetes across the edge, from Industrial IoT, to Public Cloud, Telco, and Machine Learning (ML).

Launched in 2018, and now a Stage 3 (or “Impact” stage) project under the LF Edge umbrella, Akraino delivers an open source software stack that supports a high-availability cloud stack optimized for edge computing systems and applications. Designed to improve the state of carrier edge networks, edge cloud infrastructure for enterprise edge, and over-the-top (OTT) edge, it enables flexibility to scale edge cloud services quickly, maximize applications and functions supported at the edge, and to improve the reliability of systems that must be up at all times. 

“Now on its fourth release, Akraino is a fully-functioning open edge stack,” said Arpit Joshipura, general manager, Networking, Edge and IoT, the Linux Foundation. “This latest release brings new blueprints that focus on Kubernetes-enablement across edges, as well as enhancements to other areas like connected cars, ML, telco and more. This is the power of open source in action and I am eager to see even more use cases of blueprint deployments.”  

About Akraino R4

Building on the success of R3, Akraino R4 again delivers a fully functional open source edge stack that enables a diversity of edge platforms across the globe. With R4, Akraino blueprints enable additional use cases, deployments, and PoCs, with new levels of flexibility to quickly scale Public Cloud Edge, Cloud Native Edge, 5G, Industrial IoT, Telco, and Enterprise Edge Cloud services by delivering community-vetted and tested edge cloud blueprints.

In addition, new use cases and new and existing blueprints provide an edge stack for Industrial Edge, Public Cloud Edge Interface, Federated ML, KubeEdge, Private LTE/5G, SmartDevice Edge, Connected Vehicle, AR/VR, AI at the Edge, Android Cloud Native, SmartNICs, Telco Core and Open-RAN, NFV, IOT, SD-WAN, SDN, MEC, and more.

Akraino R4 includes seven new blueprints, for a total of 25+, all tested and validated on real hardware in labs supported by users and community members, along with an API map.

The 25+ “ready and proven” blueprints include both updates and long-term support for existing R1, R2, and R3 blueprints, along with the introduction of seven new blueprints.

More information on Akraino R4, including links to documentation, code, installation docs for all Akraino Blueprints from R1-R4, can be found here. For details on how to get involved with LF Edge and its projects, visit https://www.lfedge.org/

Looking Ahead

The community is already planning R5, which will include further implementation of public cloud and telco synergy, more platform security architecture, increased collaboration with upstream and downstream communities, and development of the IoT Area and Rural Edge for tami-COVID19. Additionally, the community expects new blueprints as well as further enhancements to existing blueprints.

Upcoming Events

The Akraino community is hosting its developer-focused Spring Virtual Technical Meetings, March 1-3. To learn more, visit https://events.linuxfoundation.org/akraino-technical-meetings-spring/

Don’t miss the Open Networking and Edge Executive Forum (ONEEF), a virtual event happening March 10-12, where subject matter experts from Akraino and other LF Edge communities will present on the latest open source edge developments. Registration is now open! 

Join Kubernetes on Edge Day, co-located with KubeCon + CloudNativeCon, to get in on the ground floor and shape the future intersection of cloud native and edge computing. The virtual event takes place May 4 and registration is just US$20.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

 

LF Edge Member Spotlight: Red Hat


The LF Edge community is comprised of a diverse set of member companies and people that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sit down with Lisa Caywood, Principal Community Architect at Red Hat, to discuss the company’s history in open source, its participation in Akraino and the Kubernetes-Native Infrastructure (KNI) Blueprint Family, and the benefits of being part of the LF Edge ecosystem.

Can you tell us a little about your organization?

Red Hat is the world’s leading provider of enterprise open source solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. We help you standardize across environments, develop cloud-native applications, and integrate, automate, secure, and manage complex environments with award-winning support, training, and consulting services.

Why is your organization adopting an open-source approach?

Open source has always been at the center of Red Hat’s values. As the largest open source company in the world, we believe using an open development model helps create more stable, secure, and innovative technologies. We’ve spent more than two decades collaborating on community projects and protecting open source licenses so we can continue to develop software that pushes the boundaries of technological ability. For more about our open source commitment or background, visit https://www.redhat.com/en/about/open-source.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

The network edge is the focus of intensive reinvention and investment in the telco industry and beyond. With a wide array of use cases, and equally wide array of technology options for enabling them, supporting adoption of many new technologies and approaches requires having a common forum for working out design and operations guidelines as well as common approaches to interoperability. IoT requirements aren’t strongly featured at the moment, but we foresee opportunities to focus here in the future. In all of the above cases we strongly believe that the market is seeking open source solutions, and the LF Edge umbrella is a key to fostering many of these projects.

What do you see as the top benefits of being part of the LF Edge community?

  • Forum for engaging productively with other members of the Edge ecosystem on interoperability concerns
  • Brings together work done in other edge-related groups in a framework focused on implementation

What sort of contributions has your team made to the community, ecosystem through LF Edge participation?

We have primarily participated in the development of Akraino Blueprints such as the Kubernetes-Native Infrastructure (KNI) Blueprint Family. Specifically, we contribute to the KNI Provider Access Edge Blueprint, which leverages best practices and tools from the Kubernetes community to declaratively manage edge computing stacks at scale, with a consistent, uniform user experience from the infrastructure up to the services, and from developer environments to production environments on bare metal or public cloud; we also contribute to the KNI Industrial Edge Blueprint. In addition, we are active on the LF Edge Governing Board and other committees that help form and guide the project.

What do you think sets LF Edge apart from other industry alliances?

Close interaction with LF Networking and CNCF communities.

How will LF Edge help your business?

Red Hat’s partners and customers are moving strongly toward RAN and other workload virtualization and containerization, 4G/5G, edge, MEC, and related areas. The LF Edge umbrella is one of the premier places for organizations focused on creating open source solutions in these areas to converge.

What advice would you give to someone considering joining LF Edge?

As with any open source engagement, prospective members should have clear, concrete and well-documented objectives they wish to achieve as a result of their engagement. These may include elaboration of specific technical capabilities, having a structured engagement strategy with certain key partners, or exploration of a new approach to emerging challenges. Take advantage of onboarding support provided by LF staff and senior contributors in your projects of interest.

To find out more about LF Edge members or how to join, click here.

To learn more about Akraino, click here. Additionally, if you have questions or comments, visit the LF Edge Slack Channel and share your thoughts in the #akraino, #akraino-help, #akraino-tsc and #akraino-blueprints channels.

Managing in the dark: facilities in COVID


Written by Scott Smith,  Industry Principal for Facilities and Data Centers for OSIsoft, an LF Edge member company

This article originally ran on the OSIsoft website – click here for more content like this. 

The job of facilities managers used to be pretty straightforward: keep the lights on, keep people comfortable. It was all about availability, uptime, and comfort. However, the world is changing, and now facility management must respond to failures before people even notice them. They must predict maintenance based on conditions rather than time, and all of this needs to be done with fewer people and less energy. With the critical role facilities play in the health and safety of occupants, there’s no room for missteps.

It seems like there are not enough hours in a day to accomplish all that we face.  So, what does an ideal scenario look like for facilities managers?

First, we need to step back and realize we will not be adding all the staff we require, so we need to find a substitute. The first place to look is the data we already collect: how can we use it as our “eyes on” to provide real-time situational awareness? If we can enhance our data with logic-based analytics to produce actionable information, we can shift our resources to critical priorities and ensure the availability of our facilities.

Second, this operational information can notify the facilities team of problems and allow us to be proactive in solutions and communications instead of waiting for complaints to roll in. We can even go one step further and predict facility performance, using that information to identify underlying issues we might never have found until it was too late.

This is not about implementing thousands of rules and getting overwhelmed with “alarm fatigue,” but about enhancing your own data, based on your own facilities and your needs. It allows you to break down silos and use basic information from meters, building automation, and plant systems to provide integrated awareness.
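As a minimal sketch of this idea (the sensor names, thresholds, and messages below are invented for illustration and do not come from OSIsoft's PI System), a handful of condition-based rules can turn raw meter readings into a short list of actionable notifications rather than a wall of alarms:

```python
from dataclasses import dataclass


@dataclass
class Rule:
    """A simple condition-based rule over a named sensor stream."""
    sensor: str
    limit: float
    message: str


def evaluate(readings: dict[str, float], rules: list[Rule]) -> list[str]:
    """Return a notification for every rule whose limit is exceeded."""
    return [r.message for r in rules if readings.get(r.sensor, 0.0) > r.limit]


# Hypothetical rules for a campus facility.
rules = [
    Rule("chiller_supply_temp_c", 9.0,
         "Chiller supply temperature above setpoint"),
    Rule("ahu_filter_dp_pa", 250.0,
         "Air handler filter pressure drop high: check filter"),
]

# Only the chiller rule fires, so the team is notified before complaints arrive.
alerts = evaluate({"chiller_supply_temp_c": 10.5, "ahu_filter_dp_pa": 120.0},
                  rules)
```

The point is not the code but the pattern: a small, facility-specific rule set over data you already collect, producing targeted notifications instead of alarm fatigue.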

University of California, Davis is a great example of this. In 2013, they set the goal of net zero greenhouse gas emissions from the campus’s building and vehicle fleet by 2025. With more than 1,000 buildings comprising 11.3 million square feet, this was no small feat. Beyond the obvious projects, UC Davis turned to the PI System to drive more efficient use of resources. By looking at the right data, UC Davis has been able to better inform their strategy for creating a carbon neutral campus.

UC Davis is a perfect example of simply starting, and starting simple. The data will provide guidance on where there are opportunities to meet your objectives.

OSIsoft is an LF Edge member company. For more information about OSIsoft, visit their website.

New Training Program Helps You Set Up an Open Source Program Office


Written by Dan Brown, Senior Manager, Content & Social Media, LF Training

Is your organization considering setting up an open source program office (OSPO) but you aren’t sure where to start? Or do you already have an OSPO but want to learn best practices to make it more effective? Linux Foundation Training & Certification has released a new series of seven training courses designed to help.

The Open Source Management & Strategy program is designed to help executives, managers, software developers and engineers understand and articulate the basic concepts for building effective open source practices within their organization. Developed by Guy Martin, Executive Director of OASIS Open, an internationally recognized standards development and open source projects consortium, the series of online courses provides the foundational knowledge and skills to build out an effective OSPO.

The courses in the program are designed to be modular, so participants only need to take those of relevance to them. This enables organizations to let each member of the team train in the skills they need to be successful in their given roles. The courses included in the program are:

  • LFC202 – Open Source Introduction – covers the basic components of open source and open standards
  • LFC203 – Open Source Business Strategy – discusses the various open source business models and how to develop practical strategies and policies for each
  • LFC204 – Effective Open Source Program Management – explains how to build an effective OSPO and the different types of roles and responsibilities needed to run it successfully
  • LFC205 – Open Source Development Practices – talks about the role of continuous integration and testing in a healthy open source project
  • LFC206 – Open Source Compliance Programs – covers the importance of effective open source license compliance and how to build programs and processes to ensure safe and effective consumption of open source
  • LFC207 – Collaborating Effectively with Open Source Projects – discusses how to work effectively with upstream open source projects and how to get the maximum benefit from working with project communities
  • LFC208 – Creating Open Source Projects – explains the rationale and value for creating new open source projects as well as the required legal, business and development processes needed to launch new projects

The Open Source Management & Strategy program is available to begin immediately. The $499 enrollment fee provides unlimited access to all seven courses for one year, as well as a certificate upon completion. Interested individuals may enroll here. The program is also included in all corporate training subscriptions.

 

Kubernetes Is Paving the Path for Edge Computing Adoption


Written by Molly Wojcik, Chair of the State of the Edge Landscape Working Group and Marketing Director at Section

For more content like this, please visit the Section website

Since Google released Kubernetes in 2014, it has become the standard for container orchestration in the cloud and data center. Its popularity with developers stems from its flexibility, reliability, and scalability in scheduling and running containers on clusters of physical or virtual machines (VMs) for a diverse range of workloads.

When it comes to the Infrastructure (or Service Provider) Edge, Kubernetes is increasingly being adopted as a key component of edge computing. As in the cloud, Kubernetes allows organizations to efficiently run containers at the edge, enabling DevOps teams to move with greater dexterity and speed by maximizing resources and spending less time integrating with heterogeneous operating environments. This is particularly important as organizations consume and analyze ever-increasing amounts of data.

A Shared Operational Paradigm

Edge nodes represent an additional layer of IT infrastructure available to enterprises and service providers alongside their cloud and on-premise data center architecture. It is important for admins to be able to manage workloads at the edge layer in the same dynamic and automated way as has become standard in the cloud environment.

As defined by the State of the Edge’s Open Glossary, an “edge-native application” is one which is impractical or undesirable to operate in a centralized data center. In a perfect world, developers would be able to deploy containerized workloads anywhere along the cloud-to-edge continuum to balance the attributes of distributed and centralized computing in areas such as cost efficiencies, latency, security, and scalability.

Ultimately, cloud and edge will work alongside one another, with workloads and applications at the edge being those that have low latency, high bandwidth, and strict privacy requirements. Other distributed workloads that benefit from edge acceleration include Augmented Reality (AR), Virtual Reality (VR), Massively Multiplayer Gaming (MMPG), etc.

There is a need for a shared operational paradigm to automate processing and execution of instructions as operations and data flows back and forth between cloud and edge devices. Kubernetes offers this shared paradigm for all network deployments, allowing policies and rulesets to be applied to the entire infrastructure. Policies can also be made more specific for certain channels or edge nodes that require bespoke configuration.

Kubernetes-Based Edge Architecture

According to a presentation from the Kubernetes IoT Edge Working Group at KubeCon Europe 2019, there are three approaches to using Kubernetes in edge-based architecture to manage workloads and resource deployments.

A Kubernetes cluster involves a master and nodes. The master exposes the API to developers and schedules workloads onto the nodes. Each node contains a container runtime (such as Docker), a kubelet (which communicates with the master), and pods, which are collections of one or more containers. A node can be a virtual machine in the cloud.

The three approaches for edge-based scenarios can be summarized as follows:

  1. The whole Kubernetes cluster is deployed within edge nodes. This is useful for instances in which the edge node has low capacity resources or a single-server machine. K3s is the reference architecture for this solution.
  2. The next approach comes from KubeEdge, and involves the control plane residing in the cloud and managing the edge nodes containing containers and resources. This architecture enables optimization in edge resource utilization because it allows support for different hardware resources at the edge.
  3. The third approach is hierarchical cloud plus edge, using a virtual kubelet as the reference architecture. Virtual kubelets live in the cloud and hold an abstraction of the nodes and pods deployed at the edge. This approach allows for flexibility in resource consumption for edge-based architecture.
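The practical difference between the three approaches is where the control plane and the workloads run, and therefore which components consume resources on the constrained edge node. A toy Python model (purely illustrative; the class and field names are assumptions, not anything from the KubeCon presentation) makes that comparison concrete:

```python
from dataclasses import dataclass


@dataclass
class EdgeTopology:
    name: str
    control_plane: str  # where the API server and scheduler run
    workloads: str      # where the pods run


APPROACHES = [
    # 1. Whole cluster at the edge (K3s reference architecture).
    EdgeTopology("full-cluster-at-edge", control_plane="edge", workloads="edge"),
    # 2. Control plane in the cloud, managed edge nodes (KubeEdge).
    EdgeTopology("cloud-control-plane", control_plane="cloud", workloads="edge"),
    # 3. Virtual kubelet in the cloud abstracting edge nodes and pods.
    EdgeTopology("virtual-kubelet", control_plane="cloud", workloads="edge"),
]


def edge_resident_components(t: EdgeTopology) -> list[str]:
    """List the major components that consume resources on the edge node."""
    parts = ["container-runtime", "pods"] if t.workloads == "edge" else []
    if t.control_plane == "edge":
        parts = ["api-server", "scheduler"] + parts
    return parts
```

Only the first approach pays for the full control plane at the edge, which is why it suits single-server or low-capacity sites, while the other two keep the control plane in the cloud and leave more edge resources for workloads.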

Section’s Migration to Kubernetes

Section migrated to Kubernetes from a legacy homegrown scheduler last year. Instead of building our own fixed hardware network, Section distributes Kubernetes clusters across a vendor-neutral worldwide network of leading infrastructure providers, including AWS, Google Cloud, Microsoft Azure, Packet, DigitalOcean, CenturyLink, and RackCorp. Kubernetes allows us to be infrastructure-agnostic and seamlessly manage a diverse set of workloads.

Our first-hand experience of the many benefits of Kubernetes at the edge include:

  • Flexible tooling, allowing our developers to interact with the edge as they need to;
  • Our users can run edge workloads anywhere along the edge continuum;
  • Scaling up and out as needed through our vendor-neutral worldwide network;
  • High availability of services;
  • Fewer service interruptions during upgrades;
  • Greater resource efficiency – in particular, we use the Kubernetes Horizontal Pod Autoscaler, which automatically scales the number of pods up or down according to defined latency or volume thresholds;
  • Full visibility into production workloads through the built-in monitoring system;
  • Improved performance.
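The Horizontal Pod Autoscaler mentioned above follows a simple documented rule: the desired replica count is the current count scaled by the ratio of the observed metric to its target, rounded up. A short sketch of that calculation (simplified; the real controller also applies tolerances and stabilization windows before acting):

```python
import math


def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Core HPA scaling rule: ceil(current * observed / target)."""
    return math.ceil(current_replicas * current_metric / target_metric)


# CPU averaging 180m against a 100m target on 4 pods scales up to 8.
assert desired_replicas(4, 180.0, 100.0) == 8
# Load at half the target on 4 pods scales down to 2.
assert desired_replicas(4, 50.0, 100.0) == 2
```

Because the rule is proportional, a pool that is twice as loaded as its target roughly doubles, which is what lets latency or volume thresholds drive capacity automatically.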

Summary

As more organizations and operators continue to adopt and support Kubernetes-based cloud-edge patterns, the ecosystem will continue to mature. However, not every organization will have the resources and/or expertise to build these systems themselves. This is where edge platforms (like Section) bridge those gaps, offering DevOps teams familiar tooling to take advantage of the benefits that Kubernetes has to offer without the complexities that come along with it.

Interesting Developments In Edge Hypervisors


Written by Rex St. John, EGX Developer Relations at NVIDIA

This article originally ran on Rex’s LinkedIn page. For more content like this, connect with him on LinkedIn. 

After building Edge Computing ecosystems at Intel and Arm, I have recently made the switch to working on Edge Computing at NVIDIA. Several people have asked me to share my perspective and learnings, so I am starting this informal, personal series on the topic. All opinions shared here are my own personal opinions and not those of my employer.

Objective

In this article, I will share two reasons why some experts in the industry are investing in hypervisor technology, as well as two interesting open source edge hypervisor solutions to be aware of. For edge definition nitpickers (you know who you are), I am going to be referring to the “Device Edge” here. There are many other definitions of “Edge”; if you are curious, read this LF Edge white paper.

The Hovercraft Analogy

For those of you who are unfamiliar, a hypervisor is kind of like a hovercraft that your programs can sit inside. Like hovercrafts, hypervisors can provide protective cushions which allow your applications to smoothly transition from one device to another, shielding the occupants from the rugged nature of the terrain below. With hypervisors, the bumps in the terrain (differences between hardware devices), are minimized and mobility of the application is increased.

Benefits of Hypervisors

Benefits of hypervisors include security, portability, and a reduced need for cumbersome customization to run on specific hardware. Hypervisors also allow a device to concurrently run multiple, completely different operating systems, and they can help partition applications from one another for security and reliability purposes. You can read more about hypervisors here. They are frequently compared to, used together with, or even compete with containers for similar use cases, though they have historically required more processing overhead to run.

Two Reasons Why Some (Very Smart) Folks Are Choosing Hypervisors For The Edge

A core challenge in Edge Computing is the extreme diversity in hardware that applications are expected to run on. This, in turn, creates challenges in producing secure, maintainable, scalable applications capable of running across all possible targets.

Unlike their heavier datacenter-based predecessors, lightweight hypervisors offer the benefits of traditional hypervisors while also respecting the limited resources found on the device edge. Here are two reasons why some in the industry are taking a careful look at edge hypervisors.

Reason 1: Avoiding The Complexity And Overhead of Kubernetes

One potential reason for taking a hypervisor-based approach at the edge is that there may be downsides to pursuing Kubernetes for smaller clusters. These include the difficulty of building and managing a team that can properly set up and scale a real-world Kubernetes application, given the overhead and complexity of Kubernetes itself. In some cases, such as running a cluster of 4-5 nodes, it might be preferable to use a more streamlined approach involving a controller and lightweight hypervisors. This is the approach taken by EVE, described in more detail below.

Reason 2: Ease Of Modernizing Brown-Field Industrial IT

Another pressing reason for choosing edge hypervisors is that “brown-field” installations of existing edge hardware are extremely expensive to upgrade to modern IT best practices. Hypervisors provide a path forward that does not involve rewriting old systems from scratch: the code running on older machines can frequently be shifted into a hypervisor and neatly managed and secured from there, a process referred to as “workload consolidation.”

Let’s take a look at two interesting examples of edge hypervisors to understand further.

Hypervisor #1: Project ACRN


The first edge hypervisor we will look at is called ACRN, which is a project hosted by the Linux Foundation. ACRN has a well documented architecture and offers a wide range of capabilities and configurations depending on the situation or desired outcome.


ACRN seeks to support industrial use cases by offering a strict partitioning between high-reliability processes and those which do not need preferential security and processing priority. ACRN accomplishes this separation by specifying a regime for sandboxing the different virtual machines running on the device. I recommend keeping an eye on ACRN, as it appears to have significant industry support. Supported ACRN platforms currently tend to be strongly x86-based.

Hypervisor #2: EVE (part of LF Edge)

Also a project hosted at the Linux Foundation, EVE differs from ACRN in that it belongs to the LF Edge umbrella of projects. EVE also tends to be more agnostic than ACRN about supported devices and architectures. Following the instructions hosted on the EVE GitHub page, I was able to build and run it on a Raspberry Pi 4 within the space of ten minutes, for example.


In terms of design philosophy, EVE is positioning itself as the “Android of edge devices.” You can learn more about EVE by watching this recent webinar featuring Roman Shaposhnik, Co-Founder of ZEDEDA. EVE makes use of a “Controller” structure that provisions and manages edge nodes, simplifying the overhead of operating an edge cluster.

Wrapping Up

Expect to see more happening in the hypervisor space as the technology continues to evolve. Follow me to stay up to date with the latest developments in Edge Computing.

Cultivating Giants to Stand On: Extending Kubernetes to the Edge

By Blog, Project EVE

Written by Roman Shaposhnik, Project EVE lead and Co-Founder & VP of Products and Strategy at ZEDEDA

This content originally ran on the ZEDEDA Medium Blog – click here for more content like this.

Kubernetes is more than just a buzzword. With Gartner predicting that by the end of 2025, 90% of applications at the edge will be containerized, it’s clear that organizations will be looking to leverage Kubernetes across their enterprises, but this isn’t a straightforward proposition. There’s much more involved than just repurposing the architecture we use in the data center in a smaller or more rugged form factor at the edge.

The edge environment has several major distinctions from the data center that must be addressed in order to successfully leverage Kubernetes.

Computers, whether massive data center machines or small nodes on the smart device edge, are essentially three parts — hardware, operating system (OS) and runtime — running in support of some sort of application. And within that, an operating system is just a program that allows the execution of other programs. We went through a time in the 1990s when it was believed that the OS was the only part that mattered, and the goal was to find the best one, as if the OS were a titan with the entire world on its shoulders. The reality, though, is that it’s turtles all the way down! By this I mean that with virtualization, computers are not limited to just three parts: we can slice each individual layer in many different ways, with hardware emulation, hypervisors, and so on.


Split it or join it: either way you get something exciting

So how can we look at these building blocks in the optimal way in 2020?

We first have to talk about where we find computers — the spectrum of computers being deployed today is vast. From giant machines in data centers doing big things all the way to specialized computers that might be a smart light bulb or sensor. In the middle is the proverbial edge, which we call the Smart Device Edge.


Image courtesy of LF Edge

As we look at how best to run Kubernetes on the smart device edge, the answer is a triplet: K3s, some kind of operating system (or other support for K3s), and some sort of hardware.

And so then, if we have K3s and we have hardware, what’s the best possible way to run K3s on hardware? The answer is a specialized operating system.

This is just like what the team behind Docker did when they had to run Docker on a MacBook Pro: they created Docker Desktop, a specialized engine, much like an operating system, that exists only to support Docker. And so for the smart device edge, we’ve created EVE, a lightweight, secure, open, and universal operating system built to address the unique security and scale requirements of edge nodes deployed outside of the data center.

EVE is to Edge what Android is to Mobile

What makes EVE different? It’s the only OS that enables organizations to extend their cloud-like experience to edge deployments outside of the data center while also supporting legacy software investments. It provides an abstraction layer that decouples software from the diverse landscape of IoT edge hardware to make application development and deployment easier, secure and interoperable. The hosting of Project EVE under LF Edge ensures vendor-neutral governance and community-driven development.

Ready to learn more and to see EVE in action? Check out the full discussion.

Hitting Performance Targets at the IoT Edge

By Blog, EdgeX Foundry

Written by James Butcher, EdgeX Foundry QA & Test Working Group Chair and Edge Xpert Product Manager at IOTech Systems


Edge computing is all about the premise of an organization processing its operational data at the edge of the network. That means performing data collection functions and running decision making and control algorithms as close to the edge sensors and OT devices as possible.

The EdgeX Foundry framework helps make that premise a reality and provides an open, vendor-neutral approach for doing so. However, we must never lose sight of the fact that businesses will want to understand the full implications, including the fully loaded cost, of rolling out an edge system.

The EdgeX QA & Test Working Group

The EdgeX QA & Test working group is responsible for ensuring that releases meet the high standards of reliability and robustness that are required at the Industrial IoT edge. Our remit includes verifying the framework is well tested and ensuring we have the processes in place to maintain and monitor these high standards.

Solid Testing as Standard

EdgeX Foundry is an open source community project under the LF Edge umbrella, and we take testing and reliability very seriously. We developed our own EdgeX test harness on top of a test framework called TAF. You’ll hear us speak about TAF a lot – it stands for Test Automation Framework, and it supports the functional, integration and performance testing needs that we have across the project. A version of EdgeX is only released when TAF reports all tests are passing, for example.

EdgeX is transitioning to a second version of its APIs so there has been a big emphasis on ensuring the V2 API is accurately tested. TAF has been invaluable here because we have been able to easily update and add new tests for the new API. We are pushing ahead for the first release of the new APIs in the EdgeX Ireland release due in the spring.

Gathering Performance Metrics

The QA & Test group also record and gather performance metrics associated with the usage of the EdgeX platform. A common question from users is what sort of hardware EdgeX is aimed at and how fast it can perform. EdgeX has, of course, been designed to be hardware and operating system independent, but key metrics like footprint, CPU usage and latency of data flow do need to be provided so users can plan which IoT edge gateways and edge servers to use. Sometimes users have the luxury of being able to select brand new hardware for their edge projects, but often it is necessary to reuse hardware that is already deployed.

Over the last release cycle, culminating in the EdgeX Hanoi release, the QA & Test group have put renewed focus on gathering and reporting these performance metrics. We have also improved the accuracy of the figures by increasing the number of tests run and stating metric averages along with minimum and maximum readings. Note that TAF can also monitor performance regressions: we’ve added threshold testing such that failures are reported if a code commit adversely affects any of the performance metrics. These are all tools to ensure that EdgeX quality and performance remain as high as possible as the project evolves.
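As an illustration of the threshold-testing idea, a regression gate of this kind boils down to comparing measured metrics against agreed limits and failing the build when any limit is exceeded. This is only a sketch of the concept, not the actual TAF implementation; the metric names and limits below are hypothetical:

```python
# Hypothetical sketch of performance-threshold checking, in the spirit of
# the TAF regression gate described above (not the actual TAF code).

def find_regressions(measured, thresholds):
    """Return the metrics whose measured value exceeds its allowed maximum."""
    return {
        name: (value, thresholds[name])
        for name, value in measured.items()
        if name in thresholds and value > thresholds[name]
    }

# Hypothetical metrics from one test run (units encoded in the names)
measured = {"startup_ms": 480, "image_size_mb": 22.5, "event_latency_ms": 9.7}
thresholds = {"startup_ms": 500, "image_size_mb": 20.0, "event_latency_ms": 10.0}

# Report any metric that regressed past its threshold
for name, (value, limit) in find_regressions(measured, thresholds).items():
    print(f"FAIL: {name} = {value} exceeds threshold {limit}")
```

Run on every commit, a check like this turns the gathered metrics into a hard pass/fail signal rather than numbers that are merely reported.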

Presenting the Performance

We’ve decided to go beyond just providing the raw metrics and instead have created a report that presents the key numbers along with some comment and explanation. Please find the Hanoi Performance Report here.

I hope you find this information useful. We are aiming to provide an updated version of this document for each EdgeX release and will track the progression as we go.

Commercial Support is Key

EdgeX is a fantastic project for helping to simplify the challenges of edge computing. The open-source and collaborative nature of the project brings together ideas from edge experts around the world. However, to pick up the point about businesses needing to understand the full implications of selecting an edge solution, companies are usually reluctant to deploy open source code without having guaranteed help at hand and support contracts in place.

So, switching to my other role in the EdgeX ecosystem for a moment, let me mention how companies like IOTech provide products and services to mitigate that risk…

I am product manager of Edge Xpert which is IOTech’s value-add and commercially supported implementation of EdgeX. Our Edge Xpert 1.8 release is based on Hanoi and adds another level of features and services to help you. Head over to our website to read about how Edge Xpert can get your EdgeX systems to market much quicker and with less risk.

Future Plans

Obviously, quality and robustness are very important to me in both my EdgeX and IOTech roles. Testing is never finished, of course, and we are always looking to add to our test and performance tracking capability. Future iterations of the performance testing will include additions for the security services, scalability metrics and performance associated with the EdgeX Device Services – for example, how fast Modbus data can be collected and how many data points can be received in a certain period. These factors are all important in understanding how the platform will scale as it is deployed, and in estimating the number of deployments – and thus the gateways and edge servers – that will be needed.

Open Discussions

Feel free to join the weekly EdgeX QA & Test calls where we discuss progress and issues that need to be addressed. We meet every Tuesday at 8am PST. You can find the meeting links on our page here. To learn more about EdgeX Foundry’s Hanoi Release, click here.