Home Edge Earns the CII Best Practices Badge

Written by Taras Drozdovskyi and Taewan Kim, LF Edge Home Edge Committers & Staff Engineers at Samsung Research, and Peter Moonki Hong, LF Edge Governing Board member & Principal Engineer at Samsung Research

Every year the number of Free/Libre and Open Source Software (FLOSS) projects grows, and the community’s goal is to maintain high-quality code, documentation, and test coverage and, of course, a high level of security.

The Linux Foundation, in collaboration with major IT companies through its Core Infrastructure Initiative (CII), has collected best practices for FLOSS projects and published these criteria as the CII Best Practices.

In December 2020, LF Edge’s Home Edge team set itself the goal of getting a CII Best Practices Badge. After working tirelessly on the requirements, we are happy to announce Home Edge has received the CII Best Practices Badge!

Thank you to the committers and key contributors of Home Edge: Taras Drozdovskyi, Taewan Kim, Somang Park, MyeongGi Jeong, Suresh L C, Sunchit Sharma, Dwarkaprasad Dayama, and Peter Moonki Hong. We would also like to thank Jim White of EdgeX Foundry for helping to guide us, and the entire LF Edge/Linux Foundation team for their support.

Benefits achieved on the way to getting the CII Best Practices passing badge:

  • Improved documentation:
    • Security and Testing policy
    • How-to-Contribute guide
    • Descriptions of external APIs
  • Improved the build and testing system:
    • CI infrastructure: GitHub Actions – 20 checks
    • Integrated external code-analysis tools (a minimal sketch of running such checks locally follows this list):
      • gofmt – 92%;
      • go vet – 100%;
      • golint – 76%;
      • SonarCloud: Security Hotspots – 37 -> 0; Code Smells – 253 -> 50; Duplications – 7.8% -> 3%
    • Improved security analysis:
      • Integrated CodeQL analysis and LGTM services: Security Alerts – 17 -> 0
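
For readers who want to reproduce such gates outside of CI, below is a minimal, hypothetical Go sketch (not the project’s actual CI code) that runs gofmt and go vet locally and fails the same way a GitHub Actions check would:

```go
// check.go – an illustrative pre-push gate; assumes gofmt and the go tool
// are on PATH. This is a sketch, not Edge Orchestration's CI configuration.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// gofmt -l lists files whose formatting differs from gofmt's output.
	out, err := exec.Command("gofmt", "-l", ".").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "gofmt failed:", err)
		os.Exit(1)
	}
	if files := strings.TrimSpace(string(out)); files != "" {
		fmt.Fprintln(os.Stderr, "files need gofmt:\n"+files)
		os.Exit(1)
	}

	// go vet reports suspicious constructs; a non-zero exit fails the gate.
	vet := exec.Command("go", "vet", "./...")
	vet.Stdout, vet.Stderr = os.Stdout, os.Stderr
	if err := vet.Run(); err != nil {
		os.Exit(1)
	}
	fmt.Println("all checks passed")
}
```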

We have reached a high level of code quality, but continue to improve our code!

Improving the Edge Orchestration project infrastructure (using many tools for analyzing code and searching for vulnerabilities) allowed us to increase not only the security of our product but also its reliability. We hope that the improved documentation will reduce the time it takes to get started with the project, and therefore increase the number of external developers participating in it. Their advice and input are very important to us.

It should also be noted that the project offered many areas of self-development for team members: developers became testers, technical writers, and security officers. This was a wonderful experience.

Next steps for the Edge Orchestration team

Further improving the Edge-Home-Orchestration project and achieving the “silver” and “gold” badges. Implementing OpenSSF secure code development practices in Edge-Home-Orchestration.

If you would like to learn more about the use cases for Home Edge or more technical details, check out the video of our October 2020 webinar. As part of the “On the Edge with LF Edge” webinar series, we shared the general overview for the project, how it fits into LF Edge, key features of the Coconut release, the roadmap, how to get involved and the landscape of the IoT Home Edge.

If you would like to contribute to Home Edge or share feedback, find the project on GitHub, the LF Edge Slack channel (#homeedge) or subscribe to our email list (homeedge-tsc@lists.lfedge.org). We welcome new contributors to help make this project better and expand the LF Edge community.

Additional Home Edge Resources:

1. Coconut release code: https://github.com/lf-edge/edge-home-orchestration-go/releases/tag/coconut

2. Release notes

3. Home Edge Wiki: https://wiki.lfedge.org/display/HOME/Home+Edge+Project

State of the Edge 2021 Report

Written by Jacob Smith, State of the Edge Co-Chair and VP Bare Metal Strategy at Equinix Metal

On Wednesday, LF Edge published its 2021 edition of the State of the Edge Report. As a co-chair and researcher for the report, I want to share insight into how this year’s report was created. In development, we focused on three key questions to ensure the report covered essential subject matter:

  • What’s new in edge computing since last year?
  • What are the challenges and opportunities facing the ecosystem?
  • How can we help?

Designed to be vendor-neutral and free of bias, the report drew on multiple contributing authors and an independent editor to balance the content and better serve the edge computing community. Several experts weighed in on what they were paying attention to in the edge space and identified four major stand-out points:

  • Critical infrastructure (data centers)
  • Hardware and silicon
  • Software as a whole
  • Networks and networking

It’s abundantly clear there is growth in edge computing. Over the last year, COVID-19 accelerated nearly every facet of digital transformation, and this factored into the growth of the edge computing space. But it might also simply be a long-overdue increase in digital innovation and transformation.

We will continue to discover more about the edge and how to use it to optimize the computing ecosystem. For now, our attention is focused on how diverse the edge is and all of the solutions that are waiting to be discovered.

Summarized from Equinix Metal’s The State of Your Edge. Read the full story here. Download the report here.   

Scaling Ecosystems Through an Open Edge (Part Three)

Getting to “Advanced Class”, Including Focusing on Compounding Value with the Right Partners

By Jason Shepherd, LF Edge Governing Board Chair and VP of Ecosystem at ZEDEDA

This blog originally ran on the ZEDEDA Medium blog. Click here for more content like this.

Thanks for continuing to read this series on ecosystem development and the importance of an open edge for scaling to the true potential of digital transformation — interconnected ecosystems that drive new business models and customer outcomes. In parts one and two, I talked about various ecosystem approaches spanning open and closed philosophies, the new product mindset related to thinking about the cumulative lifetime value of an offer, and the importance of the network effect and various considerations for building ecosystems. This includes the dynamics across both technology choices and between people.

In this final installment I’ll dive into how we get to “advanced class,” including the importance of domain knowledge and thinking about cumulative value add vs. solutions looking for problems that can be solved in other, albeit brute force, ways.

Getting to advanced class

I talked about the importance of compounding value in part one of this series and will touch on this concept a little more here, in addition to the criticality of domain knowledge. Speaking of the latter, hundreds of IoT platform providers claim they can do a use case like Predictive Maintenance (PdM), but few actually have the domain knowledge to do it. A PdM solution not only requires analytics tools and data science but also an intimate knowledge of failure patterns based on various attributes (vibration, oil particulates, temperature, motor current, etc.). Often an operator on a factory floor who has “been there, done that” sits alongside a data scientist to help program the analytics algorithms based on his/her tribal knowledge. A former colleague once worked on a PdM solution with a line operator named Brad, who was the expert that helped the data scientist understand which machine and process behaviors were non-issues, despite looking like bad news, and vice versa. They jokingly called the end result “Bradalytics”.

Further, there’s a certain naivety in the industry today in terms of pushing IoT solutions when there’s already established “good enough” precedent. In the case of PdM, manual data acquisition with a handheld vibration meter once a month or so is a common practice to accurately predict impending machine failures because these failures don’t typically happen overnight. An industrial operator can justify this manual data collection as OpEx and the only thing permanently installed on their machines are brass pads that indicate where to apply the contact-based handheld sensor, from which data is manually loaded into an analytical tool.

Similar manual procedures are all too common in other industries, such as USB data loggers used to monitor temperature in cold chain logistics operations. In another example, structural engineers have historically used the practice of attaching an expensive sensor package to monitor the structural integrity of a bridge or building for a few weeks, only to then move this equipment on to the next piece of infrastructure.

The promise of IoT is that this instrumentation is becoming so inexpensive that it can be deployed permanently everywhere for acquiring real-time data from the physical world; however, the value of deploying this infrastructure still must be greater than the cost of deploying and maintaining it through its full lifecycle in addition to the associated risk in terms of maintaining security and privacy.

Don’t get me wrong — PdM enabled by IoT is a valuable use case. Despite machines typically failing over a long period of time, it’s expensive to roll a truck to do repairs, especially if an industrial process experiences a loss of production. For example, downtime in a major jetliner factory can be upwards of $20k a minute! It’s just important to think about the big picture and whether you’re trying to solve a problem that has already been solved.

Looking at the bigger picture, a supply chain linked to a manufacturing process is a great example of an ecosystem that easily justifies an IoT solution for real-time monitoring and analytics. In a multi-stage manufacturing operation, the cost of discovering a flaw within a given part increases steadily with each process stage. The cost is even higher if that part gets into the supply chain and it’s higher yet if a defective product gets to a customer, not to mention impacting the manufacturer’s brand. Here the cumulative value of instrumenting throughout the product lifecycle is very high and definitely warrants a solution that can report issues the moment they occur.

Speaking of the holistic value that I touched on in part one and above, the real potential is not just the remote monitoring of a single machine, rather a whole fleet of interconnected machines. Imagine a world where you can send a tech out with work instructions to preventatively repair a fleet of machines in order to get the most out of a truck roll to a given location. This is similar to the story of “Fred and Phil” in part one, in which Phil wanted the propane tanks to be bone dry before rolling a truck. And on top of that — imagine that the algorithm could tell you that it will cost you less money in the long run to replace a specific machine altogether, rather than trying to repair it yet again.

It goes beyond IoT and AI. Last I checked, systems ranging from machines to factory floors and vehicles are composed of subsystems from various suppliers. As such, open interoperability is also critical when it comes to Digital Twins. I think this is a great topic for another blog!

Sharpening the edge

In my recent Edge AI blog I highlighted the new LF Edge taxonomy white paper and how we think this taxonomy will help put an end to the current market confusion caused by various industries (cloud, telco, IT, OT/industrial, consumer) defining the edge with a strong bias towards their interests/PoV, not to mention taxonomies that use ambiguous terms such as “thin and thick” and “near and far” that mean different things to different people. The team at ZEDEDA had a heavy hand in shaping this with the LF Edge community. In fact, a lot of the core thinking stems from my “Getting a Deeper Edge on the Edge” blog from last year.

As highlighted in the paper, the IoT component of the Smart Device Edge (as compared to client devices like smartphones and PCs that also live in this sub-category) spans a single node with 256MB of memory up to a small server cluster, deployed outside of a physically-secure data center or Modular Data Center (MDC). The upper end of this spectrum is increasingly blurring into the Kubernetes paradigm thanks to efforts like K3s. However, due to resource constraints and other factors, these nodes will not have the exact same functionality as higher edge tiers leveraging full-blown Kubernetes.

Below the Smart Device Edge is the “Constrained Device Edge”. This sub-tier consists of a highly fragmented landscape of microcontroller-based devices that typically have their own custom OTA tools. Efforts like Microsoft’s Sphere OS are trying to address this space and it’s important to band together on efforts like this due to the immense fragmentation at this tier.

Ultimately it’s important to think of the edge to cloud as a continuum and that there isn’t an “industrial edge” vs. an “enterprise edge” and a “consumer edge” as some would contend. Rather, it’s about building an ecosystem of technology and services providers that create necessarily unique value-add on top of more consistent infrastructure while taking into account the necessary tradeoff across the continuum.

We build the guts so you can bring the glory

You can think of ZEDEDA’s SaaS-based orchestration solution as being like VMware Tanzu in principle (in the sense that we support both VMs and containers) but optimized for IoT-centric edge compute hardware and workloads at the “Smart Device Edge” as defined by the LF Edge taxonomy. We’re not trying to compete with great companies like VMware and Nutanix who shine at the On-prem Data Center Edge up through the Service Provider Edge and into centralized data centers in the cloud. In fact, we can help industry leaders like these, telcos and cloud service providers extend their offerings including managed services down into the lighter compute edges.

Our solution is based on the open source Project EVE within LF Edge which provides developers with maximum flexibility for evolving their ecosystem strategy with their choice of technologies and partners, regardless of whether they ultimately choose to take an open or more closed approach. EVE aims to do for IoT what Android did for the mobile component of the Smart Device Edge by simplifying the orchestration of IoT edge computing at scale, regardless of applications, hardware or cloud used.

The open EVE orchestration APIs also provide an insurance policy in terms of developers and end users not being locked into only our commercial cloud-based orchestrator. Meanwhile, it’s not easy to build this kind of controller for scale, and ours places special emphasis on ease of use in addition to security. It can be leveraged as-is by end users, or white-labeled by solution OEMs to orchestrate their own branded offers and related ecosystems. In all cases, the ZEDEDA cloud is not in the data path so all permutations of edge data sources flowing to any on-premises or cloud-based system are supported across a mix of providers.

As I like to say, we build the guts so our partners and customers can bring the glory!

In closing

I’ll close with a highlight of the classic Clayton Christensen book The Innovator’s Dilemma. In my role I talk with a lot of technology providers and end users, and I often see people stuck in this pattern, thinking “I’ll turn things around if I can just do what I’ve always done better”. This goes not just for large incumbents but also for fairly young startups!

One interaction in particular that has stuck with me over the years was an IoT strategy discussion with a large credit card payments provider. They were clearly trapped in the Innovator’s Dilemma, wondering what to do about mobile payment players like Square that had really disrupted their market. As they were talking about how they didn’t want another Square situation to happen again, I asked them: “Have you thought about when machines start making payments?” The fact that this blew their minds is exactly why another Square is going to happen to them if they don’t get outside of their own headspace.

When approaching digital transformation and ecosystem development it’s often best to start small, however equally important is to think holistically for the long term. This includes building on an open foundation that you can grow with and that enables you to focus your investments on business relationships and innovation rather than reinvention. Sure, you may be able to build your own orchestration solution like we have at ZEDEDA, or your own full turn-key stack, but if you do either in a proprietary silo then you’ve just locked yourself out of the biggest potential — creating a network effect across various markets through increasingly interconnected ecosystems! Again, imagine if the internet was owned by one company.

We invite you to get involved with Project EVE, and more broadly-speaking LF Edge as we jointly build out this foundation. You’re of course free to build your own controller with EVE’s open APIs, or reach out to us here at ZEDEDA and we can help you orchestrate your commercial IoT edge ecosystem. We’re here to help you start small and scale big and have a growing entourage of technology and service partners to drive outcomes for your business.

Open Horizon Moves to Stage 2!

Written by Joe Pearson, Open Horizon Chair of the Technical Steering Committee and Edge Computing and Technology Strategist at IBM

On Thursday, March 4, 2021 the Open Horizon open source project was officially promoted to stage two maturity.  This was the culmination of a process that began on January 27, when I presented the proposal to the LF Edge Technical Advisory Committee (TAC).

Introduction

The Open Horizon software project was seeded by a contribution from IBM to the Linux Foundation in April 2020.  The project’s mission: to develop open-source software that manages the service software lifecycle of containerized applications on edge compute nodes.  Since that initial code contribution, the project and community have grown as they have attracted new project partners and contributors.

About Stage Two

Reaching stage two is a significant milestone for open-source software projects because it is a strong indicator of both the organization’s healthy growth, and potential stakeholder interest in using the software as a solution for specific use cases.  In the LF Edge umbrella organization, projects have to meet the following criteria to achieve stage two:

  • Past community participation met previous growth plans
  • An ongoing flow of merged code commits and other contributions
  • Documented proofs of concept using the software
  • Collaboration with other LF Edge projects
  • Growth plan, including projected feature sets and community targets

Getting Started

Since the project began last spring, it has formed a Technical Steering Committee (TSC) and began hosting public meetings last July.  The TSC then authorized the creation of seven Working Groups, which are responsible for overseeing the daily work of the project.  Last August, a Special Interest Group (SIG) proposal to form a group promoting manufacturing and Industry 4.0 use cases was presented and approved.

Growing the Community

The project has also been actively involved in reaching out to students and universities.  Thanks to funding from the Linux Foundation and Intel, the LFX Mentorship program was able to provide stipends for mentees who complete a term with Linux Foundation projects.  Open Horizon joined the LFX program and was able to mentor four students and early career persons from October to December 2020.  This mentorship program continues, and Open Horizon has just begun the Spring 2021 term with four new mentees.

Using the Platform

Last year, IBM created a commercially supported distribution of Open Horizon that was named IBM Edge Application Manager (IEAM).  We’ve seen Open Horizon, or Open Horizon-based distributions, being used in an autonomous ship, vertical farming solutions, regional climate protection management, shipping ports, and even the International Space Station to deliver applications and related machine learning assets.  And last week HP Inc, Intel, and IBM presented a webinar to invite retailers and vendors to participate in creating an Open Retail Reference Architecture based on the EdgeX Foundry, Open Horizon, and SDO projects.

Creating an Open Edge Computing Framework

The LF Edge organization provides a structure to support open edge computing projects.  This should allow those projects to collectively form a framework for edge computing if they support common standards and interoperability.  The Open Horizon project is working to further that goal, both by working with other LF Edge projects to create end-to-end edge computing solutions and by supporting existing open standards and creating new standards where none exist today.  An example of supporting existing standards is in the area of zero-touch device provisioning, where Open Horizon incorporates SDO services, a reference implementation of the FIDO IoT v1 specification.

Join Us

Now that Open Horizon has demonstrated its value as a platform and a community, it is preparing to expand the community by adding new SIGs, Partners, and contributors to the project.  To work with the project community to shape application delivery and lifecycle management within edge computing, consider attending one of the Working Group meetings, contributing code, working on standards, or even installing the software.

Additional Resources:

How do you manage applications on thousands of Linux hosts?

Written by Glen Darling, contributor to Open Horizon and Advisory Software Engineer at IBM

If they are Debian/Ubuntu-style distros with Docker installed, or Red Hat-style distros with Docker or Podman installed, Open Horizon can make this easy for you. Open Horizon is an LF Edge open source project, originally contributed by IBM, that is designed to manage application containers on potentially very large numbers of edge machines. IBM’s commercial distribution of Open Horizon, for example, supports 30,000 Linux hosts from a single Management Hub!

How can this scale be achieved? On each Linux host (called a “node” in Open Horizon) a small autonomous Open Horizon Agent must be installed. Since these Agents act autonomously, this minimizes the need for connections to the central Open Horizon Management Hub and also minimizes the network traffic required. In fact, the local Agent can continue to perform many of its monitoring and management functions for your applications even when completely disconnected from the Management Hub! Also, the Management Hub never initiates communications with any Agents, so no firewall ports need to be opened on your edge nodes. Each Agent is responsible for its own node, and it reaches out to contact the Management Hub to receive new information as appropriate based on its configuration.

Open Horizon’s design also simplifies operations at scale. No longer will you need to maintain hard-coded lists of nodes and the software that’s appropriate for each of them. When a fleet of devices is diverse, maintaining complex, overlapping software requirements can quickly become unmanageable at even relatively small scale. Instead, Open Horizon lets you specify your intent in “policies”, and then the Agents, in collaboration with the Agreement Robots (AgBots for short) in the Management Hub, will work to achieve your intent across your whole fleet. You can specify policies for individual nodes, and/or policies for your services (collections of one or more containers deployed as a unit), and/or policies to govern deployment of your applications. Policies also enable your large data files, like neural network models (e.g., large edge weight files), to have lifecycles independent of your service containers.
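
To make the policy idea concrete, here is a minimal Go sketch of the property/constraint model; the struct layout and field names are simplified assumptions for illustration, not Open Horizon’s exact schema:

```go
// policy.go – sketch of the property/constraint idea behind Open Horizon
// policies. The struct layout is an illustrative assumption, not the
// project's exact schema.
package main

import (
	"encoding/json"
	"fmt"
)

type Property struct {
	Name  string      `json:"name"`
	Value interface{} `json:"value"`
}

type Policy struct {
	Properties  []Property `json:"properties,omitempty"`
	Constraints []string   `json:"constraints,omitempty"`
}

func main() {
	// A node advertises what it is; a deployment policy states what it needs.
	node := Policy{
		Properties: []Property{
			{Name: "gpu", Value: true},
			{Name: "location", Value: "factory-7"},
		},
	}
	deployment := Policy{
		Constraints: []string{`gpu == true`, `location == "factory-7"`},
	}
	// An AgBot's job is to form an "agreement" wherever a deployment's
	// constraints are satisfied by a node's advertised properties.
	b, _ := json.MarshalIndent(map[string]Policy{
		"node": node, "deployment": deployment,
	}, "", "  ")
	fmt.Println(string(b))
}
```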

I recently presented a hands-on Open Horizon deployment example at Cisco Systems’ “DevNet 2020” developer conference. In this example I showed how to take an existing container, publish it to DockerHub, define Open Horizon artifacts to describe it as a service, and create a simple deployment pattern to deploy it. The example application detects the presence or absence of a facial mask when provided an image over an HTTP REST API. Cisco developers, and most developers working with Linux machines, can use Open Horizon as I did in this example to deploy their software across small or large fleets of machines.
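
To give a flavor of that application’s surface, here is a hypothetical Go sketch of such an HTTP endpoint; the /detect route and response fields are assumptions, and the detection itself is stubbed where real model inference would run:

```go
// maskapi.go – hypothetical sketch of a mask-detection service surface.
// The /detect route and response fields are assumptions, and detect() is
// a stub standing in for real model inference.
package main

import (
	"encoding/json"
	"io"
	"log"
	"net/http"
)

type verdict struct {
	MaskDetected bool    `json:"mask_detected"`
	Confidence   float64 `json:"confidence"`
}

// detect stands in for the real model; it only checks the image is non-empty.
func detect(img []byte) verdict {
	return verdict{MaskDetected: len(img) > 0, Confidence: 0.5}
}

func main() {
	http.HandleFunc("/detect", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			http.Error(w, "POST an image", http.StatusMethodNotAllowed)
			return
		}
		img, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(detect(img))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

A client would then POST an image, e.g. curl --data-binary @face.jpg localhost:8080/detect, and receive a JSON verdict.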

You can view my Cisco DevNet 2020 presentation at the link below:

Additional Resources:

Decentralized ID and Access Management (DIAM) for IoT Networks

Written by Nima Afraz, Connect Centre for Future Networks and Communications, Trinity College Dublin

The Telecom Special Interest Group, in collaboration with the Linux Foundation’s LF Edge initiative, has published a solution brief addressing the issues with centralized ID and Access Management (IAM) in IoT networks and introducing a distributed alternative using Hyperledger Fabric.

The ever-growing number of IoT devices means data vulnerability is an ongoing risk. Existing centralized IoT ecosystems have led to concerns about security, privacy, and data use. This solution brief shows that a decentralized ID and access management (DIAM) system for IoT devices provides the best solution for those concerns, and that Hyperledger offers the best technology for such a system.

The IoT is growing quickly. IDC predicts that by 2025 there will be 55.7 billion connected devices in the world. Scaling and securing a network of billions of IoT devices starts with a robust device identity. Data security also requires a strong access management framework that can integrate and interoperate with existing legacy systems. Each IoT device should carry a unique global identifier and have a profile that governs access to the device.

In this solution brief, we propose a decentralized approach to validate and verify the identity of IoT devices, data, and applications. In particular, we propose using two frameworks from the Linux Foundation: Hyperledger Fabric for the distributed ledger (DLT) and Hyperledger Indy for the decentralized device IDs. These two blockchain frameworks provide the core components to address end-to-end IoT device ID and access management (IAM).
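
As a rough illustration of how device registration might look as Fabric chaincode (the contract, its methods, and the stored profile format are illustrative assumptions, not the PoC’s actual code), consider this sketch using the fabric-contract-api-go package:

```go
// devicecc.go – illustrative Fabric chaincode sketch for registering and
// looking up IoT device identities. Method names and the profile format
// are assumptions, not the solution brief's actual code.
package main

import (
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// DeviceContract manages device IDs and their access profiles on the ledger.
type DeviceContract struct {
	contractapi.Contract
}

// RegisterDevice stores a device's access profile under its global ID.
func (c *DeviceContract) RegisterDevice(ctx contractapi.TransactionContextInterface, id string, profile string) error {
	return ctx.GetStub().PutState(id, []byte(profile))
}

// GetDevice returns the profile previously registered for a device ID.
func (c *DeviceContract) GetDevice(ctx contractapi.TransactionContextInterface, id string) (string, error) {
	data, err := ctx.GetStub().GetState(id)
	if err != nil {
		return "", err
	}
	if data == nil {
		return "", fmt.Errorf("device %s not found", id)
	}
	return string(data), nil
}

func main() {
	cc, err := contractapi.NewChaincode(&DeviceContract{})
	if err != nil {
		panic(err)
	}
	if err := cc.Start(); err != nil {
		panic(err)
	}
}
```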

The Problem: IoT Data Security

Ambitious IoT use cases, including smart transport, imply a massive volume of vehicle-to-vehicle (V2V) and vehicle-to-road communications that must be safeguarded to prevent malicious activity and malfunctions due to single points of failure.

The Solution: Decentralized Identity

IoT devices collect, handle, and act on data as proxies for a wide range of users, such as a human, a government agency, or a multinational enterprise. With tens of billions of IoT devices to be connected over the next few years, numerous IoT devices may represent a single person or institution in multiple roles. And IoT devices may play roles that no one has yet envisioned.

A decentralized ID management system removes the need for any central governing authority and makes way for new models of trust among organizations. All this provides more transparency, improves communications, and saves costs.

The solution is to use Hyperledger technology to create a trusted platform for a telecom ecosystem that can support IoT devices throughout their entire lifecycle and guarantee a flawless customer experience. The solution brief includes a reference architecture and a high-level architecture view of the proof of concept (PoC) that IBM is working on with some enterprise clients. This PoC uses Hyperledger Fabric as described above.

Successful Implementations of Hyperledger-based IoT Networks

IBM and its partners have successfully developed several global supply-chain ecosystems using IoT devices, IoT network services, and Hyperledger blockchain software. Two examples of these implementations are Food Trust and TradeLens.

The full paper is available to read at: https://www.hyperledger.org/wp-content/uploads/2021/02/HL_LFEdge_WhitePaper_021121_3.pdf

Get Involved with the Group

To learn more about the Hyperledger Telecom Special Interest Group, check out the group’s wiki and mailing list. If you have questions, comments or suggestions, feel free to post messages to the list.  And you’re welcome to join any of the group’s upcoming calls to meet group members and learn how to get involved.

Acknowledgements

The Hyperledger Telecom Special Interest Group would like to thank the following people who contributed to this solution brief: Nima Afraz, David Boswell, Bret Michael Carpenter, Vinay Chaudhary, Dharmen Dhulla, Charlene Fu, Gordon Graham, Saurabh Malviya, Lam Duc Nguyen, Ahmad Sghaier Omar, Vipin Rathi, Bilal Saleh, Amandeep Singh, and Mathews Thomas.

LF Edge Member Spotlight: Red Hat

The LF Edge community is comprised of a diverse set of member companies and people that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sit down with Lisa Caywood, Principal Community Architect at Red Hat, to discuss their history in open source, participation in Akraino and the Kubernetes-Native Infrastructure (KNI) Blueprint Family, and the benefits of being a part of the LF Edge ecosystem.

Can you tell us a little about your organization?

Red Hat is the world’s leading provider of enterprise open source solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. We help you standardize across environments, develop cloud-native applications, and integrate, automate, secure, and manage complex environments with award-winning support, training, and consulting services.

Why is your organization adopting an open-source approach?

Open source has always been at the core of Red Hat’s values. As the largest open source company in the world, we believe using an open development model helps create more stable, secure, and innovative technologies.  We’ve spent more than two decades collaborating on community projects and protecting open source licenses so we can continue to develop software that pushes the boundaries of technological ability. For more about our open source commitment or background, visit https://www.redhat.com/en/about/open-source.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

The network edge is the focus of intensive reinvention and investment in the telco industry and beyond. With a wide array of use cases, and equally wide array of technology options for enabling them, supporting adoption of many new technologies and approaches requires having a common forum for working out design and operations guidelines as well as common approaches to interoperability. IoT requirements aren’t strongly featured at the moment, but we foresee opportunities to focus here in the future. In all of the above cases we strongly believe that the market is seeking open source solutions, and the LF Edge umbrella is a key to fostering many of these projects.

What do you see as the top benefits of being part of the LF Edge community?

  • Forum for engaging productively with other members of the Edge ecosystem on interoperability concerns
  • Brings together work done in other Edge-related groups in a framework focused on implementation

What sort of contributions has your team made to the community, ecosystem through LF Edge participation?

We have primarily participated in the development of Akraino Blueprints such as the Kubernetes-Native Infrastructure (KNI) Blueprint Family. Specifically, the KNI Provider Access Edge Blueprint, which leverages best practices and tools from the Kubernetes community to declaratively manage edge computing stacks at scale, with a consistent, uniform user experience from the infrastructure up to the services and from developer environments to production environments, on bare metal or on public cloud; and the KNI Industrial Edge Blueprint. We are also active on the LF Edge Governing Board and other committees that help form and guide the project.

What do you think sets LF Edge apart from other industry alliances?

Close interaction with LF Networking and CNCF communities.

How will LF Edge help your business?

Red Hat’s partners and customers are heading strongly toward RAN and other workload virtualization and containerization, 4G/5G, Edge, MEC, and other variations on those areas. The Linux Foundation Edge umbrella is one of the premier places for organizations focused on creating open source solutions in these areas to converge.

What advice would you give to someone considering joining LF Edge?

As with any open source engagement, prospective members should have clear, concrete and well-documented objectives they wish to achieve as a result of their engagement. These may include elaboration of specific technical capabilities, having a structured engagement strategy with certain key partners, or exploration of a new approach to emerging challenges. Take advantage of onboarding support provided by LF staff and senior contributors in your projects of interest.

To find out more about LF Edge members or how to join, click here.

To learn more about Akraino, click here. Additionally, if you have questions or comments, visit the LF Edge Slack Channel and share your thoughts in the #akraino, #akraino-help, #akraino-tsc and #akraino-blueprints channels.

Managing in the dark: facilities in COVID

Written by Scott Smith,  Industry Principal for Facilities and Data Centers for OSIsoft, an LF Edge member company

This article originally ran on the OSIsoft website – click here for more content like this. 

The job of facilities managers used to be pretty straightforward: keep the lights on, keep people comfortable. It was all about availability, uptime, and comfort. However, the world is changing, and now facility management must respond to failures before people even notice them. They must predict maintenance based on conditions rather than time, and all of this must be done with fewer people and less energy. With the critical role facilities play in the health and safety of occupants, there’s no room for missteps.

It seems like there are not enough hours in a day to accomplish all that we face.  So, what does an ideal scenario look like for facilities managers?

First, we need to step back and realize we will not be adding all the staff we require, so we need to find a substitute.  The first place to look is the data we already collect: if we can use it as our “eyes on” for real-time situational awareness, and enhance it with logic-based analytics to turn it into information, we can shift our resources to critical priorities and ensure the availability of our facilities.

Second, this operational information can notify the facilities team of problems and allow us to be proactive in solutions and communications instead of waiting for complaints to roll in.  We can even go one step further: we can predict our facility performance and use that information to identify underlying issues we might never have found until it was too late.

This is not about implementing thousands of rules and getting overwhelmed with “alarm fatigue,” but about enhancing your own data, based on your own facilities, based on your needs.  It allows you to break down silos and use basic information from meters, building automation and plant systems to provide you an integrated awareness.

University of California, Davis is a great example of this. In 2013, they set the goal of net zero greenhouse gas emissions from the campus’s building and vehicle fleet by 2025. With more than 1,000 buildings comprising 11.3 million square feet, this was no small feat. Beyond the obvious projects, UC Davis turned to the PI System to drive more efficient use of resources. By looking at the right data, UC Davis has been able to better inform their strategy for creating a carbon neutral campus.

UC Davis is a perfect example of simply starting and starting simple. The data will provide guidance on where there are opportunities to meet your objectives.

OSIsoft is an LF Edge member company. For more information about OSIsoft, visit their website.

New Training Program Helps You Set Up an Open Source Program Office

Written by Dan Brown, Senior Manager, Content & Social Media, LF Training

Is your organization considering setting up an open source program office (OSPO) but you aren’t sure where to start? Or do you already have an OSPO but want to learn best practices to make it more effective? Linux Foundation Training & Certification has released a new series of seven training courses designed to help.

The Open Source Management & Strategy program is designed to help executives, managers, software developers and engineers understand and articulate the basic concepts for building effective open source practices within their organization. Developed by Guy Martin, Executive Director of OASIS Open, an internationally recognized standards development and open source projects consortium, the series of online courses provides the foundational knowledge and skills to build out an effective OSPO.

The courses in the program are designed to be modular, so participants only need to take those of relevance to them. This enables organizations to let each member of the team train in the skills they need to be successful in their given roles. The courses included in the program are:

  • LFC202 – Open Source Introduction – covers the basic components of open source and open standards
  • LFC203 – Open Source Business Strategy – discusses the various open source business models and how to develop practical strategies and policies for each
  • LFC204 – Effective Open Source Program Management – explains how to build an effective OSPO and the different types of roles and responsibilities needed to run it successfully
  • LFC205 – Open Source Development Practices – talks about the role of continuous integration and testing in a healthy open source project
  • LFC206 – Open Source Compliance Programs – covers the importance of effective open source license compliance and how to build programs and processes to ensure safe and effective consumption of open source
  • LFC207 – Collaborating Effectively with Open Source Projects – discusses how to work effectively with upstream open source projects and how to get the maximum benefit from working with project communities
  • LFC208 – Creating Open Source Projects – explains the rationale and value for creating new open source projects as well as the required legal, business and development processes needed to launch new projects

The Open Source Management & Strategy program is available to begin immediately. The $499 enrollment fee provides unlimited access to all seven courses for one year, as well as a certificate upon completion. Interested individuals may enroll here. The program is also included in all corporate training subscriptions.

Kubernetes Is Paving the Path for Edge Computing Adoption

Written by Molly Wojcik, Chair of the State of the Edge Landscape Working Group and Marketing Director at Section

For more content like this, please visit the Section website

Since Kubernetes was released five years ago by Google, it has become the standard for container orchestration in the cloud and data center. Its popularity with developers stems from its flexibility, reliability, and scalability to schedule and run containers on clusters of physical or virtual machines (VMs) for a diverse range of workloads.

When it comes to the Infrastructure (or Service Provider) Edge, Kubernetes is increasingly being adopted as a key component of edge computing. As in the cloud, Kubernetes allows organizations to efficiently run containers at the edge in a way that enables DevOps teams to move with greater dexterity and speed by maximizing resources (and spending less time integrating with heterogeneous operating environments). This is particularly important as organizations consume and analyze ever-increasing amounts of data.

A Shared Operational Paradigm

Edge nodes represent an additional layer of IT infrastructure available to enterprises and service providers alongside their cloud and on-premise data center architecture. It is important for admins to be able to manage workloads at the edge layer in the same dynamic and automated way as has become standard in the cloud environment.

As defined by the State of the Edge’s Open Glossary, an “edge-native application” is one which is impractical or undesirable to operate in a centralized data center. In a perfect world, developers would be able to deploy containerized workloads anywhere along the cloud-to-edge continuum to balance the attributes of distributed and centralized computing in areas such as cost efficiencies, latency, security, and scalability.

Ultimately, cloud and edge will work alongside one another, with workloads and applications at the edge being those that have low latency, high bandwidth, and strict privacy requirements. Other distributed workloads that benefit from edge acceleration include Augmented Reality (AR), Virtual Reality (VR), Massively Multiplayer Gaming (MMPG), etc.

There is a need for a shared operational paradigm to automate processing and execution of instructions as operations and data flows back and forth between cloud and edge devices. Kubernetes offers this shared paradigm for all network deployments, allowing policies and rulesets to be applied to the entire infrastructure. Policies can also be made more specific for certain channels or edge nodes that require bespoke configuration.

Kubernetes-Based Edge Architecture

According to a presentation from the Kubernetes IoT Edge Working Group at KubeCon Europe 2019, there are three approaches to using Kubernetes in edge-based architecture to manage workloads and resource deployments.

A Kubernetes cluster involves a master and nodes. The master exposes the API to developers and schedules the deployment of all clusters, including nodes. Nodes contain the container runtime environment (such as Docker), a Kubelet (which communicates with the master), and pods, which are a collection of one or multiple containers. Nodes can be physical machines or virtual machines in the cloud.
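
To illustrate the shared paradigm in code, here is a minimal client-go sketch that pins a pod to edge-labeled nodes using a nodeSelector; the node-role/edge label and the image name are assumptions, since clusters label their edge nodes in different ways:

```go
// edgepod.go – sketch of the shared operational paradigm: the same
// client-go API used in the cloud can pin a pod to an edge node via a
// nodeSelector. The "node-role/edge" label and image are assumptions.
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "edge-worker"},
		Spec: corev1.PodSpec{
			// Only schedule onto nodes carrying the (assumed) edge label.
			NodeSelector: map[string]string{"node-role/edge": "true"},
			Containers: []corev1.Container{
				{Name: "app", Image: "example/edge-app:latest"},
			},
		},
	}
	if _, err := clientset.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("pod scheduled to an edge-labeled node")
}
```

The same program works unchanged whether the cluster is K3s at the edge or full-blown Kubernetes in the cloud, which is exactly the appeal of a shared operational paradigm.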

The three approaches for edge-based scenarios can be summarized as follows:

  1. The whole Kubernetes cluster is deployed within edge nodes. This is useful for instances in which the edge node has low capacity resources or a single-server machine. K3s is the reference architecture for this solution.
  2. The next approach comes from KubeEdge, and involves the control plane residing in the cloud and managing the edge nodes containing containers and resources. This architecture enables optimization in edge resource utilization because it allows support for different hardware resources at the edge.
  3. The third approach is hierarchical cloud plus edge, using a virtual kubelet as reference architecture. Virtual kubelets live in the cloud and contain the abstract of nodes and pods deployed at the edge. This approach allows for flexibility in resource consumption for edge-based architecture.

Section’s Migration to Kubernetes

Section migrated to Kubernetes from a legacy homegrown scheduler last year. Instead of building our own fixed hardware network, Section distributes Kubernetes clusters across a vendor-neutral worldwide network of leading infrastructure providers, including AWS, Google Cloud, Microsoft Azure, Packet, DigitalOcean, CenturyLink, and RackCorp. Kubernetes allows us to be infrastructure-agnostic and seamlessly manage a diverse set of workloads.

Our first-hand experience of the many benefits of Kubernetes at the edge include:

  • Flexible tooling, allowing our developers to interact with the edge as they need to;
  • Our users can run edge workloads anywhere along the edge continuum;
  • Scaling up and out as needed through our vendor-neutral worldwide network;
  • High availability of services;
  • Fewer service interruptions during upgrades;
  • Greater resource efficiency – in particular, we use the Kubernetes Horizontal Pod Autoscaler, which automatically scales the number of pods up or down according to defined latency or volume thresholds (a minimal client-go sketch follows this list);
  • Full visibility into production workloads through the built-in monitoring system;
  • Improved performance.
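
As a concrete example of the autoscaling point above, here is a minimal client-go sketch that creates a Horizontal Pod Autoscaler; names and thresholds are illustrative, and note that the built-in autoscaling/v1 API scales on CPU utilization, while latency- or volume-based thresholds like ours rely on custom metrics via autoscaling/v2:

```go
// hpa.go – minimal sketch of creating a Horizontal Pod Autoscaler with
// client-go. Deployment name and thresholds are illustrative; autoscaling/v1
// scales on CPU, and custom latency/volume metrics need autoscaling/v2.
package main

import (
	"context"
	"log"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	minReplicas, targetCPU := int32(2), int32(70)
	hpa := &autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "edge-app"},
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			// Scale the (assumed) "edge-app" Deployment between 2 and 10 pods,
			// targeting 70% average CPU utilization.
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: "edge-app",
			},
			MinReplicas:                    &minReplicas,
			MaxReplicas:                    10,
			TargetCPUUtilizationPercentage: &targetCPU,
		},
	}
	if _, err := clientset.AutoscalingV1().HorizontalPodAutoscalers("default").
		Create(context.TODO(), hpa, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("autoscaler created")
}
```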

Summary

As more organizations and operators continue to adopt and support Kubernetes-based cloud-edge patterns, the ecosystem will continue to mature. However, not every organization will have the resources and/or expertise to build these systems themselves. This is where edge platforms (like Section) bridge those gaps, offering DevOps teams familiar tooling to take advantage of the benefits that Kubernetes has to offer without the complexities that come along with it.