
Introduction to Project EVE, an open-source edge node architecture

By Blog, Project EVE
Written by Brad Corrion, active member of the Project EVE community and Strategic Architect at Intel 

This article originally ran as a LinkedIn article last month. 

At the inaugural August ZEDEDA Transform conference, I participated in a panel discussion entitled “Edge OSS Landscape, Intro to Project EVE and Bridging to Kubernetes”. Project EVE, an LF Edge project, delivers an open-source edge node on which applications are deployed as either containers or VMs. Some audience members noted that they were pleasantly surprised we didn’t spend an hour talking explicitly about the Project EVE architecture. Instead, we considered many aspects of a container-first edge, community open-source investments, and whether technologies like Kubernetes can be useful for IoT applications.

The Edge Buzz

There’s a lot of buzz around the edge, and many see it as the next big thing after the adoption of cloud computing. Just as the working definition of the cloud has morphed over time and come to be conceptualized as a highly scalable, micro-service-based application style hosted on computing platforms around the globe, so has the edge come to represent a computing style. To embrace the edge means to place your computing power as close as possible to the data it processes, balancing the cost and latency of moving high volumes of data across the network. Most organizations have also used the transition to edge computing to adopt the same “cloud-native” technologies and processes to manage their compute workloads in a similar fashion regardless of where the compute is deployed, be it cloud, data center, or some remote edge environment.

Turning the Problem on its Head

That last part is what enthuses me about the shift to edge computing: we move away from highly curated enterprise IT devices (whose management tends to prevent change and modification) and move towards cloud-like, dynamic, scalable assets (whose management technologies are designed for innovation and for responding to ever-changing circumstances). This is the classic “pets vs. cattle” example, sprinkled with IoT challenges such as systems distributed at the extreme ends of the world, where managing them with an on-site technician requires costly trip charges. The solution turns the problem on its head. It requires organizations, from IT to line of business, to adopt practices of agility and innovation, so that they can manage and deploy solutions to the edge as nimbly as they experiment and innovate in the cloud.

Next up, we discussed the benefits of utilizing open source software for projects like Project EVE. Making an edge node easy to deploy, secure to boot, and remotely manageable is not trivial, and it is not worth competing over. The community, including competitors, can create a solid, open-source edge node architecture, such as Project EVE, and all parties can benefit from the group investment. With a well-accepted, common edge architecture, innovators can focus instead on the applications, the orchestration, and the usability of the edge. Even using the same basic edge node architecture, there is more than ample surface area left to compete on service value, elegance, and solution innovation. Just as the collective investment in Kubernetes is allowing masses of projects and companies to improve nearly every aspect of orchestration without making each project re-invent its basics, we don’t need tens of companies re-inventing secure device deployment, secure boot, and container protection. Get the basics done (the “boring stuff,” as I called it) and focus on the specialization around it.

Can container-first concepts, and projects like Kubernetes, be effective at the edge and in solving IoT problems? Yes. No doubt, using Kubernetes to manage edge nodes differs from using it in the cloud, and some of those challenges include limited power infrastructure, communications and connectivity, and cost considerations. However, these technologies are very adaptable. A container will run as happily on an IoT device, an IoT gateway, or an edge server. How you connect to and orchestrate the containers on those devices will vary. Most edges won’t need a local Kubernetes cluster; a distant Kubernetes infrastructure could remotely orchestrate a local edge node or local edge cluster.

Infrastructure as Code

A common theme in edge orchestration architectures is local orchestration, which gives a small orchestrator running at the edge enough power to make decisions while offline from a central orchestrator. Open Horizon, which IBM recently open-sourced to LF Edge, is designed to bridge traditional cloud orchestration to edge devices with a novel distributed policy engine that keeps executing and responding to changing conditions even when disconnected. Adopting an “infrastructure as code” mentality gives administrators a high degree of configuration and control over the target environment, even across remote networks. There is high confidence in the resulting infrastructure configurations, but variability as to when the changes are received, due to bandwidth considerations. Could this be used on oil and gas infrastructure in the jungles of Ecuador? Yes. However, the challenge is in deciding which of the philosophies and projects are best suited to the needs of the situation.
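
To make the desired-state idea concrete, below is a minimal sketch of the reconciliation pattern such policy engines build on: the node keeps a cached copy of the desired configuration and keeps converging toward it, whether or not the central orchestrator is reachable. Every name in it (the node, the workloads, the runtime stub) is a hypothetical illustration, not Open Horizon's actual API.

```python
# Minimal sketch of the desired-state ("infrastructure as code") pattern.
# All names are hypothetical illustrations, not Open Horizon's actual API.
import time

desired_state = {
    "node": "pump-station-7",
    "workloads": [
        {"name": "vibration-analytics", "image": "registry.example.com/vib:1.4"},
        {"name": "protocol-gateway", "image": "registry.example.com/gw:2.0"},
    ],
}

def running_workloads():
    """Ask the local container runtime what is actually running (stubbed)."""
    return {"protocol-gateway": "registry.example.com/gw:2.0"}

def reconcile(desired):
    """Converge actual state toward the cached desired state, even offline."""
    actual = running_workloads()
    for w in desired["workloads"]:
        if actual.get(w["name"]) != w["image"]:
            print(f"(re)deploying {w['name']} -> {w['image']}")

for _ in range(3):       # a real agent loops forever; bounded for the sketch
    reconcile(desired_state)
    time.sleep(30)       # poll interval; new desired state arrives when bandwidth allows
```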

If you find yourself architecting your own edge node or using a simple Linux installation while kicking the security can down the road, or if you are otherwise bogged down by remote manageability challenges when you’d rather be innovating and solving domain-specific problems, look to the open-source community, and specifically to projects like Project EVE, to give you a leg up on your edge architecture.

Please feel free to connect with me and start a conversation about this article. Here are some additional resources:

  • Explore how Intel is connecting open source communities with the retail market through the Open Retail Initiative
  • Be sure to explore Project EVE, Open Horizon, EdgeX Foundry and more projects at the LF Edge

Where the Edges Meet: Public Cloud Edge Interface

By Akraino, Akraino Edge Stack, Blog

Written by Oleg Berzin, Ph.D., a member of the Akraino Technical Steering Committee and Senior Director Technology Innovation at Equinix

Introduction

Why 5G

5G will provide significantly higher throughput than existing 4G networks. Currently, 4G LTE is limited to around 150 Mbps. LTE Advanced increases the data rate to 300 Mbps and LTE Advanced Pro to 600 Mbps-1 Gbps. 5G downlink speeds can reach up to 20 Gbps. 5G can use multiple spectrum options, including low band (sub-1 GHz), mid-band (1-6 GHz), and mmWave (28 and 39 GHz). The mmWave spectrum has the largest available contiguous bandwidth capacity (~1000 MHz) and promises dramatic increases in user data rates. 5G also enables advanced air interface formats and transmission scheduling procedures that decrease access latency in the Radio Access Network by a factor of 10 compared to 4G LTE.

The Slicing Must Go On

Among advanced properties of the 5G architecture, Network Slicing enables the use of 5G network and services for a wide variety of use cases on the same infrastructure. Network Slicing (NS) refers to the ability to provision a common physical system to provide resources necessary for delivering service functionality under specific performance (e.g. latency, throughput, capacity, reliability) and functional (e.g. security, applications/services) constraints.

Network Slicing is particularly relevant to the subject matter of the Public Cloud Edge Interface (PCEI) Blueprint. As shown in the figure below, there is a reasonable expectation that applications enabled by 5G performance characteristics will need access to diverse resources. These include conventional traffic flows, such as access from mobile devices to the core clouds (public and/or private) and general access to the Internet; edge traffic flows, such as low-latency/high-speed access to edge compute workloads placed in close physical proximity to the User Plane Functions (UPF); and hybrid traffic flows that require a combination of the above for distributed applications (e.g. online gaming, AI at the edge). A very important point is that the network slices provisioned in the mobile network must extend beyond the N6/SGi interface of the UPF all the way to the workloads running on the edge computing hardware and on the Public/Private Cloud infrastructure. In other words, “The Slicing Must Go On” in order to ensure continuity of intended performance for the applications.


The Mobile Edge

The technological capabilities defined by the standards organizations (e.g. 3GPP, IETF) are the necessary conditions for the development of 5G. However, the standards and protocols are not sufficient on their own. The realization of the promises of 5G depends directly on the availability of the supporting physical infrastructure as well as the ability to instantiate services in the right places within the infrastructure.

Latency is a very good example to illustrate this point. One of the most intriguing possibilities with 5G is the ability to deliver very low end-to-end latency. A common example is the 5 ms round-trip device-to-application latency target. If we look closely at this latency budget, it is not hard to see that achieving it requires a new physical aggregation infrastructure. This is because the 5 ms budget includes all radio/mobile core, transport, and processing delays on the path between the application running on the User Equipment (UE) and the application running on the compute/server side. Given that at least 2 ms will be consumed by the “air interface”, the remaining 3 ms is all that is left for radio/packet core processing, network transport, and compute/application processing. The figure below illustrates an example of the end-to-end latency budget in a 5G network.
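
As a back-of-the-envelope check, the arithmetic from this paragraph can be written out directly; the 100 km transport distance below is an illustrative assumption, not a figure from the architecture:

```python
# Back-of-the-envelope 5G round-trip latency budget, using the numbers above.
total_budget_ms = 5.0     # target device-to-application round trip
air_interface_ms = 2.0    # minimum consumed by the radio "air interface"

remaining_ms = total_budget_ms - air_interface_ms   # 3.0 ms for everything else
# Light in fiber covers roughly 5 us per km one way (~10 us/km round trip),
# so transport distance eats the budget quickly. 100 km is an assumed example.
transport_km = 100
transport_ms = transport_km * 0.010                 # ~1.0 ms round trip
print(f"Left for core + compute after {transport_km} km of fiber: "
      f"{remaining_ms - transport_ms:.1f} ms")
```

This is why the aggregation infrastructure must sit physically close to the radio: a few hundred kilometers of fiber alone would exhaust the remaining budget.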

The Edge-in and Cloud-out Effect

Public Cloud Service Providers and 3rd-Party Edge Compute (EC) Providers are deploying Edge instances to better serve their end-users and applications. A multitude of these applications require close inter-working with Mobile Edge deployments to meet predictable latency, throughput, reliability, and other requirements.

The ability to interface and exchange information through open APIs will allow competitive offerings for Consumer, Enterprise, and Vertical Industry end-user segments. These APIs are not limited to providing basic connectivity services but will include the ability to deliver predictable data rates, predictable latency, reliability, service insertion, security, AI and RAN analytics, network slicing, and more.

These capabilities are needed to support a multitude of emerging applications such as AR/VR, Industrial IoT, autonomous vehicles, drones, Industry 4.0 initiatives, Smart Cities, and Smart Ports. Other APIs will include exposure to edge orchestration and management, Edge monitoring (KPIs), and more. These open APIs will be the foundation for service and instrumentation capabilities when integrating with public cloud development environments.

Public Cloud Edge Interface (PCEI)

Overview

The purpose of the Public Cloud Edge Interface (PCEI) Blueprint family is to specify a set of open APIs for enabling Multi-Domain Inter-working across functional domains that provide Edge capabilities/applications and require close cooperation between the Mobile Edge, the Public Cloud Core and Edge, and the 3rd-Party Edge functions, as well as the underlying infrastructure such as Data Centers, Compute hardware, and Networks. The compute hardware is optimized and power-efficient for the Edge, such as the Arm64 architecture.

The high-level relationships between the functional domains are shown in the figure below:

The Data Center Facility (DCF) Domain. The DCF Domain includes Data Center physical facilities that provide the physical location and the power/space infrastructure for other domains and their respective functions.

The Interconnection of Core and Edge (ICE) Domain. The ICE Domain includes the physical and logical interconnection and networking capabilities that provide connectivity between other domains and their respective functions.

The Mobile Network Operator (MNO) Domain. The MNO Domain contains all Access and Core Network Functions necessary for signaling and user plane capabilities to allow for mobile device connectivity.

The Public Cloud Core (PCC) Domain. The PCC Domain includes all IaaS/PaaS functions that are provided by the Public Clouds to their customers.

The Public Cloud Edge (PCE) Domain. The PCE Domain includes the PCC Domain functions that are instantiated in the DCF Domain locations that are positioned closer (in terms of geographical proximity) to the functions of the MNO Domain.

The 3rd party Edge (3PE) Domain. The 3PE domain is in principle similar to the PCE Domain, with a distinction that the 3PE functions may be provided by 3rd parties (with respect to the MNOs and Public Clouds) as instances of Edge Computing resources/applications.

Architecture

The PCEI Reference Architecture and the Interface Reference Points (IRP) are shown in the figure below. For the full description of the PCEI Reference Architecture please refer to the PCEI Architecture Document.

Use Cases

The PCEI working group identified the following use cases and capabilities for Blueprint development:

  1. Traffic Steering/UPF Distribution/Shunting capability — distributing User Plane Functions to the appropriate Data Center Facilities, on qualified compute hardware, for routing traffic to the desired applications and network/processing functions.
  2. Local Break-Out (LBO) – Examples: video traffic offload, low latency services, roaming optimization.
  3. Location Services — location of a specific UE, or identification of UEs within a geographical area, facilitation of server-side application workload distribution based on UE and infrastructure resource location.
  4. QoS acceleration/extension – provide low latency, high throughput for Edge applications. Example: provide continuity for QoS provisioned for subscribers in the MNO domain, across the interconnection/networking domain for end-to-end QoS functionality.
  5. Network Slicing provisioning and management – providing continuity for network slices instantiated in the MNO domain, across the Public Cloud Core/Edge as well as the 3rd-Party Edge domains, offering dedicated resources specifically tailored to application and functional (e.g. security) needs.
  6. Mobile Hybrid/Multi-Cloud Access – provide multi-MNO, multi-Cloud, multi-MEC access for mobile devices (including IoT) and Edge services/applications.
  7. Enterprise Wireless WAN access – provide high-speed Fixed Wireless Access to enterprises with the ability to interconnect to Public Cloud and 3rd-Party Edge Functions, including Network Functions such as SD-WAN.
  8. Distributed Online/Cloud Gaming.
  9. Authentication – provided as service enablement (e.g., two-factor authentication) used by most OTT service providers 
  10. Security – provided as service enablement (e.g., firewall service insertion)

The initial focus of the PCEI Blueprint development will be on the following use cases:

  • User Plane Function Distribution
  • Local Break-Out of Mobile Traffic
  • Location Services

User Plane Function Distribution and Local Break-Out

The UPF Distribution use case distinguishes between two scenarios:

  • UPF Interconnection. The UPF/SPGW-U is located in the MNO network and needs to be interconnected on the N6/SGi interface to 3PE and/or PCE/PCC.
  • UPF Placement. The MNO wants to instantiate a UPF/SPGW-U in a location that is different from their network (e.g. Customer Premises, 3rd-Party Data Center).

UPF Interconnection Scenario

UPF Placement Scenario

UPF Placement, Interconnection and Local Break-Out Examples

Location Services (LS)

This use case targets obtaining the geographic location of a specific UE from the 4G/5G network, identifying UEs within a geographical area, and facilitating server-side application workload distribution based on UE and infrastructure resource location.

 

Acknowledgements

Project Technical Lead: Oleg Berzin

Committers: Suzy Gu, Tina Tsou, Wei Chen, Changming Bai, Alibaba; Jian Li, Kandan Kathirvel, Dan Druta, Gao Chen, Deepak Kataria, David Plunkett, Cindy Xing

Contributors: Arif, Jane Shen, Jeff Brower, Suresh Krishnan, Kaloom, Frank Wang, Ampere

Exploration and Practices of Edge Computing: Cloud Managing Containerized Devices

By Blog, EdgeX Foundry, Industry Article, Trend

Written by Gavin Lu, LF Edge member, EdgeX Foundry China Project Lead and R&D Director in the VMware Office of the CTO

As an industry leader with vast experience and knowledge, Gavin has been writing a series of articles focused on edge computing. These articles appear on his personal blog and are reposted here with his permission. To read more content from Gavin, visit his website.

Introduction

The previous article introduced the cloud-managed virtualization device solution. This article describes Project Nebula, which provides unified management of containerized devices, edge applications, and data analysis cloud services.

Nebula Architecture

Project Nebula is designed based on the following key ideas:

  • Agnostic to device CPU architecture, supporting both x86 and ARM;
  • Agnostic to edge application frameworks, supporting EdgeX Foundry and other frameworks that can be packaged and run;
  • Agnostic to data analytics services, supporting on-premise and cloud deployment;
  • Support small- to large-scale deployments;
  • Support an end-to-end multi-tenant operation model from device to cloud.
EdgeX Foundry Architecture

Nebula supports the EdgeX Foundry framework, and we have already published a live test bed at https://18.189.42.126/. Those who are interested in Nebula can contact yixingj@vmware.com to register for a trial and receive installation and user guides with detailed information.

Nebula Demo

Installation

Nebula has a containerized micro-service architecture and is installed by default from an OVA package. Similar to the Pallas architecture introduced in the previous article, although the Nebula package is encapsulated in an OVA, it does not depend on any specific virtualization infrastructure or cloud platform. Technically, it could be converted to other formats, or installed on any cloud platform that supports the OVA format.

The basic resource requirements of Nebula are:

  • CPU: 2 virtual CPU cores
  • Memory: 8GB
  • Storage: 150GB

Its installation process is similar to that of any other OVA, and users can log in as the administrator after completion.

Nebula Service Console

Vendor Portal

After the installation is complete, users can log in to the vendor portal as an administrator, using the address shown in the VM console above, and perform user management.

Nebula Management Portal

In Nebula, edge application services are defined as follows: a Service can contain multiple Versions, and a Version contains multiple Service Components.

Edge Service Hierarchy
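
A minimal sketch of this hierarchy in code may help; the field names below are illustrative assumptions for the sketch, not Nebula's actual schema:

```python
# Illustrative model of Nebula's edge service hierarchy. Field names are
# assumptions for this sketch, not Nebula's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceComponent:
    name: str             # e.g. one EdgeX Foundry container
    image: str            # container image reference
    cpu_arch: str         # "x86_64" or "arm64"
    memory_mb: int
    startup_order: int    # start ordering among dependent components

@dataclass
class Version:
    label: str                                    # e.g. "1.2.0"
    components: List[ServiceComponent] = field(default_factory=list)

@dataclass
class Service:
    name: str
    versions: List[Version] = field(default_factory=list)

svc = Service("edgex-demo", [Version("1.0", [
    ServiceComponent("core-data", "edgexfoundry/core-data:1.2", "arm64", 512, 1),
])])
```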

For each service created, it is necessary to specify parameters and resource requirements such as version, CPU platform, memory, storage, and network, to facilitate verification across full life-cycle management.

Vendors can upload a set of EdgeX Foundry applications packaged in container images, and define categories, dependencies between containers, resource parameters, startup order, and parameters of connected data analysis cloud services.

After the release, users can see and deploy these edge services.

Device Registration

Before users actually deploy EdgeX Foundry applications, they must first register the devices they will use with their Nebula accounts.

Users download the Nebula agent package, nebulacli.tar, and run it on the device to complete the registration. This registration step can be performed manually, or it can be automated in batch operations for OEMs.

./install.sh init -u user-account -p user-account-password -n user-device-name

User Portal

After completing device registration, users can install and manage EdgeX Foundry or other edge applications that vendors have released in advance on the Nebula service. Users can find the appropriate applications in the catalog.

After selecting an application, users can further specify deployment parameter settings in the drag-and-drop wizard; these map to the parameter values defined by the vendor earlier.

After all parameters are set, the actual deployment can be carried out, either in batch or multiple times to multiple devices. After deploying EdgeX Foundry applications, users can monitor device resources and application runtime status in real time.

Nebula provides complete RESTful API documentation, with which users can automate operations to deploy EdgeX Foundry applications at large scale.
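
As a flavor of what such automation could look like, here is a hypothetical sketch in Python; the endpoint paths, payload fields, and authentication scheme are assumptions derived from the workflow described above, not documented Nebula routes:

```python
# Hypothetical automation against a RESTful management API, in the spirit of
# the Nebula workflow above. Endpoints, fields, and auth are assumptions.
import requests

BASE = "https://nebula.example.com/api/v1"           # placeholder address
session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"  # assumed auth scheme

devices = session.get(f"{BASE}/devices").json()      # registered devices
for dev in devices:
    resp = session.post(f"{BASE}/deployments", json={
        "device_id": dev["id"],
        "service": "edgex-demo",
        "version": "1.0",
        "parameters": {"mqtt_broker": "10.0.0.5"},   # vendor-defined params
    })
    resp.raise_for_status()                          # fail fast on errors
```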

Next

From the second article through this one, I introduced the basic methods of building and managing virtualized and containerized devices from the cloud. But I did not answer the question of how to deal with single-point device failure. The next article will introduce device clusters on a hyper-converged architecture, in contrast to the traditionally inflexible and inefficient full-redundancy or external NAS solutions.

Pushing AI to the Edge (Part One): Key Considerations for AI at the Edge

By Blog, LF Edge, Project EVE, State of the Edge, Trend

Q&A with Jason Shepherd, LF Edge Governing Board member and VP of Ecosystem at ZEDEDA

This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

This two-part blog provides more insights into what’s becoming a hot topic in the AI market — the edge. To discuss more on this budding space, we sat down with our Vice President of ecosystem development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice and some trends for the future.


Chart defining the categories within the edge, as defined by LF Edge

Image courtesy of LF Edge

Finalists for the 2020 Edge Woman of the Year Award!

By Blog, State of the Edge

Written by Candice Digby, Partner and Events Manager at Vapor IO, an LF Edge member and active community member in the State of the Edge project

Last year’s Edge Woman of the Year winner Farah Papaioannou is ready to pass the torch.

“I was honored to have been chosen as Edge Woman of the Year 2019 and to be recognized alongside many inspiring and innovative women across the industry,” said Farah Papaioannou, Co-Founder and President of Edgeworx, Inc. “I am thrilled to pay that recognition forward and participate in announcing this year’s Edge Woman of the Year 2020 finalist categories; together we have a lot to accomplish.”

(left to right) Matt Trifiro, Farah Papaioannou, Gavin Whitechurch

With more nominations in the 2nd annual competition, it was difficult for State of the Edge and Edge Computing World to select only ten top finalists. The Edge Woman of the Year 2020 nominees represent industry leaders in roles that are impacting the direction of their organization’s strategy, technology or communications around edge computing, edge software, edge infrastructure or edge systems.

The Edge Woman of the Year Award represents a long-term industry commitment to highlight the growing importance of the contributions and accomplishments made by women in edge computing. The award is presented at the annual Edge Computing World event, which gathers the whole edge computing ecosystem, from network to cloud and application to infrastructure, including end-users and developers, while also sharing edge best practices.

The annual Edge Woman of the Year Award is presented to female and non-binary professionals in edge computing for outstanding performance in roles elevating the Edge. The 2020 award committee selected the following 10 finalists for their excellent work in the named categories:

  • Leadership in Edge Startups
    • Kathy Do, VP, Finance and Operations at MemVerge
  • Leadership in Edge Open Source Contributions
    • Malini Bhandaru, Open Source Lead for IoT & Edge at VMware
  • Leadership in Edge at a Large Organization
    • Jenn Didoni, Head of Cloud Portfolio at Vodafone Group Business
  • Leadership in Edge Security
    • Ramya Ravichandar, VP of Product Management at FogHorn
  • Leadership in Edge Innovation and Research
    • Kathleen Kallot, Director, AI Ecosystem, Arm
  • Leadership in Edge Industry and Technology
    • Fay Arjomandi, Founder and CEO, mimik technology, Inc.
  • Leadership in Edge Best Practices
    • Nurit Sprecher, Head of Management & Virtualization Standards, Nokia
  • Leadership in Edge Infrastructure
    • Meredith Schuler, Financial & Strategic Operations Manager, SBA Edge
  • Overall Edge Industry Leadership
    • Nancy Shemwell, Chief Operating Officer, Trilogy Networks, Inc.
  • Leadership in Executing Edge Strategy
    • Angie McMillin, Vice President and General Manager, IT Systems, Vertiv

The “Top Ten Women in Edge” finalists were selected from nominations submitted by experts in edge from around the world. The final winner will be chosen by a panel of industry judges. The winner of the Edge Woman of the Year 2020 will be announced during this year’s Edge Computing World, being held virtually October 12-15, 2020.

For more information on the Women in Edge Award visit: https://www.lfedge.org/2020/08/25/state-of-the-edge-and-edge-computing-world-announce-finalists-for-the-2020-edge-woman-of-the-year-award/

 

Akraino’s AI Edge-School/Education Video Security Monitoring Blueprint

By Akraino, Akraino Edge Stack, Blog, Use Cases

Written by Hechun Zhang, Staff Systems Engineer at Baidu, Akraino TSC member, and PTL of the AI Edge Blueprint; and Tina Tsou, Enterprise Architect at Arm and Akraino TSC Co-Chair

To support end-to-end edge solutions, the Akraino community uses blueprint concepts to address specific edge use cases. A Blueprint is a declarative configuration of the entire stack, i.e., an edge platform that can support edge workloads and edge APIs. To address specific use cases, a reference architecture is developed by the community.

The School/Education Video Security Monitoring Blueprint belongs to the AI Edge Blueprint family. It focuses on establishing an open source MEC platform combined with AI capabilities at the Edge. In this blueprint, the latest technologies and frameworks, such as a micro-service framework, Kata Containers, 5G acceleration, and open APIs, have been integrated to build an industry-leading edge cloud architecture that can provide comprehensive computing acceleration support at the edge. With this MEC platform, Baidu has expanded AI implementation across products and services to improve safety and engagement in places such as factories, industrial parks, catering services, and classrooms that rely on AI-assisted surveillance.

Value Proposition

  • Establish an open-source edge infrastructure on which each member company can develop its own AI applications, e.g. video security monitoring.
  • Contribute use cases which help customers adopt video security monitoring, AI city, 5G V2X, and Industrial Internet applications.
  • Collaborate with members who can work together to figure out the next big thing for the industry.

Use cases

Improved Student-Teacher Engagement

 

By training deep learning models on video data from classrooms, school management can evaluate class engagement and analyze individual students’ concentration levels to improve teaching in real time.

Enhanced Factory Safety and Protection

Real-time monitoring helps detect factory workers who might be missing safety gear, such as helmets and safety gloves, to prevent hazardous accidents in the workplace. Companies can monitor safety in a comprehensive and timely way, and use the findings as a reference for strengthening safety management.

Reinforced Hygiene and Safety in Catering

Through monitoring staff behavior in the kitchen, such as smoking breaks and cell phone use, this solution ensures the safety and hygiene of the food production process.

Advanced Fire Detection and Prevention

Linked and networked smoke detectors in densely populated places, such as industrial parks and community properties, can help quickly detect and alert authorities to fire hazards and accidents.

Network Architecture

OTE-Stack is an edge computing platform for 5G and AI. Through virtualization it shields heterogeneous hardware characteristics and provides unified access to the cloud edge, mobile edge, and private edge. For AI it provides low-latency, high-reliability, and cost-optimized computing support at the edge through cluster management and intelligent scheduling of multi-tier clusters. At the same time, OTE-Stack makes device-edge-cloud collaborative computing possible.

Baidu implemented the video security monitoring blueprint on Arm infrastructure, including cloud-edge servers, hardware accelerators, and custom CPUs designed for world-class performance. Arm and Baidu are members of the Akraino project and use an edge cloud reference stack of networking platforms and cloud-edge servers built on Arm Neoverse. The Arm Neoverse architecture supports a vast ecosystem of cloud-native applications and, combined with the AI Edge blueprint, forms an open source mobile edge computing (MEC) platform optimized for sectors such as safety, security, and surveillance.

“Open source has now become one of the most important culture and strategies respected by global IT and Internet industries. As one of the world’s top Internet companies, Baidu has always maintained an enthusiastic attitude in open source, actively contributing the cutting edge products and technologies to the Linux foundation. Looking towards the future, Baidu will continue to adhere to the core strategy of open source and cooperate with partners to build a more open and improved ecosystem.” — Ning Liu, Director of AI Cloud Group, Baidu

In the 5G era, OTE-Stack has obvious advantages in the field of edge computing:

  • Large scale and hierarchical cluster management
  • Support for third-party clusters
  • Lightweight cluster controller
  • Cluster autonomy
  • Automatic disaster recovery
  • Global scheduling
  • Support for multiple runtimes
  • Kubernetes native support

For more information about this Akraino Blueprint, click here.  For general information about Akraino Blueprints, click here.

Breaking Down the Edge Continuum

By Blog, State of the Edge, Trend, Use Cases

Written by Kurt Rinehart, Director of Data Science at Section. This blog originally ran on the Section website. For more content like this, please click here.

There are many definitions of “the edge” out there. Sometimes it can seem as if everyone has their own version.

LF Edge, an umbrella organization that brings together industry leaders to build “an open source framework for the edge,” has a number of edge projects under its remit, each of which seeks to unify the industry around coalescing principles and thereby accelerate open source edge computing developments. Part of its mission is to define what the edge is, providing an invaluable resource for the edge community to coalesce around.


In 2018, State of the Edge (which recently became an official project of LF Edge) put out its inaugural report, defining the edge using four criteria:

  • “The edge is a location not a thing;
  • There are lots of edges, but the edge we care about today is the edge of the last mile network;
  • This edge has two sides: an infrastructure edge and a device edge;
  • Compute will exist on both sides, working in coordination with the centralized cloud.”

Since that inaugural report, much has evolved within the edge ecosystem. The latest white paper from LF Edge, Sharpening the Edge: Overview of the LF Edge Taxonomy and Framework, expands on these definitions and moves on from simply defining two sides (the infrastructure and the device edge) to use the concept of an edge continuum.

The Edge Continuum

The concept of the edge continuum describes the distribution of resources and software stacks between centralized data centers and deployed nodes in the field as “a path, on both the service provider and user sides of the last mile network.”

In almost the same breath, LF Edge also describes edge computing as essentially “distributed cloud computing, comprising multiple application components interconnected by a network.”

We typically think of “the edge” or “the edges” in terms of the physical devices or infrastructure where application elements run. However, the idea of a path between the centralized cloud (also referred to as “the cloud edge” or “Internet edge”) and the device edge instead allows for the conceptualization of multiple steps along the way.

The latest white paper concentrates on two main edge categories within the edge continuum: the Service Provider Edge and the User Edge (each of which is broken down into further subcategories).

edge continuum diagram
Image source: LF Edge

The Service Provider Edge and the User Edge

LF Edge positions devices at one extreme of the edge continuum and the cloud at the other.

Next along the line of the continuum after the cloud, also described as “the first main edge tier”, is the Service Provider (SP) Edge. Similarly to the public cloud, the infrastructure that runs at the SP Edge (compute, storage and networking) is usually consumed as a service. In addition to the public cloud, there are also cellular-based solutions at the SP Edge, which are typically more secure and private than the public cloud, as a result of the differences between the Internet and cellular systems. The SP Edge leverages substantial investments by Communications Service Providers (CSPs) into the network edge, including hundreds of thousands of servers at Points of Presence (PoPs). Infrastructure at this edge tier is largely more standardized than compute at the User Edge.

The second top-level edge tier is the User Edge, which is on the other side of the last mile network. It represents a wider mix of resources in comparison to the SP Edge, and “as a general rule, the closer the edge compute resources get to the physical world, the more constrained and specialized they become.” In comparison to the SP Edge and the cloud where resources are owned by these entities and shared across multiple users, resources at the User Edge tend to be customer-owned and operated.

Moving from the Cloud to the Edge

What do we mean when we talk about moving from the cloud to the edge? Each stage along the edge continuum takes you progressively closer to the end user. You have high latency and more compute in the centralized cloud versus low latency and less compute as you get closer to the User Edge. When we talk about moving from the cloud to the edge, we mean leveraging the whole stack rather than focusing solely on the centralized cloud.

Let’s look at the most obvious use case: content delivery networks (CDNs). In the 1990s, Akamai created content delivery networks to allow localized websites to serve a global audience. A website based in New York could leverage Akamai’s distributed network of proxy servers and data centers around the world to store its static assets globally, including HTML, CSS, JavaScript, video, and images. By caching these in Akamai’s distributed global points of presence (PoPs), the website’s end users worldwide were guaranteed high availability and consistent performance.

These days, CDNs are considered to be only one layer in a highly complex Internet ecosystem. Content owners such as media companies and e-commerce vendors continue to pay CDN operators to deliver their content to end users. In turn, a CDN pays ISPs, carriers, and network operators for hosting its servers in their data centers. That’s the Service Provider Edge we’re talking about.

An edge compute platform is still a geographically distributed network, but instead of simply providing proxy servers and data centers, an edge compute platform also offers compute. How do we define this? Compute can be defined as many things, but essentially it boils down to the ability to run workloads wherever you need to run them. Compute still gives you high availability and performance, but it also provides the capability to run packaged and custom workloads positioned close to users.

An edge compute platform leverages all available compute between the cloud provider and the end user, together with DevOps practices, to deliver traditional CDN and custom workloads.

Applying Lessons from the Cloud to the Edge

We can take the lessons we’ve learned in the cloud and apply them to the edge. These include:

  • Flexibility – At Section, we describe this as wanting to be able to run “any workload, anywhere”, including packaged and customized workloads;
  • Taking a multi-provider approach to deployments – This offers the opportunity to create a higher layer of abstraction. Infrastructure as Code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files as opposed to physical hardware configuration or interactive configuration tools. At Section, we have 6-7 different providers, from cloud providers to boutique providers to bare metal providers.
  • Applying DevOps practices – In order to provide the capabilities that the cloud has at the infrastructure edge, we need to enable developers to get insight and to run things at the edge at speed, just as they did in the cloud. This is DevOps. It’s important to be able to apply DevOps practices here since, “if you build it, you own it”. You want to make things open, customizable, and API-driven with integrations, so that developers can leverage and build on top of them.
  • Leveraging containerized workloads – Deploying containers at the edge involves multiple challenges, particularly around connectivity, distribution and synchronization, but it can be done, and doing so allows you to leverage this architecture to deploy your own logic, not just pre-packaged workloads. Containerization also offers:
    • Security
    • Standardization
    • Isolation; and
    • A lightweight footprint.
  • Insights and Visibility – We need to give developers deep, robust insight into what’s happening at the edge, just as we do in the cloud. The three pillars of observability are logs, metrics and tracing. An ELK stack can provide this, giving developers the invaluable ability to understand what is happening when things inevitably go wrong (a minimal logging sketch follows this list).
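
As one concrete illustration of the logging pillar, here is a minimal sketch of the kind of structured, correlated log lines an edge node might ship into an ELK stack; the field names are illustrative, not a required schema:

```python
# Minimal sketch of structured logging at the edge, suitable for shipping
# into an ELK stack. Field names are illustrative, not a required schema.
import json
import logging
import time
import uuid

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "ts": time.time(),
            "level": record.levelname,
            "msg": record.getMessage(),
            # correlates one request as it hops between edge and cloud
            "trace_id": getattr(record, "trace_id", None),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("edge-node")
log.addHandler(handler)
log.setLevel(logging.INFO)

trace_id = str(uuid.uuid4())   # one ID per request, passed along with it
log.info("cache miss, fetching from origin", extra={"trace_id": trace_id})
```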

Edge Computing Use Cases in the Wild

There are many examples of use cases already operating at the Edge. A few of the many interesting ones out there include:

  • Facebook Live – When you click on a live stream in your feed, you are requesting the manifest. If the manifest isn’t already on your local PoP, the request travels to the data center to get it and then fetches the media files in 1-second clips. ML algorithms operate on the 1-second clips to optimize them in real time to deliver the best, fastest experience for users.
  • Cloudflare Workers – These are Service Worker API implementations for the Cloudflare platform. They take a server-side approach to running JavaScript workloads on Cloudflare’s global network.
  • Chick-fil-A – A surprising one. Chick-fil-A has been pushing into the device edge over the last couple of years. Each of their roughly 2,000 stores runs a Kubernetes cluster on site. The goal: “low latency, Internet-independent applications that can reliably run our business”, in addition to high availability for these applications, a platform that enables rapid innovation, and the ability to horizontally scale.

We’re Not Throwing Away the Cloud

One last thing to make clear: we’re not talking about throwing away the cloud. The cloud is going nowhere. We will be working alongside it, using it. What we’re talking about is moving the boundary of our applications out of the cloud closer to the end user, into the compute that is available there. And, as we’ve seen, we don’t need to throw away the lessons we’ve learned in the cloud; we can still use the tools that we’re used to, plus gain all the advantages that the edge continuum has to offer.

You can download the LF Edge taxonomy white paper here. You can also watch the LF Edge Taxonomy Webinar, which shares insight from the white paper, on our YouTube channel. Click here to watch it now.

State of the Edge and Edge Computing World Announce Finalists for the 2020 Edge Woman of the Year Award

By Announcement, State of the Edge


Edge Computing Industry Comes Together to Recognize Top Ten Women Shaping the Future of Edge

SAN FRANCISCO – August 17, 2020 – Edge computing leaders from State of the Edge and Edge Computing World announce the top ten finalists for the second annual Edge Woman of the Year Award 2020. The Edge Woman of the Year 2020 nominees represent industry leaders in roles that are impacting the direction of their organization’s strategy, technology or communications around edge computing, edge software, edge infrastructure or edge systems. The “Top Ten Women in Edge” finalists were selected from nominations submitted by experts in edge from around the world. The final winner will be chosen by a panel of industry judges, including the previous Edge Woman of the Year winner, Farah Papaioannou. The winner of the Edge Woman of the Year 2020 will be announced during this year’s Edge Computing World, being held virtually October 12-15, 2020.

“I was honored to have been chosen as Edge Woman of the Year 2019 and to be recognized alongside many inspiring and innovative women across the industry,” said Farah Papaioannou, Co-Founder and President of Edgeworx, Inc. “I am thrilled to pay that recognition forward and participate in announcing this year’s Edge Woman of the Year 2020 finalist categories; together we have a lot to accomplish.”

The Edge Woman of the Year Award represents a long-term industry commitment to highlight the growing importance of the contributions and accomplishments made by women in edge computing. The award is presented at the annual Edge Computing World event, which gathers the whole edge computing ecosystem, from network to cloud and application to infrastructure, including end-users and developers, while also sharing edge best practices.

The annual Edge Woman of the Year Award is presented to female and non-binary professionals in edge computing for outstanding performance in roles elevating the Edge. The 2020 award committee selected the following 10 finalists for their excellent work in the named categories:

  • Leadership in Edge Startups
    • Kathy Do, VP, Finance and Operations at MemVerge
  • Leadership in Edge Open Source Contributions
    • Malini Bhandaru, Open Source Lead for IoT & Edge at VMware
  • Leadership in Edge at a Large Organization
    • Jenn Didoni, Head of Cloud Portfolio at Vodafone Group Business
  • Leadership in Edge Security
    • Ramya Ravichandar, VP of Product Management at FogHorn
  • Leadership in Edge Innovation and Research
    • Kathleen Kallot, Director, AI Ecosystem, Arm
  • Leadership in Edge Industry and Technology
    • Fay Arjomandi, Founder and CEO, mimik technology, Inc.
  • Leadership in Edge Best Practices
    • Nurit Sprecher, Head of Management & Virtualization Standards, Nokia
  • Leadership in Edge Infrastructure
    • Meredith Schuler, Financial & Strategic Operations Manager, SBA Edge
  • Overall Edge Industry Leadership
    • Nancy Shemwell, Chief Operating Officer, Trilogy Networks, Inc.
  • Leadership in Executing Edge Strategy
    • Angie McMillin, Vice President and General Manager, IT Systems, Vertiv

“The edge computing industry in 2020 continues to grow rapidly. Once again, we had an impressive group of nominees representing a broad cross-section of the many women leaders in edge,” said Candice Digby, a representative of State of the Edge and co-founder of the award. “The Edge Woman of the Year award highlights the impact these women continue to make across the industry and we hope to draw attention to their advancements and encourage more women to pursue careers in edge.”

“All the submissions were incredibly impressive and the list of Edge Woman of the Year finalists represents a group of women taking the reins of leadership across the edge computing ecosystem,” said Gavin Whitechurch of Topio Networks and Edge Computing World. “As the edge industry continues to grow, we want to highlight the female innovators leading the edge computing revolution, working hard to achieve new ground for the industry as a whole.”

For more information on the Women in Edge Award visit: http://www.edgecomputingworld.com/edgewomanoftheyear. 

About State of the Edge

State of the Edge (http://stateoftheedge.com) is a member-supported research organization that produces free reports on edge computing and was the original creator of the Open Glossary of Edge Computing, which was donated to The Linux Foundation’s LF Edge. State of the Edge welcomes additional participants, contributors, and supporters. If you have an interest in participating in upcoming reports or submitting a guest post to the State of the Edge blog, feel free to reach out by emailing info@stateoftheedge.com.

About Edge Computing World

Edge Computing World is the only industry event that brings together the entire edge ecosystem.

The event will present a diverse range of high-growth application areas – including AI, IoT, NFV, augmented reality, video, cloud gaming, and self-driving vehicles – that are creating new demands which cannot be met by existing infrastructure. The theme will cover edge as the new solution required to deliver low latency, application autonomy, data security, and bandwidth thinning, all of which require greater capability closer to the point of consumption.

Join us at Edge Computing World October 12-15, 2020 for the world’s largest virtual edge computing event.

# # #

MicroMEC now available with the Akraino R3 Release!

By Akraino, Akraino Edge Stack, Blog

Written by Tapio Tallgren, Technical Leader at Nokia Mobile Networks and Community Sub-Committee Chair of the Akraino TSC; Ferenc Szekely, Program Manager at SUSE and Committer of the Micro MEC blueprint; and Tina Tsou, Enterprise Architect at Arm and Akraino TSC Co-Chair

MicroMEC started life as a platform for running applications at the very edge of the network, such as in a light pole. We joined LF Edge’s Akraino project from the very beginning.

To find out what the use cases would be, we first participated in the IoThon hackathon in 2019, where we built a miniature city with sensors, cameras, and small servers — also known as Raspberry Pis. Our plan was to provide APIs that enable developers to access the sensors, cameras, or other independent hardware devices attached to our small servers, i.e. the MicroMEC nodes. It was clear that we wanted to deploy all the APIs as well as the apps in containers. We needed a tool like Kubernetes to help us build and manage the MicroMEC cluster. As we targeted “small” devices, with at most 4 GB of RAM at that time and low power consumption, we looked into alternatives to k8s. That is how we picked k3s.

By the autumn of 2019 we had our lab running Raspberry Pi 3B+ and 4B nodes with k3s. We had a successful hackathon – Junction 2019 – in Finland, where the teams presented solutions utilizing the MicroMEC cluster. We also added OpenFaaS Cloud (OFC) into the mix and a developer UI to the platform. This allowed developers to write serverless applications for the MicroMEC cluster and deploy them with ease. They could concentrate on their core business of developing apps, while MicroMEC with OFC took away the burden of cluster management, deployment, and so on.
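
For flavor, a serverless function on such a platform is just a small handler. The sketch below follows the shape of OpenFaaS's standard python3 template, whose entry point is a handle(req) function; the sensor-reading logic is a hypothetical MicroMEC-style example, not code from the project:

```python
# handler.py - a function in the shape of OpenFaaS's python3 template.
# The sensor lookup is a hypothetical MicroMEC-style example.
import json

def handle(req):
    """Respond with the latest reading from a (stubbed) attached sensor."""
    payload = json.loads(req) if req else {}
    sensor = payload.get("sensor", "temperature")
    reading = {"sensor": sensor, "value": 21.7}   # stubbed hardware read
    return json.dumps(reading)
```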

Right after Junction, we were at the Akraino 5G MEC Hackathon in the USA. For this event MicroMEC had to become more “MEC”. This meant implementing the MEC-11 interfaces and the UI to manage the apps that our MEC-11 implementation made discoverable for customers near the MicroMEC cluster. The MEC cluster runs on Arm architecture-based hardware.

With all this activity, we missed the first two Akraino releases, but now we are very happy to join the Akraino R3 release! For this, we had to figure out the easiest way to install the stack on a device with an MMC card. The easiest way is to not install anything on the fragile card, but to boot the stack from a network server. Eventually we made all MicroMEC nodes boot from a network server using PXE, with each node’s storage attached via iSCSI. This requires a fast enough LAN, but thankfully cheap gigabit switches are widely available these days.

Learn more about Akraino here.

 

LF Edge’s Akraino Project Release 3 Now Available, Unifying Open Source Blueprints Across MEC, AI, Cloud and Telecom Edge

By Akraino, Announcement, LF Edge

    • Six new R3 blueprints (for a total of 20) covering use cases across Telco, Enterprise, IoT, Cloud and more
    • Akraino blueprints cover areas including MEC, AI/ML, Cloud, Connected Vehicle, AR/VR, Android Cloud Native, SmartNICs, and Telco Core & Open-RAN, with ongoing support for R1-R2 blueprints and more
    • The community delivers open edge API specifications, via a new white paper, to standardize across devices, applications (cloud native), orchestrations, and multi-cloud

SAN FRANCISCO – August 12, 2020 – LF Edge, an umbrella organization within the Linux Foundation that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced the availability of Akraino Release 3 (“Akraino R3”). Akraino’s third and most mature release to date delivers fully functional edge solutions, implemented across global organizations, to enable a diversity of edge deployments across the globe. New blueprints include a focus on MEC, AI/ML, and Cloud edge. In addition, the community authored the first iteration of a new white paper to align the industry on common open edge API standards.

Launched in 2018, and now a Stage 3 (or “Impact” stage) project under the LF Edge umbrella, Akraino Edge Stack delivers an open source software stack that supports a high-availability cloud stack optimized for edge computing systems and applications. Designed to improve the state of carrier edge networks, edge cloud infrastructure for enterprise edge, and over-the-top (OTT) edge, it provides the flexibility to scale edge cloud services quickly, to maximize the applications and functions supported at the edge, and to improve the reliability of systems that must be up at all times.

“Akraino has evolved to unify edge blueprints across use cases,” said Arpit Joshipura, general manager, Networking, Automation, Edge and IoT, the Linux Foundation. “With a growing set of blueprints that enable more and more use cases, we are seeing the power of open source impact every aspect of the edge and how the world accesses and consumes information.”  

About Akraino R3

Akraino Release 3 (R3) delivers a fully functional open source edge stack that enables a diversity of edge platforms across the globe. With R3, Akraino brings deployments and PoCs from a swath of global organizations including Aarna Networks, China Mobile, Equinix, Futurewei, Huawei, Intel, Juniper, Nokia, NVIDIA, Tencent, WeBank, WiPro, and more.

Akraino enables innovative support for new levels of flexibility that scale 5G, industrial IoT, telco, and enterprise edge cloud services quickly by delivering community-vetted and tested edge cloud blueprints to deploy edge services. New and existing blueprints provide an edge stack for Connected Vehicle, AR/VR, AI at the Edge, Android Cloud Native, SmartNICs, Telco Core and Open-RAN, NFV, IoT, SD-WAN, SDN, MEC, and more.

Akraino R3 includes six new blueprints, for a total of 20, all tested and validated on real hardware in labs supported by users and community members. The Akraino community has established full-stack, automated testing with strict community standards to ensure high-quality blueprints.

The 20 “ready and proven” blueprints include both updates and long-term support to existing R1 & R2 blueprints, and the introduction of six new blueprints:

      • The AI Edge – School/Education Video Security Monitoring
      • 5G MEC/Slice System – supports cloud gaming, HD video, and live broadcasting
      • Enterprise Applications on Lightweight 5G Telco Edge (EALTEdge)
      • Micro-MEC (Multi-access Edge Computing) for Smart City Use Cases
      • IEC Type 3: Android Cloud Native Applications on Arm®-based Servers on the Edge
      • IEC Type 5: SmartNIC – Edge hardware acceleration

More information on Akraino R3, including links to documentation, code, and installation docs for all Akraino blueprints from R1-R3, can be found here. For details on how to get involved with LF Edge and its projects, visit https://www.lfedge.org/

API White Paper

The Akraino community published the first iteration of a new white paper to bring common open edge API standards to the industry. The white paper makes available, for the first time, generic edge APIs for developers to standardize across devices, applications (cloud native), orchestrations, and multi-cloud. The paper serves as a stepping stone for broad industry alignment on edge definitions, use cases, and APIs. Download the paper here: https://www.lfedge.org/wp-content/uploads/2020/06/Akraino_Whitepaper.pdf

Looking Ahead

The community is already planning R4, which will include further implementation of the open edge API guidelines, more test automation, closer alignment with upstream and downstream communities, and development of public cloud standard edge interfaces. Additionally, the community expects new blueprints as well as enhancements to existing ones.

Don’t miss the Open Networking and Edge Summit (ONES) virtual event happening September 28-29, where Akraino and other LF Edge communities will collaborate on the latest open source edge developments. Registration is now open!

Ecosystem Support for Akraino R3

Arm
“The demands on compute, networking, and storage infrastructure are changing significantly as we connect billions of intelligent devices, many of which live at the edge of the 5G network,” said Kevin Ryan, senior director of software ecosystem development, Infrastructure Line of Business, Arm. “By working closely with the Akraino community on the release of Akraino R3, and through our efforts with Project Cassini for seamless cloud-native deployments, Arm remains committed to providing our partners with full edge solutions primed to take on the 5G era.”

AT&T 
Mazin Gilbert, VP of Technology and Innovation, AT&T, said: “As a founding member of the Akraino platform, AT&T has seen first-hand the remarkable progress as a result of openness and industry collaboration. AI and edge computing are essential when it comes to creating an intelligent, autonomous 5G network, and we’re proud to work together with the community to deliver the best possible solutions for our customers.”

Baidu
“In the 5G era, AI+ Edge Computing is not only an important guarantee for updating the consumer and industrial Internet experience (such as video consumption re-upgrading, scene-based AI capabilities, etc.), but also a necessary infrastructure for the development of the Internet industry,” said Ning Liu, Director of AI Cloud Group, Baidu. “Providing users with AI-capable edge computing platforms, products and services is one of Baidu’s core strategies. Looking towards the future, Baidu will continue to adhere to the core strategy of open source and cooperate with partners to build a more open and improved ecosystem.”

China Unicom
“Commercial 5G is going live around the world. Edge computing will play an important role for large bandwidth and low delay services in the 5G era. The key to the success of edge computing is to provide integrated ICT PaaS capabilities, which is beneficial for the collaboration between networks and services, maximizing the value of 5G,” said Xiongyan Tang, Chief Scientist and CTO of the Network Technology Research Institute of China Unicom. “The PCEI Blueprint will define a set of open and common APIs, to promote the deep cooperation between operators and OTTs, and help to build a unified network edge ecosystem.”  

Huawei 
“High bandwidth, low latency, and massive connections are 5G typical features. Based on MEC’s edge computing and open capabilities, 5G network could build the connection, computing, and capabilities required by vertical industries and enables many applications. In the future, 5G MEC will be an open system that provides an application platform with rich atomic capabilities,” said Bill Ren, Huawei Chief Open Source Liaison Officer. “Managing a large number of applications and devices on the MEC brings great challenges and increases learning costs for developers. We hope to make 5G available through open source, so that more industry partners and developers can easily develop and invoke 5G capabilities. Build a common foundation for carriers’ MEC through open source to ensure the consistency of open interfaces and models. Only in this way can 5G MEC bring tangible benefits to developers and users.”

Juniper Networks
“Juniper Networks is proud to have been an early member of the Akraino community and supportive of this important work. We congratulate this community for introducing new blueprints to expand the use cases for managed edge cloud with this successful third release,” said Raj Yavatkar, Chief Technology Officer at Juniper Networks. “Juniper is actively involved in the integration of multiple blueprints and we look forward to applying these solutions to evolve edge cloud and 5G private networks to spur new service innovations – from content streaming to autonomous vehicles.”

Tencent
“The new generation network is coming, IoT and Edge Computing are developing rapidly. At the same time, it also brings great challenges to technological innovation. High performance, low latency, high scalability, large-scale architecture is a must for all applications. TARS has released the latest version to meet the adjustment of 5G and Edge Computing. Massive devices can easily use TARS Microservice Architecture to realize the innovation of edge applications. The Connect Vehicle Blueprint and AR/VR Blueprint in Akraino are all using the TARS Architecture,” said Mark Shan, Chairman of Tencent Open Source Alliance, Chairman of TARS Foundation, and Akraino TSC Member. “The blueprints on the TARS Architecture solve the problem of high throughput and low latency. TARS is a neutral project in the Linux Foundation, which can be easily used and helped by anyone from the open-source community.”

Zenlayer
“We are proud to be part of the Edge Cloud community. Zenlayer is actively exploring edge solutions and integrating the solutions to our bare metal product. We hope the edge products will empower rapid customer innovation in video streaming, gaming, enterprise applications and more,” said Jim Xu, chief engineering architect of Zenlayer.

About the Linux Foundation
Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.