Monthly Archives: September 2020

Where the Edges Meet: Public Cloud Edge Interface

Categories: Akraino, Akraino Edge Stack, Blog

Written by Oleg Berzin, Ph.D., a member of the Akraino Technical Steering Committee and Senior Director Technology Innovation at Equinix

Introduction

Why 5G

5G will provide significantly higher throughput than existing 4G networks. Currently, 4G LTE is limited to around 150 Mbps. LTE Advanced increases the data rate to 300 Mbps and LTE Advanced Pro to 600 Mbps-1 Gbps. 5G downlink speeds can be up to 20 Gbps. 5G can use multiple spectrum options, including low band (sub-1 GHz), mid-band (1-6 GHz) and mmWave (28, 39 GHz). The mmWave spectrum has the largest available contiguous bandwidth capacity (~1000 MHz) and promises dramatic increases in user data rates. 5G also enables advanced air interface formats and transmission scheduling procedures that decrease access latency in the Radio Access Network by a factor of 10 compared to 4G LTE.

The Slicing Must Go On

Among the advanced properties of the 5G architecture, Network Slicing enables the use of the 5G network and services for a wide variety of use cases on the same infrastructure. Network Slicing (NS) refers to the ability to provision a common physical system to provide the resources necessary for delivering service functionality under specific performance (e.g. latency, throughput, capacity, reliability) and functional (e.g. security, applications/services) constraints.

Network Slicing is particularly relevant to the subject matter of the Public Cloud Edge Interface (PCEI) Blueprint. As shown in the figure below, there is a reasonable expectation that applications enabled by the 5G performance characteristics will need access to diverse resources. These include conventional traffic flows, such as access from mobile devices to the core clouds (public and/or private) and general access to the Internet; edge traffic flows, such as low-latency/high-speed access to edge compute workloads placed in close physical proximity to the User Plane Functions (UPF); and hybrid traffic flows that require a combination of the above for distributed applications (e.g. online gaming, AI at the edge). Importantly, the network slices provisioned in the mobile network must extend beyond the N6/SGi interface of the UPF all the way to the workloads running on the edge computing hardware and on the Public/Private Cloud infrastructure. In other words, “The Slicing Must Go On” in order to ensure continuity of the intended performance for the applications.


The Mobile Edge

The technological capabilities defined by the standards organizations (e.g. 3GPP, IETF) are necessary conditions for the development of 5G. However, the standards and protocols are not sufficient on their own. Realizing the promises of 5G depends directly on the availability of the supporting physical infrastructure as well as on the ability to instantiate services in the right places within that infrastructure.

Latency is a very good example to illustrate this point. One of the most intriguing possibilities with 5G is the ability to deliver very low end-to-end latency. A common example is the 5 ms round-trip device-to-application latency target. If we look closely at this latency budget, it is not hard to see that achieving this goal requires a new physical aggregation infrastructure, because the 5 ms budget includes all radio/mobile core, transport and processing delays on the path between the application running on the User Equipment (UE) and the application running on the compute/server side. Given that at least 2 ms will be required for the “air interface”, only 3 ms remain for radio/packet core processing, network transport and compute/application processing. The figure below illustrates an example of the end-to-end latency budget in a 5G network.
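To make this arithmetic concrete, here is a minimal sketch (in Python) that splits the 5 ms round-trip budget and estimates how far the edge compute site can sit from the radio site over fiber. The processing allocations and the roughly 5 microseconds-per-km fiber propagation figure are illustrative assumptions, not values taken from 3GPP specifications.

  # Rough split of the 5 ms round-trip latency budget described above.
  # The processing allocations below are illustrative assumptions, not
  # values from a 3GPP specification.

  TOTAL_BUDGET_MS = 5.0      # target round-trip UE <-> application latency
  AIR_INTERFACE_MS = 2.0     # minimum assumed for the 5G air interface (round trip)

  # Light in fiber propagates at roughly 5 microseconds per km one way,
  # i.e. about 0.01 ms per km of round trip.
  FIBER_RTT_MS_PER_KM = 0.01

  def max_fiber_distance_km(core_processing_ms, app_processing_ms):
      """Maximum one-way fiber distance to the edge site that still fits the budget."""
      transport_budget_ms = (TOTAL_BUDGET_MS - AIR_INTERFACE_MS
                             - core_processing_ms - app_processing_ms)
      return max(transport_budget_ms, 0.0) / FIBER_RTT_MS_PER_KM

  # Example: 1 ms for packet core processing and 1 ms for the application
  # leaves ~1 ms of transport budget, i.e. roughly 100 km of fiber.
  print(max_fiber_distance_km(core_processing_ms=1.0, app_processing_ms=1.0))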

The Edge-in and Cloud-out Effect

Public Cloud Service Providers and 3rd-Party Edge Compute (EC) Providers are deploying Edge instances to better serve their end-users and applications. A multitude of these applications require close inter-working with the Mobile Edge deployments to meet predictable latency, throughput, reliability and other requirements.

Interfacing and exchanging information through open APIs will enable competitive offerings for Consumer, Enterprise and Vertical Industry end-user segments. These APIs are not limited to basic connectivity services; they will include the ability to deliver predictable data rates, predictable latency, reliability, service insertion, security, AI and RAN analytics, network slicing, and more.

These capabilities are needed to support a multitude of emerging applications such as AR/VR, Industrial IoT, autonomous vehicles, drones, Industry 4.0 initiatives, Smart Cities, Smart Ports. Other APIs will include exposure to edge orchestration and management, Edge monitoring (KPIs), and more. These open APIs will be the foundation for service and instrumentation capabilities when integrating with public cloud development environments.
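As a purely illustrative sketch of what such an open API could expose, the snippet below builds a request for an edge service with a latency bound, a data-rate floor, a network slice type and service insertion. The endpoint, field names and values are hypothetical assumptions for illustration only; they are not part of any published PCEI or operator API.

  # Hypothetical northbound request for an edge service with performance
  # guarantees. Endpoint and field names are assumptions, not a real API.
  import json

  edge_service_request = {
      "service_profile": {
          "max_latency_ms": 10,        # predictable latency bound
          "min_downlink_mbps": 100,    # predictable data rate
          "reliability": "99.99%",
      },
      "network_slice": {
          "slice_type": "URLLC",       # ultra-reliable low-latency communications
          "coverage_area": "metro-dc-1",
      },
      "service_insertion": ["firewall", "ran-analytics"],
  }

  # An orchestrator could POST this to a (hypothetical) exposure endpoint, e.g.
  # POST https://api.operator.example.com/v1/edge-services
  print(json.dumps(edge_service_request, indent=2))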

Public Cloud Edge Interface (PCEI)

Overview

The purpose of the Public Cloud Edge Interface (PCEI) Blueprint family is to specify a set of open APIs for enabling Multi-Domain Inter-working across functional domains that provide Edge capabilities/applications and require close cooperation among the Mobile Edge, the Public Cloud Core and Edge, the 3rd-Party Edge functions, and the underlying infrastructure such as Data Centers, compute hardware and networks. The compute hardware is optimized and power efficient for the Edge, such as the Arm64 architecture.

The high-level relationships between the functional domains are shown in the figure below:

The Data Center Facility (DCF) Domain. The DCF Domain includes Data Center physical facilities that provide the physical location and the power/space infrastructure for other domains and their respective functions.

The Interconnection of Core and Edge (ICE) Domain. The ICE Domain includes the physical and logical interconnection and networking capabilities that provide connectivity between other domains and their respective functions.

The Mobile Network Operator (MNO) Domain. The MNO Domain contains all Access and Core Network Functions necessary for signaling and user plane capabilities to allow for mobile device connectivity.

The Public Cloud Core (PCC) Domain. The PCC Domain includes all IaaS/PaaS functions that are provided by the Public Clouds to their customers.

The Public Cloud Edge (PCE) Domain. The PCE Domain includes the PCC Domain functions that are instantiated in the DCF Domain locations that are positioned closer (in terms of geographical proximity) to the functions of the MNO Domain.

The 3rd-Party Edge (3PE) Domain. The 3PE Domain is in principle similar to the PCE Domain, with the distinction that the 3PE functions may be provided by 3rd parties (with respect to the MNOs and Public Clouds) as instances of Edge Computing resources/applications.
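To make the domain split easier to reason about programmatically, here is a minimal sketch that models the six PCEI domains as a small data structure; the class and field names are illustrative assumptions and are not part of the PCEI specification.

  # The six PCEI functional domains as a simple data structure. Names and
  # fields are illustrative only; they are not defined by the PCEI spec.
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Domain:
      acronym: str
      name: str
      role: str

  PCEI_DOMAINS = [
      Domain("DCF", "Data Center Facility", "physical location, power and space"),
      Domain("ICE", "Interconnection of Core and Edge", "connectivity between domains"),
      Domain("MNO", "Mobile Network Operator", "access and core network functions"),
      Domain("PCC", "Public Cloud Core", "IaaS/PaaS functions of the public clouds"),
      Domain("PCE", "Public Cloud Edge", "PCC functions placed close to the MNO functions"),
      Domain("3PE", "3rd-Party Edge", "edge compute resources/applications from 3rd parties"),
  ]

  # Example: a UPF distribution workflow touches the MNO, ICE, DCF and an edge domain.
  involved = [d.acronym for d in PCEI_DOMAINS if d.acronym in {"MNO", "ICE", "DCF", "3PE"}]
  print(involved)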

Architecture

The PCEI Reference Architecture and the Interface Reference Points (IRP) are shown in the figure below. For the full description of the PCEI Reference Architecture please refer to the PCEI Architecture Document.

Use Cases

The PCEI working group identified the following use cases and capabilities for Blueprint development:

  1. Traffic Steering/UPF Distribution/Shunting capability — distributing User Plane Functions in the appropriate Data Center Facilities on qualified compute hardware for routing the traffic to desired applications and network/processing functions/applications.
  2. Local Break-Out (LBO) – Examples: video traffic offload, low latency services, roaming optimization.
  3. Location Services — location of a specific UE, or identification of UEs within a geographical area, facilitation of server-side application workload distribution based on UE and infrastructure resource location.
  4. QoS acceleration/extension – provide low latency, high throughput for Edge applications. Example: provide continuity for QoS provisioned for subscribers in the MNO domain, across the interconnection/networking domain for end-to-end QoS functionality.
  5. Network Slicing provisioning and management – providing continuity for network slices instantiated in the MNO domain, across the Public Cloud Core/Edge as well as the 3rd-Party Edge domains, offering dedicated resources specifically tailored for application and functional (e.g. security) needs.
  6. Mobile Hybrid/Multi-Cloud Access – provide multi-MNO, multi-Cloud, multi-MEC access for mobile devices (including IoT) and Edge services/applications.
  7. Enterprise Wireless WAN access – provide high-speed Fixed Wireless Access to enterprises with the ability to interconnect to Public Cloud and 3rd-Party Edge Functions, including the Network Functions such as SD-WAN.
  8. Distributed Online/Cloud Gaming.
  9. Authentication – provided as service enablement (e.g., two-factor authentication) used by most OTT service providers 
  10. Security – provided as service enablement (e.g., firewall service insertion)

The initial focus of the PCEI Blueprint development will be on the following use cases:

  • User Plane Function Distribution
  • Local Break-Out of Mobile Traffic
  • Location Services

User Plane Function Distribution and Local Break-Out

The UPF Distribution use case distinguishes between two scenarios:

  • UPF Interconnection. The UPF/SPGW-U is located in the MNO network and needs to be interconnected on the N6/SGi interface to 3PE and/or PCE/PCC.
  • UPF Placement. The MNO wants to instantiate a UPF/SPGW-U in a location that is different from its own network (e.g. Customer Premises, 3rd-Party Data Center).

UPF Interconnection Scenario

UPF Placement Scenario

UPF Placement, Interconnection and Local Break-Out Examples

Location Services (LS)

This use case targets obtaining the geographic location of a specific UE from the 4G/5G network, identifying UEs within a geographical area, and facilitating server-side application workload distribution based on UE and infrastructure resource location.

 

Acknowledgements

Project Technical Lead: Oleg Berzin

Committers: Suzy Gu, Tina Tsou, Wei Chen, Changming Bai, Alibaba; Jian Li, Kandan Kathirvel, Dan Druta, Gao Chen, Deepak Kataria, David Plunkett, Cindy Xing

Contributors: Arif, Jane Shen, Jeff Brower, Suresh Krishnan, Kaloom, Frank Wang, Ampere

LF Edge Demos at Open Networking & Edge Summit

Categories: Blog, EdgeX Foundry, Event, Fledge, LF Edge, Open Horizon, Project EVE, Secure Device Onboard

Open Networking & Edge Summit, which takes place virtually on September 28-30, is co-sponsored by LF Edge, the Linux Foundation and LF Networking. With thousands expected to attend, ONES will be the epicenter of edge, networking, cloud and IoT. If you aren’t registered yet – it takes two minutes to register for US$50 – click here.

Several LF Edge members will be at the conference leading discussions about trends, presenting use cases and sharing best practices. For a list of LF Edge-focused sessions, click here and add them to your schedule. LF Edge will also host a pavilion – in partnership with our sister organization LF Networking – that will showcase demos, including the debut of two new ones that feature collaborations between Project EVE and Fledge, and between Open Horizon and Secure Device Onboard. Check out the sneak peek of the demos below:

Managing Industrial IoT Data Using LF Edge (Fledge, EVE)

Presented by FLIR, Dianomic, OSIsoft and ZEDEDA, and making its debut at ONES, this demo showcases the strength of Project EVE and Fledge. The demo will show how the two open source projects work together to securely manage, connect, aggregate, process, buffer and forward any sensor, machine or PLC data to existing OT systems and any cloud. Specifically, it will show a FLIR IR camera's video and data feeds being managed as described.

 

Real-Time Sensor Fusion for Loss Detection (EdgeX Foundry):

Presented by LF Edge members HP, Intel and IOTech, this demo showcases the strength of the Open Retail Initiative and EdgeX Foundry. Learn how different sensor devices can use LF Edge's EdgeX Foundry open middleware framework to optimize retail operations and detect loss at checkout. The sensor fusion is implemented using a modular approach, combining point-of-sale, computer vision, RFID and scale data into a POC for loss prevention.

This demo was featured at the National Retail Federation Show in January. More details about the demo can be found in HP's blog and Intel's blog.

               

Low-touch automated onboarding and application delivery with Open Horizon and Secure Device Onboard

Presented by IBM and Intel, this demo features two of the newest projects accepted into the LF Edge ecosystem – Secure Device Onboard was announced in July while Open Horizon was announced in April.

An OEM or ODM can generate a voucher with SDO utilities that is tied to a specific device. Upon purchase, they can send the voucher to the purchaser. With LF Edge's integration of Open Horizon and Secure Device Onboard, an administrator can load the voucher into Open Horizon and pre-register the device. Once the device is powered on and connected to the network, it will automatically authenticate, download and install the Open Horizon agent, and begin negotiation to receive and run relevant workloads.

For more information about ONES, visit the main website: https://events.linuxfoundation.org/open-networking-edge-summit-north-america/. 

Exploration and Practices of Edge Computing: Cloud Managing Containerized Devices

Categories: Blog, EdgeX Foundry, Industry Article, Trend

Written by Gavin Lu, LF Edge member, EdgeX Foundry China Project Lead and R&D Director in the VMware Office of the CTO

As an industry leader with vast experience and knowledge, Gavin has been writing a series of articles focused on edge computing. These articles are posted on his personal blog and are posted here with his permission. To read more content from Gavin, visit his website.

Introduction

The previous article introduced the solution for managing virtualized devices from the cloud. This article describes the Nebula project, which provides unified management of containerized devices, edge applications and data analysis cloud services.

Nebula Architecture

Project Nebula is designed based on the following key ideas:

  • Agnostic to device CPU architecture, supporting both x86 and ARM;
  • Agnostic to edge application frameworks, supporting EdgeX Foundry and other frameworks that can be packaged and run;
  • Agnostic to data analytics services, supporting on-premise and cloud deployment;
  • Support small- to large-scale deployments;
  • Support an end-to-end multi-tenant operation model from device to cloud.

EdgeX Foundry Architecture

Nebula supports the EdgeX Foundry framework, and we have already published a live test bed at https://18.189.42.126/. Those who are interested in Nebula can contact yixingj@vmware.com to register for a trial and to receive installation and user guides with detailed information.

Nebula Demo

Installation

Nebula is designed as a containerized micro-service architecture and is delivered by default in OVA format. Similar to the Pallas architecture introduced in the previous article, although the Nebula package is encapsulated in an OVA, it does not depend on any specific virtualization infrastructure or cloud platform. Technically, it could be converted to other formats or installed on any cloud platform that supports the OVA format.

The basic resource requirements of Nebula are:

  • CPU: 2 virtual CPU cores
  • Memory: 8GB
  • Storage: 150GB

Its installation process is similar to that of any other OVA, and users can log in as the administrator after it completes.

Nebula Service Console

Vendor Portal

After the installation is complete, users can log in to the vendor portal as an administrator, using the address prompted in the VM console as shown above, and perform user management.

Nebula Management Portal

In Nebula, edge application services are defined as follows: a Service can contain multiple Versions, and a Version contains multiple Service Components.

Edge Service Hierarchy

For each service created, it is necessary to specify parameters and resource requirements such as version, CPU platform, memory, storage and network, to facilitate verification throughout full life-cycle management.

Vendors can upload a set of EdgeX Foundry applications packaged in container images, and define categories, dependencies between containers, resource parameters, startup order, and parameters of connected data analysis cloud services.
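As a minimal sketch of the Service / Version / Service Component hierarchy and the per-component parameters described above (container image, dependencies, startup order, resources, cloud service parameters), the snippet below models them in Python; the class names, field names and image references are assumptions for illustration and do not reflect Nebula's actual data model.

  # Illustrative model of the edge service hierarchy described above.
  # Class/field names and image references are assumptions, not Nebula's API.
  from dataclasses import dataclass, field
  from typing import Dict, List

  @dataclass
  class ServiceComponent:
      name: str
      container_image: str
      depends_on: List[str] = field(default_factory=list)
      startup_order: int = 0
      cpu_arch: str = "arm64"            # or "x86_64"
      memory_mb: int = 512
      storage_gb: int = 1
      cloud_service_params: Dict[str, str] = field(default_factory=dict)

  @dataclass
  class Version:
      version: str
      components: List[ServiceComponent]

  @dataclass
  class Service:
      name: str
      versions: List[Version]

  edgex_service = Service(
      name="edgex-foundry",
      versions=[Version(
          version="1.2.0",
          components=[
              ServiceComponent("core-data", "registry.example.com/edgex/core-data:1.2",
                               startup_order=1),
              ServiceComponent("device-virtual", "registry.example.com/edgex/device-virtual:1.2",
                               depends_on=["core-data"], startup_order=2,
                               cloud_service_params={"analytics_endpoint": "https://cloud.example.com"}),
          ],
      )],
  )
  print(edgex_service.name, [c.name for c in edgex_service.versions[0].components])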

After the release, users can see and deploy these edge services.

Device Registration

Before users actually deploy EdgeX Foundry applications, they must first register the devices they intend to use with their Nebula accounts.

Users need to download the Nebula agent package nebulacli.tar and run it on the device to complete the registration. This registration step can be performed manually, or it can be automated in batch operations for OEMs.

./install.sh init -u user-account -p user-account-password -n user-device-name

User Portal

After completing the device registration, users can install and manage EdgeX Foundry or other edge applications that vendors have released in advance on the Nebula service. Users can find the appropriate applications in the catalog.

After selection, users can further specify parameter settings for the deployment in the drag-and-drop wizard; these map to the parameter values previously defined by the vendor.

After all parameters are set, the actual deployment can be carried out, either in batch or repeated across multiple devices. After deploying EdgeX Foundry applications, users can monitor device resources and application runtime status in real time.

Nebula provides complete RESTful API documentation, with which users can automate operations to deploy EdgeX Foundry applications at large scale.
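As a minimal sketch of such automation, assuming a deployment endpoint shaped like the one below, the following Python snippet pushes a released service to a list of registered devices. The URL path, payload fields and authentication scheme are assumptions for illustration; the actual interface is defined in Nebula's RESTful API documentation.

  # Hypothetical automation against a Nebula-style REST API. The endpoint,
  # payload fields and auth scheme are assumptions, not the documented API.
  import requests

  NEBULA_API = "https://nebula.example.com/api/v1"   # placeholder base URL
  TOKEN = "user-api-token"                           # placeholder credential

  def deploy_service(device_id, service, version, params):
      """Deploy a released edge service to a registered device."""
      resp = requests.post(
          f"{NEBULA_API}/deployments",
          headers={"Authorization": f"Bearer {TOKEN}"},
          json={"device_id": device_id, "service": service,
                "version": version, "parameters": params},
          timeout=30,
      )
      resp.raise_for_status()
      return resp.json()

  # Batch deployment across many registered devices.
  for device in ["device-001", "device-002"]:
      deploy_service(device, "edgex-foundry", "1.2.0",
                     {"analytics_endpoint": "https://cloud.example.com"})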

Next

From the second article through this one, I have introduced the basic methods of building and managing virtualized and containerized devices from the cloud. But I have not yet answered the question of how to deal with single-point device failures. In contrast to the traditionally inflexible and inefficient full-redundancy or external NAS solutions, the next article will introduce device clusters on a hyper-converged architecture.

Pushing AI to the Edge (Part One): Key Considerations for AI at the Edge

Categories: Blog, LF Edge, Project EVE, State of the Edge, Trend

Q&A with Jason Shepherd, LF Edge Governing Board member and VP of Ecosystem at ZEDEDA

This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

This two-part blog provides more insights into what’s becoming a hot topic in the AI market — the edge. To discuss more on this budding space, we sat down with our Vice President of ecosystem development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice and some trends for the future.


Chart defining the categories within the edge, as defined by LF Edge

Image courtesy of LF Edge