Monthly Archives

March 2020

Using On-Demand Talent for EdgeX Foundry IoT Exploration

Categories: Blog, EdgeX Foundry

Written by Clinton Bonner, VP of Marketing for Topcoder

Industrial IoT is coming into its own. A decade ago, the tech world was enamored with all things IoT, and as routinely happens in tech, the narrative got a bit ahead of actual uptake and enterprise use cases. What typically happens is that while some of the buzz, or dare I say the promise, of a new technology dampens a bit after the initial boom, the hard work of democratizing the technology and establishing important enterprise use cases marches on.

Often behind the scenes and without a Cybertruck-esque introduction, an enabling catalyst of a new tech philosophy is introduced, gains favor, and builds a steady stable of developers and enterprises who use it. For the Industrial IoT, LF Edge’s EdgeX Foundry is this catalyst and the momentum is genuine.

Earlier this month, LF Edge and EdgeX Foundry collaborated with Dell, Intel, HP, IOTech, and Wipro to create a challenge to gauge interest in the growth of Industrial IoT adoption. Run in partnership with the Topcoder community, the challenge offers developers a chance to create and submit a unique use case through a three-phase approach:

PHASE I – IDEATION

Communities like Topcoder are a fantastic way to cast a wide net and bring in scores of interesting and unique concepts and approaches to using technology such as the EdgeX platform.

PHASE II – DESIGN

The five top ideas will be selected from the ideation phase and move into rapid design on Topcoder. In this phase, the focus will shift to creating and designing intuitive and useful UX/UI concepts that showcase how the idea would work within the technical framework that EdgeX Foundry provides.

PHASE III – PROTOTYPING

The top design concept from phase II will move on to prototyping, resulting in a functioning proof of concept with code-ready design.

This multi-phase approach is a fantastic use of on-demand talent to first explore ideas and then home in on winning concepts to bring them further down the production life-cycle. It will be fast and focused, and it will provide an incredible example of how to use on-demand talent to accelerate successful innovation.

The pairing of the open EdgeX Foundry framework with the on-demand talent we provide access to at Topcoder is a smart accelerant and a combination we are excited to see in action!

To register for the challenge, visit the website: https://www.topcoder.com/challenges/30117605.


LF Edge Member Spotlight: Vapor IO

Categories: Blog, Member Spotlight, Open Glossary of Edge Computing

The LF Edge community is comprised of a diverse set of member companies that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sat down with Matt Trifiro, CMO of Vapor IO and Open Glossary of Edge Computing TSC Chair, to discuss the importance of open source, the “Third Act of the Internet,” how they contribute to the Open Glossary project and the impact of being a member of the LF Edge community.

Can you tell us a little about your organization?

Vapor IO builds and deploys data center and networking infrastructure to support edge computing and 5G. Our main product is the Kinetic Edge, which is a nationwide platform for edge colocation, edge networking and edge traffic exchange. We deploy our facilities on the infrastructure edge, in close proximity to the last mile networks. Our customers are the large cloud providers, CDNs, web-scale companies, streaming game providers, and other organizations building low-latency applications. We are Austin-based, but distributed all over the US. Our Kinetic Edge platform is live in four cities (Atlanta, Chicago, Dallas, and Pittsburgh) and we have 16 additional US cities under construction. We expect to deploy the Kinetic Edge to the top 36 US metro areas by the end of 2021, giving us a reach that exceeds 75 percent of the US population.

Why is your organization adopting an open source approach?

Open source allows a community, often one comprised of competitors, to pool their resources and build a common platform upon which differentiated businesses can be built. By collaborating on shared projects, we collectively accelerate entire markets. This lets companies focus on their unique strengths while not wasting effort competing on common building blocks. Everybody wins. We currently lead two active open source projects:

  • Open Glossary of Edge Computing (an LF Edge project), a Wikipedia-style community-driven glossary of terms relevant to edge computing.
  • Synse, a simple, scalable API for sensing and controlling data center equipment.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

We believe edge computing will create a massive restructuring of the Internet, what the State of the Edge 2020 report calls the “Third Act of the Internet.” We joined LF Edge to help accelerate the rearchitecture of the Internet to support this Third Act. This transformation will impact the entire world and it will require many companies to collaborate on large, multi-decade initiatives. The Linux Foundation has a track record of good stewardship of open source communities and we felt LF Edge had the right mix of focus and neutrality to make it possible.

What do you see as the top benefits of being part of the LF Edge community?

By being part of the LF Edge community, we get to help drive one of the most fundamental Internet transformations of our lifetime, something that will impact the entire world. LF Edge brings together diverse viewpoints and ensures projects advance based on their merit—and not the power or money behind the contributing companies. This creates a level playing field that advances the entire industry.

What sort of contributions has your team made to the community and ecosystem through LF Edge participation?

Our main contributions to LF Edge have been twofold:

  1. We chair the Open Glossary of Edge Computing, a founding LF Edge project. This project has been helping to shape the overall LF Edge narrative by creating a shared vocabulary that all LF Edge projects can align around.
  2. We are very active in the LF Edge community. Our CEO, Cole Crawford, is now serving his second term as a member of the LF Edge Governing Board.

What do you think sets LF Edge apart from other industry alliances?

Many industry alliances suffer from a “pay for play” approach, where those who pay the most get the most influence. The Linux Foundation has managed to navigate these waters deftly. They have successfully attracted deep-pocket funders while maintaining a level playing field and aggressively including smaller companies and individuals who help advance the projects in ways that don’t involve direct financial contributions. This gives LF Edge a lot more “staying power,” as it truly serves the broad community and not just the goals of the largest companies.

How will LF Edge help your business?

The LF Edge community gives us access to the best technologies and thought leadership in edge computing. LF Edge helps us stay up to date on the industry and align our business with the community’s needs. By paying attention to what kinds of edge infrastructure LF Edge projects require, we are able to fine-tune our Kinetic Edge offering in a way that will support the next generation of edge native and edge enhanced applications.

What advice would you give to someone considering joining LF Edge?

If you’re looking for a place to start, pick a project in the LF Edge portfolio where you can make the most impact. Become a contributor and get to know the committers and technical leadership. LF projects are merit based, so the more active you are and the more value you contribute, the more recognition and influence you will gain within the community. This is one of the best ways to start.

To learn more about Open Glossary of Edge Computing, click here. To find out more about our members or how to join LF Edge, click here.

Additionally, if you have questions or comments, visit the LF Edge Slack Channel and share your thoughts in the #community or #glossary channels.

Cloud native in NFVI: Why it’s smart business for 5G growth

Categories: Blog, Industry Article

Written by Balaji Ethirajulu, LF Edge Chair of the Marketing Outreach Committee and Senior Director of Product Management at Ericsson

This blog originally ran on the Ericsson website – click here for more articles like this one. 

As the telecom industry moves through the 5G transformation, I’ve been thinking about automation, network flexibility, Total Cost of Ownership (TCO), and faster time to market. For the last two or three years, we’ve witnessed massive growth in open-source software related to cloud native.

There are plenty of projects in the Cloud Native Computing Foundation (CNCF) that have already graduated, while some are in the incubating stage and others have entered a sandbox status. Many of them are addressing the various needs of telecom networks and other vertical segments. One of the key areas for the telecom industry is Network Functions Virtualization Infrastructure (NFVI).

Industry challenges

New technologies come and go, and this has happened for decades. But what we’ve been witnessing for the past several years is something that we’ve not experienced before. The acceleration of new technology innovation is mind-boggling. The pace at which new technologies are being introduced is rapidly increasing. We’re now able to create new business models by leveraging these new technologies. For example, 5G will enable services across all industries. To do this, we need highly optimized and efficient networks, more automation, CI/CD, edge computing, reliable networks with good security, network and service agility, and faster time to market.

Addressing those challenges

Ericsson has been investing in R&D to leverage these technologies, including open source, to address the challenges outlined above. One of the key areas is cloud native applications and the infrastructure to support them. Together with open-source technologies, they provide the key characteristics to address many of these challenges.

For the last several years, many enterprises and communication service providers (CSPs) have been moving their workloads to a virtualized infrastructure. They have realized some benefits of virtualization. But cloud native brings more benefits in terms of reduced TCO, more automation, faster time to market and so on.

In addition, enterprise needs are different from telecom needs. As many cloud-native technologies are maturing, they need to address the telecom network requirements such as security, high availability, reliability, and real-time needs. Ericsson is leveraging many cloud native open-source technologies and hardening them to meet the rigorous needs of telecom applications and infrastructure.

Our cloud native journey and open source

Over the past few years, Ericsson has been investing in cloud native technologies to transform our portfolio and meet the demands of 5G and digital transformation. Many of our solutions, including 5G applications, orchestration, and cloud infrastructure, are based on cloud native technologies, and they use a significant amount of CNCF open-source software. In addition, Ericsson has its own Kubernetes distribution called ‘Ericsson Cloud Container Distribution’, certified by CNCF. Ericsson’s cloud native based ‘dual-mode 5G Cloud Core’ utilizes several CNCF technologies and adapts them to a CSP’s needs. Several major global operators have embraced cloud native 5G solutions.

Ericsson supports and promotes many Linux Foundation projects, including LFN, LF Edge, and CNCF. As a member of the CNCF community, we actively drive the Telecom User Group (TUG), an important initiative focusing on CSP needs.

Our NFVI already includes OpenStack and other virtualization technologies. As we step further into the 5G transformation, Ericsson NFVI, based on Kubernetes, will support various network configurations, such as bare metal (Kubernetes as Container as a Service, or CaaS) and CaaS + VMs, including Management and Orchestration (MANO).

Some CSPs have migrated their workloads from Physical Network Functions (PNFs) to Virtual Network Functions (VNFs). The percentage of migration varies among CSPs and from region to region. Some are migrating from PNFs to Cloud Native Network Functions (CNFs). A typical network may therefore include PNFs, VNFs, and CNFs, which is why NFVI needs to support various configurations, and both virtual machines and containers.

Winning GT3 Racing with LF Edge’s Fledge

Categories: Blog, Fledge, Use Cases

Optimizing machine operations using Industrial IoT Fledge with Google Cloud, ML and state-of-the-art digital twins and simulators

Gradient Racing is a professional auto racing team based in Austin, Texas. One of Honda’s customers for the NSX GT3 Evo, Gradient is committed to using cutting-edge technology to gain an edge in GT racing.

Modern race cars have thousands of adjustable parameters in their suspension, aerodynamics, transmission and other systems.  Finding the perfect configuration of these parameters for a given driver, track and race strategy is key to winning races. In GT racing, this has been more art than science, with drivers making multiple test runs and working with the race team to adjust the car based on feel.

Like Formula One and NASCAR, Gradient wanted to bring state-of-the-art simulation technology to bear on this problem, believing that it would allow them to configure the car far more precisely. To do this, they engaged Motorsports.ai, a leader in racing modeling.

Creating a Virtual Racing Environment

Motorsports.ai’s goal was to create a full simulation environment of the race: a simulated driver, a simulated car and a digital twin of the track.  While car simulation software was commercially available, Motorsports.ai decided to use machine learning technology to develop an AI driver that would react to conditions in the same way as Gradient’s human driver.

Motorsports.ai deployed the Linux Foundation’s Fledge for three functions: first, to collect the massive amount of data required to train the machine learning system; second, to validate the race simulations of car, driver, and track; and last, to help validate the digital twin of each race track. Originally developed by Dianomic Systems, Fledge joined LF Edge, an umbrella organization that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, last year. Fledge is an open-source Industrial IoT framework to collect sensor data, enhance it with meta information, run ML models on the edge, and reliably transport data to central or cloud-based processing systems. In a matter of weeks, Motorsports.ai developed the plugins needed to interface Fledge with the race car’s CAN bus and Eclipse Cyclone DDS interfaces to over 2,000 in-car sensors.
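To make the plugin work concrete, here is a minimal sketch of what a poll-mode Fledge south plugin for CAN bus data could look like in Python. The entry points follow Fledge’s documented Python south plugin interface, but the python-can usage, asset name, and decoding are illustrative assumptions, not Motorsports.ai’s actual code:

```python
# Minimal sketch of a poll-mode Fledge south plugin for CAN bus telemetry.
# plugin_info/plugin_init/plugin_poll/plugin_shutdown follow the Fledge
# Python south plugin interface; everything else here is illustrative.
import can  # python-can, for SocketCAN access to the vehicle bus
from datetime import datetime, timezone

_DEFAULT_CONFIG = {
    'plugin': {
        'description': 'CAN bus telemetry south plugin',
        'type': 'string',
        'default': 'canbus',
        'readonly': 'true'
    }
}


def plugin_info():
    # Describe the plugin to Fledge: a poll-mode south (ingest) plugin.
    return {
        'name': 'canbus',
        'version': '1.0.0',
        'mode': 'poll',
        'type': 'south',
        'interface': '1.0',
        'config': _DEFAULT_CONFIG
    }


def plugin_init(config):
    # Open the SocketCAN interface once and keep it in the plugin handle.
    handle = dict(config)
    handle['bus'] = can.interface.Bus(channel='can0', bustype='socketcan')
    return handle


def plugin_poll(handle):
    # Called by Fledge on every poll cycle; returns one enriched reading.
    msg = handle['bus'].recv()  # blocks until the next CAN frame arrives
    return {
        'asset': 'race_car_can',
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'readings': {
            'can_id': msg.arbitration_id,
            # Decoding msg.data into throttle, brake pressure, steering
            # angle, etc. is CAN-database specific and omitted here.
            'raw': msg.data.hex()
        }
    }


def plugin_shutdown(handle):
    handle['bus'].shutdown()
```

Fledge buffers the readings returned from each poll and forwards them through its north services when connectivity allows, which matches the buffered end-of-run uploads described below.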

The human driver made multiple test runs on the track to determine his reaction to different conditions. Twenty times a second, Fledge collected readings on every aspect of car performance, as well as driver actions such as gas pedal and brake pressure, steering angle and gear changes.  Fledge enhanced each reading with the precise timestamp and GPS location of the car on the track.  Overall, Fledge collected more than 1GB of data on each test run, which it buffered and automatically transmitted to Google Cloud at the end of the run.  Motorsports.ai fed this data into TensorFlow running on a 20 node Google Kubernetes Engine cluster to validate the digital twin, and improve the accuracy of the predictions.

To build the model, Motorsports.ai integrated Fledge into a highly accurate simulation and virtual prototyping environment to collect the same data as was collected in the physical car. Using TensorFlow, they then ran distributed machine learning jobs using the Kubeflow framework on the aforementioned Kubernetes cluster. Fledge is used to manage the sensor data from both the actual and virtual track runs. Motorsports.ai uses these results to provide predictions that ensure competitive performance.

A key attribute for a digital twin is validation and verification, which in this case required precise mapping of the track surface type. The coefficient of friction between the car’s tires and the track varies at each venue; the surface could be asphalt, concrete, painted, or all of the above. To identify the surface type for the digital twin, an Intel RealSense camera was deployed on the car, oriented toward the track. ML models running on the NVIDIA Jetson Xavier platform were then used to identify surface categories, while Fledge added telemetry and timestamps to the models’ surface type predictions.
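As an illustration of that classification step, the sketch below wires a per-frame classifier to the telemetry Fledge attaches. The model file, input size, and label set are hypothetical stand-ins; the production models ran on the Jetson Xavier:

```python
# Illustrative sketch: classify the track surface in each camera frame
# and attach the timestamp/GPS telemetry recorded alongside.
# 'surface_classifier.h5', the 224x224 input size, and the label set are
# assumptions, not the actual Motorsports.ai model.
import numpy as np
import tensorflow as tf
from datetime import datetime, timezone

SURFACES = ['asphalt', 'concrete', 'painted']  # assumed label set
model = tf.keras.models.load_model('surface_classifier.h5')


def classify_frame(frame_bgr, gps_fix):
    # Convert BGR camera frames to RGB and scale to the model's input.
    img = tf.image.resize(frame_bgr[..., ::-1], (224, 224)) / 255.0
    probs = model.predict(img[None, ...], verbose=0)[0]
    return {
        'surface': SURFACES[int(np.argmax(probs))],
        'confidence': float(np.max(probs)),
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'gps': gps_fix,  # (lat, lon) from the car's GPS receiver
    }
```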

Winning the Race

Gradient used the Motorsports.ai simulation to optimize suspension and alignment settings for every race in 2019.  The simulation enabled far more rigorous tuning than their previous manual method.  Instead of building a configuration based on a few laps and driver “feel”, they were able to run tens of thousands of simulations using precise measurement.  They were able to investigate conditions that had not yet been experienced on the actual track and ensure that they could compete effectively in them.

For example, for tire camber alone they ran 244 simulations, testing the camber of each tire at 0.1° increments through a range of -3.0° to +3.0°.  All told, Gradient simulated more than 10,000 track runs for each race, orders of magnitude more than would have been possible using a human driver.
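One plausible reading of the 244 figure: a 0.1° sweep across -3.0° to +3.0° yields 61 settings per tire, and sweeping each of the four tires one at a time gives 61 × 4 = 244 runs. A quick sketch of that enumeration (the tire labels are assumed):

```python
# 0.1-degree camber steps from -3.0 to +3.0 inclusive: 61 settings per
# tire; sweeping each of the 4 tires independently gives 61 * 4 = 244.
import numpy as np

camber_settings = np.round(np.arange(-3.0, 3.0 + 1e-9, 0.1), 1)
assert len(camber_settings) == 61

runs = [(tire, float(c))
        for tire in ('FL', 'FR', 'RL', 'RR')  # assumed tire labels
        for c in camber_settings]
print(len(runs))  # 244
```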

The results showed on the scoreboard. Gradient broke the Circuit of the Americas track record by over 1.1 seconds and the Mid-Ohio Sports Car Course record by more than 1 second. They continued their success at other races and took home the national GTB1 championship title of the 2019 Pirelli Triple Trofeo.

To learn more about Fledge, visit https://www.lfedge.org/projects/fledge/, get the code here, or engage with the community on Slack (#fledge).  To learn more about LF Edge, visit https://www.lfedge.org/.

EdgeX Foundry’s Call for Nominations for the 3rd Annual EdgeX Awards

Categories: Blog, EdgeX Foundry

It’s that time again! Time to step back from all of the hard work that the EdgeX Foundry Community has put into building out our latest version and celebrate what we have accomplished over the last year. Join us as we recognize members who have gone the extra mile to make EdgeX so successful, with our 3rd Annual EdgeX Community Awards.

Milestones

When we first started the awards in 2018, we had about 1,000 downloads and about 20 contributors. A year later, in April 2019, we moved within LF Edge and became its first Stage 3 project. At that time, EdgeX Foundry had about 75 contributors and 100,000 total downloads. We also released two more versions (Edinburgh and Fuji). As the community grew, so did the scope and the downloads. In September, we hit 1 million downloads, and we closed out the year with our second million. By the end of February 2020, we had surpassed 3 million downloads and more than 150 contributors.

How to Participate

If you are already a member of the EdgeX Foundry community, please take a moment and visit our EdgeX Awards page and nominate the community member that you feel best exemplified Innovation and/or contributed leadership and helped EdgeX Foundry advance and grow over the last year.  Nominations are being accepted through April 5, 2020.

The Awards

Innovation Award:  The Innovation Award recognizes individuals who have contributed the most innovative solution to the project over the past year.

Previous Winners:

2018- Drasko Draskovic (Mainflux) and Tony Espy (Canonical)

2019- Trevor Conn (Dell) and Cloud Tsai (IOTech)

Contribution Award:  The Contribution Award recognizes individuals who have contributed leadership and helped EdgeX Foundry advance and continue momentum in the past year. This includes individuals who have led work groups, special projects, and/or contributed large amounts of code, bug fixes, marketing material, documentation, or otherwise significantly advanced the efforts of EdgeX with their drive and leadership.

Previous Winners:

2018- Andy Foster (IOTech) and Drasko Draskovic (Mainflux)

2019- Michael Johanson (Intel) and Lenny Goodell (Intel)

To learn more about EdgeX Foundry, visit the website or join our Slack to ask questions and engage with community members. If you are not already a member of our community, it is really easy to join. Simply visit our wiki page and/or check out our GitHub and help us get to the next 3 million downloads and beyond!


IoT, AI & Networking at the Edge

Categories: Blog, LF Edge, Open Glossary of Edge Computing, State of the Edge

Written by LF Edge member Mike Capuano, Chief Marketing Officer for Pluribus Networks

This blog originally ran on the State of the Edge website. 

5G is the first upgrade to the cellular network that will be justified not only by higher speeds and new capabilities targeted at consumer applications, such as low latency gaming, but also by its ability to support enterprise applications. The Internet of Things (IoT) will become the essential fuel for this revolution, as it transforms almost every business, government, and educational institution. By installing sensors, video cameras, and other devices in buildings, factories, stadiums, and other locations, such as in vehicles, enterprises can collect and act on data to make themselves more efficient and more competitive. This digital transformation will create a better and safer environment for employees and deliver the best possible user experience to end customers. In this emerging world of 5G-enabled IoT, edge computing will play a critical role.

IoT will leverage public and private 5G, AI, and edge compute. In many cases, analysis of the IoT data will be highly complex, requiring the correlation of multiple data input streams fed into an AI inference model—often in real time. Use cases include factory automation and safety, energy production, smart cities, traffic management, large venue crowd management, and many more. Because the data streams will be large and will often require immediate decision-making, they will benefit from edge compute infrastructure that is in close proximity to the data in order to reduce latency and data transit costs, as well as ensure autonomy if the connection to a central data center is cut.

Owing to these requirements, AI stacks will be deployed in multiple edge locations, including on premises, in 5G base station aggregation sites, in telco central offices and at many more “edges”. We are rapidly moving from a centralized data center model to a highly distributed compute architecture. Developers will place workloads in edge locations where applications can deliver their services at the highest performance with the lowest cost.

Critical to all of this will be the networking that will connect all of these edge locations and their interrelated compute clusters. Network capabilities must now scale to a highly distributed model, providing automation capabilities that include Software Defined Networking (SDN) of both the physical and virtual networks.

Network virtualization—like the virtualization of the compute layer—is a flexible, software-based representation of the network built on top of its physical properties. The physical network is obviously required for basic connectivity – we must move bits across the wire, after all. SDN automates this physical “underlay” but it is still rigid since there are physical boundaries. Network virtualization is a complete abstraction of the physical network. It consists of dynamically-constructed VXLAN tunnels supported by virtual routers, virtual switches and virtual firewalls — all defined and instantiated in software, all of which can be manipulated in seconds, far faster than reconfiguring the physical network (even with SDN).
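To make “instantiated in seconds” concrete, here is a minimal sketch that creates and brings up a VXLAN tunnel endpoint from Python using the pyroute2 library. The VNI, underlay interface, and multicast group are arbitrary example values, and this is a sketch of the raw mechanism, not any vendor’s SDN implementation:

```python
# Sketch: instantiate a VXLAN overlay interface in software (pyroute2).
# VNI 42, underlay 'eth0', and the multicast group are example values.
# Requires root / CAP_NET_ADMIN on a Linux host.
from pyroute2 import IPRoute


def create_vxlan(vni=42, underlay='eth0', group='239.1.1.1'):
    ipr = IPRoute()
    idx = ipr.link_lookup(ifname=underlay)[0]
    # Create the tunnel endpoint: L2 frames get encapsulated in UDP.
    ipr.link('add', ifname=f'vxlan{vni}', kind='vxlan',
             vxlan_id=vni, vxlan_link=idx, vxlan_group=group)
    # Bring it up; it can now be attached to a bridge or virtual switch.
    ipr.link('set', index=ipr.link_lookup(ifname=f'vxlan{vni}')[0],
             state='up')
    ipr.close()


if __name__ == '__main__':
    create_vxlan()
```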

To satisfy the requirements of a real-time, edge-driven IoT environment, we must innovate to deliver cost effective, simple, and unified software defined networking across both the physical and virtual networks that support the edge. The traditional model for SDN was not based on these requirements. It was built for large hyperscale and enterprise data centers that rely on multiple servers and software licenses incurring costs and consuming space and power. This approach also requires complex integrations to deploy and orchestrate the physical underlay, the virtual overlay, along with storage and compute.

Deploying traditional SDN into edge environments is not an attractive solution. Often there will not be space to deploy multiple servers for management and SDN control of the physical and virtual networks. Furthermore, all the SDN controllers need to be orchestrated by a higher layer “controller of controllers” to synchronize network state, which adds unnecessary latency, cost and complexity.

In some cases, companies also deploy SmartNICs (a network interface controller that also performs networking functions to offload processing from the CPU). SmartNICs allow packet processing associated with network virtualization without burdening the primary compute (which is better utilized supporting other workloads). Also, hardware-based taps, probes, and packet brokers are being deployed to support network telemetry and analytics.

Applying the network automation model we built for large centralized data centers will be expensive and cumbersome, as well as space and power inefficient, in edge environments. The industry needs to rethink the approach to edge network automation and deliver a solution designed from the ground up for distributed environments. Ideally this solution does not require additional hardware and software but can leverage the switches and compute that are already being deployed to resource constrained edge environments.

The good news is that a number of companies are developing new approaches that deliver highly distributed SDN control that unifies the physical underlay and virtual overlay along with providing network analytics — all with no additional external hardware or software. These new technologies can utilize, for example, the fairly powerful and underutilized CPU and memory in “white box” Top of Rack (TOR) switches to deliver SDN, network virtualization, network analytics and a DCGW (Data Center Gateway) router function. In other words, these solutions have been designed with the edge in mind and are delivering powerful automation with no extra hardware and additional software licenses – supporting the edge with a cost effective solution that also saves space and power.

Pluribus Networks delivers open networking solutions based on a unique next-gen SDN fabric for data centers and distributed cloud edge compute environments.

The State of the Edge 2020 report is available to read now for free. We encourage anyone who is interested in edge computing to give it a read and to send feedback to State of the Edge.

Edge computing architecture and use cases

Categories: Blog, LF Edge, Use Cases

Written by Jason Gonzalez, Jason Hunt, Mathews Thomas, Ryan Anderson, and Utpal Mangla with LF Edge Premier Member company IBM

This blog originally ran on the IBM website. For more articles from IBM, visit https://developer.ibm.com/articles/.

Developing new services that take advantage of emerging technologies like edge computing and 5G will help generate more revenue for many businesses today, but especially for telecommunications and media companies. IBM works with many telecommunications companies to help explore these new technologies so that they can better understand how the technologies are relevant to current and future business challenges.

The good news is that edge computing is based on an evolution of an ecosystem of trusted technologies. In this article, we will explain what edge computing is, describe relevant use cases for the telecommunications and media industry while describing the benefits for other industries, and finally present what an end-to-end architecture that incorporates edge computing can look like.

What is Edge Computing?

Edge computing comprises technologies that take advantage of computing resources available outside of traditional and cloud data centers, so that workloads are placed closer to where data is created and actions can be taken in response to an analysis of that data. By harnessing and managing the compute power that is available on remote premises, such as factories, retail stores, warehouses, hotels, distribution centers, or vehicles, developers can create applications that:

  • Substantially reduce latencies
  • Lower demands on network bandwidth
  • Increase privacy of sensitive information
  • Enable operations even when networks are disrupted

To move the application workload out to the edge, multiple edge nodes might be needed, as shown in Figure 1.

Figure 1. Examples of edge nodes


These are some of the key components that form the edge ecosystem:

  • Cloud
    This can be a public or private cloud, which can be a repository for the container-based workloads like applications and machine learning models. These clouds also host and run the applications that are used to orchestrate and manage the different edge nodes. Workloads on the edge, both local and device workloads, will interact with workloads on these clouds. The cloud can also be a source and destination for any data that is required by the other nodes.
  • Edge device
    An edge device is a special-purpose piece of equipment that also has compute capacity that is integrated into that device. Interesting work can be performed on edge devices, such as an assembly machine on a factory floor, an ATM, an intelligent camera, or an automobile. Often driven by economic considerations, an edge device typically has limited compute resources. It is common to find edge devices that have ARM or x86 class CPUs with 1 or 2 cores, 128 MB of memory, and perhaps 1 GB of local persistent storage. Although edge devices can be more powerful, they are the exception rather than the norm currently.
  • Edge node
    An edge node is a generic way of referring to any edge device, edge server, or edge gateway on which edge computing can be performed.
  • Edge cluster/server
    An edge cluster/server is a general-purpose IT computer that is located in a remote operations facility such as a factory, retail store, hotel, distribution center, or bank. An edge cluster/server is typically constructed with an industrial PC or racked computer form factor. It is common to find edge servers with 8, 16, or more cores of compute capacity, 16GB of memory, and hundreds of GBs of local storage. An edge cluster/server is typically used to run enterprise application workloads and shared services.
  • Edge gateway
    An edge gateway is typically an edge cluster/server which, in addition to being able to host enterprise application workloads and shared services, also has services that perform network functions such as protocol translation, network termination, tunneling, firewall protection, or wireless connection. Although some edge devices can serve as a limited gateway or host network functions, edge gateways are more often separate from edge devices.

IoT sensors are fixed-function equipment that collects and transmits data to an edge node or the cloud but does not have onboard compute, memory, or storage; containers cannot be deployed on them for this reason. These sensors connect to the different nodes but are not reflected in Figure 1 because they are fixed-function equipment.

With a basic understanding of edge computing, let’s take a brief moment to discuss 5G and its impact on edge computing before we discuss the benefits and challenges around edge computing.

Edge Computing and 5G

The advent of 5G has made edge computing even more compelling, enabling significantly improved network capacity, lower latency, higher speeds, and increased efficiency. 5G promises data speeds in excess of 20 Gbps and the ability to connect over a million devices per square kilometer.

Communications service providers (CSPs) can use edge computing and 5G to be able to route user traffic to the lowest latency edge nodes in a much more secure and efficient manner. With 5G, CSPs can also cater to real-time communications for next-generation applications like autonomous vehicles, drones, or remote patient monitoring. Data intensive applications that require large amounts of data to be uploaded to the cloud can run more effectively by using a combination of 5G and edge computing.

With the advent of 5G and edge computing, developers will need to continue to focus on making native cloud applications even more efficient. The continual addition of newer and smaller edge devices will require changes to existing applications so that enterprises can fully leverage the capabilities of 5G and edge computing. In some cases, applications will need to be containerized and run on a very small device. In other cases, virtualized network components need to be redesigned to take full advantage of the 5G network. Many other cases exist that require evaluation as part of the application development roadmap and future state architecture.

Benefits and challenges of edge computing

As emerging technologies, 5G and edge computing bring many benefits to many industries, but they also bring some challenges along with them.

Core benefits

The benefits of edge computing technology include these core benefits:

  • Performance: Near-instant compute and analytics at the edge lowers latency and therefore greatly increases performance. With the advent of 5G, it is possible to rapidly communicate with the edge, and applications that are running at the edge can quickly respond to the ever-growing demand of consumers.
  • Availability: Critical systems need to operate irrespective of connectivity. There are many potential points of failure in the current communications flow, including the company’s core network and the multiple hops across multiple network nodes, with security risks along the network paths. With edge computing, communication happens primarily between the consumer/requestor and the device/local edge node, resulting in increased availability of the systems.
  • Data security: In edge computing architectures, the analytics data potentially never leaves the physical area where it is gathered and is used within the local edge. Only the edge nodes need to be primarily secured, which makes security easier to manage and monitor and keeps the data more secure.

Additional benefits, with additional challenges

These additional benefits do not come without additional challenges, such as:

  • Managing at scale
  • Making workloads portable
  • Security
  • Emerging technologies and standards

Managing at scale

As workload locations shift when you incorporate edge computing, and the deployment of applications and analytics capabilities occurs in a more distributed fashion, the ability to manage this change in an architecturally consistent way requires the use of orchestration and automation tools in order to scale.

For example, if an application is moved from one data center with always-available support to hundreds of locations at the local edge that are not readily accessible or lack local technical support, how one manages the lifecycle and support of the application must change.

To address this challenge, new tools and training for technical support teams will be needed to manage, orchestrate, and automate this new complex environment. Teams will require more than the traditional network operations tools (and even beyond software-defined network operations tools), as support teams will need tools to help manage application workloads in the context of the network across many more distributed environments than previously, each with differing strengths and capabilities. In future articles in this series, we will look at these application and network tools in more detail.

Making workloads portable

To operate at scale, the workloads being considered for edge computing localization need to be modified to be more portable.

  • First, size does matter. Since reducing latency requires moving a workload closer to the edge, and edge components offer fewer compute resources to run it, the overall size of various workloads might limit the potential of edge computing.
  • Second, adopting a portability standard or set of standards can be difficult with many varied workloads to consider. Also, the standard should allow for management of the full lifecycle of the application, from build through run and maintain.
  • Third, work will need to be done on how best to break up workloads into sub-components to take advantage of the distributed architecture of edge computing. This would allow certain parts of a workload to run on an edge device, with others running on an edge cluster/server or distributed across other edge components. In a CSP, this would typically include migrating to a combination of network function virtualization (for network workloads) and container workloads (for application workloads and, in the future, network workloads), where applicable and possible. Depending on the current application environment, this move might be a large effort.

To address this challenge in a reasonable way, workloads can be prioritized based on a number of factors, including benefit of migration, complexity, and resource/time to migrate. Many network and application partners are already working on migrating capabilities to container-based approaches, which can aid in addressing this challenge.

Security

While data security is a benefit in that the data can be limited to certain physical locations for specific applications, overall security is an additional challenge when adopting edge computing. Physical devices at the device edge might not have the ability to leverage existing security standards or solutions due to their limited capabilities.

To address these security challenges, the upstream infrastructure in the local edge might need to cover additional security concerns. In addition, with multiple device edges, security is now distributed and more complex to handle.

Emerging technologies and standards

With many standards in this ecosystem newly created or quickly evolving, it will be difficult to maintain technology decisions long term. The decision of a specific edge device or technology might be superseded by the next competing device, making it a challenging environment to operate in. With the right tools in place to address management of these varied workloads along with their entire application lifecycle, it can be an easier task to introduce new devices or capabilities or replace existing devices as the technologies and standards evolve.

Cross Industry Use Cases

In any complex environment, there are many challenges that occur and many ways to address them. The evaluation of whether a business problem or use case could or should be solved with edge computing will need to be done on a case-by-case basis to determine whether it makes sense to pursue.

For reference, let’s discuss some potential industry use cases for consideration, both examples and real solutions. A common theme across all these industries is the network that will be provided by the CSP. The devices with applications need to operate within the network and the advent of 5G makes it even more compelling for these industries to start seriously considering edge computing.

Use case #1: Video Surveillance

Using video to identify key events is rapidly spreading across all domains and industries. Transmitting all the data to the cloud or data center is expensive and slow. Edge computing nodes that consist of smart cameras can do the initial level of analytics, including recognizing entities of interest. The footage of interest can then be transmitted to a local edge for further analysis, which can respond appropriately, including by raising alerts. Content that cannot be handled at the local edge can be sent to the cloud or data center for in-depth analysis.

Consider this example: A major fire occurs, and it is often difficult to differentiate humans from other objects burning. In addition, the network can become heavily loaded in such instances. With edge computing, cameras located close to the event can determine whether a human is caught in the fire by identifying characteristics typical of a human being and of clothing that humans normally wear which might survive the fire. As soon as the camera recognizes a human in the video content, it starts transmitting the video to the local edge. Hence, the network load is reduced because transmission only happens when a human is recognized. In addition, the local edge is close to the device edge, so latency will be almost zero. Lastly, the local edge can contact the appropriate authorities directly instead of transmitting the data to the data center, which would be slower, especially since the network from the fire site to the data center might be down.
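A minimal sketch of that camera-side gating logic appears below. Here detect_person() and send_to_local_edge() are hypothetical stand-ins for an on-camera ML model and a camera-to-edge transport; the point is simply that frames are transmitted only around a detection:

```python
# Sketch of camera-side gating: stream frames to the local edge only
# while (and shortly after) a person is detected in the scene.
# detect_person() and send_to_local_edge() are hypothetical stand-ins.
import time


def detect_person(frame) -> bool:
    """Placeholder for an on-camera person-detection model."""
    raise NotImplementedError


def send_to_local_edge(frame) -> None:
    """Placeholder for the camera-to-edge transport (e.g. RTP or MQTT)."""
    raise NotImplementedError


def surveillance_loop(camera, fps=10, hold_seconds=30):
    last_seen = float('-inf')
    while True:
        frame = camera.read()
        if detect_person(frame):
            last_seen = time.monotonic()
        # Keep streaming for a hold window after the last detection so
        # the local edge receives continuous footage of the event.
        if time.monotonic() - last_seen < hold_seconds:
            send_to_local_edge(frame)
        time.sleep(1.0 / fps)
```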

Figure 2. Depiction of intelligent video example


Use case #2: Smart Cities

Operating and governing cities has become a challenging mission due to many interrelated issues, such as increasing operating costs from aging infrastructure, operational inefficiencies, and rising expectations from a city’s citizens. The advancement of many technologies like IoT, edge computing, and mobile connectivity has helped smart city solutions gain popularity and acceptance among a city’s citizens and governance alike.

IoT devices are the basic building blocks of any smart city solution. Embedding these devices into the city’s infrastructure and assets helps monitor infrastructure performance and provides insightful information about the behavior of these assets. Due to the necessity of providing real-time decisions and avoiding the transmission of large amounts of sensor data, edge computing becomes a necessary technology for delivering a city’s mission-critical solutions, such as traffic, flood, security and safety, and critical infrastructure monitoring.

Use case #3: Connected Cars

Connected cars can gather data from various sensors within the vehicle, including data on user behavior. The initial analysis and compute on the data can be executed within the vehicle. Relevant information can be sent to the base station, which then transmits the data to the relevant endpoint, which might be a content delivery network in the case of a video transmission or an automobile manufacturer’s data center. The manufacturer might also have a relationship with the CSP, in which case the compute node might be at a base station owned by the CSP.

Use case #4: Manufacturing

Rapid response to manufacturing processes is essential to reduce product defects and improve efficiencies. Analytic algorithms monitor how well each piece of equipment is running and adjust the operating parameters to improve its efficiency. Analytic algorithms also detect and predict when a failure is likely to occur so that maintenance can be scheduled on the equipment between runs. Different edge devices are capable of differing levels of processing and relevant information can be sent to multiple edges including the cloud.

Consider this example: A manufacturer of electric bicycles is trying to reduce downtime. There are over 3,000 pieces of equipment on the factory floor, including presses, assembly machines, paint robots, and conveyors. An outage in a production run costs $250,000 per hour. Operating at peak efficiency with no unplanned outages is the difference between having profit and not having profit. To operate smoothly, the factory needs the following to run at the edge:

  • Analytic algorithms that can monitor how well each piece of equipment is running and then adjust the operating parameters to improve efficiency.
  • Analytic algorithms that can detect and predict when a failure is likely to occur so that maintenance can be scheduled on the equipment between runs.

Predicting failure can be complex and requires customized models for each use case. For one example of how these types of models can be created, refer to this code pattern, “Create predictive maintenance models to detect equipment breakdown risks.” Some of these models need to run on the edge, and our next set of tutorials will explain how to do this.
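For a flavor of what edge-side failure prediction can look like in its simplest form, the sketch below flags a machine when a sensor reading drifts well outside its recent rolling baseline. This is an illustrative toy, not the code pattern referenced above:

```python
# Toy edge-side drift detector: alarm when a reading sits far outside
# the rolling baseline of recent readings (a crude failure precursor).
from collections import deque
import statistics


class DriftDetector:
    def __init__(self, window=500, threshold=4.0):
        self.history = deque(maxlen=window)  # recent sensor readings
        self.threshold = threshold           # z-score alarm level

    def update(self, reading: float) -> bool:
        alarm = False
        if len(self.history) >= 30:  # wait for a minimal baseline
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            # How many standard deviations from the recent baseline?
            alarm = abs(reading - mean) / stdev > self.threshold
        self.history.append(reading)
        return alarm


# Hypothetical usage on an edge node: schedule maintenance between runs
# when, say, a vibration sensor starts drifting before a breakdown.
detector = DriftDetector()
for reading in [0.9, 1.0, 1.1] * 20 + [5.0]:
    if detector.update(reading):
        print('maintenance recommended')
```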

Figure 3. Depiction of manufacturing example


Overall Architecture

As we discussed earlier, edge computing consists of three main nodes:

  1. Device edge, where the edge devices sit
  2. Local edge, which includes both the infrastructure to support the application and also the network workloads
  3. Cloud, the nexus of the environment, where everything that needs to come together comes together

Figure 4 represents an architecture overview of these details with the local edge broken out to represent the workloads.

Figure 4. Edge computing architecture overview


Each of these nodes is an important part of the overall edge computing architecture.

  1. Device Edge
    The actual devices running on-premises at the edge, such as cameras, sensors, and other physical devices that gather data or interact with edge data. Simple edge devices gather or transmit data, or both. More complex edge devices have the processing power to do additional activities. In either case, it is important to be able to deploy and manage the applications on these edge devices. Examples of such applications include specialized video analytics, deep learning AI models, and simple real-time processing applications. IBM’s approach (in its IBM Edge Computing solutions) is to deploy and manage containerized applications on these edge devices.
  2. Local Edge
    The systems running on-premises or at the edge of the network. The edge network layer and edge cluster/servers can be separate physical or virtual servers existing in various physical locations, or they can be combined in a hyperconverged system. There are two primary sublayers to this architecture layer. Both the components required to manage the applications in these layers and the applications destined for the device edge reside here.

    • Application layer: Applications that cannot run at the device edge because the footprint is too large for the device will run here. Example applications include complex video analytics and IoT processing.
    • Network layer: Physical network devices will generally not be deployed due to the complexity of managing them. The entire network layer is mostly virtualized or containerized. Examples include routers, switches, or any other network components that are required to run the local edge.
  3. Cloud
    This architecture layer is generically referred to as the cloud, but it can run on premises or in the public cloud. This layer is the source for workloads: the applications that handle processing not possible at the other edge nodes, plus the management layers. Workloads include application and network workloads that are deployed to the different edge nodes by using the appropriate orchestration layers.

Figure 5 illustrates a more detailed architecture that shows which components are relevant within each edge node. Duplicate components such as Industry Solutions/Apps exist in multiple nodes because certain workloads might be more suited to either the device edge or the local edge, and other workloads might be dynamically moved between nodes under certain circumstances, either manually controlled or with automation in place. It is important to manage workloads in discrete ways: the less discrete they are, the more limited we are in how we can deploy and manage them.

While a focus of this article has been on application and analytics workloads, it should also be noted that network function is a key set of capabilities that should be incorporated into any edge strategy and thus our edge architecture. The adoption of tools should also take into consideration the need to handle application and network workloads in tandem.

Figure 5. Edge computing architecture overview – Detailed


We will be exploring every aspect of this architecture in more detail in upcoming articles. However, for now, let’s take a brief look at one real implementation of a complex edge computing architecture.

Sample implementation of an edge computing architecture

In 2019, IBM partnered with telecommunications companies and other technology participants to build a Business Operation System solution. The focus of the project was to allow CSPs to manage and deliver multiple high-value products and services to market more quickly and efficiently, including capabilities around 5G.

Edge computing is a part of the overall architecture as it was necessary to provide key services at the edge. The following illustrates the implementation with further extensions having since been made. The numbers below refer to the numbers in Figure 6:

  1. A B2B customer goes to the portal and orders a service for video analytics using drones.
  2. The appropriate containers are deployed to the different edge nodes. These containers include visual analytics applications and network-layer components to manage the underlying network functionality required for the new service.
  3. The service is provisioned, and drones start capturing the video.
  4. Initial video processing is done by the drones and the device edge.
  5. When an item of interest is detected, it is sent to the local edge for further processing.
  6. At some point, as new and unexpected features begin to appear in the video, it is determined that a new model needs to be deployed to the edge device, and the new model is deployed.
Figure 6. Edge computing architecture overview – TM Forum Catalyst Example


As we continue to explore edge computing in upcoming articles, we will focus more and more on the details, but let’s remember that edge computing is only one part, albeit an important one, of an overall strategy and architecture.

Conclusion

In this brief overview of edge computing technology, we’ve shown how edge computing is relevant to challenges faced by many industries, but especially the telecommunications industry.

The edge computing architecture identifies the key layers of the edge: the device edge (which includes edge devices), the local edge (which includes the application and network layer), and the cloud edge. But we’re just getting started. Our next article in this series will dive deeper into the different layers and tools that developers need to implement an edge computing architecture.

Akraino Edge Stack Use Cases: Tencent’s End User Story

Categories: Akraino Edge Stack, Blog

Written by Tencent’s Xin Qiu and Mark Shan, active members of the Akraino technical community

Akraino, a project creating an open source software stack that supports high-availability cloud services optimized for edge computing systems and applications, has created blueprints to support a variety of edge use cases. The Akraino community tests and validates the blueprints on real hardware labs supported by users and community members. Below is a description of several validated blueprints by LF Edge member Tencent.

Tencent has been deploying Tars on the Connected Vehicle Blueprint and validating it on IEC Type 4: AR/VR-oriented Edge Stack, part of the Integrated Edge Cloud (IEC) Blueprint Family.

Tars Microservice Framework

Tars is a key component of both the Connected Vehicle Blueprint and the IEC Type 4 AR/VR Blueprint, so we introduce the Tars framework before diving into the blueprints themselves. Tars is a high-performance microservice framework that currently supports C++, Java, Node.js, and PHP. The Tars framework offers a set of efficient solutions for development, testing, and maintenance. It integrates extensible components for encoding/decoding, a high-performance RPC protocol, name service, monitoring, statistics, and configuration. The Tars framework is a great help for reliable distributed application development.

The Tars framework has been used within Tencent since 2008. Today it is used by hundreds of applications at Tencent, and services developed on the Tars framework run on 16,000 machines. The applications include, but are not limited to, social networks, gaming, video, financial services, telecom, and education.

Tars architecture

The following picture depicts Tars’ architecture. Tars can be deployed in multiple environments, including bare metal, virtual machines, and containers.

In terms of RPC, Tars supports the Tars protocol, Protocol Buffers, TUP, SSL, and HTTP/1 and HTTP/2. Customers can select an RPC protocol based on the application’s requirements.

Beyond RPC calls, Tars supports multiple languages and microservice governance functions, including service registration/discovery, load balancing, area perception, flow control, set mode, tracing, and so on.

Tars Optimizations for the Edge

To address the requirements of time-critical applications on the edge, several optimizations were made, itemized below:

  • High-performance RPC: The Tars framework supports a high-performance RPC protocol, called the Tars protocol. To improve performance, the Tars protocol was designed from scratch as a brand-new RPC protocol that makes the conversion between a request and its binary stream representation more efficient. For details of the Tars protocol, refer to the Tars.h file in https://github.com/TarsCloud/Tars.
  • Lightweight framework
    1. To ensure Tars can be deployed well on edge computing platforms, we made the framework pluggable. Customers plug in the pluggable components only when they are really needed; unnecessary components can be avoided.
    2. Some functions (such as scale-out and monitoring) were rewritten to reduce CPU consumption.
    3. Non-urgent functions (such as monitor data calculation) were orchestrated from the edge to the DC (or a higher-level edge) to reduce resource consumption.

For more detailed information about Tars, refer to https://github.com/TarsCloud/Tars/blob/master/Introduction.md.

Connected Vehicle Blueprint

The Connected Vehicle Blueprint focuses on establishing an open-source MEC platform, which is the backbone for V2X applications.

Sponsor Companies

Tencent, Arm, Intel, Nokia

Value Proposition

  • Establish an open-source edge MEC platform on which each member company can develop its own V2X application.
  • Contribute some use cases which help customers adopt V2X applications.
  • Find some members who can work together to figure out something big for the industry.

Use Cases

From the use case perspective, the following are potential use cases; more are possible in the future.

  • Accurate Location: Location accuracy improved by more than 10 times over today’s GPS system. Today’s GPS is around 5-10 meters off from your real location; with the help of the Connected Vehicle Blueprint, accuracy of less than 1 meter is possible.
  • Smarter Navigation: Real-time traffic information updates reduce latency from minutes to seconds and figure out the most efficient route for drivers.
  • Safe Drive Improvement: Identifies potential risks that cannot be seen by the driver.
  • Reduce Traffic Violations: Helps the driver understand the traffic rules in a specific area, for instance, changing lanes before a narrow street, avoiding driving the wrong way on a one-way road, avoiding the carpool lane when driving alone, and so on.

Network Architecture

From the network architecture perspective, the blueprint can be deployed in both 4G and 5G networks. Two key points deserve special attention.

  1. One is offloading data to the edge MEC platform. The policy of data offload is configurable based on different applications.
  2. The other is the ability to let one edge talk to other edges as well as the remote data center. In some use cases, the data at one edge cannot satisfy the application’s requirements, so we need to collect data from different edges or send data to the remote data center.


The general data flows are itemized below:

  • Grab the traffic/vehicle information and circulate the information to the edge;
  • Different types of information are distributed to the corresponding application based on the configurable policy;
  • The corresponding edge applications process the information at very high speed and figure out suggested action items for drivers;
  • The suggested action items are sent back to the driver via the network.

MEC Development Architecture

From the MEC deployment perspective, the blueprint consists of three layers.

  • IaaS Layer: The Connected Vehicle Blueprint can be deployed on commodity hardware, virtual machines, or containers. The Arm and x86 platforms are both well supported.
  • PaaS Layer: Tars is the microservice framework of the Connected Vehicle Blueprint. Tars provides high-performance RPC calls, deploys microservices efficiently in large scale-out scenarios, and provides easy-to-use service management features.
  • SaaS Layer: The V2X applications depicted in the use cases above.

For more information about Connected Vehicle Blueprint,  please refer to:

Connected Vehicle Blueprint (aka CVB)

Release 2 Documents

Slide Deck

IEC Type 4: AR/VR-oriented Edge Stack for Integrated Edge Cloud (IEC) Blueprint Family

Integrated Edge Cloud (IEC) is an Akraino-approved blueprint family and part of Akraino Edge Stack, which intends to develop a fully integrated edge infrastructure solution; the project is completely focused on edge computing. This open-source software stack provides critical infrastructure to enable high performance, reduce latency, improve availability, lower operational overhead, provide scalability, address security needs, and improve fault management. The IEC project will address multiple edge use cases and industries, not just the telco industry. IEC intends to develop solutions for, and support of, carrier, provider, and IoT networks.

IEC Type 4 is focused on AR/VR applications on the edge. In general, the architecture consists of three layers: IaaS (IEC Type 2), PaaS (Tars), and SaaS (the AR/VR application).

Sponsor Companies

Tencent, Arm, HTC, IBM, MobiledgeX, Visby, UC Irvine, Juniper, PSU, Orange

Value Proposition

  • Establish an open-source edge infrastructure on which each member company can develop its own AR/VR application.
  • Contribute some use cases which help customers adopt AR/VR applications.
  • Find some members who can work together to figure out something big for the industry.

Use Cases

The combination of IEC Type 2 and Tars provides high-performance, high-availability infrastructure for AR/VR applications. These applications include, but are not limited to, operation guidance, virtual classrooms, live sports, gaming, and so on.

For Release 2, we focused on building the infrastructure and the virtual classroom application (highlighted in dark purple). Virtual Classroom is a basic app that allows you to live a virtual reality experience simulating a classroom with teachers and students.

Use cases and value propositions:

  • Operation Guidance: Predicts the next step of an operation (like assembling Lego blocks or cooking sandwiches) and helps people achieve a goal.
  • Virtual Classroom: Simulates a virtual classroom, which improves the online education experience for teachers and students.
  • Sports Live: Augments and simulates live sports, giving audiences an amazing immersive watching experience.
  • Gaming: Augments and simulates the game scenario, letting players enjoy an immersive game world.

Architecture

The whole architecture, shown below, consists of two parts: the front end and the backend.

  • For the front end, the minimal requirement is two clients, one for the teacher and the other for the student. The client device could be a cellphone, tablet, wearable device, personal computer, etc. The client collects information from the real world and transfers the data to the backend for calculation. Beyond data transfer and calculation, rendering is another function that runs on the front-end client side.
  • For the backend, we deploy two virtual machines in the cloud.
    1. To make the VR backend work well, we deploy IEC at the IaaS layer, the Tars framework at the PaaS layer, and the Virtual Classroom backend at the SaaS layer.
    2. To make CI/CD available, we deploy a Jenkins master in one virtual machine. The Jenkins master issues commands to trigger scripts that run on the dedicated VM.

For more information about IEC Type 4,  please refer to:

Release 2 Documentation

Leader’s Statements

“Open source is Tencent’s core strategy. Tencent, as a platinum member and board member of the Linux Foundation, continuously promotes network innovation from an application perspective. We believe that application-driven networking will bring tremendous benefits to our customers and stakeholders. After the Tars project in 2018 and the recent Akraino blueprints, Tencent will contribute new open-source foundations and projects in the future. We welcome more Linux Foundation member companies to get involved!”

  • Xin Liu, General Manager of Future Network Lab, Tencent
“Tencent has been ready for open source and joined the Linux Foundation as a platinum member. Tencent also donated TARS, a multilingual microservice development framework that has been used for ten years, to the Linux Foundation. Tars is now a mature project in the Linux Foundation and currently provides mature infrastructure for projects in LF Edge. At present, TARS has become the unique microservice development framework in the Connected Vehicle and AR/VR blueprints of Akraino, providing the necessary conditions for high availability, high performance, and service governance of applications. We will continue to develop the TARS project and the Akraino project in edge computing. Tencent will establish an open-source foundation with more partners and contribute more projects to the open-source industry in the future. Welcome to join us!”
  • Mark Shan, Project consultant of Tencent Technical Committee, Akraino TSC member

*******************************

Guoxu Liu, Tencent

In Tencent buildings in Shenzhen, the ELIOT AIoT in Smart Office Blueprint has been deployed. The development of this blueprint helps enterprises increase the efficiency of management and decrease the overall cost of managing offices and meeting rooms. At the same time, employees can reserve and use meeting rooms more easily and conveniently.

Tencent is now planning to deploy this blueprint in Beijing, Shanghai, Guangzhou, and many other cities in China.

For more information about the Akraino blueprints or end user case stories, please visit the wiki: https://wiki.akraino.org. Additionally, if you have questions or comments, visit the LF Edge Slack Channel and share your thoughts in the #community or #akraino channels.