Category

LF Edge

LF Edge Expands Ecosystem with Open Horizon, adds Seven New Members and Reaches Critical Deployment Milestones

By Akraino Edge Stack, Announcement, Baetyl, EdgeX Foundry, Fledge, Home Edge, LF Edge, Open Horizon, Project EVE, State of the Edge

  • Open Horizon, an application and metadata delivery platform, is now part of LF Edge as a Stage 1 (At-Large) Project.
  • New members bring R&D expertise in Telco, Enterprise and Cloud Edge Infrastructure.
  • EdgeX Foundry hits 4.3 million downloads and Akraino R2 delivers 14 validated deployment-ready blueprints.
  • Fledge shares a race car use case optimizing car and driver operations using Google Cloud, Machine Learning and state-of-the-art digital twins and simulators.

SAN FRANCISCO – April 30, 2020 – LF Edge, an umbrella organization under The Linux Foundation that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced continued project momentum with the addition of a new project and several technical milestones for EdgeX Foundry, Akraino Edge Stack and Fledge. Additionally, the project welcomes seven new members including CloudBrink, Federated Wireless, Industrial Technology Research Institute (ITRI), Kaloom, Ori Industries, Tensor Networks and VoerEir to its ecosystem.

Open Horizon, an existing project contributed by IBM, is a platform for managing the service software lifecycle of containerized workloads and related machine learning assets. It enables autonomous management of applications deployed to distributed webscale fleets of edge computing nodes and devices without requiring on-premise administrators.

Edge computing brings computation and data storage closer to where data is created by people, places, and things. Open Horizon simplifies the job of getting the right applications and machine learning onto the right compute devices, and keeps those applications running and updated. It also enables the autonomous management of more than 10,000 edge devices simultaneously – that’s 20 times as many endpoints as in traditional solutions.
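In practice, each workload is described to Open Horizon as a small service definition that names the container image and target architecture; the agent on each node then pulls and supervises that workload autonomously. The abbreviated sketch below is hypothetical (the organization, service name, and registry are invented; consult the Open Horizon documentation for the authoritative schema):

```json
{
  "org": "myorg",
  "url": "com.example.sensor-analytics",
  "version": "1.0.0",
  "arch": "arm64",
  "deployment": {
    "services": {
      "sensor-analytics": {
        "image": "registry.example.com/sensor-analytics_1.0.0_arm64"
      }
    }
  }
}
```

Publishing a definition like this lets matching edge nodes receive, run, and update the workload without an administrator touching each device.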

“We are thrilled to welcome Open Horizon and new members to the LF Edge ecosystem,” said Arpit Joshipura, general manager, Networking, Edge & IoT, the Linux Foundation. “These additions complement our deployment ready LF Edge open source projects and our growing global ecosystem.”

“LF Edge is bringing together some of the most significant open source efforts in the industry,” said Todd Moore, IBM VP Open Technology. “We are excited to contribute the Open Horizon project as this will expand the work with the other projects and companies to create shared approaches, open standards, and common interfaces and APIs.”

Open Horizon joins LF Edge’s other projects including Akraino Edge Stack, Baetyl, EdgeX Foundry, Fledge, Home Edge, Project EVE and State of the Edge. These projects support emerging edge applications across areas such as non-traditional video and connected things that require lower latency, faster processing and mobility. By forming a software stack that brings together the best of cloud, enterprise and telecom, LF Edge helps to unify a fragmented edge market around a common, open vision for the future of the industry.

Since its launch last year, LF Edge projects have met significant milestones including:

  • EdgeX Foundry has hit 4.3 million Docker downloads.
  • Akraino Edge Stack (Release 2) has 14 specific Blueprints that have all been tested and validated in hardware labs and can be deployed immediately in various industries including Connected Vehicle, AR/VR, Integrated Cloud Native NFV, Network Cloud and Tungsten Fabric, and SDN-Enabled Broadband Access.
  • Fledge shares a race car use case optimizing car and driver operations using Google Cloud, Machine Learning and state-of-the-art digital twins and simulators.
  • State of the Edge merged under LF Edge earlier this month and will continue to pave the path as the industry’s first open research program on edge computing. Under the umbrella, State of the Edge will continue to develop its assets, including the State of the Edge Reports, the Open Glossary of Edge Computing and the Edge Computing Landscape.

Support from the Expanding LF Edge Ecosystem

Federated Wireless:

“LF Edge has become a critical point of collaboration for network and enterprise edge innovators in this new cloud-driven IT landscape,” said Kurt Schaubach, CTO, Federated Wireless. “We joined LF Edge to apply our connectivity and spectrum expertise to helping define the State of the Edge, and are energized by the opportunity to contribute to the establishment of next generation edge compute for the myriad of low latency applications that will soon be part of private 5G networks.”

Industrial Technology Research Institute (ITRI):

“ITRI is one of the world’s leading technology R&D institutions aiming to innovate a better future for society. Founded in 1973, ITRI has played a vital role in transforming Taiwan’s industries from labor-intensive into innovation-driven. We focus on the fields of Smart Living, Quality Health, and Sustainable Environment. Over the years, we also added a focus on 5G, AI, and Edge Computing related research and development. We joined LF Edge to leverage its leadership in these areas and to collaborate with the more than 75 member companies on projects like Akraino Edge Stack.”

Kaloom:

“Kaloom is pleased to join LF Edge to collaborate with the community on developing open, cloud-native networking, management and orchestration for edge deployments,” said Suresh Krishnan, chief technology officer, Kaloom. “We are working on a unified edge solution in order to optimize the use of resources while meeting the exacting performance, space and energy efficiency needs that are posed by edge deployments. We look forward to contributing our expertise in this space and to collaborating with the other members in LF Edge in accelerating the adoption of open source software, hardware and standards that speed up innovation and reduce TCO.”

Ori Industries:

“At Ori, we are fundamentally changing how software interacts with the distributed hardware on mobile operator networks,” said Mahdi Yahya, Founder and CEO, Ori Industries. “We also know that developers can’t provision, deploy and run applications seamlessly on telco infrastructure. We’re looking forward to working closely with the LF Edge community and the wider open-source ecosystem this year, as we turn our attention to developers and opening up access to the distributed, telco edge.”

Tensor Networks:

“Tensor Networks believes in and supports open source. Having an arena free from the risks of IP infringement in which to collaborate and develop value which can be accessible to more people and organizations is essential to our efforts. Tensor runs its organization and develops products on top of Linux. The visions of LF Edge, where networks and latency are part of open software based service composition and delivery, align with our vision of open, fast, smart, secure, connected, and customer driven opportunities across all industry boundaries.” – Bill Walker, Chief Technology Officer, Tensor Networks.

VoerEir:

“In our extensive work with industry leaders on NFVI/VIM test and benchmarking, a need to standardize infrastructure KPIs in Edge computing has gradually become more important,” said Arif Khan, Co-Founder of VoerEir AB. “This need has made it essential for us to join LF Edge and to initiate the new Feature Project “Kontour” under the Akraino umbrella. We are excited to collaborate with various industry leaders to define, standardize and measure Edge KPIs.”

About The Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###

LF Edge Member Spotlight: Altran

By Akraino Edge Stack, Blog, LF Edge, Member Spotlight

The LF Edge community is comprised of a diverse set of member companies that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sat down with Shamik Mishra, Vice President, Research and Innovation of Altran, to discuss the importance of open source software, collaboration with industry leaders in edge computing, security, how Altran contributes to the Akraino Edge Stack project, and the impact of being part of the LF Edge ecosystem.

Can you tell us a little about your organization?

Altran is a world leader in engineering and R&D services. The Group offers a unique value proposition that helps customers meet their transformation and innovation challenges. Altran supports its customers, from concept to industrialization, to develop the products and services of tomorrow. Altran has been working for more than 35 years with major players in many sectors: Automotive, Aeronautics, Space, Defense & Naval, Rail, Infrastructure & Transport, Industry & Consumer Products, Life Sciences, Communications, Semiconductor & Electronics, Software & Internet, Finance & Public Sector. In 2019, Capgemini and Altran announced a merger project in the context of a friendly tender offer to create a global leader in Intelligent Industry. Altran generated 3.2 billion in revenue in 2019, with more than 50,000 employees in more than 30 countries.

Why is your organization adopting an open source approach?

Open Source Software has revolutionized the technology industry globally in more ways than one. It has significantly shortened software/product development cycles, spurred innovation and boosted entrepreneurship. Licenses come free of cost, with the expenses typically being for deploying, hardening, supporting, customizing, and maintaining the software. Businesses have the flexibility of choosing and customizing the best solutions for their needs. At Altran, we help our clients lead into the future by solving their most complex engineering and R&D problems through specialized solutions. We see open source software as an opportunity to differentiate our offerings by accelerating development of such solutions for our clients. This provides them the flexibility of choice and a way to optimize costs while creating value for their own clients.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

Altran envisioned the significant role and impact of Edge Computing early in its evolution cycle. However, one of the key challenges is that different organizations in the industry perceive Edge Computing differently based on the nature of their business or pursuit. LF Edge is an umbrella organization that is agnostic of hardware, silicon, cloud and OS. This helps the industry leaders arrive at a common understanding of Edge Computing and create an open and standard framework for the technology. We believe LF Edge can decisively reduce the semantic dissonance on Edge Computing in the industry and help businesses work together to accelerate innovation.

What do you see as the top benefits of being part of the LF Edge community?

The LF Edge community has industry-leading organizations as members who specialize in large-scale hardware manufacturing (chips, mobile devices, network equipment, etc.), software product development, operating systems, engineering services, telecom services, cloud solutions and more. This diversity helps set a broad framework for Edge Computing by considering various concerns and scenarios relevant to the member domains. Since the framework is open source, its credibility is strengthened and it will likely be adopted widely in the industry. Altran, being a world leader in engineering and R&D services, believes the LF Edge forum presents a unique opportunity to engage with the community and contribute to the open source Edge Computing framework. Based on our early start in this space, the exposure to experts on the LF Edge forum helps us benefit from the views and propositions of diverse organizations to validate and strengthen our service offerings and competencies in Edge Computing.

What sort of contributions has your team made to the community, ecosystem through LF Edge participation?

Altran aims to play a leading role in the Edge Computing ecosystem by contributing some of its software and solution breakthroughs to the open community, including (but not limited to) contributions to compute frameworks, APIs, the management plane and use cases. Altran is currently active in the Akraino project, particularly on security aspects. Other areas of interest are the device edge and intelligent application development leveraging the LF Edge projects.

What do you think sets LF Edge apart from other industry alliances?

Three key features set LF Edge apart from other industry alliances and are likely to make it successful:

1. The diversity of members. The community has members ranging across multiple domains such as chip, mobile and network equipment manufacturing, software product development, telecom service providers, cloud service providers, and engineering service providers. The diversity is also geographical: it includes organizations across various geos, helping the community absorb a wide range of views and concerns. With so much diversity, the framework is inclined to remain neutral and is unlikely to be driven by the interests of a few members in the community.

2. The timing of the alliance. Edge Computing, in its evolution, is currently at a point where it is possible for the industry to come together and define a framework and standards that can eventually be adopted by various players and then evolved and customized. Unlike other alliances, the timing of this one helps it to be more successful.

3. The open source model. The framework is open source and the projects onboarded are also open source, which makes it easy to accelerate innovation in this space. The community also draws strength from the support of the Linux Foundation.

How will LF Edge help your business?

Altran’s vision for Multi-Access Edge Computing (MEC) is to create a developer-centric architecture and cloud-native platform that will make Edge discovery, onboarding and management of applications easy and seamless for Edge application developers. The Altran Ensconce platform brings together multiple capabilities, accelerators and frameworks that enable rapid development of MEC solutions. Being part of LF Edge, we see an opportunity to further evolve our MEC solutions, working with the LF Edge alliance on the open source framework for Edge Computing. We believe that our clients can benefit from this approach by quickly adopting our solutions and staying ahead in their businesses. We also believe that there is a unique opportunity for us to contribute to the community from our experience.

What advice would you give to someone considering joining LF Edge?

LF Edge is a great opportunity for various organizations in the industry that are interested in collaborating with other industry leaders to drive innovation in Edge Computing and mutually benefit from the ecosystem. Organizations can shape the future of Edge Computing by joining this alliance early in the innovation cycle.

To learn more about Akraino Edge Stack, click here. To find out more about our members or how to join LF Edge, click here. Additionally, if you have questions or comments, visit the LF Edge Slack Channel and share your thoughts in the #community or #akraino-tsc channels.

Linux Foundation Executive to Give Keynote Webinar on Momentum, Direction of Open Source Networking & Edge

By Announcement, Event, LF Edge

SAN FRANCISCO, April 15, 2020 — The Linux Foundation, the nonprofit organization enabling mass innovation through open source, will host a keynote webinar — “The State of Open Source Networking & Edge” — featuring Arpit Joshipura, general manager, Networking, Edge & IoT. The webinar takes place April 30 at 9:00 AM PT and is open to anyone interested in attending.

Hosted by LF Networking (LFN) and LF Edge, the webinar serves as a virtual update on the current state of the open networking and edge landscapes. Due to the COVID-19 outbreak, the Open Networking & Edge Summit (ONES) North America, which was initially scheduled to take place in Los Angeles, Calif. later this month, has been rescheduled to September 28-29. However, the important work of the ecosystem continues and it’s time for an update on that progress.

“We are all learning to adapt and be more nimble than ever before,” said Arpit Joshipura, general manager, Networking, Edge & IoT, the Linux Foundation. “While we aren’t able to meet face to face with our communities physically, we continue to accelerate community collaboration and momentum while evolving critical industry initiatives that impact how the world accesses information. Please join us April 30 to hear how open networking and edge communities are moving the needle.”

The webinar will not only cover critical industry initiatives such as the Common NFVI Telco Taskforce (CNTT), OPNFV Verification Program (OVP), new project inductions and releases, but Joshipura will present compelling evidence on how community collaboration is accelerating the path forward. This will include an update on deployments, business value-add, R&D, developer engagement, and challenges the community is addressing in 2020. The webinar also presents an opportunity to hear these major LF Networking and LF Edge announcements first-hand. Attendees are encouraged to engage and participate in an open Q&A session following the presentation.

The webinar serves as the first in a series of LF Networking Webinars to bring the community up to speed on open source networking news, initiatives, and innovations and provide a new opportunity for community engagement. LF Edge’s webinar series, “On the Edge with LF Edge,” kicked off last month with an update on the Akraino project. The next LF Edge webinar, “EdgeX101: Intro, Roadmap, and Use Cases,” takes place April 23.

Registration is required to attend the webinar, which takes place April 30 at 9:00 am PT. Details and registration information available here: https://zoom.us/webinar/register/WN_pt6VgEy2S46T6qW5oNRLPA

Additionally, the important work of the LFN technical communities continues unabated as the LFN Technical Meetings Spring 2020 (initially co-located with ONES North America) are being held virtually from April 21-23. Details and registration: https://www.linuxfoundation.org/calendar/lfn-technical-meetings/

Details on ONES, including registration and final agenda, are available here: https://events.linuxfoundation.org/open-networking-edge-summit-north-america/

 

About the Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and industry adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.


EdgeX Foundry on ELIOT Blueprint

By Akraino, Blog, EdgeX Foundry, LF Edge

Written by Ramya Ranganathan, IOTG Validation Architect at Intel and EdgeX Foundry TSC member and EdgeX Test/QA WG Contributor

 

Background

In the recent past, EdgeX has experienced challenges in running regression tests on different platforms. Some of the difficulty has been attributed to not running the EdgeX platform tests on a pre-validated OS/SW configuration.

Idea

By running on a pre-validated base platform, the hope was to eliminate platform variabilities and limit the debug scope to the EdgeX software. This in turn would lead to quicker debugging, better throughput and, ultimately, quicker time to market.

Why an LF Edge Akraino Blueprint?

Since LF Edge has been spearheading the Akraino Blueprint effort to provide a holistic design of EdgeX-suitable platforms with respect to scalability, availability, and security using a finite set of configurations, along with ease of use through zero-touch provisioning, a proposal was put forth by the EdgeX QA/Test work group to use a lightweight Akraino blueprint as the “pre-validated base platform” for EdgeX engineering activities. The motivation was that the team could leverage the results from Akraino’s blueprint validation framework and use it as a stable base platform for EdgeX engineering activities. While the motivation came from within the EdgeX community, this also served as a testimony to LF Edge’s Akraino initiative and to the importance of the LF Edge umbrella project in providing holistic solutions to the EdgeX and larger LF Edge communities.

Engineering Activity & Results

Akraino offers several Blueprints, so the first task was to identify the right one for EdgeX’s needs. The ELIOT Blueprint was chosen by the EdgeX QA/Test WG for this initial feasibility study because, as the name suggests, it has a lightweight footprint, and because it is supported on both ARM and x86 architectures. EdgeX QA/Test WG members got LF Edge accounts and access to the Thunder Pod2 ARM-based system and were able to get the EdgeX tests up and running on the ELIOT Blueprint with minimal effort (which is in line with the key principle behind Akraino’s blueprint goal).

Learn more about the Akraino ELIOT Blueprint: AkrainoELIOTBluePrint.pdf

Conclusion

This activity is an example of the early engagements between EdgeX and other LF Edge projects – one of mutual value to the engineers in both communities and demonstrating the value of a larger edge computing umbrella project.

For more information about Akraino Blueprints, click here: https://wiki.akraino.org/. To learn more about EdgeX Foundry, click here: https://wiki.edgexfoundry.org/. Or, join the conversation on the EdgeX Foundry Slack Channel.

IoT, AI & Networking at the Edge

By Blog, LF Edge, Open Glossary of Edge Computing, State of the Edge

Written by LF Edge member Mike Capuano, Chief Marketing Officer for Pluribus Networks

This blog originally ran on the State of the Edge website. 

5G is the first upgrade to the cellular network that will be justified not only by higher speeds and new capabilities targeted at consumer applications such as low latency gaming but also by its ability to support enterprise applications. The Internet of Things (IoT) will become the essential fuel for this revolution, as it transforms almost every business, government, and educational institution. By installing sensors, video cameras and other devices in buildings, factories, stadiums and in other locations, such as in vehicles, enterprises can collect and act on data to make themselves more efficient and more competitive. This digital transformation will create a better and safer environment for employees and deliver the best user experience possible to end customers. In this emerging world of 5G-enabled IoT, edge computing will play a critical role.

IoT will leverage public and private 5G, AI, and edge compute. In many cases, analysis of the IoT data will be highly complex, requiring the correlation of multiple data input streams fed into an AI inference model—often in real time. Use cases include factory automation and safety, energy production, smart cities, traffic management, large venue crowd management, and many more. Because the data streams will be large and will often require immediate decision-making, they will benefit from edge compute infrastructure that is in close proximity to the data in order to reduce latency and data transit costs, as well as ensure autonomy if the connection to a central data center is cut.
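The filter-at-the-edge pattern described above can be sketched in a few lines of Python (the readings, threshold, and action names below are illustrative, not taken from any LF Edge project): every reading is acted on locally, and only anomalies are forwarded to the central data center, which cuts both latency and data transit costs.

```python
from typing import Iterable, List, Tuple

def process_at_edge(readings: Iterable[float],
                    threshold: float = 90.0) -> Tuple[List[float], List[str]]:
    """Act on every reading locally; upload only the anomalous ones.

    Returns (anomalies_to_upload, local_actions) so the caller can see
    how much traffic the edge node saved versus streaming everything.
    """
    anomalies, actions = [], []
    for value in readings:
        if value > threshold:
            anomalies.append(value)          # escalate to the central data center
            actions.append("shutdown-line")  # immediate local decision, no round trip
        else:
            actions.append("ok")             # handled entirely on-premises
    return anomalies, actions

# Hypothetical factory temperature stream: 4 readings, 1 anomaly.
temps = [71.2, 72.0, 95.5, 70.9]
to_cloud, acted = process_at_edge(temps)
# Only one reading leaves the premises; all four were acted on locally.
```

In a real deployment the threshold test would be an AI inference model correlating multiple input streams, but the economics are the same: decisions happen next to the data, and the backhaul carries only what matters.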

Owing to these requirements, AI stacks will be deployed in multiple edge locations, including on premises, in 5G base station aggregation sites, in telco central offices and at many more “edges”. We are rapidly moving from a centralized data center model to a highly distributed compute architecture. Developers will place workloads in edge locations where applications can deliver their services at the highest performance with the lowest cost.

Critical to all of this will be the networking that will connect all of these edge locations and their interrelated compute clusters. Network capabilities must now scale to a highly distributed model, providing automation capabilities that include Software Defined Networking (SDN) of both the physical and virtual networks.

Network virtualization—like the virtualization of the compute layer—is a flexible, software-based representation of the network built on top of its physical properties. The physical network is obviously required for basic connectivity – we must move bits across the wire, after all. SDN automates this physical “underlay,” but it is still rigid since there are physical boundaries. Network virtualization is a complete abstraction of the physical network. It consists of dynamically constructed VXLAN tunnels supported by virtual routers, virtual switches and virtual firewalls, all defined and instantiated in software, all of which can be manipulated in seconds – much faster than reconfiguring the physical network (even with SDN).
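As a concrete illustration of the overlay's basic building block, the sketch below constructs the 8-byte VXLAN header defined in RFC 7348: an "I" flag marking the VNI field as valid, followed by a 24-bit VXLAN Network Identifier that keeps each virtual network's traffic isolated on the shared physical underlay.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08000000  # the "I" bit set in the first 32-bit word

def vxlan_header(vni: int) -> bytes:
    """Return the 8-byte VXLAN header (RFC 7348) for a given 24-bit VNI."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Word 1: flags + reserved bits; word 2: VNI in the top 24 bits, low byte reserved.
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID, vni << 8)

hdr = vxlan_header(42)
# 8 bytes: flag byte 0x08, reserved zeros, VNI 0x00002a, one reserved byte
```

This tiny header, prepended to each Ethernet frame and carried over UDP, is what lets thousands of isolated virtual networks be created and torn down in software without touching the physical switches underneath.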

To satisfy the requirements of a real-time, edge-driven IoT environment, we must innovate to deliver cost effective, simple, and unified software defined networking across both the physical and virtual networks that support the edge. The traditional model for SDN was not based on these requirements. It was built for large hyperscale and enterprise data centers and relies on multiple servers and software licenses, incurring costs and consuming space and power. This approach also requires complex integrations to deploy and orchestrate the physical underlay and the virtual overlay, along with storage and compute.

Deploying traditional SDN into edge environments is not an attractive solution. Often there will not be space to deploy multiple servers for management and SDN control of the physical and virtual networks. Furthermore, all the SDN controllers need to be orchestrated by a higher layer “controller of controllers” to synchronize network state, which adds unnecessary latency, cost and complexity.

In some cases, companies also deploy SmartNICs (network interface controllers that also perform networking functions to offload processing from the CPU). SmartNICs allow the packet processing associated with network virtualization to be done without burdening the primary compute (which is better utilized supporting other workloads). Also, hardware-based taps, probes and packet brokers are being deployed to support network telemetry and analytics.

Applying the network automation model we built for large centralized data centers will be expensive and cumbersome, as well as space and power inefficient, in edge environments. The industry needs to rethink the approach to edge network automation and deliver a solution designed from the ground up for distributed environments. Ideally this solution does not require additional hardware and software but can leverage the switches and compute that are already being deployed to resource constrained edge environments.

The good news is that a number of companies are developing new approaches that deliver highly distributed SDN control that unifies the physical underlay and virtual overlay along with providing network analytics — all with no additional external hardware or software. These new technologies can utilize, for example, the fairly powerful and underutilized CPU and memory in “white box” Top of Rack (TOR) switches to deliver SDN, network virtualization, network analytics and a DCGW (Data Center Gateway) router function. In other words, these solutions have been designed with the edge in mind and are delivering powerful automation with no extra hardware and additional software licenses – supporting the edge with a cost effective solution that also saves space and power.

Pluribus Networks delivers open networking solutions based on a unique next-gen SDN fabric for data centers and distributed cloud edge compute environments.

The State of the Edge 2020 report is available to read now for free. We encourage anyone who is interested in edge computing to give it a read and to send feedback to State of the Edge.

Edge computing architecture and use cases

By Blog, LF Edge, Use Cases

Written by Jason Gonzalez, Jason Hunt, Mathews Thomas, Ryan Anderson, and Utpal Mangla with LF Edge Premier Member Company IBM

This blog originally ran on the IBM website. For more articles from IBM, visit https://developer.ibm.com/articles/.

Developing new services that take advantage of emerging technologies like edge computing and 5G will help generate more revenue for many businesses today, but especially for telecommunications and media companies. IBM works with many telecommunications companies to help explore these new technologies so that they can better understand how the technologies are relevant to current and future business challenges.

The good news is that edge computing is based on an evolution of an ecosystem of trusted technologies. In this article, we will explain what edge computing is, describe relevant use cases for the telecommunications and media industry while describing the benefits for other industries, and finally present what an end-to-end architecture that incorporates edge computing can look like.

What is Edge Computing?

Edge computing is composed of technologies that take advantage of computing resources available outside of traditional and cloud data centers, so that workloads are placed closer to where data is created and actions can be taken in response to an analysis of that data. By harnessing and managing the compute power that is available on remote premises, such as factories, retail stores, warehouses, hotels, distribution centers, or vehicles, developers can create applications that:

  • Substantially reduce latencies
  • Lower demands on network bandwidth
  • Increase privacy of sensitive information
  • Enable operations even when networks are disrupted

To move the application workload out to the edge, multiple edge nodes might be needed, as shown in Figure 1.

Figure 1. Examples of edge nodes


These are some of the key components that form the edge ecosystem:

  • Cloud
    This can be a public or private cloud, which can be a repository for the container-based workloads like applications and machine learning models. These clouds also host and run the applications that are used to orchestrate and manage the different edge nodes. Workloads on the edge, both local and device workloads, will interact with workloads on these clouds. The cloud can also be a source and destination for any data that is required by the other nodes.
  • Edge device
    An edge device is a special-purpose piece of equipment that also has compute capacity that is integrated into that device. Interesting work can be performed on edge devices, such as an assembly machine on a factory floor, an ATM, an intelligent camera, or an automobile. Often driven by economic considerations, an edge device typically has limited compute resources. It is common to find edge devices that have ARM or x86 class CPUs with 1 or 2 cores, 128 MB of memory, and perhaps 1 GB of local persistent storage. Although edge devices can be more powerful, they are the exception rather than the norm currently.
  • Edge node
    An edge node is a generic way of referring to any edge device, edge server, or edge gateway on which edge computing can be performed.
  • Edge cluster/server
    An edge cluster/server is a general-purpose IT computer that is located in a remote operations facility such as a factory, retail store, hotel, distribution center, or bank. An edge cluster/server is typically constructed with an industrial PC or racked computer form factor. It is common to find edge servers with 8, 16, or more cores of compute capacity, 16 GB of memory, and hundreds of GB of local storage. An edge cluster/server is typically used to run enterprise application workloads and shared services.
  • Edge gateway
    An edge gateway is typically an edge cluster/server which, in addition to being able to host enterprise application workloads and shared services, also has services that perform network functions such as protocol translation, network termination, tunneling, firewall protection, or wireless connection. Although some edge devices can serve as a limited gateway or host network functions, edge gateways are more often separate from edge devices.

IoT sensors are fixed-function equipment that collects and transmits data to an edge node or the cloud but has no onboard compute, memory, or storage, so containers cannot be deployed on them. These sensors connect to the different nodes but are not reflected in Figure 1 because they are fixed-function equipment.
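
To make the taxonomy above concrete, the node types can be modeled as simple data structures; the capacity figures are the ones quoted in the text, and the placement check is a deliberately naive illustration, not a real scheduler:

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    """Generic edge node: any device, cluster/server, or gateway."""
    name: str
    cpu_cores: int
    memory_mb: int
    storage_gb: int
    hosts_network_functions: bool = False  # distinguishes a gateway

    def fits(self, workload_mb):
        """Naive placement check: does the workload fit in memory?"""
        return workload_mb <= self.memory_mb

# Typical capacities from the text: a constrained device vs. an edge server.
device = EdgeNode("assembly-camera", cpu_cores=2, memory_mb=128, storage_gb=1)
server = EdgeNode("factory-server", cpu_cores=16, memory_mb=16384, storage_gb=500)
gateway = EdgeNode("plant-gateway", cpu_cores=8, memory_mb=16384,
                   storage_gb=200, hosts_network_functions=True)

# A 512 MB analytics container fits on the server but not on the device.
print(device.fits(512), server.fits(512))  # → False True
```

The node names and the memory-only check are hypothetical; real orchestrators weigh CPU, storage, accelerators, and connectivity as well.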

With a basic understanding of edge computing, let’s take a brief moment to discuss 5G and its impact on edge computing before we discuss the benefits and challenges around edge computing.

Edge Computing and 5G

The advent of 5G has made edge computing even more compelling, enabling significantly improved network capacity, lower latency, higher speeds, and increased efficiency. 5G promises data speeds in excess of 20 Gbps and the ability to connect over a million devices per square kilometer.

Communications service providers (CSPs) can use edge computing and 5G to be able to route user traffic to the lowest latency edge nodes in a much more secure and efficient manner. With 5G, CSPs can also cater to real-time communications for next-generation applications like autonomous vehicles, drones, or remote patient monitoring. Data intensive applications that require large amounts of data to be uploaded to the cloud can run more effectively by using a combination of 5G and edge computing.

With the advent of 5G and edge computing, developers will need to continue to focus on making cloud-native applications even more efficient. The continual addition of newer and smaller edge devices will require changes to existing applications so that enterprises can fully leverage the capabilities of 5G and edge computing. In some cases, applications will need to be containerized and run on a very small device. In other cases, virtualized network components will need to be redesigned to take full advantage of the 5G network. Many other cases exist that require evaluation as part of the application development roadmap and future state architecture.

Benefits and challenges of edge computing

As emerging technologies, 5G and edge computing bring many benefits to many industries, but they also bring some challenges along with them.

Core benefits

The benefits of edge computing technology include these core benefits:

  • Performance: Near-instant compute and analytics at the edge lowers latency and therefore greatly increases performance. With the advent of 5G, it is possible to rapidly communicate with the edge, and applications running at the edge can quickly respond to the ever-growing demand of consumers.
  • Availability: Critical systems need to operate irrespective of connectivity. The current communications flow has many potential points of failure, including the company’s core network and the multiple hops across network nodes, with security risks along the network paths. With edge computing, communication happens primarily between the consumer/requestor and the device/local edge node, which increases the availability of the systems.
  • Data security: In edge computing architectures, the analytics data potentially never leaves the physical area where it is gathered and is used within the local edge. Primarily only the edge nodes need to be secured, which makes them easier to manage and monitor and keeps the data more secure.

Additional benefits, with additional challenges

These additional benefits do not come without additional challenges, such as:

  • Managing at scale
  • Making workloads portable
  • Security
  • Emerging technologies and standards

Managing at scale

As workload locations shift when you incorporate edge computing, and applications and analytics capabilities are deployed in a more distributed fashion, managing this change in an architecturally consistent way requires orchestration and automation tools in order to scale.

For example, if an application is moved from one data center with always-available support to hundreds of local edge locations that are not readily accessible or that lack local technical support, how one manages the lifecycle and support of the application must change.

To address this challenge, new tools and training for technical support teams will be needed to manage, orchestrate, and automate this new, complex environment. Teams will require more than the traditional network operations tools (and even beyond software-defined network operations tools): support teams will need tools to help manage application workloads in the context of the network across many more distributed environments than before, each with differing strengths and capabilities. In future articles in this series, we will look at these application and network tools in more detail.
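
To illustrate the kind of automation involved, here is a sketch of pushing an application version to many edge locations in parallel, with retries for sites that are briefly unreachable. The `deploy` step is a hypothetical stand-in for a real orchestration-agent call, not any particular product's API:

```python
from concurrent.futures import ThreadPoolExecutor

def rollout(nodes, version, unreachable_first_try=frozenset(), max_retries=2):
    """Push an application version to many edge nodes in parallel,
    retrying nodes that fail (remote sites are often briefly unreachable)."""

    def deploy(node, failing):
        # Hypothetical stand-in: a real agent would pull and start the
        # container image for `version` here. Returns (node, success).
        return node, node not in failing

    pending, deployed = list(nodes), []
    for attempt in range(max_retries + 1):
        # Simulate transient failures on the first attempt only.
        failing = unreachable_first_try if attempt == 0 else frozenset()
        with ThreadPoolExecutor(max_workers=32) as pool:
            results = list(pool.map(lambda n: deploy(n, failing), pending))
        deployed += [n for n, ok in results if ok]
        pending = [n for n, ok in results if not ok]
        if not pending:
            break
    return deployed, pending

# 200 retail-store nodes; two are unreachable on the first attempt.
nodes = [f"store-{i:03d}" for i in range(200)]
deployed, still_failing = rollout(nodes, "v1.2",
                                  unreachable_first_try={"store-007", "store-042"})
print(len(deployed), still_failing)  # → 200 []
```

The point of the sketch is the shape of the problem: fan-out, partial failure, and retry are routine at the edge, which is why manual per-site administration does not scale.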

Making workloads portable

To operate at scale, the workloads being considered for edge computing localization need to be modified to be more portable.

  • First, size matters. Reducing latency requires moving the workload closer to the edge, and edge components have fewer compute resources to run it, so the overall size of various workloads might limit the potential of edge computing.
  • Second, adopting a portability standard or set of standards can be difficult with many varied workloads to consider. Also, the standard should allow for management of the full lifecycle of the application, from build through run and maintain.
  • Third, work will need to be done on how best to break up workloads into sub-components to take advantage of the distributed architecture of edge computing. This would allow certain parts of a workload to run on an edge device while others run on an edge cluster/server or are otherwise distributed across edge components. For a CSP, this would typically include migrating to a combination of network function virtualization (for network workloads) and container workloads (for application workloads and, in the future, network workloads), where applicable and possible. Depending on the current application environment, this move might be a large effort.

To address this challenge in a reasonable way, workloads can be prioritized based on a number of factors, including benefit of migration, complexity, and resource/time to migrate. Many network and application partners are already working on migrating capabilities to container-based approaches, which can aid in addressing this challenge.
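
One way to make that prioritization concrete is a simple scoring sketch. The factors follow the text (benefit of migration, complexity, resource/time to migrate), but the formula, weights, and workload names are illustrative assumptions, not a prescribed method:

```python
# Score candidate workloads for edge migration: higher benefit raises the
# score; complexity and effort lower it. Inputs are on a 1-10 scale,
# except effort, which is in person-weeks. Purely illustrative.
def migration_score(benefit, complexity, effort_weeks):
    return benefit * 10 / (complexity + effort_weeks / 4)

workloads = {
    "video-analytics": migration_score(benefit=9, complexity=4, effort_weeks=8),
    "billing-batch":   migration_score(benefit=2, complexity=7, effort_weeks=20),
    "iot-ingest":      migration_score(benefit=8, complexity=3, effort_weeks=4),
}

# Migrate in descending score order.
plan = sorted(workloads, key=workloads.get, reverse=True)
print(plan)  # → ['iot-ingest', 'video-analytics', 'billing-batch']
```

Even a crude ranking like this helps sequence the migration backlog so that high-benefit, low-effort workloads validate the edge platform first.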

Security

While data security is a benefit in that the data can be limited to certain physical locations for specific applications, overall security is an additional challenge when adopting edge computing. Physical devices at the device edge might not be able to leverage existing security standards or solutions because of their limited capabilities.

To address these security challenges, the infrastructure upstream in the local edge might have additional security concerns to address. In addition, with multiple device edges, security is now distributed and more complex to handle.

Emerging technologies and standards

With many standards in this ecosystem newly created or quickly evolving, it will be difficult to maintain technology decisions long term. The decision of a specific edge device or technology might be superseded by the next competing device, making it a challenging environment to operate in. With the right tools in place to address management of these varied workloads along with their entire application lifecycle, it can be an easier task to introduce new devices or capabilities or replace existing devices as the technologies and standards evolve.

Cross Industry Use Cases

In any complex environment, there are many challenges that occur and many ways to address them. The evaluation of whether a business problem or use case could or should be solved with edge computing will need to be done on a case by case basis to determine whether it makes sense to pursue.

For reference, let’s discuss some potential industry use cases for consideration, both examples and real solutions. A common theme across all these industries is the network that will be provided by the CSP. The devices with applications need to operate within the network and the advent of 5G makes it even more compelling for these industries to start seriously considering edge computing.

Use case #1: Video Surveillance

Using video to identify key events is rapidly spreading across all domains and industries. Transmitting all the data to the cloud or data center is expensive and slow. Edge computing nodes that consist of smart cameras can do the initial level of analytics, including recognizing entities of interest. The footage of interest can then be transmitted to a local edge for further analysis and an appropriate response, including raising alerts. Content that cannot be handled at the local edge can be sent to the cloud or data center for in-depth analysis.

Consider this example: a major fire occurs, and it is often difficult to differentiate humans from other burning objects. In addition, the network can become heavily loaded in such instances. With edge computing, cameras located close to the event can determine whether a human is caught in the fire by identifying characteristics typical of a human being and clothing that humans might normally wear which might survive the fire. As soon as the camera recognizes a human in the video content, it starts transmitting the video to the local edge. Hence, the network load is reduced because transmission only happens when a human is recognized. In addition, the local edge is close to the device edge, so latency will be almost zero. Lastly, the local edge can now contact the appropriate authorities instead of transmitting the data to the data center, which would be slower, especially since the network from the fire site to the data center might be down.
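
As a minimal sketch of the on-camera filtering step, the detector below is a random stand-in (a real camera would run a small ML model), and the frame format and threshold are assumptions:

```python
import random

random.seed(7)  # reproducible synthetic footage

# Stand-in for an on-camera person detector; a real device would run a
# compact ML model and return a confidence score per frame.
def detect_person(frame):
    return frame["person_score"]

def filter_frames(frames, threshold=0.8):
    """Run detection on the device and forward only frames of interest,
    so the network carries a small fraction of the raw footage."""
    return [f for f in frames if detect_person(f) >= threshold]

# Synthetic footage: 1000 frames with random detector confidences.
frames = [{"id": i, "person_score": random.random()} for i in range(1000)]
to_local_edge = filter_frames(frames)
print(f"transmitted {len(to_local_edge)} of {len(frames)} frames")
```

Because only high-confidence frames leave the camera, most of the bandwidth and upstream compute is spent on footage that actually matters.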

Figure 2. Depiction of intelligent video example


Use case #2: Smart Cities

Operating and governing cities has become a challenging mission due to many interrelated issues such as increasing operation cost from aging infrastructure, operational inefficiencies, and increasing expectations from a city’s citizens. Advancement of many technologies like IoT, edge computing, and mobile connectivity has helped smart city solutions to gain popularity and acceptance among a city’s citizens and governance alike.

IoT devices are the basic building blocks of any smart city solution. Embedding these devices into the city’s infrastructure and assets helps monitor infrastructure performance and provides insightful information about the behavior of these assets. Due to the need to make real-time decisions and avoid transmitting large amounts of sensor data, edge computing becomes a necessary technology for delivering a city’s mission-critical solutions such as traffic, flood, security and safety, and critical infrastructure monitoring.

Use case #3: Connected Cars

Connected cars can gather data from various sensors within the vehicle, including data on user behavior. The initial analysis and compute can be executed within the vehicle. Relevant information can be sent to the base station, which then transmits the data to the relevant endpoint: a content delivery network in the case of a video transmission, or the automobile manufacturer’s data center. The manufacturer might also have a relationship with the CSP, in which case the compute node might be at the base station owned by the CSP.

Use case #4: Manufacturing

Rapid response to manufacturing processes is essential to reduce product defects and improve efficiencies. Analytic algorithms monitor how well each piece of equipment is running and adjust the operating parameters to improve its efficiency. Analytic algorithms also detect and predict when a failure is likely to occur so that maintenance can be scheduled on the equipment between runs. Different edge devices are capable of differing levels of processing and relevant information can be sent to multiple edges including the cloud.

Consider this example: a manufacturer of electric bicycles is trying to reduce downtime. There are over 3,000 pieces of equipment on the factory floor, including presses, assembly machines, paint robots, and conveyers. An outage in a production run costs the manufacturer $250,000 per hour. Operating at peak efficiency with no unplanned outages is the difference between having profit and not having profit. To operate smoothly, the factory needs the following to run at the edge:

  • Analytic algorithms that can monitor how well each piece of equipment is running and then adjust the operating parameters to improve efficiency.
  • Analytic algorithms that can detect and predict when a failure is likely to occur so that maintenance can be scheduled on the equipment between runs.
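
As a minimal sketch of the second bullet, an edge node could flag a machine for maintenance when sensor readings drift outside a normal operating band. The thresholds and readings below are illustrative assumptions; a real deployment would use trained, customized models:

```python
# Flag equipment for maintenance when too many recent sensor readings
# fall outside the normal operating band (mean +/- tolerance).
# All thresholds and readings are illustrative, not real factory data.
def needs_maintenance(readings, mean=50.0, tolerance=5.0, max_outliers=3):
    outliers = sum(1 for r in readings if abs(r - mean) > tolerance)
    return outliers > max_outliers

# Vibration readings from two hypothetical presses on the factory floor.
healthy_press = [49.8, 50.2, 51.0, 48.9, 50.5, 49.7]
worn_press    = [50.1, 57.2, 58.9, 43.0, 60.4, 41.8]

print(needs_maintenance(healthy_press), needs_maintenance(worn_press))
# → False True
```

Running this check on the edge lets maintenance be scheduled between runs without streaming every reading to the cloud.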

Predicting failure can be complex and requires customized models for each use case. For one example of how these types of models can be created, refer to this code pattern, “Create predictive maintenance models to detect equipment breakdown risks.” Some of these models need to run on the edge, and our next set of tutorials will explain how to do this.

Figure 3. Depiction of manufacturing example


Overall Architecture

As we discussed earlier, edge computing consists of three main nodes:

  1. Device edge, where the edge devices sit
  2. Local edge, which includes both the infrastructure to support the application and also the network workloads
  3. Cloud, the nexus of your environment, where everything that needs to come together comes together

Figure 4 represents an architecture overview of these details with the local edge broken out to represent the workloads.

Figure 4. Edge computing architecture overview


Each of these nodes is an important part of the overall edge computing architecture.

  1. Device Edge
    The actual devices running on-premises at the edge, such as cameras, sensors, and other physical devices that gather data or interact with edge data. Simple edge devices gather or transmit data, or both. More complex edge devices have the processing power to perform additional activities. In either case, it is important to be able to deploy and manage the applications on these edge devices. Examples of such applications include specialized video analytics, deep learning AI models, and simple real-time processing applications. IBM’s approach (in its IBM Edge Computing solutions) is to deploy and manage containerized applications on these edge devices.
  2. Local Edge
    The systems running on-premises or at the edge of the network. The edge network layer and edge cluster/servers can be separate physical or virtual servers in various physical locations, or they can be combined in a hyperconverged system. There are two primary sublayers in this architecture layer. The components required to manage the applications in these layers, as well as the applications on the device edge, reside here.

    • Application layer: Applications that cannot run at the device edge because the footprint is too large for the device will run here. Example applications include complex video analytics and IoT processing.
    • Network layer: Physical network devices will generally not be deployed due to the complexity of managing them. The entire network layer is mostly virtualized or containerized. Examples include routers, switches, or any other network components that are required to run the local edge.
  3. Cloud
    This architecture layer is generically referred to as the cloud, but it can run on-premises or in the public cloud. It is the source for workloads: the applications that handle processing not possible at the other edge nodes, plus the management layers. Workloads include application and network workloads that are deployed to the different edge nodes by using the appropriate orchestration layers.

Figure 5 illustrates a more detailed architecture that shows which components are relevant within each edge node. Duplicate components such as Industry Solutions/Apps exist in multiple nodes because certain workloads might be better suited to either the device edge or the local edge, and other workloads might be moved dynamically between nodes under certain circumstances, either manually or with automation in place. It is important to manage workloads in discrete ways: the less discrete they are, the more limited we are in how we can deploy and manage them.

While the focus of this article has been on application and analytics workloads, note that network function is a key set of capabilities that should be incorporated into any edge strategy and thus into our edge architecture. The adoption of tools should also take into consideration the need to handle application and network workloads in tandem.

Figure 5. Edge computing architecture overview – Detailed


We will be exploring every aspect of this architecture in more detail in upcoming articles. However, for now, let’s take a brief look at one real implementation of a complex edge computing architecture.

Sample implementation of an edge computing architecture

In 2019, IBM partnered with telecommunications companies and other technology participants to build a Business Operation System solution. The focus of the project was to allow CSPs to manage and deliver multiple high-value products and services so that they could be brought to market more quickly and efficiently, including capabilities around 5G.

Edge computing is part of the overall architecture, as it was necessary to provide key services at the edge. The following describes the implementation, to which further extensions have since been made. The numbers below refer to the numbers in Figure 6:

  1. A B2B customer goes to the portal and orders a service around video analytics using drones.
  2. The appropriate containers are deployed to the different edge nodes. These containers include visual analytics applications and a network layer to manage the underlying network functionality required for the new service.
  3. The service is provisioned, and drones start capturing the video.
  4. Initial video processing is done by the drones and the device edge.
  5. When an item of interest is detected, it is sent to the local edge for further processing.
  6. At some point, it is determined that a new model needs to be deployed to the edge device because new, unexpected features begin to appear in the video, so a new model is deployed.

Figure 6. Edge computing architecture overview – TM Forum Catalyst Example


As we continue to explore edge computing in upcoming articles, we will focus more and more on the details around edge computing, but let’s remember that edge computing plays a key role as part of a strategy and architecture, an important part, but only one part.

Conclusion

In this brief overview of edge computing technology, we’ve shown how edge computing is relevant to challenges faced by many industries, but especially the telecommunications industry.

The edge computing architecture identifies the key layers of the edge: the device edge (which includes edge devices), the local edge (which includes the application and network layer), and the cloud edge. But we’re just getting started. Our next article in this series will dive deeper into the different layers and tools that developers need to implement an edge computing architecture.

LF Edge Member Spotlight: Netsia

By Akraino Edge Stack, Blog, LF Edge, Member Spotlight

The LF Edge community comprises a diverse set of member companies that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sat down with Madhu Kashyap, Director of Product Management for Netsia, to discuss the importance of open source, Netsia’s infrastructure and need for communication at the edge, how they contribute to the Akraino Edge Stack project, and the impact of being a member of the LF Edge community.

Can you tell us a little about your organization?

Netsia develops fixed broadband and wireless solutions for the telecom industry. Netsia’s vision is to provide a shared infrastructure for fixed mobile convergence at the edge.  

Netsia’s SEBA (Software-Enabled Broadband Access) solution transforms the traditional Passive Optical Network (PON) used in fixed access networks (FTTx) through an open, programmable, cloud-native, vendor-agnostic, future-proof platform based on cloudification, virtualization, Software-Defined Networking (SDN), and Network Functions Virtualization (NFV). It leverages network disaggregation, open source software, and white-box economics at the network edge.

Why is your organization adopting an open source approach?

Netsia is an active participant in open source communities and standards bodies and provides enriched telco-grade distributions of open source platforms in a continuous manner with long term support.

Communication Service Providers (CSPs), both fixed and mobile, are looking to transform their PoPs (Point of Presence) / Central Offices (COs) as edge clouds. The current architecture is rigid, closed, monolithic and purpose-built leading to high CAPEX and OPEX due to vendor lock-in. CSPs want to use virtualization, cloudification, SDN, NFV technologies by leveraging open source software and disaggregated white boxes.

Open source software helps Netsia to gather CSPs’ requirements and share the cost of development with the community. Netsia adds value by hardening and productizing open source.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

Netsia is focused on the edge. CSPs are looking to consolidate various access network technologies onto a common management platform that includes an SDN controller, orchestration, VIM, etc., while addressing the latency and bandwidth demands of 5G networks.

The LF Edge community provides the industry with a solid foundation for managing the edge with its development of blueprints across different domains and shaping the future of edge networking and IoT.

What do you see as the top benefits of being part of the LF Edge community?

LF Edge provides a dedicated forum of like-minded organizations, be they operators or vendors, who believe in openness through design, architecture, APIs, or code to drive edge transformation. By participating in LF Edge, Netsia stands to gain from the collective innovation that drives the industry forward. With a community like this, the whole is greater than the sum of the parts, and organizations can benefit greatly from the different perspectives and ideas that are generated in the course of project planning, design, and implementation.

What sort of contributions has your team made to the community, ecosystem through LF Edge participation?

Netsia is actively involved in the Akraino project and the SEBA Blueprint.

What do you think sets LF Edge apart from other industry alliances?

Being part of the LF umbrella opens many doors and adds significant heft to initiatives that are taken seriously by the industry. By harmonizing with other open source communities and working with upstream projects, LF Edge makes sure there is no duplication or overlap of effort.

How will  LF Edge help your business?

As LF Edge blueprints are adopted by the industry, they help Netsia differentiate itself in a highly competitive field. LF Edge provides name recognition and a brand that the industry recognizes for innovating with cutting-edge technology.

What advice would you give to someone considering joining LF Edge?

Netsia would definitely encourage and advocate for anyone considering LF Edge membership. The level of commitment by the community is unparalleled and provides high visibility to member organizations through events and other marketing activities.

To learn more about Akraino Edge Stack, click here. To find out more about our members or how to join LF Edge, click here.

Additionally, if you have questions or comments, visit the LF Edge Slack Channel and share your thoughts in the #community or #akraino channels.