State of the Edge 2021 Report

By Blog, LF Edge, State of the Edge

Written by Jacob Smith, State of the Edge Co-Chair and VP Bare Metal Strategy at Equinix Metal

On Wednesday, LF Edge published its 2021 edition of the State of the Edge report. As a co-chair and researcher for the report, I want to share insight into how this year’s report was created. In development, we focused on three key questions to ensure the report covered essential subject matter:

  • What’s new in edge computing since last year?
  • What are the challenges and opportunities facing the ecosystem?
  • How can we help?

To keep the report vendor-neutral and free of bias, we had multiple authors contribute content and used an independent editor to balance the report so it better serves the edge computing community. Several experts weighed in on what they were paying attention to in the edge space and identified four major stand-out areas:

  • Critical infrastructure (data centers)
  • Hardware and silicon
  • Software as a whole
  • Networks and networking

It’s abundantly clear that edge computing is growing. Over the last year, COVID-19 accelerated nearly every facet of digital transformation, and this factored into the growth of the edge computing space. But it might also simply be a long-overdue increase in digital innovation and transformation.

We will continue to discover more about the edge and how to use it to optimize the computing ecosystem. For now, our attention is focused on how diverse the edge is and all of the solutions that are waiting to be discovered.

Summarized from Equinix Metal’s The State of Your Edge. Read the full story here. Download the report here.   

 

 

LF Edge’s State of the Edge 2021 Report Predicts Global Edge Computing Infrastructure Market to be Worth Up to $800 Billion by 2028

By Announcement, LF Edge, State of the Edge
  • COVID-19 highlighted that expertise in legacy data centers could be obsolete in the next few years as the pandemic forced the development of new tools enabled by edge computing for remote monitoring, provisioning, repair and management.
  • Open source hardware and software projects are driving innovation at the edge by accelerating the adoption and deployment of cloud-native, containerized and distributed applications.
  • The LF Edge taxonomy, which standardizes terminology and offers a balanced view of the edge landscape based on inherent technical and logistical tradeoffs spanning the edge-to-cloud continuum, is gaining widespread industry adoption.
  • Seven out of 10 areas of edge computing experienced growth in 2020, with a number of new use cases driven by 5G.

SAN FRANCISCO – March 10, 2021 – State of the Edge, a project under the LF Edge umbrella organization that has established an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced the release of its fourth annual report, State of the Edge 2021. The market and ecosystem report for edge computing shares insights and predictions on how the COVID-19 pandemic disrupted the status quo, how new types of critical infrastructure have emerged to service next-level requirements, and why open source collaboration is the only way to efficiently scale edge infrastructure.

Tolaga Research, which led the market forecasting research for this report, predicts that between 2019 and 2028, cumulative capital expenditures of up to $800 billion USD will be spent on new and replacement IT server equipment and edge computing facilities. These expenditures will be relatively evenly split between equipment for the device and infrastructure edges.

“Our 2021 analysis shows demand for edge infrastructure accelerating in a post COVID-19 world,” said Matt Trifiro, co-chair of State of the Edge and CMO of edge infrastructure company Vapor IO. “We’ve been observing this trend unfold in real-time as companies re-prioritize their digital transformation efforts to account for a more distributed workforce and a heightened need for automation. The new digital norms created in response to the pandemic will be permanent. This will intensify the deployment of new technologies like wireless 5G and autonomous vehicles, but will also impact nearly every sector of the economy, from industrial manufacturing to healthcare.”

The pandemic is accelerating digital transformation and service adoption.

Government lockdowns, social distancing and fragile supply chains drove both consumers and enterprises to adopt digital solutions last year that will permanently change use cases across the spectrum. Expertise in legacy data centers could be obsolete in the next few years, as the pandemic has forced the development of tools for remote monitoring, provisioning, repair and management, which will reduce the cost of edge computing. Among the areas experiencing growth in global infrastructure edge power footprint are automotive, smart grid and enterprise technology. As businesses began spending more on edge computing, specific use cases increased, including:

  • Manufacturing increased from 3.9 to 6.2 percent, as companies bolster their supply chain and inventory management capabilities and capitalize on automation technologies and autonomous systems.
  • Healthcare, which increased from 6.8 to 8.6 percent, was buoyed by increased expectations for remote healthcare, digital data management and assisted living.
  • Smart cities increased from 5.0 to 6.1 percent in anticipation of increased expenditures in digital infrastructure in the areas such as surveillance, public safety, city services and autonomous systems.

“In our individual lock-down environments, each of us is an edge node of the Internet and all our computing is, mostly, edge computing,” said Wenjing Chu, senior director of Open Source and Research at Futurewei Technologies, Inc. and LF Edge Governing Board member. “The edge is the center of everything.”

Open Source is driving innovation at the edge by accelerating the adoption and deployment of edge applications.

Open Source has always been the foundation of innovation, and this became even more evident during the pandemic as individuals turned to these communities for normalcy and collaboration. LF Edge, which hosts nine projects including State of the Edge, is an important driver of standards for the telecommunications, cloud and IoT edge. The projects collaborate individually and together to create an open infrastructure that fosters an ecosystem of support. LF Edge’s projects (Akraino Edge Stack, Baetyl, EdgeX Foundry, Fledge, Home Edge, Open Horizon, Project EVE, and Secure Device Onboard) support emerging edge applications across areas such as non-traditional video and connected things that require lower latency, faster processing and mobility.

“State of the Edge is shaping the future of all facets of edge computing and the ecosystem that surrounds it,” said Arpit Joshipura, General Manager of Networking, IoT and Edge at the Linux Foundation. “The insights in the report reflect the entire LF Edge community and our mission to unify edge computing and support a more robust solution at the IoT, Enterprise, Cloud and Telco edge. We look forward to sharing the ongoing work of State of the Edge, which amplifies innovation across the entire landscape.”

Other report highlights and methodology

For the report, researchers modeled the growth of edge infrastructure from the bottom up, starting with the sector-by-sector use cases likely to drive demand. The forecast considers 43 use cases spanning 11 verticals in calculating the growth, including smart grids, telecom, manufacturing, retail, healthcare, automotive and mobile consumer services. The vendor-neutral report was edited by Charlie Ashton, Senior Director of Business Development at Napatech, with contributions from Phil Marshall, Chief Research Officer at Tolaga Research; Phil Shih, Founder and Managing Director of Structure Research; technology journalists Mary Branscombe and Simon Bisson; and Fay Arjomandi, Founder and CEO of mimik. Other highlights from the State of the Edge 2021 Report include:

  • Off-the-shelf services and applications are emerging that accelerate and de-risk the rapid deployment of edge in these segments. The variety of emerging use cases is in turn driving a diversity in edge-focused processor platforms, which now include Arm-based solutions, SmartNICs with FPGA-based workload acceleration and GPUs.
  • Edge facilities will also create new types of interconnection. Similar to how data centers became meeting points for networks, the micro data centers at wireless towers and cable headends that will power edge computing often sit at the crossroads of terrestrial connectivity paths. These locations will become centers of gravity for local interconnection and edge exchange, creating new and newly efficient paths for data.
  • 5G, next-generation SD-WAN and SASE have been standardized. They are well suited to address the multitude of edge computing use cases that are being adopted and are contemplated for the future. As digital services proliferate and drive demand for edge computing, the diversity of network performance requirements will continue to increase.

“The State of the Edge report is an important industry and community resource. This year’s report features the analysis of diverse experts, mirroring the collaborative approach that we see thriving in the edge computing ecosystem,” said Jacob Smith, co-chair of State of the Edge and Vice President of Bare Metal at Equinix. “The 2020 findings underscore the tremendous acceleration of digital transformation efforts in response to the pandemic, and the critical interplay of hardware, software and networks for servicing use cases at the edge.”

Download the report here.

State of the Edge Co-Chairs Matt Trifiro and Jacob Smith, VP Bare Metal Strategy & Marketing of Equinix, will present highlights from the report in a keynote presentation at Open Networking & Edge Executive Forum, a virtual conference on March 10-12. Register here ($50 US) to watch the live presentation on March 12 at 7 am PT or access the video on-demand.

Trifiro and Smith will also host an LF Edge webinar to showcase the key findings on March 18 at 8 am PT. Register here.

About The Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more. Its methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Kubernetes Is Paving the Path for Edge Computing Adoption

By Blog, State of the Edge, Trend

Written by Molly Wojcik, Chair of the State of the Edge Landscape Working Group and Marketing Director at Section

For more content like this, please visit the Section website

Since Kubernetes was released five years ago by Google, it has become the standard for container orchestration in the cloud and data center. Its popularity with developers stems from its flexibility, reliability, and scalability to schedule and run containers on clusters of physical or virtual machines (VMs) for a diverse range of workloads.

When it comes to the Infrastructure (or Service Provider) Edge, Kubernetes is increasingly being adopted as a key component of edge computing. As in the cloud, Kubernetes allows organizations to efficiently run containers at the edge in a way that lets DevOps teams move with greater dexterity and speed by maximizing resources (and spending less time integrating with heterogeneous operating environments). This is particularly important as organizations consume and analyze ever-increasing amounts of data.

A Shared Operational Paradigm

Edge nodes represent an additional layer of IT infrastructure available to enterprises and service providers alongside their cloud and on-premise data center architecture. It is important for admins to be able to manage workloads at the edge layer in the same dynamic and automated way as has become standard in the cloud environment.

As defined by the State of the Edge’s Open Glossary, an “edge-native application” is one which is impractical or undesirable to operate in a centralized data center. In a perfect world, developers would be able to deploy containerized workloads anywhere along the cloud-to-edge continuum to balance the attributes of distributed and centralized computing in areas such as cost efficiencies, latency, security, and scalability.

Ultimately, cloud and edge will work alongside one another, with the workloads and applications at the edge being those with low-latency, high-bandwidth, and strict privacy requirements. Other distributed workloads that benefit from edge acceleration include Augmented Reality (AR), Virtual Reality (VR), and Massively Multiplayer Gaming (MMPG).

There is a need for a shared operational paradigm to automate the processing and execution of instructions as operations and data flow back and forth between cloud and edge devices. Kubernetes offers this shared paradigm for all network deployments, allowing policies and rulesets to be applied across the entire infrastructure. Policies can also be made more specific for certain channels or edge nodes that require bespoke configuration.

Kubernetes-Based Edge Architecture

According to a presentation from the Kubernetes IoT Edge Working Group at KubeCon Europe 2019, there are three approaches to using Kubernetes in edge-based architecture to manage workloads and resource deployments.

A Kubernetes cluster comprises a master and nodes. The master exposes the API to developers and schedules pods onto nodes. Each node runs a container runtime (such as Docker), a kubelet (which communicates with the master), and pods, each of which is a collection of one or more containers. A node can be a physical machine or a virtual machine in the cloud.

The three approaches for edge-based scenarios can be summarized as follows:

  1. The whole Kubernetes cluster is deployed within edge nodes. This is useful when the edge node has limited resources or is a single-server machine. K3s is the reference architecture for this solution.
  2. The next approach comes from KubeEdge, and involves the control plane residing in the cloud and managing the edge nodes containing containers and resources. This architecture enables optimization in edge resource utilization because it allows support for different hardware resources at the edge.
  3. The third approach is hierarchical cloud plus edge, using a virtual kubelet as the reference architecture. Virtual kubelets live in the cloud and present an abstraction of the nodes and pods deployed at the edge. This approach allows for flexibility in resource consumption for edge-based architecture.
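To make the master’s scheduling role concrete, here is a toy Python sketch of first-fit pod placement across a small cloud-plus-edge cluster. It is illustrative only: the node names, capacities and first-fit policy are invented for the example and bear no relation to the real Kubernetes scheduler’s internals.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A toy stand-in for a Kubernetes node (physical edge box or cloud VM)."""
    name: str
    cpu_capacity: float               # cores available on this node
    pods: list = field(default_factory=list)

    def free_cpu(self):
        # capacity minus what already-placed pods have requested
        return self.cpu_capacity - sum(cpu for _, cpu in self.pods)

def schedule(pod_name, cpu_request, nodes):
    """First-fit: place the pod on the first node with enough free CPU."""
    for node in nodes:
        if node.free_cpu() >= cpu_request:
            node.pods.append((pod_name, cpu_request))
            return node.name
    return None  # unschedulable: no node has the capacity

# A small cluster: a constrained edge node plus a larger cloud VM.
nodes = [Node("edge-node", cpu_capacity=2.0), Node("cloud-vm", cpu_capacity=8.0)]
print(schedule("sensor-ingest", 1.5, nodes))   # fits on the edge node
print(schedule("ml-inference", 4.0, nodes))    # too big for the edge, lands on the cloud VM
```

In a real cluster the master weighs many more factors (memory, affinity rules, taints and tolerations), but the core idea is the same: the control plane matches pod resource requests against node capacity along the cloud-to-edge continuum.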

Section’s Migration to Kubernetes

Section migrated to Kubernetes from a legacy homegrown scheduler last year. Instead of building our own fixed hardware network, Section distributes Kubernetes clusters across a vendor-neutral worldwide network of leading infrastructure providers, including AWS, Google Cloud, Microsoft Azure, Packet, DigitalOcean, CenturyLink, and RackCorp. Kubernetes allows us to be infrastructure-agnostic and seamlessly manage a diverse set of workloads.

The benefits of Kubernetes at the edge that we have experienced first-hand include:

  • Flexible tooling, allowing our developers to interact with the edge as they need to;
  • Our users can run edge workloads anywhere along the edge continuum;
  • Scaling up and out as needed through our vendor-neutral worldwide network;
  • High availability of services;
  • Fewer service interruptions during upgrades;
  • Greater resource efficiency – in particular, we use the Kubernetes Horizontal Pod Autoscaler, which automatically scales the number of pods up or down according to defined latency or volume thresholds;
  • Full visibility into production workloads through the built-in monitoring system;
  • Improved performance.

Summary

As more organizations and operators continue to adopt and support Kubernetes-based cloud-edge patterns, the ecosystem will continue to mature. However, not every organization will have the resources and/or expertise to build these systems themselves. This is where edge platforms (like Section) bridge those gaps, offering DevOps teams familiar tooling to take advantage of the benefits that Kubernetes has to offer without the complexities that come along with it.

Interesting Developments In Edge Hypervisors

By Blog, Industry Article, Project EVE, State of the Edge

Written by Rex St. John, EGX Developer Relations at NVIDIA

This article originally ran on Rex’s LinkedIn page. For more content like this, connect with him on LinkedIn. 

After building Edge Computing ecosystems at Intel and Arm, I have recently made the switch to working on Edge Computing at NVIDIA. Several people have asked me to share my perspective and learnings, so I am starting this informal, personal series on the topic. All opinions shared here are my own personal opinions and not those of my employer.

Objective

In this article, I will share two reasons why some experts in the industry are investing in hypervisor technology, as well as two interesting open source edge hypervisor solutions to be aware of. For edge definition nitpickers (you know who you are), I am going to be referring to the “Device Edge” here. There are many other definitions of “Edge”; if you are curious, read this LF Edge white paper.

The Hovercraft Analogy

For those of you who are unfamiliar, a hypervisor is kind of like a hovercraft that your programs can sit inside. Like hovercrafts, hypervisors can provide protective cushions which allow your applications to smoothly transition from one device to another, shielding the occupants from the rugged nature of the terrain below. With hypervisors, the bumps in the terrain (differences between hardware devices), are minimized and mobility of the application is increased.

Benefits of Hypervisors

Benefits of hypervisors include security, portability and a reduced need to perform cumbersome customization to run on specific hardware. Hypervisors allow a device to concurrently run multiple, completely different operating systems, and they can help partition applications from one another for security and reliability purposes. You can read more about hypervisors here. They are frequently compared to, used together with, or even compete against containers for similar use cases, though they have historically required more processing overhead to run.

Two Reasons Why Some (Very Smart) Folks Are Choosing Hypervisors For The Edge

A core challenge in Edge Computing is the extreme diversity in hardware that applications are expected to run on. This, in turn, creates challenges in producing secure, maintainable, scalable applications capable of running across all possible targets.

Unlike their heavier datacenter-based predecessors, lightweight hypervisors offer the benefits of traditional hypervisors while also respecting the limited resources found on the device edge. Here are two reasons why some in the industry are taking a careful look at edge hypervisors.

Reason 1: Avoiding The Complexity And Overhead of Kubernetes

One potential reason for taking a hypervisor-based approach at the edge is that there may be downsides to pursuing Kubernetes for smaller clusters, including the difficulty of building and managing a team that can properly set up and scale a real-world Kubernetes application, given the overhead and complexity of Kubernetes itself. In some cases, such as running a cluster of 4-5 nodes, it might be preferable to use a more streamlined approach involving a controller and lightweight hypervisors. This is the approach taken by EVE, covered in more detail below.

Reason 2: Ease Of Modernizing Brown-Field Industrial IT

Another pressing reason for choosing edge hypervisors is that “brown-field” installations of existing edge hardware are extremely expensive to upgrade to modern IT “best practices.” Hypervisors provide a path forward that does not involve rewriting old systems from scratch, as the code running on older machines can frequently be shifted into a hypervisor and neatly managed and secured from there (a process referred to as “workload consolidation”).

Let’s take a look at two interesting examples of edge hypervisors to understand further.

Hypervisor #1: Project ACRN

The first edge hypervisor we will look at is called ACRN, which is a project hosted by the Linux Foundation. ACRN has a well documented architecture and offers a wide range of capabilities and configurations depending on the situation or desired outcome.

ACRN seeks to support industrial use cases by offering a specific partitioning between high-reliability processes and those which do not need preferential security and processing priority. ACRN accomplishes this separation by specifying a regime for sandboxing different hypervisor instances running on the device. I recommend keeping an eye on ACRN, as it seems to have significant industry support. ACRN-supported platforms currently tend to be strongly x86-based.

Hypervisor #2: EVE (part of LF Edge)

Also a project hosted by the Linux Foundation, EVE differs from ACRN in that it belongs to the LF Edge project cluster. Unlike ACRN, EVE also tends to be more agnostic about supported devices and architectures. Following the instructions hosted on the EVE GitHub page, I was able to build and run it on a Raspberry Pi 4 within the space of ten minutes, for example.

In terms of design philosophy, EVE is positioning itself as the “Android of edge devices.” You can learn more about EVE by watching this recent webinar featuring Roman Shaposhnik, Co-Founder of ZEDEDA. EVE makes use of a “Controller” structure that provisions and manages edge nodes to simplify the overhead of operating an edge cluster.

Wrapping Up

Expect to see more happening in the hypervisor space as the technology continues to evolve. Follow me to stay up to date with the latest developments in Edge Computing.

LF Edge Member Spotlight: Equinix Metal

By Blog, Member Spotlight, State of the Edge

The LF Edge community is comprised of a diverse set of member companies and people that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sit down with Jacob Smith, Vice President of Bare Metal Marketing & Strategy for Equinix Metal, to discuss their activities in open source, collaboration with industry leaders in edge computing, their leadership in State of the Edge, and the impact of being a part of the LF Edge ecosystem.

Can you tell us a little about your organization?

Equinix Metal is the leading provider of globally available, automated bare metal. Formed through the acquisition of Packet by Equinix in 2020, we focus on operating foundational, interconnected infrastructure that is proximate to the world’s major networks, clouds and enterprises.

Why is your organization adopting an open-source approach?

Our vision is to help make infrastructure a competitive advantage for today’s digital leaders. Open source is a key part of that strategy, providing a clear way for us to invest in “making the tent bigger.”  In our view, the more people and companies that innovate with digital infrastructure, the better.

In addition to our participation in LF Edge (especially the State of the Edge report), Equinix Metal is a leading member (and user) of the Open19 project. Last year, we also open sourced our core bare metal provisioning technology (Tinkerbell), which was accepted into the CNCF as a sandbox project. This continues our long support of the cloud native community, including a $1M annual infrastructure donation to support the Community Infrastructure Lab.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

We joined LF Edge at its founding due to our interest in edge computing use cases and our respect for the Linux Foundation’s ability to bring diverse stakeholders together. In addition to its leading projects, the LF Edge community invites and enables the kind of diversity in the edge ecosystem that is critical to its success.

What do you see as the top benefits of being part of the LF Edge community?

One of the most important benefits is access to a growing group of companies that are serious about the edge. Interacting at the committee level allows us to connect with leaders throughout the field who are building truly interesting technologies and solutions to solve real problems.

What sort of contributions has your team made to the community, ecosystem through LF Edge participation?

The Equinix Metal team has focused its efforts on the State of the Edge project, which we co-founded with Vapor IO and contributed to LF Edge. This has been an exciting effort, with the report downloaded by thousands of community members annually. I am also co-chair of the State of the Edge report.

What do you think sets LF Edge apart from other industry alliances?

Strong governance helps to ensure that a variety of voices and projects can gain influence, and this is a unique strength of the open-source community.

How will LF Edge help your business?

LF Edge provides a steady touchpoint in a fast-changing ecosystem. Now that we’re part of a large company and travel is restricted, it is easy to lose touch with the pulse of the industry. LF Edge helps keep us in touch.

What advice would you give to someone considering joining LF Edge?

Jump in and join a committee or raise your hand to help lead an effort. This is the best way — outside of contributing code — to drive our community forward while quickly forming the relationships that matter.

 

Over the Edge Podcast with LF Edge Members

By Blog, LF Edge, Member Spotlight, State of the Edge

Edge computing represents a long-term transformation of the Internet that could take decades to fully materialize. On the Over the Edge podcast, Ian Faison and LF Edge member Matt Trifiro interview corporate leaders, open-source experts, technologists, journalists, analysts, and innovators pushing the boundaries of edge. Since launch earlier this year, the podcast has featured several LF Edge members and contributors who are changing the landscape. As we look back at 2020, here’s a podcast roundup of what these leaders had to say about edge computing.

Edge computing is an inflection point – Matt Trifiro, CMO of Vapor IO and Chair of State of the Edge

Bringing the world of software into the world of physical networks – Jacob Smith, Co-Founder of Packet and Chair of State of the Edge

Bringing the edge to emerging markets – Joe Zhu, CEO of Zenlayer and Akraino contributor

How open source is expanding the horizon for IoT and edge – Malini Bhandaru, IoT Open Source Lead at VMware and Co-Chair of the EdgeX Foundry Security Working Group

Open source collaboration is the only way to scale – Jason Shepherd, VP of Ecosystem at ZEDEDA and LF Edge Governing Board member and one of the leaders of Project EVE

A 30,000-foot view of edge – Gavin Whitechurch, Co-Founder of Edge Computing World/COO of Topio Networks and State of the Edge contributor

How standards drive adoption and enable the intelligent edge – Alex Reznik, Distinguished Technologist at HPE and Chair of ETSI MEC and Akraino contributor

Building the easy button for edge – Cole Crawford, CEO and Founder of Vapor IO and one of the leaders of State of the Edge

The future of IoT deployment at the edge – Sarah Beaudoin, Head of Customer Advocacy at ZEDEDA and Project EVE contributor

The cloud that will power and scale the new internet – Mahdi Yahya, CEO and Founder of Ori Industries and Akraino contributor

Redefining networking to empower edge innovation – David Hart, CTO and Co-Founder of NetFoundry and EdgeX Foundry contributor

CBRS, Shared Spectrum, and the democratization of wireless access – Iyad Tarazi, President, CEO and Co-Founder of Federated Wireless and Akraino contributor

Additional podcast episodes can be found here. If you want to be featured in the Over the Edge podcast, let us know!

 

On the “Edge” of Something Great

By Akraino, Announcement, Baetyl, Blog, EdgeX Foundry, Fledge, Home Edge, LF Edge, Open Horizon, Project EVE, Secure Device Onboard, State of the Edge

As we kick off Open Networking and Edge Summit today, we are celebrating the edge by sharing the results of our first-ever LF Edge Member Survey and insight into what our focuses are next year.

LF Edge, which will celebrate its 2nd birthday in January 2021, sent the survey to our more than 75 member companies and liaisons. The survey featured about 15 questions that collected details about open source and edge computing, how members of the LF Edge community are using edge computing and what project resources are most valuable. 

Why did you choose to participate in LF Edge?

The Results Are In

The Top 3 reasons to participate in LF Edge are market creation and adoption acceleration, collaboration with peers and industry influence. 

  • More than 71% joined LF Edge for market creation and adoption acceleration
  • More than 57% indicated they joined LF Edge for business development
  • More than 62% have either deployed products or services based on LF Edge projects or plan to do so later this year, next year or within the next 3-5 years

Have you deployed products or services based on LF Edge Projects?

This feedback corresponds with what we’re seeing in some of the LF Edge projects. For example, our Stage 3 projects Akraino and EdgeX Foundry are already being deployed. Earlier this summer, Akraino launched its Release 3 (R3), which delivers a fully functional open source edge stack that enables a diversity of edge platforms across the globe. With R3, Akraino brings deployments and PoCs from a swath of global organizations including Aarna Networks, China Mobile, Equinix, Futurewei, Huawei, Intel, Juniper, Nokia, NVIDIA, Tencent, WeBank, Wipro, and more.

Additionally, EdgeX Foundry surpassed 7 million container downloads last month, and its global ecosystem of complementary products and services continues to grow. As a result, EdgeX Foundry is seeing more end-user case studies from big companies like Accenture, ThunderSoft and Jiangxing Intelligence.

Have you gained insight into end user requirements through open collaboration?


Collaboration with peers

The edge today is a solution-specific story. Equipment and architectures are purpose-built for specific use cases, such as 5G and network function virtualization, next-generation CDNs and cloud, and streaming games. This is why collaboration is key, and more than 70% of respondents said they joined LF Edge to collaborate with peers. Here are a few activities at ONES that showcase cross-project and member collaboration.

Additionally, LF Edge created an LF Edge Vertical Solutions Group that is working to enable easily customized deployments based on market/vertical requirements. In fact, we are hosting an LF Edge End User Community Event on October 1 that provides a platform for discussing the utilization of LF Edge projects in real-world applications. The goal of these sessions is to educate the LF Edge community (both new and existing) to make sure we appropriately tailor the output of our project collaborations to meet end user needs. Learn more.

Industry Influence

More than 85% of members indicated they have gained insights into end user requirements through open collaboration. A common definition of the edge is gaining momentum. Community efforts such as LF Edge and State of the Edge’s assets, the Open Glossary of Edge Computing, and the Edge Computing Landscape are providing cohesion and unifying the industry. In fact, LF Edge members across all nine projects collaborated to create an industry roadmap that is supported by global tech giants and start-ups alike.


Where do we go from here? 

When asked, LF Edge members didn’t hold back: they want more. They want to see more of everything – cross-project collaboration, end-user events and communication, use cases, and open source collaboration with liaison organizations. As we head into 2021, LF Edge will continue to lay the groundwork for more open deployments and collaboration in markets like cloud native, 5G, and edge.


Pushing AI to the Edge (Part One): Key Considerations for AI at the Edge

By Blog, LF Edge, Project EVE, State of the Edge, Trend

Q&A with Jason Shepherd, LF Edge Governing Board member and VP of Ecosystem at ZEDEDA

This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

This two-part blog provides more insights into what’s becoming a hot topic in the AI market — the edge. To discuss more on this budding space, we sat down with our Vice President of ecosystem development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice and some trends for the future.


Chart defining the categories within the edge, as defined by LF Edge

Image courtesy of LF Edge

Finalists for the 2020 Edge Woman of the Year Award!

By Blog, State of the Edge

Written by Candice Digby, Partner and Events Manager at Vapor IO, a LF Edge member and active community member in the State of the Edge Project

Last year’s Edge Woman of the Year winner Farah Papaioannou is ready to pass the torch.

“I was honored to have been chosen as Edge Woman of the Year 2019 and to be recognized alongside many inspiring and innovative women across the industry,” said Farah Papaioannou, Co-Founder and President of Edgeworx, Inc. “I am thrilled to pay that recognition forward and participate in announcing this year’s Edge Woman of the Year 2020 finalist categories; together we have a lot to accomplish.”

(left to right) Matt Trifiro, Farah Papaioannou, Gavin Whitechurch

With even more nominations received for the second annual competition, it was difficult for State of the Edge and Edge Computing World to select only ten finalists. The Edge Woman of the Year 2020 nominees represent industry leaders in roles that are shaping their organizations’ strategy, technology, or communications around edge computing, edge software, edge infrastructure, or edge systems.

The Edge Woman of the Year Award represents a long-term industry commitment to highlighting the growing importance of the contributions and accomplishments of women in edge computing. The award is presented at the annual Edge Computing World event, which gathers the whole edge computing ecosystem, from network to cloud and application to infrastructure, including end users and developers, while also sharing edge best practices.

The annual Edge Woman of the Year Award is presented to female and non-binary professionals in edge computing for outstanding performance in roles that elevate the edge. The 2020 award committee selected the following ten finalists for their excellent work in the named categories:

  • Leadership in Edge Startups
    • Kathy Do, VP, Finance and Operations at MemVerge
  • Leadership in Edge Open Source Contributions
    • Malini Bhandaru, Open Source Lead for IoT & Edge at VMware
  • Leadership in Edge at a Large Organization
    • Jenn Didoni, Head of Cloud Portfolio at Vodafone Group Business
  • Leadership in Edge Security
    • Ramya Ravichandar, VP of Product Management at FogHorn
  • Leadership in Edge Innovation and Research
    • Kathleen Kallot, Director, AI Ecosystem, arm
  • Leadership in Edge Industry and Technology
    • Fay Arjomandi, Founder and CEO, mimik technology, Inc.
  • Leadership in Edge Best Practices
    • Nurit Sprecher, Head of Management & Virtualization Standards, Nokia
  • Leadership in Edge Infrastructure
    • Meredith Schuler, Financial & Strategic Operations Manager, SBA Edge
  • Overall Edge Industry Leadership
    • Nancy Shemwell, Chief Operating Officer, Trilogy Networks, Inc.
  • Leadership in Executing Edge Strategy
    • Angie McMillin, Vice President and General Manager, IT Systems, Vertiv

The “Top Ten Women in Edge” finalists were selected from nominations submitted by edge experts from around the world. The winner of the Edge Woman of the Year 2020 will be chosen by a panel of industry judges and announced during this year’s Edge Computing World, being held virtually October 12-15, 2020.

For more information on the Women in Edge Award visit: https://www.lfedge.org/2020/08/25/state-of-the-edge-and-edge-computing-world-announce-finalists-for-the-2020-edge-woman-of-the-year-award/


Breaking Down the Edge Continuum

By Blog, State of the Edge, Trend, Use Cases

Written by Kurt Rinehart, Director of Data Science at Section. This blog originally ran on the Section website. For more content like this, please click here.

There are many definitions of “the edge” out there. Sometimes it can seem as if everyone has their own version.

LF Edge, an umbrella organization that brings together industry leaders to build “an open source framework for the edge,” hosts a number of edge projects, each of which seeks to unify the industry around common principles and thereby accelerate open source edge computing development. Part of its remit is to define what the edge is: an invaluable reference point for the edge community to coalesce around.

Latest LF Edge White Paper: Sharpening the Edge

In 2018, State of the Edge (which recently became an official project of LF Edge) put out its inaugural report, defining the edge using four criteria:

  • “The edge is a location not a thing;
  • There are lots of edges, but the edge we care about today is the edge of the last mile network;
  • This edge has two sides: an infrastructure edge and a device edge;
  • Compute will exist on both sides, working in coordination with the centralized cloud.”

Since that inaugural report, much has evolved within the edge ecosystem. The latest white paper from LF Edge, Sharpening the Edge: Overview of the LF Edge Taxonomy and Framework, expands on these definitions and moves on from simply defining two sides (the infrastructure and the device edge) to use the concept of an edge continuum.

The Edge Continuum

The concept of the edge continuum describes the distribution of resources and software stacks between centralized data centers and deployed nodes in the field as “a path, on both the service provider and user sides of the last mile network.”

In almost the same breath, LF Edge also describes edge computing as essentially “distributed cloud computing, comprising multiple application components interconnected by a network.”

We typically think of “the edge” or “the edges” in terms of the physical devices or infrastructure where application elements run. However, the idea of a path between the centralized cloud (also referred to as “the cloud edge” or “Internet edge”) and the device edge instead allows for the conceptualization of multiple steps along the way.

The latest white paper concentrates on two main edge categories within the edge continuum: the Service Provider Edge and the User Edge (each of which is broken down into further subcategories).

edge continuum diagram
Image source: LF Edge

The Service Provider Edge and the User Edge

LF Edge positions devices at one extreme of the edge continuum and the cloud at the other.

Next along the line of the continuum after the cloud, also described as “the first main edge tier”, is the Service Provider (SP) Edge. Similarly to the public cloud, the infrastructure that runs at the SP Edge (compute, storage and networking) is usually consumed as a service. In addition to the public cloud, there are also cellular-based solutions at the SP Edge, which are typically more secure and private than the public cloud, as a result of the differences between the Internet and cellular systems. The SP Edge leverages substantial investments by Communications Service Providers (CSPs) into the network edge, including hundreds of thousands of servers at Points of Presence (PoPs). Infrastructure at this edge tier is largely more standardized than compute at the User Edge.

The second top-level edge tier is the User Edge, which is on the other side of the last mile network. It represents a wider mix of resources in comparison to the SP Edge, and “as a general rule, the closer the edge compute resources get to the physical world, the more constrained and specialized they become.” In comparison to the SP Edge and the cloud where resources are owned by these entities and shared across multiple users, resources at the User Edge tend to be customer-owned and operated.

Moving from the Cloud to the Edge

What do we mean when we talk about moving from the cloud to the edge? Each of the stages along the edge continuum takes you progressively closer to the end user: high latency and abundant compute in the centralized cloud, lower latency and less compute as you approach the User Edge. When we talk about moving from the cloud to the edge, we mean leveraging the whole stack rather than focusing solely on the centralized cloud.

Let’s look at the most obvious use case: content delivery networks (CDNs). In the 1990s, Akamai created content delivery networks to allow localized websites to serve a global audience. A website based in New York could leverage Akamai’s distributed network of proxy servers and data centers around the world to store its static assets globally, including HTML, CSS, JavaScript, video, and images. By caching these in Akamai’s distributed global points of presence (PoPs), the website guaranteed its end users worldwide high availability and consistent performance.
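The caching pattern described above can be sketched as a simple cache-aside lookup at a PoP. This is an illustrative sketch with made-up names, not any real CDN’s API: the first request for an asset travels to the origin, and subsequent requests are served locally at the edge.

```python
# Minimal sketch of CDN-style caching at a point of presence (PoP).
# All names here are illustrative, not any real CDN's API.

class PoPCache:
    """Cache static assets at an edge location; fall back to origin on a miss."""

    def __init__(self, fetch_from_origin):
        self._store = {}                       # asset path -> cached content
        self._fetch_from_origin = fetch_from_origin
        self.hits = 0
        self.misses = 0

    def get(self, path):
        if path in self._store:                # cache hit: served locally, low latency
            self.hits += 1
            return self._store[path]
        self.misses += 1                       # cache miss: one round trip to origin
        content = self._fetch_from_origin(path)
        self._store[path] = content            # keep it for subsequent local requests
        return content

# Usage: the first request goes to the origin; the repeat is served at the edge.
origin = {"/index.html": "<html>...</html>", "/app.js": "console.log('hi')"}
pop = PoPCache(origin.__getitem__)
pop.get("/index.html")
pop.get("/index.html")
print(pop.hits, pop.misses)  # 1 hit, 1 miss
```

Real CDNs add eviction, TTLs, and invalidation on top of this basic lookup, but the hit/miss economics are the same: every hit is a round trip to the origin that end users never pay for.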

These days, CDNs are considered to be only one layer in a highly complex Internet ecosystem. Content owners such as media companies and e-commerce vendors continue to pay CDN operators to deliver their content to end users. In turn, a CDN pays ISPs, carriers, and network operators for hosting its servers in their data centers. That’s the Service Provider Edge we’re talking about.

An edge compute platform is still a geographically distributed network, but instead of simply providing proxy servers and data centers, an edge compute platform also offers compute. How do we define this? Compute can be defined in many ways, but essentially it boils down to the ability to run workloads wherever you need to run them. Compute still gives you high availability and performance, but it also lets you run packaged and custom workloads positioned close to users.
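To make “positioned close to users” concrete, here is a deliberately naive sketch of steering a request to the nearest edge PoP. The PoP names and coordinates are made up, and real platforms use anycast routing or DNS-based steering rather than an explicit distance check like this:

```python
# Illustrative sketch: routing a user's request to the nearest edge PoP.
# PoP names and coordinates are hypothetical, for the example only.
import math

pops = {
    "nyc": (40.7, -74.0),   # New York
    "lon": (51.5, -0.1),    # London
    "sgp": (1.3, 103.8),    # Singapore
}

def nearest_pop(user_lat, user_lon):
    """Pick the PoP with the smallest straight-line distance (coarse heuristic)."""
    def dist(name):
        lat, lon = pops[name]
        return math.hypot(lat - user_lat, lon - user_lon)
    return min(pops, key=dist)

print(nearest_pop(48.8, 2.3))  # a user near Paris is steered to "lon"
```

The point of the sketch is the shape of the decision, not the mechanism: the platform, not the application, chooses where a workload runs, and it chooses based on proximity to the end user.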

An edge compute platform leverages all available compute between the cloud provider and the end user, together with DevOps practices, to deliver traditional CDN and custom workloads.

Applying Lessons from the Cloud to the Edge

We can take the lessons we’ve learned in the cloud and apply them to the edge. These include:

  • Flexibility – At Section, we describe this as wanting to be able to run “any workload, anywhere”, including packaged and customized workloads.
  • Taking a multi-provider approach to deployments – This offers the opportunity to create a higher layer of abstraction. Infrastructure as Code (IaC) is the practice of managing and provisioning infrastructure through machine-readable definition files rather than physical hardware configuration or interactive configuration tools. At Section, we have 6-7 different providers, from cloud providers to boutique providers to bare metal providers.
  • Applying DevOps practices – To provide the capabilities of the cloud at the infrastructure edge, we need to enable developers to gain insight and run things at the edge at speed, just as they did in the cloud. This is DevOps. Applying DevOps practices matters here because, “if you build it, you own it”. You want to make things open, customizable, and API-driven with integrations, so that developers can leverage and build on top of them.
  • Leveraging containerized workloads – Deploying containers at the edge involves multiple challenges, particularly around connectivity, distribution, and synchronization, but it can be done, and doing so lets you deploy your own logic, not just pre-packaged workloads. Containerization also offers:
    • Security
    • Standardization
    • Isolation; and
    • A lightweight footprint.
  • Insights and visibility – We need to give developers deep, robust insight into what’s happening at the edge, just as we do in the cloud. The three pillars of observability are logs, metrics, and tracing. An ELK stack can provide this, giving developers the invaluable ability to understand what is happening when things inevitably go wrong.
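The three observability pillars named above (logs, metrics, and tracing) can be illustrated with a minimal, dependency-free sketch. All names here are illustrative; in production these signals would feed a stack such as ELK, Prometheus, or OpenTelemetry rather than in-memory lists:

```python
# Minimal sketch of the three pillars of observability: logs, metrics, tracing.
# Illustrative only; real deployments ship these to ELK/Prometheus/OpenTelemetry.
import time
import uuid

logs = []       # structured log events
metrics = {}    # counters keyed by metric name
traces = []     # completed spans, each with a trace id and duration

def log(event, **fields):
    logs.append({"event": event, **fields})

def incr(name):
    metrics[name] = metrics.get(name, 0) + 1

def traced(span_name, fn, *args):
    """Run fn, recording how long it took as a span."""
    trace_id = str(uuid.uuid4())
    start = time.monotonic()
    try:
        return fn(*args)
    finally:
        traces.append({"trace": trace_id, "span": span_name,
                       "duration_s": time.monotonic() - start})

def handle_request(path):
    incr("requests_total")          # metric: how many requests
    log("request", path=path)       # log: what happened, with context
    return f"served {path} at the edge"

print(traced("handle_request", handle_request, "/index.html"))
```

Even this toy version shows why the pillars complement each other: metrics tell you *that* something changed, logs tell you *what* happened, and traces tell you *where* the time went.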

Edge Computing Use Cases in the Wild

There are many examples of use cases already operating at the Edge. A few of the many interesting ones out there include:

  • Facebook Live – When you click a live stream in your feed, you are requesting the manifest. If the manifest isn’t already on your local PoP, the request travels to the data center to fetch it, then pulls the media files in one-second clips. ML algorithms operate on those one-second clips in real time to deliver the best, fastest experience for users.
  • Cloudflare Workers – These are Service Worker API implementations for the Cloudflare platform. They use a server-side approach to run JavaScript workloads on Cloudflare’s global network.
  • Chick-fil-A – A surprising one. Chick-fil-A has been pushing into the device edge over the last couple of years. Each of its roughly 2,000 restaurants runs its own Kubernetes cluster on site. The goal: “low latency, Internet-independent applications that can reliably run our business”, along with high availability for those applications, a platform that enables rapid innovation, and the ability to scale horizontally.

We’re Not Throwing Away the Cloud

One last thing to make clear: we’re not talking about throwing away the cloud. The cloud is going nowhere. We will be working alongside it, using it. What we’re talking about is moving the boundary of our applications out of the cloud closer to the end user, into the compute that is available there. And, as we’ve seen, we don’t need to throw away the lessons we’ve learned in the cloud; we can still use the tools that we’re used to, plus gain all the advantages that the edge continuum has to offer.

You can download the LF Edge taxonomy white paper here. You can also watch the LF Edge Taxonomy Webinar, which shares insights from the white paper, on our YouTube channel. Click here to watch it now.