
Open Glossary of Edge Computing

Enabling a More Sustainable Energy Model with Edge Computing

By Blog, Open Glossary of Edge Computing, Trend

Written by Molly Wojcik, Chair of the State of the Edge Landscape Working Group and Marketing Director at Section

This blog previously ran on the Section website. For more content like this, click here.

The tech sector has been under mounting scrutiny over the last decade by environmental groups, businesses, and customers for its increasing energy consumption levels. Attention has recently been turning to the ways in which edge computing can help build more sustainable solutions.

The Impact of Coronavirus Lockdowns on Internet Usage

While worldwide energy consumption has significantly dropped as a result of global lockdown restrictions, Internet usage has seen a huge spike.

NETSCOUT, a provider of network and application performance monitoring tools, saw a 25-35% increase in worldwide Internet traffic in mid-March, when much of the world’s population shifted to remote work and online learning. That figure has remained largely consistent since. Use of bandwidth-intensive applications such as video streaming, Zoom, and online gaming has increased in particular. The spike in Internet use has implications for the sector’s sustainability targets and makes the call for change even more urgent.

Before we look specifically at sustainability in the edge computing field, let’s pull back to take a look at the wider sector.

The Energy Challenge

According to a 2018 study, the information and communication technology (ICT) ecosystem accounts for over 2% of global emissions, putting it on a par with the aviation industry’s carbon footprint. Data centers alone use an estimated 200 terawatt-hours (TWh) of electricity each year, representing around 1% of global electricity demand and contributing around 0.3% of overall carbon emissions.

Where these figures could lie in the future is harder to predict. Widely cited forecasts say the ICT sector and data centers will take a larger slice of overall electricity demand. Some experts predict this could rise as high as 8% by 2030, while the most aggressive models predict that ICT electricity usage could surpass 20% of the global total by the early 2030s, with data centers accounting for over one-third of that.

Keeping Future Energy Demand in Check

However, improvements in energy efficiency in the data center field are already making a difference. While the amount of computing performed in data centers more than quintupled between 2010 and 2018, the amount of energy consumed by data centers worldwide only rose by 6% across the same period.

The ICT sector has been hard at work to keep future energy demand in check in various ways. These include:

Streamlining Computer Processes

The shift away from legacy data centers operated by traditional enterprises such as banks, retailers, and insurance companies toward newer, commercially operated cloud data centers has made a significant difference to overall energy consumption.

In a recent company blog, Urs Hölzle, Google’s senior VP technical infrastructure, pointed to a study in Science showing the gains in energy efficiency. Hölzle wrote, “a Google data center is twice as energy efficient as a typical enterprise data center. And compared with five years ago, we now deliver around seven times as much computing power with the same amount of electrical power.”

The study shows how an overall shift to hyperscale data centers has helped reduce the amount of traditional enterprise data center capacity and, in turn, overall energy consumption. One of the authors is Jonathan Koomey, a former Lawrence Berkeley National Laboratory scientist who has been studying data center energy usage for over two decades. Koomey says that forecasts focused on data growth projections alone ignore the energy efficiency gains the sector has been making.

Facebook has also been exploring ways to maximise efficiency over the last decade. The social media giant started the Open Compute Project in 2011 to share hardware and software solutions aimed at making computing more energy-efficient.

One recent novel idea is that data centers could act as energy suppliers and sell excess electricity to the grid instead of keeping it as an insurance policy. Kevin Hagen, Iron Mountain’s VP of environment, social and governance strategy, describes backup generators and UPS batteries as “useless capital.”

Instead, he asks, “What would it look like if all that money was actually invested in energy systems that we got to use when we were trying to arbitrage energy during the day and night on a regular basis? What if we share control with the utility, so they can see there’s an asset and use it back and forth?”

New Ways to Cool Data Centers

According to Global Market Insights, cooling systems account for around 40% of total data center energy consumption on average. Data center design specialists have been exploring new ways to reduce the energy needed specifically for cooling, which has been handled largely the same way for the last three decades. Approaches include locating data centers in cooler regions, using AI to regulate cooling systems in response to the weather, warm-water cooling, immersion cooling, and rear-door cooling systems.

It’s an important problem to focus on, since data centers worldwide contribute to industry’s consumption of 45% of all available clean water.

A Focus on Sustainables

Greenpeace has been putting data centers under the spotlight for over a decade, calling for them and other digital infrastructure to become 100% renewably powered. There has been great progress since 2010, when Greenpeace started its ClickClean campaign and IT companies were negligible contributors to renewable-power purchase agreements; today, Google is the world’s largest corporate purchaser of renewable energy.

Apple topped Greenpeace’s list of the greenest tech companies last year. It already powers all of its global data centers with 100% renewable energy and is currently focused on cleaning up its supply chain. Greenpeace has recently called out AWS, however, for a lack of renewables in Virginia’s “data center alley”, which it says is powered by “dirty energy.”

Corporate data centers have traditionally been part of the “dirty energy” problem, but they are beginning to seek out renewable energy. Initiatives in this space include Digital Realty’s announcement that it will use wind energy in its 13 Dallas, TX data centers and Iron Mountain’s Green Power Pass, which lets enterprise customers participate in renewable energy purchases across its growing number of data centers.

The Role of Edge Computing in Working Towards More Efficient Solutions

Edge computing is also increasingly being looked to as a way to build more efficient solutions. These include:

Energy Resource Efficiencies

A technical paper published on Arxiv.org earlier this month lifted the hood on Autoscale, Facebook’s energy-sensitive load balancer. Autoscale reduces the number of servers that need to be on during low-traffic hours and focuses specifically on AI inference, which, when run in abundance on smartphones, drains batteries and degrades performance.

The Autoscale technology leverages AI to enable energy-efficient inference on smartphones and other edge devices. The intention is for Autoscale to automate deployment decisions and decide whether AI should run on-device, in the cloud, or on a private cloud. Autoscale could result in both cost and efficiency savings.
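For readers who want a concrete picture of what such a decision looks like, here is a minimal sketch (in Python) of an energy-aware placement heuristic. It is purely illustrative: the target names, energy and latency figures, and the scoring rule are assumptions made for the example, not Autoscale’s actual policy.

```python
from dataclasses import dataclass

@dataclass
class ExecutionTarget:
    name: str             # "on-device", "edge", or "cloud" (illustrative labels)
    energy_cost_j: float  # estimated device-side energy per inference, in joules
    latency_ms: float     # estimated end-to-end latency per inference

def choose_target(targets, battery_pct, latency_budget_ms):
    """Pick the target that minimizes device-side energy within the latency
    budget, weighting energy more heavily as the battery drains."""
    feasible = [t for t in targets if t.latency_ms <= latency_budget_ms]
    if not feasible:
        # Nothing meets the budget: fall back to the fastest option.
        return min(targets, key=lambda t: t.latency_ms)
    energy_weight = 1.0 + (100 - battery_pct) / 25.0
    return min(feasible, key=lambda t: energy_weight * t.energy_cost_j + 0.01 * t.latency_ms)

# Example: a phone at 30% battery with a 150 ms latency budget.
targets = [
    ExecutionTarget("on-device", energy_cost_j=2.0, latency_ms=40),
    ExecutionTarget("edge",      energy_cost_j=0.4, latency_ms=90),   # only radio tx/rx energy on the phone
    ExecutionTarget("cloud",     energy_cost_j=0.5, latency_ms=220),
]
print(choose_target(targets, battery_pct=30, latency_budget_ms=150).name)  # -> edge
```

The point of the heuristic is simply that offloading becomes more attractive as the battery drains, as long as the latency budget is still met; a production system would learn these trade-offs rather than hard-code them.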

Other energy resource efficiency programs are also underway in the edge computing space.

Reducing High Bandwidth Energy Consumption

Another area where edge can assist is in reducing the amount of data traversing the network. This is especially important for high-bandwidth applications like YouTube and Netflix, whose usage has skyrocketed in recent years, and in recent months in particular. This is partly because each stream involves a large file, but also because video-on-demand content is distributed in a one-to-one model. High bandwidth consumption is linked to high energy usage and high carbon emissions, since it uses the network more heavily and demands greater power.

Edge computing could help optimize energy usage by reducing the amount of data traversing the network. By running applications at the user edge, data can be stored and processed close to the end user and their devices instead of relying on centralized data centers that are often hundreds of miles away. This will lead to lower latency for the end user and could lead to a significant reduction in energy consumption.

Smart Grids and Monitoring Can Lead to Better Management of Energy Consumption

Edge computing can also be a key enabler of solutions that help enterprises better monitor and manage their energy consumption. Edge compute already supports many smart grid applications, such as grid optimization and demand management. By tracking energy usage in real time and visualizing it through dashboards, enterprises can better manage their consumption and put preventative measures in place to limit it where possible. This kind of real-time assessment of supply and demand can be particularly useful for managing intermittent renewable energy resources, such as wind and solar power.
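As a concrete, deliberately simplified illustration of this kind of monitoring, the sketch below shows logic an edge node might run against incoming smart-meter samples. The sampling window, thresholds, and field names are assumptions made for the example rather than a reference to any particular smart grid product.

```python
from collections import deque
from statistics import mean

WINDOW = 12            # samples in the rolling window (e.g. one hour at 5-minute intervals)
DEMAND_LIMIT_KW = 500  # site-level consumption threshold for demand management

readings = deque(maxlen=WINDOW)

def on_meter_sample(consumption_kw, renewable_supply_kw):
    """Process one smart-meter sample at the edge and return any alerts."""
    readings.append(consumption_kw)
    rolling_avg = mean(readings)

    alerts = []
    if rolling_avg > DEMAND_LIMIT_KW:
        alerts.append(f"rolling demand {rolling_avg:.0f} kW exceeds the {DEMAND_LIMIT_KW} kW limit")
    if consumption_kw > renewable_supply_kw:
        shortfall = consumption_kw - renewable_supply_kw
        alerts.append(f"drawing {shortfall:.0f} kW from non-renewable sources")
    return alerts

# Example: one sample where demand outstrips local solar and wind output.
for msg in on_meter_sample(consumption_kw=620, renewable_supply_kw=480):
    print("ALERT:", msg)
```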

Examples in the Wild

Schneider Electric

Schneider Electric, a specialist in energy management and automation, is beginning to see clients seek more efficient infrastructure and edge computing solutions. This includes helping organizations to digitize their manufacturing processes by using data to produce real-time feedback and alerts to gain efficiencies across the supply chain, including the reduction of waste and carbon emissions. Natalya Makarochkina, Senior VP, International, Secure Power at Schneider Electric, says edge computing is allowing this to happen in a scalable and sustainable way.

Makarochkina highlights two Schneider customers that have used edge computing to improve sustainability and make efficiency savings. The first is a specialty grocer that used edge computing to upgrade IT infrastructure and reduce its physical footprint and consumption requirements, which led to a 35% reduction in engineering cost and a 50% increase in deployment speed for the end user. The second is a co-working space provider that used Schneider’s edge infrastructure options to launch an on-site data center that offers IT infrastructure on demand, offering tenants greater operational efficiencies.

“Edge computing and hybrid cloud play a pivotal role in sustainability because they are designed specifically to put applications and data closer to devices and their users. This helps organisations to better react to changes in consumer demands and improve processes to create more sustainable products.” – Natalya Makarochkina, Senior VP, International, Secure Power at Schneider Electric

Section

At Section, we’re working to provide more accessible options for developers to build edge strategies that deliver on sustainability targets. Our patent-pending Adaptive Edge Engine technology allows developers to piece together bespoke edge networks that are optimized across various attributes, with carbon-neutral underlying infrastructure included in the portfolio of options. Other attributes include cost, performance, and security/regulatory requirements.

Similarly to Facebook’s Autoscale, Section’s Adaptive Edge Engine tunes the compute resources required at the edge, responding not only to the application’s required attributes (e.g. PCI compliance or carbon neutrality) but also, continuously, to traffic served and performance. Our ML forecasting engine sits behind the workload placement and decision engine, which continuously places workloads and scales the underlying infrastructure up and down, so we use the least amount of compute while providing maximum availability and performance.
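To illustrate the general idea of forecast-driven scaling (this is not the Adaptive Edge Engine itself, whose models and placement logic are proprietary), here is a minimal sketch that converts a short-horizon traffic forecast into a replica count. The per-replica capacity, headroom factor, and limits are made-up assumptions.

```python
import math

def plan_replicas(forecast_rps, capacity_per_replica_rps=200,
                  headroom=1.2, min_replicas=1, max_replicas=50):
    """Translate a short-horizon traffic forecast into a replica count:
    provision enough capacity (plus headroom) for the predicted peak,
    and let the count fall again when the forecast drops."""
    predicted_peak = max(forecast_rps)
    needed = math.ceil(predicted_peak * headroom / capacity_per_replica_rps)
    return max(min_replicas, min(max_replicas, needed))

# Example forecasts for the next four intervals, in requests per second.
print(plan_replicas([120, 90, 80, 110]))        # overnight lull -> 1 replica
print(plan_replicas([1500, 1800, 1700, 1600]))  # evening peak   -> 11 replicas
```

The design point is that capacity tracks the forecast in both directions, which is what keeps idle compute, and therefore energy, to a minimum.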

Summary

Few would dispute that technology’s immersion in our day-to-day lives will only continue to expand. Technology providers must work together to create sustainable solutions that meet these growing demands. While not an end-all solution, edge computing has the potential to play a key role in controlling the negative impacts that accompany rising energy demands.

Section is an LF Edge member. To learn more about LF Edge or any of our members, visit https://www.lfedge.org/members/.

State of the Edge is Now Part of LF Edge

By Blog, Open Glossary of Edge Computing, State of the Edge

Written by Matthew Trifiro, CMO of Vapor IO and co-chair of State of the Edge

Last week, we were thrilled to announce State of the Edge has become an official project of The Linux Foundation’s LF Edge. You can read the press release here.

In 2017, just as edge computing was entering the zeitgeist, a few like-minded companies came together to create State of the Edge. The goal was to bring clarity and a common understanding to the emerging market of edge computing. Back then, there was no LF Edge and edge computing felt like the early days of cloud or the early days of containers. A few pioneers could be found laying the technological foundations, but the practitioners did not share a common vocabulary and a lot of confusion and misunderstanding ensued.

We started with a vision of funding vendor-neutral research. Since launching, we’ve built an incredible community and published three major edge research reports, all of which are offered free of charge under a Creative Commons license. They are:

  • 2018 State of the Edge — the inaugural report, which many people have called the “edge 101,” laid out a lot of foundational concepts. It has largely stood the test of time and is required reading at some companies during employee onboarding.
  • 2019 Data at the Edge — an experimental, shorter-form, topic-specific report that we built from research funded by Seagate. We will probably do more of these in the future.
  • 2020 State of the Edge — the second full report, which we published in December 2019, was our most ambitious yet. We hired Phil Marshall of Tolaga Research to build a financial forecasting model to predict the expected demand for edge infrastructure.

In 2019, we began collaborating with The Linux Foundation, initially around the Open Glossary of Edge Computing and the Edge Computing Landscape. When LF Edge launched in January 2019, the Open Glossary became one of its five founding projects (alongside Akraino, EdgeX Foundry, Home Edge and Project EVE). The relationship became so beneficial to both parties that by the end of last year it was clear State of the Edge could find a long-term home at The Linux Foundation. With LF Edge’s open governance model, we will continue to advance State of the Edge as an open source project that maintains the organization’s original mission, further accelerating the adoption of edge computing technologies.

As of today, State of the Edge will officially merge with the Open Glossary of Edge Computing and the combined project will assume the State of the Edge name as a Stage 2 project (growth) at LF Edge. All State of the Edge projects will continue to be produced and funded collaboratively, with an explicit goal of producing original research without vendor bias and involving a diverse set of stakeholders.

The program will continue alongside a community that cares deeply about edge computing and the innovations that will be required to bring its promise to fruition.

State of the Edge will remain an active website, but we’ll also be blogging and adding content to the State of the Edge pages on the LF Edge website. Follow @LF_Edge for more news.

We’re looking forward to the next phase of growth for State of the Edge!

A special thanks is due to the original creators, contributors and funders of the State of the Edge project (alpha order).


LF Edge Member Spotlight: Vapor IO

By Blog, Member Spotlight, Open Glossary of Edge Computing

The LF Edge community is comprised of a diverse set of member companies that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sat down with Matt Trifiro, CMO of Vapor IO and Open Glossary of Edge Computing TSC Chair, to discuss the importance of open source, the “Third Act of the Internet,” how they contribute to the Open Glossary project and the impact of being a member of the LF Edge community.

Can you tell us a little about your organization?

Vapor IO builds and deploys data center and networking infrastructure to support edge computing and 5G. Our main product is the Kinetic Edge, which is a nationwide platform for edge colocation, edge networking and edge traffic exchange. We deploy our facilities on the infrastructure edge, in close proximity to the last mile networks. Our customers are the large cloud providers, CDNs, web-scale companies, streaming game providers, and other organizations building low-latency applications. We are Austin-based, but distributed all over the US. Our Kinetic Edge platform is live in four cities (Atlanta, Chicago, Dallas, and Pittsburgh) and we have 16 additional US cities under construction. We expect to deploy the Kinetic Edge to the top 36 US metro areas by the end of 2021, giving us a reach that exceeds 75 percent of the US population.

Why is your organization adopting an open source approach?

Open source allows a community, often one comprised of competitors, to pool their resources and build a common platform upon which differentiated businesses can be built. By collaborating on shared projects, we collectively accelerate entire markets. This lets companies focus on their unique strengths while not wasting effort competing on common building blocks. Everybody wins. We currently lead two active open source projects:

  • Open Glossary of Edge Computing (an LF Edge project), a Wikipedia-style community-driven glossary of terms relevant to edge computing.
  • Synse, a simple, scalable API for sensing and controlling data center equipment.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

We believe edge computing will create a massive restructuring of the Internet, what the State of the Edge 2020 report calls the “Third Act of the Internet.” We joined LF Edge to help accelerate the rearchitecting of the Internet to support this Third Act. This transformation will impact the entire world, and it will require many companies to collaborate on large, multi-decade initiatives. The Linux Foundation has a track record of good stewardship of open source communities, and we felt LF Edge had the right mix of focus and neutrality to make it possible.

What do you see as the top benefits of being part of the LF Edge community?

By being part of the LF Edge community, we get to help drive one of the most fundamental Internet transformations of our lifetime, something that will impact the entire world. LF Edge brings together diverse viewpoints and ensures projects advance based on their merit—and not the power or money behind the contributing companies. This creates a level playing field that advances the entire industry.

What sort of contributions has your team made to the community, ecosystem through LF Edge participation?

Our main contributions to LF Edge have been twofold:

  1. We chair the Open Glossary of Edge Computing, a founding LF Edge project. This project has been helping to shape the overall LF Edge narrative by creating a shared vocabulary that all LF Edge projects can align around.
  2. We are very active in the LF Edge community. Our CEO, Cole Crawford, is now serving his second term as a member of the LF Edge Governing Board.

What do you think sets LF Edge apart from other industry alliances?

Many industry alliances suffer from a “pay for play” approach, where those who pay the most get the most influence. The Linux Foundation has managed to navigate these waters deftly. They have successfully attracted deep-pocketed funders while maintaining a level playing field and aggressively including smaller companies and individuals who help advance the projects in ways that don’t involve direct financial contributions. This gives LF Edge a lot more “staying power,” as it truly serves the broad community and not just the goals of the largest companies.

How will LF Edge help your business?

The LF Edge community gives us access to the best technologies and thought leadership in edge computing. LF Edge helps us stay up to date on the industry and align our business with the community’s needs. By paying attention to what kinds of edge infrastructure LF Edge projects require, we are able to fine-tune our Kinetic Edge offering in a way that will support the next generation of edge native and edge enhanced applications.

What advice would you give to someone considering joining LF Edge?

If you’re looking for a place to start, pick a project in the LF Edge portfolio where you can make the most impact. Become a contributor and get to know the committers and technical leadership. LF projects are meritocracy-based, so the more active you are and the more value you contribute, the more recognition and influence you will gain within the community. This is one of the best ways to start.

To learn more about Open Glossary of Edge Computing, click here. To find out more about our members or how to join LF Edge, click here.

Additionally, if you have questions or comments, visit the LF Edge Slack Channel and share your thoughts in the #community or #glossary channels.

IoT, AI & Networking at the Edge

By Blog, LF Edge, Open Glossary of Edge Computing, State of the Edge

Written by LF Edge member Mike Capuano, Chief Marketing Officer for Pluribus Networks

This blog originally ran on the State of the Edge website. 

5G is the first upgrade to the cellular network justified not only by higher speeds and new capabilities targeted at consumer applications, such as low-latency gaming, but also by its ability to support enterprise applications. The Internet of Things (IoT) will become the essential fuel for this revolution as it transforms almost every business, government, and educational institution. By installing sensors, video cameras, and other devices in buildings, factories, stadiums, vehicles, and other locations, enterprises can collect and act on data to make themselves more efficient and more competitive. This digital transformation will create a better and safer environment for employees and deliver the best possible user experience to end customers. In this emerging world of 5G-enabled IoT, edge computing will play a critical role.

IoT will leverage public and private 5G, AI, and edge compute. In many cases, analysis of the IoT data will be highly complex, requiring the correlation of multiple data input streams fed into an AI inference model—often in real time. Use cases include factory automation and safety, energy production, smart cities, traffic management, large venue crowd management, and many more. Because the data streams will be large and will often require immediate decision-making, they will benefit from edge compute infrastructure that is in close proximity to the data in order to reduce latency and data transit costs, as well as ensure autonomy if the connection to a central data center is cut.

Owing to these requirements, AI stacks will be deployed in multiple edge locations, including on premises, in 5G base station aggregation sites, in telco central offices and at many more “edges”. We are rapidly moving from a centralized data center model to a highly distributed compute architecture. Developers will place workloads in edge locations where applications can deliver their services at the highest performance with the lowest cost.

Critical to all of this will be the networking that will connect all of these edge locations and their interrelated compute clusters. Network capabilities must now scale to a highly distributed model, providing automation capabilities that include Software Defined Networking (SDN) of both the physical and virtual networks.

Network virtualization, like the virtualization of the compute layer, is a flexible, software-based representation of the network built on top of its physical properties. The physical network is obviously required for basic connectivity; we must move bits across the wire, after all. SDN automates this physical “underlay,” but the underlay remains rigid because it has physical boundaries. Network virtualization is a complete abstraction of the physical network. It consists of dynamically constructed VXLAN tunnels supported by virtual routers, virtual switches, and virtual firewalls, all defined and instantiated in software. These virtual constructs can be manipulated in seconds, far faster than reconfiguring the physical network (even with SDN).
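As a toy illustration of that separation, the sketch below models the physical underlay as fixed hardware and each tenant overlay as a plain software object that can be created or discarded instantly. The class and field names are invented for the example and do not correspond to any vendor’s API.

```python
from dataclasses import dataclass, field
from itertools import count

@dataclass(frozen=True)
class Underlay:
    """The physical network: switches and links that carry the encapsulated
    traffic. Changing it means touching hardware, so it stays fixed here."""
    switches: tuple = ("tor-1", "tor-2", "spine-1")

@dataclass
class Overlay:
    """A per-tenant virtual network defined entirely in software, so it can be
    created, changed, or torn down in seconds without touching the underlay."""
    tenant: str
    vni: int                                       # VXLAN Network Identifier
    functions: list = field(default_factory=list)  # virtual switches, routers, firewalls

_vni_pool = count(10000)

def provision_overlay(tenant, functions):
    """Instantiate a new overlay; no physical reconfiguration is involved."""
    return Overlay(tenant=tenant, vni=next(_vni_pool), functions=list(functions))

underlay = Underlay()
app_net = provision_overlay("iot-analytics", ["vswitch", "vrouter", "vfirewall"])
print(f"VNI {app_net.vni} for '{app_net.tenant}' rides over {underlay.switches}")
```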

To satisfy the requirements of a real-time, edge-driven IoT environment, we must innovate to deliver cost-effective, simple, and unified software-defined networking across both the physical and virtual networks that support the edge. The traditional model for SDN was not built around these requirements. It was built for large hyperscale and enterprise data centers, and it relies on multiple servers and software licenses, incurring costs and consuming space and power. This approach also requires complex integrations to deploy and orchestrate the physical underlay and the virtual overlay, along with storage and compute.

Deploying traditional SDN into edge environments is not an attractive solution. Often there will not be space to deploy multiple servers for management and SDN control of the physical and virtual networks. Furthermore, all the SDN controllers need to be orchestrated by a higher layer “controller of controllers” to synchronize network state, which adds unnecessary latency, cost and complexity.

In some cases, companies also deploy SmartNICs (network interface controllers that also perform networking functions to offload processing from the CPU). SmartNICs handle the packet processing associated with network virtualization without burdening the primary compute, which is better utilized supporting other workloads. In addition, hardware-based taps, probes, and packet brokers are being deployed to support network telemetry and analytics.

Applying the network automation model we built for large centralized data centers will be expensive and cumbersome, as well as space and power inefficient, in edge environments. The industry needs to rethink the approach to edge network automation and deliver a solution designed from the ground up for distributed environments. Ideally this solution does not require additional hardware and software but can leverage the switches and compute that are already being deployed to resource constrained edge environments.

The good news is that a number of companies are developing new approaches that deliver highly distributed SDN control, unifying the physical underlay and virtual overlay while also providing network analytics, all with no additional external hardware or software. These new technologies can utilize, for example, the fairly powerful and underutilized CPU and memory in “white box” Top of Rack (TOR) switches to deliver SDN, network virtualization, network analytics, and a Data Center Gateway (DCGW) router function. In other words, these solutions have been designed with the edge in mind and deliver powerful automation with no extra hardware or additional software licenses, supporting the edge with a cost-effective solution that also saves space and power.

Pluribus Networks delivers open networking solutions based on a unique next-gen SDN fabric for data centers and distributed cloud edge compute environments.

The State of the Edge 2020 report is available to read now for free. We encourage anyone who is interested in edge computing to give it a read and to send feedback to State of the Edge.

State of the Edge 2020: Democratizing Edge Computing Research

By Blog, Open Glossary of Edge Computing, State of the Edge

Written by Matt Trifiro, Open Glossary of Edge Computing TSC Chair, Co-Chair at State of the Edge and CMO at Vapor IO

State of the Edge 2020, a vendor-neutral report supported by The Linux Foundation’s LF Edge, contains unique and in-depth research on edge computing, covering the major trends, drivers, and impacts of the technology. The report provides market forecasting and trend analysis from independent contributors, bringing authoritative research on edge computing to everyone.

Edge Computing and LF Edge

Many believe edge computing will be one of the most transformative technologies of the next decade, and that positioning dense compute, storage, and network resources at the edge of the network will enable new classes of applications and services supporting use cases from life safety to entertainment.

The Linux Foundation’s LF Edge has greatly contributed to the growth of edge computing in the industry, both in terms of technical projects and a deep shared understanding of the concepts and terminology underpinning this new area of technology.

 

One of the key projects within LF Edge is the Open Glossary of Edge Computing. This project seeks to harmonize the terminology used across the industry when discussing edge computing and has been adopted by a number of projects and contributors in the community. These community members recognize that without a common and accurate definition of key terms and concepts, it is much harder to collaborate on challenges.

State of the Edge

The Open Glossary of Edge Computing was originally born as part of the inaugural State of the Edge report in 2018, where an initial version was published and included as part of the report. Shortly after this, the Open Glossary of Edge Computing was adopted as an LF Edge project.

State of the Edge is itself an open and collaborative community of organizations and individuals who are passionate about the future of edge computing. The project looks to advance edge computing within the industry through consensus building, ecosystem development and effective communication. To that end, State of the Edge reports are written and published using contributions from a diverse community of writers and analysts. By including many voices, State of the Edge publications avoid the often incomplete, skewed and overly vendor-driven material and research typically available.

Multiple reports have been published to date, and more are planned for release during 2020, including coverage of topics highly relevant to edge computing, such as 5G networks. In addition, the State of the Edge 2020 report contains the latest version of the Open Glossary of Edge Computing, which reached version 2.0 during 2019.

 

Democratizing Edge Computing Research

The first State of the Edge report in 2018 focused on establishing a baseline of knowledge from across the edge computing industry. This made it possible for readers to accurately assess what edge computing meant for them, their customers and their unique use cases. This first report covered what were many new and often misunderstood concepts, tying them together in a way that enabled more people than ever before to appreciate and understand the edge.

When it came to the State of the Edge 2020 report, following extensive feedback and surveys, the collaborative team found that market forecasting for edge computing was both hard to come by and in high demand. Though forecast models on edge computing exist, they are often proprietary and are not built transparently. Moreover, they are typically locked behind expensive paywalls that limit the number of people who can benefit from them.

By drawing on the expertise of professional researchers and well-regarded contributors, State of the Edge has released its first market forecast, along with a comprehensive narrative that discusses the new trends in edge computing.

The State of the Edge is run like an open source project and publishes all of its reports under a Creative Commons license, making them freely available to anyone who is interested. This approach allows the community to benefit from shared knowledge and valuable research on edge computing, without limiting it to those with money to spend.

Available to Read Now 

The State of the Edge 2020 report is available to read now for free. We encourage anyone who is interested in edge computing to give it a read and to send any feedback to State of the Edge.

LF Edge in 2020: Looking Back and Revving Forward

By Akraino Edge Stack, Baetyl, Blog, EdgeX Foundry, Fledge, Home Edge, Open Glossary of Edge Computing, Project EVE

Written by Melissa Evers-Hood, LF Edge Governing Board Chair 

Dear Community,

Happy New Year! As we kick off 2020, I wanted to send a note of thanks and recognition to each of you for a wonderful 2019, which marked several meaningful accomplishments for this organization. LF Edge was launched in January 2019 with an aim to unify the edge communities across IoT, Telco, Enterprise, and Cloud, providing aligned open source edge frameworks for infrastructure and applications.

Our accomplishments include:

  • EdgeX Foundry has blossomed this year in participation, downloads, and use cases. EdgeX, as folks commonly call it, also graduated to Impact project stage and surpassed 1.5 million container downloads in 2019.
  • Akraino, which also reached the Impact stage this year, is preparing for its second release, with 5 new blueprints for R2 and updates to 9 of the 10 blueprints already released in R1. Most notably, it is broadening its blueprint profile to include new blueprints for Connected Vehicles and AR/VR, truly becoming a viable framework across edge applications.
  • At the Growth Stage, the Open Glossary provides common terminology and ecosystem mapping for the complex edge environment. In 2019, the Glossary Project shipped version 2.0 of the glossary, which was integrated into the 2020 State of the Edge Report. The Glossary Project began the process of helping to standardize terminology across all LF Edge projects, and also launched the LF Edge Landscape Project: https://landscape.lfedge.org/.
  • Also at the Growth Stage, Project EVE enables cloud-native development practices in IoT and edge applications. EVE’s most recent release, 4.5.1 (which was gifted on December 25, 2019), provides a brand new initramfs-based installer, an ACRN tech preview, and ARM/HiKey support.
  • The Home Edge project, targeted to enable a home edge computing framework, announced their Baobab release in November. The Home Edge Project has initiated cross-project collaboration with EdgeX Foundry (secure data storage) and Project EVE (containerized OS).
  • We also added 2 additional projects this year.
    • Baetyl, which provides an open source edge computing platform.
    • Fledge, which is an open source framework and community for the industrial edge focused on critical operations, predictive maintenance, situational awareness, and safety. Fledge has recently begun cross-project collaboration with Project EVE and Akraino, with more information available here.
  • Our reach has broadened with 9k articles, almost 50k new users, and 6.7M social media impressions.

I am excited about the work ahead in 2020, especially as we celebrate our one year anniversary this month. We laid the foundation last year – offered a solution to unite the various edge communities – and now, with your support and contributions, we’re ready to move to the next phase.

LF Edge is co-hosting Open Networking & Edge Summit in April and our teams are working hard on several cross-project demos and solutions. We’re planning meetups and other F2F opportunities at the show, so this conference will be a must.

Our focus as a community will be to continue to expand our base of developers and end users. We will do this by fostering agile communities that collaborate openly, create secure, updateable, production-ready code, and work together as one. We also expect new projects to join and integrate. As we walk into this bright future, working as a unified body will demonstrate that the fastest path to edge products is through LF Edge.

I look forward to working with each of you in ‘20 and seeing you in Los Angeles this April at ONES!

Melissa

The Linux Foundation’s LF Edge Releases V2.0 of the Open Glossary of Edge Computing

By Announcement, Open Glossary of Edge Computing

 

New version aligns a year of terminology and lexical updates across multiple edge computing projects

SAN FRANCISCO – August 29, 2019 – LF Edge, an umbrella organization within the Linux Foundation that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced Version 2.0 of its Open Glossary of Edge Computing. This latest version of the Open Glossary adds a year of updates from the edge community while further iterating vocabulary across the entirety of LF Edge projects.

The Open Glossary of Edge Computing was created in 2018 as a vehicle to organize a shared, vendor-neutral vocabulary for edge computing to improve communication and accelerate innovation in the field. Launched as part of the first annual State of the Edge report, the Open Glossary is now an open source project under the LF Edge umbrella. The Open Glossary 2.0 is available in a publicly accessible GitHub repo, and the new version will be included in the State of the Edge 2019 report, to be released later this fall.

“The Open Glossary of Edge Computing exemplifies a community-driven process to document and refine the language around edge computing,” said Arpit Joshipura, general manager, Networking, Edge, and IoT, the Linux Foundation. “As the diversity of LF Edge increases, we want frameworks in place that make it easy to talk about edge computing in consistent and less-biased ways. It’s imperative the community comes together to converge on a shared vocabulary, as it will play a substantial role in how our industry discusses and defines the next-generation internet.”

“Now that we’ve reached the v2.0 milestone, the next big task for the Open Glossary will be to recommend standardizing the lexicon across all of the LF Edge projects,” said Matt Trifiro, CMO of Vapor IO and chair of the Open Glossary of Edge Computing. “The LF Edge projects span the continuum from cloud to device. Each project has evolved organically, often with its own vocabulary, and our goal is to both respect each project’s history and make it practical and possible to talk coherently across projects using terms and phrases that mean the same thing. This will be a long-term community effort.”

By cataloging the community’s shared lexicon, the Open Glossary fosters an adroit exchange of information and helps to illuminate hidden bias in discussions of edge computing. Moreover, because the project is freely-licensed, individuals and organizations may publish and incorporate the glossary into their own works. The Open Glossary is presented under the Creative Commons Attribution-ShareAlike 4.0 International license (CC-BY-SA-4.0) in order to encourage use and adoption.

To increase the diversity of viewpoints, the Open Glossary project welcomes contributors from everywhere, offering a publicly-accessible GitHub repository where individuals can join the community, present ideas, participate in discussions, and offer their suggestions and improvements to the glossary. By combining many viewpoints in a transparent process, the Open Glossary of Edge Computing presents a resource that can be used by journalists, analysts, vendors and practitioners.


About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

About the Open Glossary of Edge Computing

The Linux Foundation’s Open Glossary of Edge Computing project curates and defines terms related to the field of edge computing, collecting common and accepted definitions into an openly licensed repository. The Open Glossary project leverages a diverse community of contributors to collaborate on a shared lexicon, offering an organization and vendor-neutral platform for advancing a common understanding of edge computing and the next generation internet. The Open Glossary is governed as an open source project using a transparent and meritocratic process. The glossary is freely licensed under the terms of the Creative Commons Attribution-ShareAlike 4.0 International license (CC-BY-SA-4.0), in order to encourage use and adoption. For more information, see https://www.lfedge.org/projects/openglossary/.

About State of the Edge

The State of the Edge (http://stateoftheedge.com) is a member-supported research organization that produces free reports on edge computing and was the original creator of the Open Glossary of Edge Computing, which was donated to The Linux Foundation. Version 2.0 of the Open Glossary of Edge Computing is being incorporated into the 2019 report. The State of the Edge welcomes additional participants, contributors and supporters. If you have an interest in participating in upcoming reports or submitting a guest post to the State of the Edge Blog, feel free to reach out by emailing info@stateoftheedge.com.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

 

What to Expect from the Open Glossary of Edge Computing in 2019

By Blog, Open Glossary of Edge Computing

By Alex Marcham, LF Edge and State of the Edge Contributor and Technical Marketing Manager at Vapor IO, and Matt Trifiro, Co-Chair at State of the Edge; Chair at Open Glossary of Edge Computing and CMO at Vapor IO

The Open Glossary of Edge Computing began as a utilitarian appendix to the 2018 State of the Edge report. It had humble goals: to cut through the morass of vendor- and pundit-driven definitions around edge computing and, instead, deliver a crisply-defined common lexicon that would enhance understanding and accelerate conversations around all things edge.

Very quickly, it became clear that the Open Glossary could be a powerful and unifying force in the fledgling world of edge computing and that no one entity should own it. Instead, everybody should own it. Thus began our partnership with The Linux Foundation. We converted the glossary into a GitHub repo, placed it under a Creative Commons license, and created an open source project around it.

In January 2019, the Open Glossary became one of the founding projects of LF Edge, the Linux Foundation’s umbrella group for edge computing projects. The Open Glossary now plays a critical role that spans all of the LF Edge projects with its mission to collaborate around a single point of reference for edge computing terminology. This community-driven lexicon helps mitigate confusing marketing buzzwords by offering a foundation of clear and well-understood words and phrases that can be used by everyone.

In 2019, the momentum of the Open Glossary project will continue unabated. Open Glossary has four main projects this year, building on its successes in 2018:

Grow the Community

The Open Glossary project depends on open collaboration, and the community is actively building engagement by seeking out participation from key stakeholder groups in edge computing. Within The Linux Foundation itself, the Open Glossary team has sought input from not only all of the LF Edge projects but also adjacent projects, such as the CNCF’s Kubernetes IoT and Edge working group. In addition, the Open Glossary has formed alliances with other edge computing groups and foundations, including the TIA, iMasons, and the Open19 Foundation. Partnering with other non-profits and consortia will help the project grow its base of passionate collaborators who are dedicated to expanding and improving the Open Glossary.

The Taxonomy Project

Readers of the Open Glossary want more than mere definitions; they want to know how the constituent parts fit together as a whole. To answer this request, the Open Glossary has begun the Taxonomy Project, a working group that seeks to create a classification system for edge computing across three core areas:

  • Edge infrastructure
  • Edge devices
  • Edge software

The Taxonomy Project will draw upon the expertise of subject matter experts in each of the core areas to define and hopefully also visualize the complex relationships between the different components in edge computing. The Taxonomy Project WG is being led by Alex Marcham and the Open Glossary will publish the first taxonomies in Q3 2019.

The Edge Computing Landscape Map

At the request of The Linux Foundation, the State of the Edge project also contributed its Edge Computing Landscape Map, which is now a working group under the auspices of the Open Glossary. The Landscape WG is led by Wesley Reisz and has just gotten started. The group holds regular weekly meetings (Tuesdays at noon Pacific Time) and is actively working on refining the categories of the LF Edge Landscape. It seeks wider participation and will be asking for help to test and validate the proposed categories.

You can see the landscape map evolve at https://landscape.lfedge.org and join the mailing list here.

Glossary 2.0

Edge computing is a rapidly evolving area of development, deployment and discussion, and the Open Glossary contributors aim to keep the glossary up to date with the latest developments in industry and academia, driven by contributions from our project members. During 2019, the Open Glossary team aims to release its 2.0 version of the Open Glossary, which will be a timely update that continues to form the basis for clear and concise communication on the edge.

To see open issues and pull requests or to make contributions, visit the GitHub repo. Go here to join the mailing list.

In Summary

2019 is shaping up to be a pivotal year for edge technologies, their proponents and of course their end users. The Open Glossary project, as part of The Linux Foundation, is dedicated to continuing its unique mission of bringing a single, open and definitive lingua franca to the world of edge computing. We hope you’ll join us during 2019 as we focus on these goals for the project, and look forward to your contributions.

Contributors can get involved with the Open Glossary project by joining the mailing lists and contributing and commenting on the GitHub repository for the project.

Alex Marcham is a technical marketing manager at Vapor IO. He is one of the primary contributors to the Open Glossary and leads the Taxonomy Project working group. Matt Trifiro is CMO of Vapor IO and is the Chair of the Open Glossary project.