Written by Rex St. John, EGX Developer Relations at NVIDIA
This article originally ran on Rex’s LinkedIn page. For more content like this, connect with him on LinkedIn.
After building Edge Computing ecosystems at Intel and Arm, I have recently made the switch to working on Edge Computing at NVIDIA. Several people have asked me to share my perspective and learnings, so I am starting this informal, personal series on the topic. All opinions shared here are my own personal opinions and not those of my employer.
Objective
In this article, I will share two reasons why some experts in the industry are investing in hypervisor technology, as well as two interesting open source edge hypervisor solutions to be aware of. For edge definition nitpickers (you know who you are), I am going to be referring to the “Device Edge” here. There are many other definitions of “Edge”; if you are curious, read this LF Edge white paper.
The Hovercraft Analogy
For those of you who are unfamiliar, a hypervisor is kind of like a hovercraft that your programs can sit inside. Like hovercrafts, hypervisors can provide protective cushions which allow your applications to smoothly transition from one device to another, shielding the occupants from the rugged nature of the terrain below. With hypervisors, the bumps in the terrain (differences between hardware devices), are minimized and mobility of the application is increased.
Benefits of Hypervisors
Benefits of hypervisors include security, portability and a reduced need to perform cumbersome customization to run on specific hardware. Hypervisors also allow a device to concurrently run multiple, completely different operating systems. They can also help partition applications from one another for security and reliability purposes. You can read more about hypervisors here. They are frequently compared to, used together with, or even compete against containers for similar use cases, though they historically require more processing overhead to run.
Two Reasons Why Some (Very Smart) Folks Are Choosing Hypervisors For The Edge
A core challenge in Edge Computing is the extreme diversity in hardware that applications are expected to run on. This, in turn, creates challenges in producing secure, maintainable, scalable applications capable of running across all possible targets.
Unlike their heavier datacenter-based predecessors, lightweight hypervisors offer the benefits of traditional hypervisors while also respecting the limited resources found at the device edge. Here are two reasons why some in the industry are taking a careful look at edge hypervisors.
Reason 1: Avoiding The Complexity And Overhead of Kubernetes
One potential reason for taking a hypervisor-based approach at the edge is that there may be downsides to pursuing Kubernetes for smaller clusters. These include the difficulty of building and managing a team that can properly set up and scale a real-world Kubernetes application, given the overhead and complexity of Kubernetes itself. In some cases, such as running a cluster of 4-5 nodes, it might be desirable to use a more streamlined approach involving a controller and lightweight hypervisors. This is the approach taken by EVE, described in more detail below.
Reason 2: Ease Of Modernizing Brown-Field Industrial IT
Another pressing reason for choosing edge hypervisors is that “brown-field” installations of existing edge hardware are extremely expensive to upgrade to follow modern IT “best practices.” Hypervisors provide a path forward that does not involve rewriting old systems from scratch, as the code running on older machines can frequently be shifted into a hypervisor and neatly managed and secured from there (a process referred to as “Workload Consolidation”).
Let’s take a look at two interesting examples of edge hypervisors to understand further.
Hypervisor #1: Project ACRN
The first edge hypervisor we will look at is called ACRN, which is a project hosted by the Linux Foundation. ACRN has a well documented architecture and offers a wide range of capabilities and configurations depending on the situation or desired outcome.
ACRN seeks to support industrial use cases by offering a specific partitioning between high-reliability processes and those which do not need preferential security and processing priority. ACRN accomplishes this separation by specifying a regime for sandboxing the different workloads running on the device. I recommend keeping an eye on ACRN as it seems to have significant industry support. Supported platforms currently tend to be strongly x86-based.
Hypervisor #2: EVE (part of LF Edge)
Also a project hosted at the Linux Foundation, EVE differs from ACRN in that it belongs to the LF Edge project cluster. Unlike ACRN, EVE also tends to be more agnostic about supported devices and architectures. Following the instructions hosted on the EVE GitHub page, I was able to build and run it on a Raspberry Pi 4 within the space of ten minutes, for example.
In terms of design philosophy, EVE is positioning itself as the “Android of edge devices.” You can learn more about EVE by watching this recent webinar featuring the Co-Founder of Zededa, Roman Shaposhnik. EVE makes use of a “Controller” structure which provisions and manages edge nodes to simplify the overhead of operating an edge cluster.
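To make the controller idea concrete, below is a minimal Python sketch of the general pull-based pattern it implies: a central controller holds the desired state for each edge node, and an agent on the node periodically fetches its configuration and reports status back. The URL, endpoint paths and payload fields here are hypothetical placeholders for illustration only, not EVE's or ZEDEDA's actual API.

```python
# Minimal sketch of a pull-based controller pattern, assuming a hypothetical
# REST controller at CONTROLLER_URL. Endpoint paths and payload fields are
# illustrative placeholders; EVE's real controller interface differs.
import time
import requests

CONTROLLER_URL = "https://controller.example.com/api/v1"  # hypothetical
DEVICE_SERIAL = "rpi4-0001"                               # hypothetical node ID


def fetch_desired_config(serial: str) -> dict:
    """Ask the controller what workloads this node should be running."""
    resp = requests.get(f"{CONTROLLER_URL}/devices/{serial}/config", timeout=10)
    resp.raise_for_status()
    return resp.json()


def report_status(serial: str, status: dict) -> None:
    """Send the node's current state back so the controller can reconcile."""
    requests.post(f"{CONTROLLER_URL}/devices/{serial}/status",
                  json=status, timeout=10).raise_for_status()


if __name__ == "__main__":
    while True:
        config = fetch_desired_config(DEVICE_SERIAL)
        # A real agent would reconcile running workloads against `config` here.
        report_status(DEVICE_SERIAL, {"workloads": config.get("workloads", []),
                                      "healthy": True})
        time.sleep(60)  # edge nodes typically poll the controller, not vice versa
```

The design point is that the node pulls its configuration rather than exposing an inbound management interface, which suits devices sitting behind firewalls or NAT at the edge.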
Why an Open, Trusted Edge is Key to Realizing the True Potential of Digital Transformation
By Jason Shepherd, LF Edge Governing Board member and VP of Ecosystem at ZEDEDA
This content originally ran on the ZEDEDA Medium Blog – click here for more content like this.
In part one of this series, I walked through various approaches to ecosystems and highlighted how business value tends to find a natural equilibrium across stakeholders. In this second installment I’ll walk through the importance of an open edge for scaling ecosystems and realizing the true potential of digital transformation, as well as providing some tips on building ecosystems that I’ve picked up over the past years.
Enabling increasingly connected Intranets of Things
Imagine if the overall internet was built as a closed ecosystem, controlled by a small set of organizations, much less one. Of course, there are browsing restrictions placed at a company level and in some countries, but the internet simply wouldn’t have made the same massive impact on society without fundamental openness and interoperability.
All data is created at the edge, whether it be from user-centric or IoT devices. A few years ago, the number of devices on the internet surpassed the global population, and the number of IoT devices is expected to continue growing at an exponential rate. This will not only unlock new paths to value, but also necessitate an increase in edge computing.
However, as it turns out, the term “Internet of Things” is actually a bit of a misnomer. It’s really about a series of increasingly connected Intranets of Things. Starting with simple examples like the story of Fred and Phil from part one of this series, ecosystems will get increasingly larger and more interconnected as the value to do so exceeds the complexity and risk. So how do we carry this out at scale?
The importance of an open edge
Previously, I’ve outlined the unique challenges that need to be addressed when deploying applications closer to devices in the physical world, outside the confines of a traditional data center. Edge and IoT solutions inherently require a diverse collection of ingredients and expertise to deploy, and over the past five years the emerging market has attempted to address this fragmentation with a dizzying landscape of proprietary platforms — each with wildly different methods for data collection, security and management.
That said, having hundreds of closed, siloed platforms and associated ecosystems is definitely not a path to scale. As with the internet itself, it is important to have an open, consistent infrastructure foundation for IoT and edge computing, and from there companies can decide how open or closed they want to build their business ecosystems on top.
The diversity of the IoT edge makes it impractical to develop one catch-all standard to bind everything together. Open source software frameworks are an excellent way to bridge together various ingredients, unifying rather than reinventing standards, accommodating legacy installations, and enabling developers to focus on value creation. LF Edge is an example of a collaborative effort to build an open edge computing foundation in partnership with other open source and standards-oriented initiatives. The key to this being the base for a truly open edge ecosystem is the vendor-neutral governance offered by the Linux Foundation.
In this recent two-part blog on Edge AI I highlighted that the winners in the end will be the ones creating differentiated value through their domain knowledge, building necessarily unique software and hardware and offering great services — not those that are reinventing the middle over and over again. AI models for common tasks such as identifying a person’s demographic, detecting a license plate number or determining if an object is a water bottle or weapon will be commonplace, meanwhile there will always be room to differentiate with AI when specific industry context is in play.
Trust is essential
I’ve written in the past about the “Holy Grail of Digital” being selling or sharing data, resources and services across total strangers, all while maintaining privacy on your terms. This is the ultimate scale factor, but we need to turn to technology for help because it simply isn’t feasible to take people out to dinner fast enough one by one to build the necessary trust relationships.
Consumers are often comfortable pledging allegiance to specific brands and giving up a little privacy as long as they trust the provider and get value. However, in order to build more complex ecosystem relationships that span private and public boundaries at scale, we not only need open interoperability but must also ensure that no single entity owns the trust.
As such, we also need to collaborate on a technology foundation that automates the establishment of trust as data flows across heterogeneous systems. We’re seeing the industry increasingly step up here, from distributed ledger efforts like IOTA and Hyperledger that provide smart contracts to the Trust Over IP Foundation and the emerging Project Alvarium which aims to build out the concept of data confidence fabrics by layering trust insertion technologies with a system-based approach. Both the ToIP Foundation and Alvarium are focused on facilitating trust in both machine-generated data and across human relationships.
While these efforts don’t replace the need to build business relationships (and take people out to dinner!), they will provide necessary acceleration. Moreover, beyond helping us scale complex ecosystems these tools can also aid in combating the increasing issue with deepfakes and ensuring ethical AI solutions, in addition to accelerating workload consolidation at the edge and dealing with regulatory requirements like GDPR… but these are blog topics for another time!
Tips on building an ecosystem (hint, hint: domain knowledge rules)
When I started building the IoT ecosystem with the team at Dell back in 2015 it was clear that the market was going to go vertical before going horizontal. This is a typical pattern in any new market and proprietary offers often get the initial traction while solutions based on an open foundation always win in the end when it comes to sheer scale (recall the Apple — Android discussion from part one…).
So, our ecosystem strategy starting in 2015 was to go super broad before going deep, partnering widely and enabling the “cream to rise to the top”. I joked with the team back then that we were going to do “Tinder” and then “D-Harmony”. Sure enough, the partners that had the most initial traction were those that were laser-focused on one use case, meanwhile the horizontal “peanut-butter platforms” were stalling. Case in point, if you do a little bit of everything you rarely do one thing really well.
But it was really these vertically-focused platforms’ domain knowledge that resonated with a specific customer need and desired outcome. They built their technology platforms to get to data, but their real value was that domain knowledge. Over time, as we get to a more consistent, open foundation for Edge and IoT solutions, these providers will either need to pivot to being more like system integrators, or build differentiated software and/or hardware. So, in 2017 we purposely switched from going broad to being more deliberate in partnerships, focusing on partners that had realized the importance of separating domain knowledge from the underlying technology foundation and helping with matchmaking across the partner landscape. Enter “D-Harmony”!
In closing
Deploying IoT and edge computing solutions takes a village and it’s important to establish an entourage of partners that have a “go to dance move” rather than working with those that are trying to do too much and as a result not doing anything particularly well. Five years have passed and a lot of providers have felt the pain of trying to own everything and have since realized the importance of having focus and establishing meaningful partnerships. So, now I joke with the team at ZEDEDA as we build up our ecosystem of market-leading pure-play solutions and domain experts that we’re going straight to “Z-Harmony”!
Thanks for reading. In the third and final part of this series, I’ll provide more insights on how we get to “advanced class” and more on what we’re doing at ZEDEDA to help build the necessarily open foundation to facilitate ecosystem scale. In the meantime, feel free to drop me a line with any comments or questions. @defshepherd
Written By Jason Shepherd, LF Edge Governing Board Member and VP of Ecosystem at ZEDEDA
This blog originally ran on the ZEDEDA Medium blog. Click here for more content like this.
In my 2020 predictions for Edge Computing piece earlier this year, I highlighted an increase in collaboration on interoperability, between different IT and OT stakeholders and building trust in data. In this three-part blog, I’ll expand on some related thoughts around building ecosystems, including various technical and business considerations and the importance of an open underlying foundation at the network edge.
For more on this topic, make sure to register and tune in tomorrow, July 28 for Stacey Higginbotham’s virtual event Everything is Connected — How to Build Products and Ecosystems for a Connected Age! Stacey always puts on a great event and this will be a great series of panel discussions on this important topic. I’m on a roundtable about building partnerships for data sharing and connected services.
The new product mindset
I’ve previously written about the “new product mindset” to highlight the importance of developing products and services with the cumulative lifetime value they will generate in mind, compared to just the immediate value delivered on the day of launch. In the rapidly-evolving technology landscape, competitive advantage is based on the ability to innovate rapidly and continuously improve an offering through software- and, increasingly, hardware-defined experiences, not to mention better services.
Related to this is the importance of ecosystems. Whether you build one or join one, it’s more important than ever in this world of digital transformation to be part of an ecosystem to maintain a competitive edge. This includes consideration on two vectors — both horizontal for a solid and scalable foundation and vertical for domain-specific knowledge that drives targeted outcomes. Ultimately, it’s about creating a network effect for maximizing value, both for customers and the bottom line.
Approaches to building ecosystems
An ecosystem in the business world can take various forms. In simplest terms it could be a collection of products and services that are curated to be complementary to a central provider’s offering. Alternatively, it could consist of partners working towards a common goal, with each benefitting from the sum of the parts. Another ecosystem could take the form of an operations-centric supply chain or a group of partners targeting common customers with a joint go-to-market strategy.
Approaches to ecosystem development span a spectrum of closed to open philosophies. Closed ecosystems are based on closely-governed relationships, proprietary designs, and, in the case of software, proprietary APIs. Sometimes referred to as “walled gardens,” the tight control of closed ecosystems can provide a great customer experience. However, this tends to come with a premium cost and less choice. Apple is a widely cited example of a company that has taken a more closed ecosystem approach.
Then there are “closed-open” approaches which have APIs and tools that you can openly program to — as long as you pay access fees or are otherwise advancing the sponsoring company’s base offer. Amazon’s “Works with Alexa” program is an example of this model, and a big driver in how they emerged as a smart home leader — they spent many years establishing relationships and trust with a broad consumer base. This foundation creates a massive network effect, and in turn, Amazon makes the bulk of their revenue from the Alexa ecosystem by selling more consumer goods and furthering customer relationships — much more so than on the Alexa-enabled devices themselves.
Scaling through open source
To fully grasp the business trade-offs of closed vs. open ecosystems, consider Apple’s iOS and Android. While Apple provides a curated experience, the open source Android OS provides more choice and, as a result, has over 85 percent of the global mobile OS market share. Android device makers may not be able to collect as much of a premium as Apple because of the open competition and difficulty in differentiating with both hardware and a better user experience by skinning the stock Android UI. However, a key upside element is the sheer volume factor.
Plus, that’s not to say there still isn’t an opportunity to differentiate with Android as the base of your ecosystem. As an example, Samsung has built a strong ecosystem approach through their Galaxy brand and has recently branched out even further by announcing partnerships with leading content providers to offer a better-together story with their products. We’ve seen this play before — if you’re old enough to have rented video tapes at a brick-and-mortar store (be kind, rewind!), you’ll recall that VHS was an inferior technology to Beta but it won out after a short format battle because the backers of VHS got the most movie studios onboard.
It’s ultimately about taking a holistic approach, and these days open source software like Android is an essential driver of ecosystem enablement and scale. On top of the benefit of helping developers reduce “undifferentiated heavy lifting,” open source collaboration provides an excellent foundation for creating a network effect that multiplies value creation.
After all, Google didn’t purchase the startup Android in 2005 (for just $50M!) and seed it into the market just to be altruistic — the result was a driver for a massive footprint that creates revenue through outlets including Google’s search engine and the Play store. Google doesn’t reveal how much Android in particular contributes to their bottom line, but in a 2015 court battle with Oracle over claims of a Java-related patent infringement it was speculated that it was already well over $30B a year. Further, reaching this user base to drive more ad revenue is now free for Google, compared to them reportedly paying Apple $1B in 2014 for making their search engine the default for the iPhone.
Striking a balance between risk and reward
There’s always some degree of a closed approach when it comes to resulting commercial offers, but an open source foundation undoubtedly provides the most flexibility, transparency and scalability over time. However, it’s not just about the technology — also key to ecosystems is striking a balance between risk and reward.
Ecosystems in the natural world are composed of organisms and their environments — this highly complex network finds an equilibrium based on ongoing interactions. The quest for ecosystem equilibrium carries over to the business world, and especially holds true for IoT solutions at the edge due to the inherent relationship of actions taken based on data from tangible events occurring in the physical world.
Several years back, I was working with a partner that piloted an IoT solution for a small farmer, “Fred,” who grows microgreens in greenhouses outside of the Boston area. In the winter, Fred used propane-based heaters to maintain the temperature in his greenhouses and if these heaters went out during a freeze he would lose thousands of dollars in crops overnight. Meanwhile, “Phil” was his propane supplier. Phil was motivated to let his customers’ propane tanks run almost bone dry before driving his truck out to fill them so he could maximize his profit on the trip.
As part of the overall installation, we instrumented Fred’s propane tank with a wireless sensor and gave both Fred and Phil visibility via mobile apps. With this new level of visibility, Fred started freaking out any time the tank wasn’t close to full and Phil would happily watch it drop down to near empty. This created some churn up front but after a while it turned out that both were satisfied if the propane tank averaged 40% full. Side note: Fred was initially hesitant about the whole endeavor but when we returned to check in a year later, you couldn’t peel him away from his phone monitoring all of his farming operations.
Compounding value through the network effect
Where it starts getting interesting is compounding value-add on top of foundational infrastructure. Ride sharing is an example of an IoT solution (with cars representing the “things”) that has created an ecosystem network effect in the form of supplemental delivery services by providers (such as Uber Eats) partnering with various businesses. There’s also an indirect drag effect for ecosystems, for example Airbnb has created an opportunity for homeowners to rent out their properties and this has produced a network effect for drop-in IoT solutions that simplify tasks for these new landlords such as remotely ensuring security and monitoring for issues such as water leaks.
The suppliers and workers in a construction operation also represent an ecosystem, and here too it’s important to strike a balance. Years back, I worked with a provider of an IoT solution that offered cheap passive sensors that were dropped into the concrete forms during the pouring of each floor of a building. Combined with a local gateway and software, these sensors told the crew exactly when the concrete was sufficiently cured so they could pull the forms faster.
While the company’s claims of being able to pull forms 30% faster sounded great on the surface, it didn’t always translate to a 30% reduction in overall construction time because other logistics issues often then became the bottleneck. This goes beyond the supply chain — one of the biggest variables construction companies face is their workforce, and people represent yet another complex layer in an overall ecosystem. And beyond construction workers, there are many other stakeholders during the building process spanning financial institutions, insurance carriers, real estate brokers, and so forth. Once construction is complete then come tenants, landlords, building owners, service providers and more. All of these factors represent a complex ecosystem that continues to find a natural equilibrium over time as many variables inevitably change.
In a final ecosystem example, while IoT solutions can drive entirely new services and experiences in a smart city, it’s especially important to balance privacy with value creation when dealing with crossing public and private domains. A city’s internally-focused solution for improving the efficiency of their waste services isn’t necessarily controversial, meanwhile if a city was to start tracking citizens’ individual recycling behaviors with sensors this is very likely to be perceived as an encroachment on civil liberties. I’m a big recycler (and composter), and like many people I’ll give up a little privacy if I receive significant value, but there’s always a fine line.
In closing
The need to strike an equilibrium across business ecosystems will be commonplace in our increasingly connected, data-driven world. These examples of the interesting dynamics that can take place when developing ecosystems highlight how it’s critical to have the right foundation and holistic approach to enable more complex interactions and compounding value, both in the technical sense and to balance risk and privacy with value created for all parties involved.
Stay tuned for part two of this series in which I’ll walk through the importance of having an open foundation at the network edge to provide maximum flexibility and scale for ecosystem development.
Written by Gavin Lu, LF Edge member, EdgeX Foundry China Project Lead and R&D Director in the VMware Office of the CTO
As an industry leader with vast experience and knowledge, Gavin has been writing a series of articles focused on edge computing. These articles are posted on his personal blog and are posted here with his permission. To read more content from Gavin, visit his website.
Introduction
The previous article introduced a solution for cloud-managed virtualized devices. This article describes Project Nebula, which provides unified management of containerized devices, edge applications and data-analysis cloud services.
Nebula Architecture
Project Nebula is designed based on the following key ideas:
Agnostic to device CPU architecture, supporting both x86 and ARM;
Agnostic to edge application frameworks, supporting EdgeX Foundry and other frameworks that can be packaged and run;
Agnostic to data analytics services, supporting on-premise and cloud deployment;
Support deployments from small to large scale;
Support an end-to-end, multi-tenant operation model from device to cloud.
EdgeX Foundry Architecture
Nebula supports the EdgeX Foundry framework, and we have already published a live test bed at https://18.189.42.126/. Those interested in Nebula can contact yixingj@vmware.com to register for a trial and receive installation and user guides with detailed information.
Nebula Demo
Installation
Nebula is designed as a containerized micro-service architecture and is distributed by default in OVA format. Similar to the Pallas architecture introduced in the previous article, although the Nebula package is encapsulated in an OVA, it does not depend on any specific virtualization infrastructure or cloud platform. Technically, it could be converted to other formats or installed on any cloud platform that supports the OVA format.
The basic resource requirements of Nebula are:
CPU: 2 virtual CPU cores
Memory: 8GB
Storage: 150GB
The installation process is similar to that of any other OVA, and users can log in as the administrator after it completes.
Nebula Service Console
Vendor Portal
After the installation is complete, users can log in to the vendor portal as an administrator, using the address shown in the VM console, and perform user management.
Nebula Management Portal
In Nebula, edge application services are defined as follows: a Service can contain multiple Versions, and a Version contains multiple Service Components.
Edge Service Hierarchy
For each service created, it is necessary to specify parameters and resource requirements such as version, CPU platform, memory, storage and network, to facilitate verification throughout full life-cycle management.
Vendors can upload a set of EdgeX Foundry applications packaged as container images, and define categories, dependencies between containers, resource parameters, startup order, and the parameters of connected data-analysis cloud services.
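To make this service model concrete, here is an illustrative Python data model of the Service / Version / Service Component hierarchy and the per-component attributes described above. The class and field names are my own assumptions for illustration; they are not Nebula’s actual schema.

```python
# Illustrative data model: a Service contains multiple Versions, and a Version
# contains multiple Service Components. Field names and image tags are
# assumptions for illustration, not Nebula's actual schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ServiceComponent:
    name: str                 # e.g. one EdgeX Foundry microservice
    image: str                # container image reference (illustrative tag)
    cpu_arch: str             # "x86_64" or "arm64"
    memory_mb: int
    storage_gb: int
    startup_order: int        # position in the component start sequence
    depends_on: List[str] = field(default_factory=list)


@dataclass
class Version:
    version: str              # e.g. "1.0.0"
    components: List[ServiceComponent] = field(default_factory=list)


@dataclass
class Service:
    name: str
    versions: List[Version] = field(default_factory=list)


# Example: one edge service with a single published version.
svc = Service(
    name="edgex-demo",
    versions=[Version(
        version="1.0.0",
        components=[
            ServiceComponent("core-data", "edgexfoundry/core-data:2.0",
                             "arm64", 512, 2, startup_order=1),
            ServiceComponent("device-virtual", "edgexfoundry/device-virtual:2.0",
                             "arm64", 256, 1, startup_order=2,
                             depends_on=["core-data"]),
        ],
    )],
)
```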
After the release, users can see and deploy these edge services.
Device Registration
Before users actually deploy EdgeX Foundry applications, they must first register the devices they will use to their Nebula accounts.
Users need to download the Nebula agent program, nebulacli.tar, and run it on the device to complete the registration. This registration step can be performed manually, or it can be automated in batch operations for OEMs.
After completing device registration, users can install and manage EdgeX Foundry or other edge applications that vendors have released in advance on the Nebula service. Users can find the appropriate applications in the catalog.
After selecting an application, users can specify the deployment’s parameter settings in a drag-and-drop wizard; these map to the parameter values previously defined by the vendor.
After all parameters are set, the actual deployment can be carried out, either in batch or repeatedly to multiple devices. After deploying EdgeX Foundry applications, users can monitor device resources and application runtime status in real time.
Nebula provides complete RESTful API documentation, with which users can automate operations to deploy EdgeX Foundry applications at large scale.
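As a rough illustration of what such automation could look like, the sketch below loops over a batch of registered devices and requests the same released edge service on each via a REST call. The base URL, endpoint path, field names and token are hypothetical placeholders; consult the actual Nebula API documentation for the real interface.

```python
# Sketch of automating a large-scale rollout against a REST API like the one
# Nebula documents. Endpoint paths, field names and the credential below are
# hypothetical placeholders, not Nebula's documented API.
import requests

BASE_URL = "https://nebula.example.com/api"           # hypothetical
session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"   # placeholder credential


def deploy(device_id: str, service: str, version: str, params: dict) -> None:
    """Request deployment of one released edge service to one registered device."""
    resp = session.post(f"{BASE_URL}/deployments",
                        json={"device": device_id, "service": service,
                              "version": version, "parameters": params},
                        timeout=30)
    resp.raise_for_status()


# Roll the same EdgeX Foundry release out to a batch of registered devices.
devices = ["dev-001", "dev-002", "dev-003"]
for dev in devices:
    deploy(dev, service="edgex-demo", version="1.0.0",
           params={"mqtt_broker": "tcp://broker.example.com:1883"})
```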
Next
From the second article through this one, I introduced the basic methods of building and managing virtualized and containerized devices from the cloud. However, I did not answer the question of how to deal with single-point device failure. In contrast to the traditionally inflexible and inefficient approaches of full redundancy or an external NAS, the next article will introduce device clusters built on a hyper-converged architecture.
Written by Gavin Lu, LF Edge member, EdgeX Foundry China Project Lead and R&D Director in the VMware Office of the CTO
Exploration and Practices of Edge Computing
Chapter 1:
Cloud computing and edge computing
After nearly ten years of rapid growth, cloud computing has become a widely accepted and widely used centralized computing model across industries around the world. In recent years, with the rise of “new infrastructure construction” such as big data, artificial intelligence, 5G and the Industrial Internet, demand for edge computing has soared. Telecom carriers, public cloud service providers and enterprise users in various industries all hope to create, in some form, an edge computing solution best suited to their own scenarios.
Cloud edge collaboration
Whether it is Mobile Edge Computing (MEC), Cloud Edge, or factory-side Device Edge, users want an edge computing platform that cooperates with the cloud platform. Cloud-edge collaboration is not only about opening up the data channel at run time, but also about overall collaboration across the entire life cycle and the full technology stack. Given the already highly fragmented landscape of industry-specific edge computing platforms, achieving full-stack integration with multiple cloud computing platforms is undoubtedly even more challenging.
The dilemma of edge computing
At the bottom of the edge computing platform is the infrastructure, or device management, layer. Due to the natural characteristics of the edge computing model, available device resources are usually quite constrained. Whether in hardware specifications such as CPU, memory, storage, network and accelerators, or in software specifications such as the OS and application frameworks, they are far from comparable to cloud computing platforms or even ordinary PCs. On the other hand, edge applications running on devices often have very different CPU and OS requirements due to historical reasons and vendor heterogeneity. This adds further pressure in terms of space, time, capital and other costs.
Containerization and virtualization
In order to achieve cloud-edge collaboration and solve the problems of device management and edge application management, the most common practice across industries today is to run containerized applications on devices and manage them from cloud platforms. The advantage is that this makes full use of proven cloud computing technologies, platforms, personnel and skills. The disadvantage is that it is better suited to modern edge applications; containerizing legacy systems running different OSes is much more difficult. A representative of the containerization approach is the EdgeX Foundry edge computing framework hosted under the LF Edge umbrella.
For legacy systems with different OSes, the realistic way is to package the application in a virtual machine and run it on a hypervisor platform on the device side. Virtualization platforms for edge applications may be further integrated with containerized platforms to create a unified operating mechanism across OSes. For example, the ACRN project under the Linux Foundation and the EVE project under LF Edge are virtualization platforms designed specifically for devices.
Edge computing operation and monitoring
Whether through containerization, virtualization or a combination of the two, the operation of edge computing will ultimately be carried out from the cloud side, and only from the cloud side. Otherwise, for large-scale production deployments, the accumulation of efficiency, security, cost and other issues will become an inevitable nightmare. So from the perspective of operation and management, although these devices are not necessarily located in the cloud, because they are all managed from the cloud they are operated and maintained in a manner similar to cloud computing, forming a “device cloud” in that sense.
As an industry leader with vast experience and knowledge, Gavin has been writing a series of articles focused on edge computing. These articles are posted on his personal blog and are posted here with his permission. To read more content from Gavin, visit his website.
Written by Balaji Ethirajulu, LF Edge Chair of the Marketing Outreach Committee and Senior Director of Product Management at Ericsson
This blog originally ran on the Ericsson website – click here for more articles like this one.
As the telecom industry moves through the 5G transformation, I’ve been thinking about automation, network flexibility, Total Cost of Ownership (TCO), and faster time to market. For the last two or three years, we’ve witnessed massive growth in open-source software related to cloud native technologies.
There are plenty of projects in the Cloud Native Computing Foundation (CNCF) that have already graduated, while some are in the incubating stage and others have entered a sandbox status. Many of them are addressing the various needs of telecom networks and other vertical segments. One of the key areas for the telecom industry is Network Functions Virtualization Infrastructure (NFVI).
Industry challenges
New technologies come and go, and this has happened for decades. But what we’ve been witnessing for the past several years is something that we’ve not experienced before. The acceleration of new technology innovation is mind boggling. The pace at which new technologies are being introduced is rapidly increasing. We’re now able to create new business models by leveraging these new technologies. For example, 5G will enable services across all industries. To do this, we need highly optimized and efficient networks, more automation, CI/CD, edge computing, reliable networks with good security, network and service agility, and faster time to market.
Addressing those challenges
Ericsson has been investing in R&D to leverage these technologies, including open source, to address the challenges outlined above. One key area is cloud native applications and the infrastructure to support them. Together with open-source technologies, they provide the key characteristics needed to address many of these challenges.
For the last several years, many enterprises and communication service providers (CSPs) have been moving their workloads to a virtualized infrastructure. They have realized some benefits of virtualization. But cloud native brings more benefits in terms of reduced TCO, more automation, faster time to market and so on.
In addition, enterprise needs are different from telecom needs. As many cloud-native technologies are maturing, they need to address the telecom network requirements such as security, high availability, reliability, and real-time needs. Ericsson is leveraging many cloud native open-source technologies and hardening them to meet the rigorous needs of telecom applications and infrastructure.
Our cloud native journey and open source
Over the past few years, Ericsson has been investing in cloud native technologies to transform our portfolio and meet the demands of 5G and digital transformation. Many of our solutions, including 5G applications, orchestration, and cloud infrastructure, are based on cloud native technologies and use a significant amount of CNCF open-source software. In addition, Ericsson has its own Kubernetes distribution, called ‘Ericsson Cloud Container Distribution’, certified by CNCF. Ericsson’s cloud native ‘dual-mode 5G Cloud Core’ utilizes several CNCF technologies and adapts them to a CSP’s needs. Several major global operators have embraced cloud native 5G solutions.
Ericsson supports and promotes many Linux Foundation projects, including LFN, LF Edge and CNCF. As a member of the CNCF community, we actively drive the Telecom User Group (TUG), an important initiative focusing on CSP needs.
Our NFVI already includes OpenStack and other virtualization technologies. As we step further into the 5G transformation, Ericsson NFVI, based on Kubernetes, will support various network configurations such as bare metal (Kubernetes, i.e. Container as a Service or CaaS) and CaaS + VMs, including Management and Orchestration (MANO).
Some CSPs have migrated their workloads from Physical Network Functions (PNFs) to Virtual Network Functions (VNFs). The extent of migration varies among CSPs and from region to region. Some are migrating from PNFs to Cloud Native Network Functions (CNFs). A typical network may therefore include PNFs, VNFs and CNFs. It is because of this that NFVI needs to support various configurations, and both virtual machines and containers.