Scaling Ecosystems Through an Open Edge (Part One)


Written By Jason Shepherd, LF Edge Governing Board Member and VP of Ecosystem at ZEDEDA

This blog originally ran on the ZEDEDA Medium blog. Click here for more content like this.


In my piece earlier this year, I highlighted increasing collaboration on interoperability between IT and OT stakeholders, as well as on building trust in data. In this three-part blog, I’ll expand on some related thoughts around building ecosystems, including various technical and business considerations and the importance of an open underlying foundation at the network edge.

For more on this topic, make sure to register and tune in tomorrow, July 28, for Stacey Higginbotham’s virtual event! Stacey always puts on a great event, and this one will feature a series of panel discussions on this important topic. I’m on a roundtable about building partnerships for data sharing and connected services.

The new product mindset

I’ve written before about the “new product mindset” to highlight the importance of developing products and services with the cumulative lifetime value they will generate in mind, rather than just the immediate value delivered on the day of launch. In a rapidly evolving technology landscape, competitive advantage rests on the ability to innovate rapidly and continuously improve an offering through software- and, increasingly, hardware-defined experiences, not to mention better services.

Related to this is the importance of ecosystems. Whether you build one or join one, in this world of digital transformation it’s more important than ever to be part of an ecosystem to maintain a competitive edge. This spans two vectors: horizontal, for a solid and scalable foundation, and vertical, for domain-specific knowledge that drives targeted outcomes. Ultimately, it’s about creating a network effect that maximizes value, both for customers and for the bottom line.

Approaches to building ecosystems

An ecosystem in the business world can take various forms. In the simplest terms, it could be a collection of products and services curated to complement a central provider’s offering. Alternatively, it could consist of partners working towards a common goal, each benefitting from the sum of the parts. Another ecosystem could take the form of an operations-centric supply chain, or a group of partners targeting common customers with a joint go-to-market strategy.

Approaches to ecosystem development span a spectrum of closed to open philosophies. Closed ecosystems are based on closely-governed relationships, proprietary designs, and, in the case of software, proprietary APIs. Sometimes referred to as “walled gardens,” the tight control of closed ecosystems can provide a great customer experience. However, this tends to come with a premium cost and less choice. Apple is a widely cited example of a company that has taken a more closed ecosystem approach.

Then there are “closed-open” approaches which have APIs and tools that you can openly program to — as long as you pay access fees or are otherwise advancing the sponsoring company’s base offer. Amazon’s “Works with Alexa” program is an example of this model, and a big driver in how they emerged as a smart home leader — they spent many years establishing relationships and trust with a broad consumer base. This foundation creates a massive network effect, and in turn, Amazon makes the bulk of their revenue from the Alexa ecosystem by selling more consumer goods and furthering customer relationships — much more so than on the Alexa-enabled devices themselves.

Scaling through open source

To fully grasp the business trade-offs of closed vs. open ecosystems, consider Apple’s iOS and Android. While Apple provides a curated experience, the open source Android OS provides more choice and, as a result, a far larger global install base. Android device makers may not command the same premium as Apple, given the open competition and the difficulty of differentiating on both hardware and user experience by skinning the stock Android UI. However, a key upside is sheer volume.

That’s not to say there isn’t still an opportunity to differentiate with Android as the base of your ecosystem. Samsung, for example, has built a strong ecosystem through its Galaxy brand and has recently branched out even further to offer a better-together story across its products. We’ve seen this play before: if you’re old enough to have rented video tapes at a brick-and-mortar store (be kind, rewind!), you’ll recall that VHS was technically inferior to Beta but won a short format battle because the backers of VHS got the most movie studios onboard.

It’s ultimately about taking a holistic approach, and these days open source software like Android is an essential driver of ecosystem enablement and scale. On top of the benefit of helping developers reduce “undifferentiated heavy lifting,” open source collaboration provides an excellent foundation for creating a network effect that multiplies value creation.

After all, Google didn’t purchase the startup Android in 2005 (for just $50M!) and seed it into the market just to be altruistic — the result was a massive footprint that generates revenue through outlets including Google’s search engine and the Play store. Google doesn’t reveal how much Android in particular contributes to its bottom line, but figures disclosed during a 2015 court battle with Oracle over claims of Java-related patent infringement suggested the contribution is substantial. Further, reaching this user base to drive more ad revenue is now essentially free for Google, compared to the $1B it reportedly paid Apple in 2014 to make Google the default search engine on the iPhone.

Striking a balance between risk and reward

There’s always some degree of a closed approach when it comes to resulting commercial offers, but an open source foundation undoubtedly provides the most flexibility, transparency and scalability over time. However, it’s not just about the technology — also key to ecosystems is striking a balance between risk and reward.

Ecosystems in the natural world are composed of organisms and their environments — this highly complex network finds an equilibrium based on ongoing interactions. The quest for ecosystem equilibrium carries over to the business world, and especially holds true for IoT solutions at the edge due to the inherent relationship of actions taken based on data from tangible events occurring in the physical world.

Several years back, I was working with a partner that piloted an IoT solution for a small farmer, “Fred,” who grows microgreens in greenhouses outside of the Boston area. In the winter, Fred used propane-based heaters to maintain the temperature in his greenhouses and if these heaters went out during a freeze he would lose thousands of dollars in crops overnight. Meanwhile, “Phil” was his propane supplier. Phil was motivated to let his customers’ propane tanks run almost bone dry before driving his truck out to fill them so he could maximize his profit on the trip.

As part of the overall installation, we instrumented Fred’s propane tank with a wireless sensor and gave both Fred and Phil visibility via mobile apps. With this new level of visibility, Fred started freaking out any time the tank wasn’t close to full and Phil would happily watch it drop down to near empty. This created some churn up front but after a while it turned out that both were satisfied if the propane tank averaged 40% full. Side note: Fred was initially hesitant about the whole endeavor but when we returned to check in a year later, you couldn’t peel him away from his phone monitoring all of his farming operations.

Compounding value through the network effect

Where it starts getting interesting is in compounding value on top of foundational infrastructure. Ride sharing is an example of an IoT solution (with cars as the “things”) that has created an ecosystem network effect in the form of supplemental delivery services, with providers such as Uber Eats partnering with various businesses. There’s also an indirect drag effect for ecosystems: Airbnb, for example, has created an opportunity for homeowners to rent out their properties, which in turn has produced a network effect for drop-in IoT solutions that simplify tasks for these new landlords, such as remotely ensuring security and monitoring for issues like water leaks.

The suppliers and workers in a construction operation also represent an ecosystem, and here too it’s important to strike a balance. Years back, I worked with a provider of an IoT solution that offered cheap passive sensors that were dropped into the concrete forms during the pouring of each floor of a building. Combined with a local gateway and software, these sensors told the crew exactly when the concrete was sufficiently cured so they could pull the forms faster.

While the company’s claims of being able to pull forms 30% faster sounded great on the surface, it didn’t always translate to a 30% reduction in overall construction time because other logistics issues often then became the bottleneck. This goes beyond the supply chain — one of the biggest variables construction companies face is their workforce, and people represent yet another complex layer in an overall ecosystem. And beyond construction workers, there are many other stakeholders during the building process spanning financial institutions, insurance carriers, real estate brokers, and so forth. Once construction is complete then come tenants, landlords, building owners, service providers and more. All of these factors represent a complex ecosystem that continues to find a natural equilibrium over time as many variables inevitably change.

In a final ecosystem example: while IoT solutions can drive entirely new services and experiences in a smart city, it’s especially important to balance privacy with value creation when crossing public and private domains. A city’s internally focused solution for improving the efficiency of its waste services isn’t necessarily controversial; if a city were to start tracking citizens’ individual recycling behaviors with sensors, however, that would very likely be perceived as an encroachment on civil liberties. I’m a big recycler (and composter), and like many people I’ll give up a little privacy for significant value, but there’s always a fine line.

In closing

The need to strike an equilibrium across business ecosystems will only become more commonplace in our increasingly connected, data-driven world. These examples of the dynamics at play when developing ecosystems highlight how critical the right foundation and a holistic approach are: one that enables more complex interactions and compounding value in the technical sense, and that balances risk and privacy with the value created for all parties involved.

Stay tuned for part two of this series in which I’ll walk through the importance of having an open foundation at the network edge to provide maximum flexibility and scale for ecosystem development.

Exploration and Practices of Edge Computing: Cloud Managing Containerized Devices


Written by Gavin Lu, LF Edge member, EdgeX Foundry China Project Lead and R&D Director in the VMware Office of the CTO

As an industry leader with vast experience and knowledge, Gavin has been writing a series of articles focused on edge computing. These articles are posted on his personal blog and are posted here with his permission. To read more content from Gavin, visit his website.

Introduction

The previous article introduced a solution for cloud management of virtualized devices. This article describes Project Nebula, which provides unified management of containerized devices, edge applications, and data-analytics cloud services.

Nebula Architecture

Project Nebula is designed based on the following key ideas:

  • Agnostic to device CPU architecture, supporting both x86 and ARM;
  • Agnostic to edge application frameworks, supporting EdgeX Foundry and any other framework that can be packaged and run;
  • Agnostic to data analytics services, supporting both on-premises and cloud deployment;
  • Support for small- to large-scale deployments;
  • Support for an end-to-end multi-tenant operation model from device to cloud.

EdgeX Foundry Architecture

Nebula supports the EdgeX Foundry framework, and we have already published a live test bed at https://18.189.42.126/. Those interested in Nebula can contact yixingj@vmware.com to register for a trial and receive installation and user guides with detailed information.

Nebula Demo

Installation

Nebula is designed as a containerized micro-service architecture and is distributed by default in OVA format. Similar to the Pallas architecture introduced in the previous article, although the Nebula package is encapsulated in an OVA, it does not depend on any specific virtualization infrastructure or cloud platform. Technically, it could be converted to other formats, or installed on any cloud platform that supports the OVA format.

The basic resource requirements of Nebula are:

  • CPU: 2 virtual CPU cores
  • Memory: 8GB
  • Storage: 150GB

The installation process is similar to that of any other OVA, and users can log in as the administrator after it completes.

Nebula Service Console

Vendor Portal

After the installation is complete, users can log in to the vendor portal as an administrator at the address shown in the VM console (as above) and perform user management.

Nebula Management Portal

In Nebula, edge application services are defined as follows: a Service can contain multiple Versions, and a Version contains multiple Service Components.

Edge Service Hierarchy
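The hierarchy above can be sketched as a simple data model. This is purely an illustrative sketch: the class and field names are my own assumptions, not Nebula’s actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceComponent:
    """One container within a service version, e.g. an EdgeX microservice."""
    name: str
    image: str
    cpu_cores: float
    memory_mb: int

@dataclass
class Version:
    """A deployable revision of a service, grouping its components."""
    tag: str
    components: List[ServiceComponent] = field(default_factory=list)

@dataclass
class Service:
    """Top-level edge application service, containing multiple versions."""
    name: str
    versions: List[Version] = field(default_factory=list)

# Hypothetical example: one service, one version, one component
svc = Service("edgex-demo", [
    Version("1.0", [
        ServiceComponent("core-data", "edgexfoundry/core-data:1.0",
                         cpu_cores=0.5, memory_mb=512),
    ]),
])
print(svc.versions[0].components[0].name)  # → core-data
```

The point of the model is simply that deployment, upgrade, and verification operate at the Version level, while resource requirements attach to individual Service Components.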

For each service created, it is necessary to specify parameters and resource requirements such as version, CPU platform, memory, storage, and network, to facilitate verification across full life-cycle management.

Vendors can upload a set of EdgeX Foundry applications packaged as container images, and define categories, dependencies between containers, resource parameters, startup order, and the parameters of connected data-analytics cloud services.

After the release, users can see and deploy these edge services.

Device Registration

Before users actually deploy EdgeX Foundry applications, they must first register the devices they will use under their Nebula accounts.

Users download the Nebula agent program nebulacli.tar and run it on the device to complete the registration. This step can be done manually, or automated in batch operations for OEMs.

./install.sh init -u user-account -p user-account-password -n user-device-name
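For OEM-style batch registration, the manual step above could be scripted. The sketch below simply assembles the `install.sh init` command shown above for a list of devices; the account and device names are illustrative, and `dry_run` keeps it from executing anything.

```python
import subprocess

def register_device(account, password, device_name, dry_run=True):
    """Assemble (and optionally run) the nebulacli registration command,
    mirroring the manual `install.sh init` step."""
    cmd = ["./install.sh", "init",
           "-u", account, "-p", password, "-n", device_name]
    if not dry_run:
        # Would execute on the device itself, where nebulacli.tar was unpacked
        subprocess.run(cmd, check=True)
    return " ".join(cmd)

# Batch-register a fleet of devices (names and credentials are illustrative)
for name in ["gateway-001", "gateway-002", "gateway-003"]:
    print(register_device("oem-account", "oem-password", name))
```

In a real OEM flow, the device list would come from a manufacturing manifest rather than a hard-coded list.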

User Portal

After completing device registration, users can install and manage EdgeX Foundry or other edge applications that vendors have released in advance on the Nebula service. Users can find the proper applications in the catalog.

After selecting one, users can further specify deployment parameter settings in a drag-and-drop wizard; these map to the parameter values defined by the vendor earlier.

Once all parameters are set, the actual deployment can be carried out, either in batch or repeatedly to multiple devices. After deploying EdgeX Foundry applications, users can monitor device resources and application runtime status in real time.

Nebula provides complete RESTful API documentation, with which users can automate operations to deploy EdgeX Foundry applications at large scale.
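As an illustration of this kind of automation, the sketch below assembles a deployment request for a batch of devices and shows how it might be posted over HTTP. The base URL, endpoint path, payload fields, and token handling are all assumptions for illustration; consult Nebula’s actual RESTful API documentation for the real interface.

```python
import json
import urllib.request

API_BASE = "https://nebula.example.com/api/v1"  # hypothetical endpoint

def build_deployment(service, version, device_ids, params):
    """Assemble one deployment request body for a batch of devices;
    `params` maps to the parameter values the vendor defined for the service."""
    return {"service": service, "version": version,
            "devices": device_ids, "parameters": params}

def deploy(payload, token):
    """POST the deployment to the (assumed) deployments endpoint."""
    req = urllib.request.Request(
        API_BASE + "/deployments",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"},
        method="POST")
    return urllib.request.urlopen(req)

payload = build_deployment("edgex-demo", "1.0",
                           ["gateway-001", "gateway-002"],
                           {"mqtt_broker": "tcp://10.0.0.5:1883"})
print(json.dumps(payload, indent=2))
# deploy(payload, token) would then submit it; repeat per batch of devices
```

Driving deployments this way, rather than through the portal wizard, is what makes large-scale rollouts repeatable.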

Next

From the second article through this one, I have introduced the basic methods of building and managing virtualized and containerized devices from the cloud. But I have not yet answered the question of how to deal with single-point device failure. In contrast to traditional, inflexible, and inefficient approaches such as full redundancy or an external NAS, the next article will introduce device clusters built on a hyper-converged architecture.

Exploration and Practices of Edge Computing


Written by Gavin Lu, LF Edge member, EdgeX Foundry China Project Lead and R&D Director in the VMware Office of the CTO

Chapter 1: Cloud computing and edge computing

After nearly ten years of rapid growth, cloud computing has become a widely accepted and widely used centralized computing model across industries around the world. In recent years, with the rise of “new infrastructure” such as big data, artificial intelligence, 5G, and the Industrial Internet, demand for edge computing has soared. Telecom carriers, public cloud service providers, and enterprise users across industries all hope to create, in some form, an edge computing solution best suited to their own scenarios.

Cloud edge collaboration

Whether it is Mobile Edge Computing (MEC), Cloud Edge, or factory-side Device Edge, users want an edge computing platform that cooperates with the cloud platform. Cloud-edge collaboration is not only about opening up the data channel at run time; it is overall collaboration across the entire life cycle and the full technology stack. Given how fragmented the edge computing field is across industry platforms, achieving full-stack integration with multiple cloud computing platforms is all the more challenging.

The dilemma of edge computing

At the bottom of an edge computing platform is the infrastructure, or device management, layer. Due to the natural characteristics of the edge computing model, available device resources usually face considerable constraints. Whether in hardware specifications such as CPU, memory, storage, network, and accelerators, or in software specifications such as the OS and application frameworks, they are far from comparable to cloud computing platforms or even ordinary PCs. On the other hand, edge applications running on devices often have very different CPU and OS requirements, due to historical reasons and vendor heterogeneity. Constraints on space, time, and capital add further pressure.

Containerization and virtualization

To achieve cloud-edge collaboration and solve the problems of device management and edge application management, the most common practice across industries today is to run containerized applications on devices, managed from cloud platforms. The advantage is that this makes full use of proven cloud computing technologies, platforms, personnel, and skills. The disadvantage is that it best suits modern edge applications; containerizing legacy systems built on different OSes is much harder. A representative containerization project is the EdgeX Foundry edge computing framework, hosted under the LF Edge umbrella.



For legacy systems with different OSes, the realistic approach is to package the application in a virtual machine and run it on a hypervisor platform on the device side. Virtualization platforms for edge applications may be further integrated with containerized platforms to create a unified operating mechanism across OSes. For example, the ACRN project under the Linux Foundation and the EVE project under LF Edge are virtualization platforms built specifically for devices.

Edge computing operation and monitoring

Whether through containerization, virtualization, or a combination of the two, the operation of edge computing will ultimately, and only, be done from the cloud side. Otherwise, for large-scale production deployments, accumulating issues of efficiency, safety, and cost become an inevitable nightmare. From the perspective of operation and management, then, although these devices are not necessarily located on the cloud side, because they are all managed from the cloud side they are operated and maintained in a manner similar to cloud computing: a “Device Cloud,” in that sense.

Cloud native in NFVI: Why it’s smart business for 5G growth


Written by Balaji Ethirajulu, LF Edge Chair of the Marketing Outreach Committee and Senior Director of Product Management at Ericsson

This blog originally ran on the Ericsson website – click here for more articles like this one. 

As the telecom industry moves through the 5G transformation, I’ve been thinking about automation, network flexibility, Total Cost of Ownership (TCO), and faster time to market. Over the last two or three years, we’ve witnessed massive growth in cloud native open-source software.

There are plenty of projects in the Cloud Native Computing Foundation (CNCF) that have already graduated, while some are in the incubating stage and others have entered a sandbox status. Many of them are addressing the various needs of telecom networks and other vertical segments. One of the key areas for the telecom industry is Network Functions Virtualization Infrastructure (NFVI).

Industry challenges

New technologies come and go; this has happened for decades. But what we’ve been witnessing for the past several years is something we’ve not experienced before: the pace of new technology innovation is mind-boggling, and it keeps accelerating. We’re now able to create new business models by leveraging these new technologies. For example, 5G will enable services across all industries. To do this, we need highly optimized and efficient networks, more automation, CI/CD, edge computing, reliable and secure networks, network and service agility, and faster time to market.

Addressing those challenges

Ericsson has been investing in R&D to leverage these technologies, including open source, to address the challenges outlined above. One key area is cloud native applications and the infrastructure to support them. Together with open-source technologies, they provide the key characteristics needed to address many of these challenges.

For the last several years, many enterprises and communication service providers (CSPs) have been moving their workloads to a virtualized infrastructure. They have realized some benefits of virtualization. But cloud native brings more benefits in terms of reduced TCO, more automation, faster time to market and so on.

In addition, enterprise needs are different from telecom needs. As many cloud-native technologies are maturing, they need to address the telecom network requirements such as security, high availability, reliability, and real-time needs. Ericsson is leveraging many cloud native open-source technologies and hardening them to meet the rigorous needs of telecom applications and infrastructure.

Our cloud native journey and open source

Over the past few years, Ericsson has been investing in cloud native technologies to transform our portfolio and meet the demands of 5G and digital transformation. Many of our solutions, including 5G applications, orchestration, and cloud infrastructure, are based on cloud native technologies and use a significant amount of CNCF open-source software. In addition, Ericsson has its own CNCF-certified Kubernetes distribution, ‘Ericsson Cloud Container Distribution’. Ericsson’s cloud native ‘dual-mode 5G Cloud Core’ utilizes several CNCF technologies and adapts them to CSPs’ needs. Several major global operators have embraced cloud native 5G solutions.

Ericsson supports and promotes many Linux Foundation projects, including LFN, LF Edge, and CNCF. As a member of the CNCF community, we actively drive the Telecom User Group (TUG), an important initiative focusing on CSP needs.

Our NFVI already includes OpenStack and other virtualization technologies. As we step further into the 5G transformation, Ericsson NFVI, based on Kubernetes, will support various network configurations, such as bare metal (Kubernetes as Container as a Service, or CaaS) and CaaS plus VMs, including Management and Orchestration (MANO).

Some CSPs have migrated their workloads from Physical Network Functions (PNFs) to Virtual Network Functions (VNFs). The extent of migration varies among CSPs and from region to region. Some are migrating from PNFs to Cloud Native Network Functions (CNFs). A typical network may therefore include PNFs, VNFs, and CNFs, which is why NFVI needs to support various configurations, and both virtual machines and containers.