Monthly Archives: June 2020

LF Edge Member Spotlight: Section

By Blog, Member Spotlight, State of the Edge

The LF Edge community comprises a diverse set of member companies representing the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sat down with Molly Wojcik, Director of Education & Awareness at Section, to discuss the importance of open source, Section’s drive to standardize and accelerate the edge, how the company contributes to the State of the Edge project, and the impact of being a member of the LF Edge community.

Can you tell us a little about your organization?

Section is all about empowering developers to be the heroes in edge computing innovation by giving them control and flexibility to run any workload along the edge continuum to meet the specific needs of their application. Users can deploy their own custom workloads (containers/serverless) or leverage best-of-breed software available in Section’s edge module marketplace. Solutions include web application firewalls (WAFs), bot management, image optimization, A/B & multivariate testing, virtual waiting room, and many more.

Section was founded in Sydney, Australia in 2012. Headquarters are now in Colorado, and the Section team spans across the U.S., Australia, and Europe. Section has been recognized by industry analysts and peers as a leader in the edge compute landscape and is backed by a group of top-shelf venture capital firms, led by The Foundry Group. 

Our platform significantly reduces cost and complexity for web engineers seeking to capitalize on the benefits of the edge for their application. Docker-powered and Kubernetes-orchestrated, the Section platform optimizes edge workload distribution, scalability, traffic routing, diagnostics, and more, to allow engineers to focus on their core application functionality while meeting performance, security, and scalability goals.

Why is your organization adopting an open source approach?

From day one, we have stuck by three guiding principles: open, control, and easy. We believe the framework we provide (and the edge) should be open. Engineers should have tangible control over that edge, and we should make it easy for them to use.

The key challenges of the edge are both industry-wide and user-specific. Developing a truly successful edge fabric to meet the needs of tomorrow’s Internet will depend on certain levels of cooperation and standardization. The closed networks of proprietary software, such as the legacy CDNs that have served the Internet since 2000, aren’t capable of serving the hundreds of thousands of edge locations, and the truly custom workloads, that engineers will need to run in those locations. Work underway by collaborative open standards bodies, such as LF Edge, the Open Networking Foundation, and the Multi-access Edge Computing initiative, is meeting this challenge head-on to build a truly successful edge fabric.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

In line with our drive to standardize and accelerate the edge, we were excited to join LF Edge and become part of this community-forged movement, working to move edge computing forward and build an open-source software stack. We believe that LF Edge will play a similar pivotal role in advancing the edge computing ecosystem as the Cloud Native Computing Foundation (CNCF) has done in advancing the microservices ecosystem.

What do you see as the top benefits of being part of the LF Edge community?

One of the top benefits for us is the opportunity to participate in a community alongside other vendors, developers, OEMs and infrastructure providers, working together to develop a common set of standards and achieve wider interoperability. At this point in the evolution of edge computing, we still struggle with shared definitions, and LF Edge has been working tirelessly since its start to create an open and standard framework for the technology, which industry leaders can coalesce around.

What sort of contributions has your team made to the community and ecosystem through LF Edge participation?

The edge ecosystem is extremely diverse, and where some may specialize in IoT or the device/user edge, our focus tends to lean more heavily on the infrastructure/service provider edge. Lending our experience and perspective alongside contributions from the larger LF Edge community helps bridge gaps to provide a more comprehensive understanding of edge challenges and solutions.

On a more personal level, I have recently taken on the role of Chair of the State of the Edge Landscape Working Group. The LF Edge Interactive Landscape uses the CNCF’s Interactive Landscape as a guide and framework, intending for it to play a similar role in the edge community as a go-to resource. The interactive map categorizes LF Edge projects, in addition to edge-related organizations and technologies, to offer a comprehensive overview of the edge ecosystem. It is dynamically generated from data maintained in a community-supported GitHub account.

I’ve been involved as an active contributor and facilitator within the Landscape Working Group since its beginnings with LF Edge in early 2019. With a “better together” approach, I believe that building community is fundamental to advancing the edge ecosystem, so becoming Chair of the Edge Landscape Working Group represents an ideal opportunity to play a central part in building this important shared resource.

What do you think sets LF Edge apart from other industry alliances?

The fact that LF Edge is an umbrella organization independent of hardware, silicon, cloud or OS, working not to generate leads but to align and educate, is refreshing. Its neutrality allows many diverse community members to participate in its mission, from enterprises to journalists to non-profits. This in turn enables LF Edge to ask the big questions and be unafraid to explore how edge computing will transform the Internet.

How will LF Edge help your business?

Section aims to provide unmatched flexibility for developers to customize, deploy, and optimize workloads for their unique application architecture. We want to empower developers to choose their software, the number of endpoints, and where the application edge should live. We were the first within the edge platform space to offer full integration with agile development workflows. We thrive on being part of LF Edge, helping build awareness of developer and engineer needs at the edge. LF Edge represents an opportunity for us to help build a shared vocabulary and vision for edge computing, which also advances our mission of empowering application engineers to run any workload, anywhere.

What advice would you give to someone considering joining LF Edge?

As with any community, you only get out what you put in. If you’re looking to broaden and deepen your connections, knowledge, and contributions to really drive edge computing forward, LF Edge provides a lot of different opportunities to give and get.

To find out more about our members or how to join LF Edge, click here. To learn more about the Edge Computing Landscape, click here.

Additionally, if you have questions or comments, visit the LF Edge Slack Channel and share your thoughts in the #community or #stateoftheedge-landscape channels.

 

EdgeX Foundry Kubernetes Installation

By Blog, EdgeX Foundry

Written by Jason Bonafide, EdgeX Foundry Contributor and Principal Software Engineer at Dell Technologies

In an ever-growing world of connected devices, there is plenty of opportunity in edge computing. While devices are getting smaller and smarter, there is always a need to share data. With that, I have just the platform for you!

EdgeX Foundry is a vendor-neutral open source project hosted by LF Edge that is building a common open framework for IoT edge computing. EdgeX Foundry offers a set of interoperable plug-and-play components that aim to satisfy IoT solutions of all variations.

The goal of this blog is to walk through techniques which can be used in deploying EdgeX Foundry to a Kubernetes cluster. Establishing a foundation for deploying EdgeX in Kubernetes is the main takeaway from this tutorial.

Tools and technologies used

This tutorial uses the following tools and technologies:

  • A Kubernetes cluster and the kubectl CLI
  • Helm 3
  • Docker (the EdgeX and Redis images come from Docker Hub)
  • EdgeX Foundry 1.2.1 (Geneva) core services

Why Deploy EdgeX to a Kubernetes cluster?

Kubernetes provides the following feature set:

  • Service Discovery and load balancing
  • Storage orchestration
  • Automated roll-outs and rollbacks
  • Automatic bin packing
  • Self-healing
  • Secret and configuration management

EdgeX Foundry is built on a micro-service architecture. Micro-services are powerful, but they can make deploying and managing an application more complex because of the number of components involved. Kubernetes makes deploying micro-service applications more manageable.

Glossary

Kubernetes

  • Affinity: Act of constraining a Pod to a particular Node.
  • ConfigMap: An API object used to store non-confidential data in key-value pairs.
  • Deployment: Provides declarative updates for Pods and ReplicaSets.
  • Ingress: Exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster.
  • kubelet: Primary “node agent” that runs on each Node. It works in terms of a PodSpec.
  • LivenessProbe: The means used in a Kubernetes Deployment to check that a Container is still working and to determine whether or not it needs to be restarted.
  • Node: A virtual or physical machine on which Pods run.
  • PersistentVolume: A piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using StorageClasses.
  • PersistentVolumeClaim: A request for storage by a user.
  • Pod: The smallest deployable unit of computing that can be created and managed in Kubernetes.
  • PodSpec: A YAML or JSON object that describes a Pod.
  • ReadinessProbe: The means used in a Kubernetes Deployment to check when a Container is ready to start accepting traffic.
  • Secret: An object that contains a small amount of sensitive data such as a password, a token or a key.
  • Service: An abstract way to expose an application running on a set of Pods as a network service.
  • StartupProbe: The means used in a Kubernetes Deployment to check whether or not the Container’s application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don’t interfere with the application startup.
  • StorageClass: Provides a way for administrators to describe the “classes” of storage they offer.
  • Volume: A directory, possibly containing data, which can be accessed by a Container.
  • Volume Mount: A property of a Container by which a Volume is bound to the Container.

Helm

  • Chart: An artifact which contains a collection of Kubernetes resource files.
  • Named Template: A template which has an identifying name. Named-templates are also referred to as “partials”. A named-template is similar to a programming language’s function, and can be used to re-use code (or, in this case, YAML configuration).
  • Template: A file that can hold placeholders {{}} which are interpolated with specified values.
  • values.yaml: A configuration file within Helm which enables abstraction of configurable items that can be applied to templates.

Setting up manifests directory

Create the directory structure below on your machine. The folders will be used and populated throughout this tutorial.

project-root/
  edgex/
    templates/
      edgex-redis/
      edgex-core-metadata/
      edgex-core-data/
      edgex-core-command/

Before we create definition files

Kubernetes provides a recommended set of labels. These labels provide a grouping mechanism that facilitates the management of Kubernetes resources bound to an application. Below is Kubernetes’ recommended set of labels:

app.kubernetes.io/name: <application name>
app.kubernetes.io/instance: <installation>
app.kubernetes.io/version: <application version>
app.kubernetes.io/component: <application component>
app.kubernetes.io/part-of: <organization>
app.kubernetes.io/managed-by: <orchestrator>

A concrete example of this would be:

app.kubernetes.io/name: edgex-core-metadata
app.kubernetes.io/instance: edgex
app.kubernetes.io/version: 1.2.1
app.kubernetes.io/component: api
app.kubernetes.io/part-of: edgex-foundry
app.kubernetes.io/managed-by: Kubernetes

With the above label structure and kubectl’s support for filtering resources by labels, we have plenty of flexibility when searching for specific resources. An example would look like:

$ kubectl get pod -l app.kubernetes.io/name=edgex-core-metadata

In a cluster with many applications, the above command will result in selecting only edgex-core-metadata pods.

On to our first application – edgex-redis

EdgeX Foundry supports Redis as a persistent datastore. Because Containers are ephemeral, data created within a Container lives only as long as the Container itself. In order to keep data around past the life of its Container, we will need to use a PersistentVolume.

Let’s say that we want to create a PersistentVolume with the following in mind:

  • Our cluster contains a StorageClass named hostpath.
  • Our use-case requires 10Gi of storage (not to be confused with GB – Kubernetes has its own Resource Model).
  • Redis data needs to be stored on a cluster node’s file-system at /mnt/redis-volume. It is worth noting that the hostPath storage plugin is not recommended in production. If your cluster contains multiple nodes, you will need to ensure that the edgex-redis Pod is scheduled on the same node each time, so that it references the same hostPath storage whenever the Pod is scheduled. Please refer to the Assigning Pods to Nodes documentation for more information.
  • We anticipate many nodes having Read and Write access to the PersistentVolume.

With the above requirements, our PersistentVolume definition in templates/edgex-redis/pv.yaml should be updated to look like this.
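
A minimal pv.yaml sketch satisfying these requirements (the resource name and label values are assumptions that follow the naming conventions used throughout this tutorial):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: edgex-redis
  labels:
    app.kubernetes.io/name: edgex-redis
spec:
  storageClassName: hostpath
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /mnt/redis-volume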

Although the PersistentVolume has not been created yet, we’ve defined a resource that will make its storage available for binding. Next, we need a resource that assigns a PersistentVolume to an application. This is accomplished using a PersistentVolumeClaim.

The pattern used in this tutorial establishes a one-to-one relationship between application and a PersistentVolume. That means, for each PersistentVolume, there will exist a PersistentVolumeClaim for an application. With that in mind, our claim for storage capacity, access modes, and StorageClass should match the PersistentVolume we defined earlier.

Let’s go over the requirements for our storage use-case:

  • 10Gi storage.
  • Read and Write access by many nodes.
  • A StorageClass named hostpath exists in our cluster.

Given the above requirements, the PersistentVolumeClaim definition file at templates/edgex-redis/pvc.yaml should be updated to look like this.
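
A matching pvc.yaml sketch (the claim name edgex-redis is an assumption; the Deployment below refers to it):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edgex-redis
spec:
  storageClassName: hostpath
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi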

Now that the PersistentVolume and PersistentVolumeClaim have been defined, let’s move on to the Deployment.

Similar to what was done with the PersistentVolume and PersistentVolumeClaim, let’s list the things that describe the Redis application:

  • For simplicity’s sake, let’s say we only create 1 instance (replica).
  • When a new version is rolled out, we want to kill all of the old Pods before bringing up new Pods. This is referred to as the Recreate deployment strategy.
  • The Deployment consists of a single process (Container).
  • The Container image will be the redis image from Docker Hub, using the latest tag.
  • The Container will listen for connections on port 6379.
  • The application will write data to /var/lib/redis. The Container’s data directory will be mounted to the PersistentVolume via the PersistentVolumeClaim named edgex-redis, which we created earlier. The Volume mapped to the PersistentVolumeClaim will be named redis-data and can be mounted as a Volume for the Container.
  • The application is in a Started state when a connection can be successfully established over TCP on port 6379. This check will be tried every 10 seconds. On the first success, the Container will be in a Started state; however, on the 5th consecutive failure to obtain a connection, the Container will be killed. While this StartupProbe is enabled, the LivenessProbe and ReadinessProbe are disabled until the StartupProbe succeeds.
  • Every 30 seconds, after an initial 15 second delay, the Deployment will ensure that the Container is alive by establishing a connection over TCP on port 6379. On the first success, the Container will be in a Running state; however, on 3 failures, the Container will be restarted.
  • Every 30 seconds, after an initial 15 second delay, the Deployment will determine whether the Container is ready to accept traffic by attempting to establish a connection over TCP on port 6379. On 3 failures, the Pod is removed from Service load balancers.

Given the above requirements, our Deployment definition file at templates/edgex-redis/deployment.yaml should be updated to look like this.
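
A deployment.yaml sketch implementing the list above (the probe timings, ports, and mount paths come straight from those requirements; the label selector is an assumption consistent with the labels used earlier):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: edgex-redis
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: edgex-redis
  template:
    metadata:
      labels:
        app.kubernetes.io/name: edgex-redis
    spec:
      containers:
        - name: edgex-redis
          image: redis:latest
          ports:
            - containerPort: 6379
          # Started: TCP check every 10 seconds; killed on the 5th failure.
          startupProbe:
            tcpSocket:
              port: 6379
            periodSeconds: 10
            failureThreshold: 5
          # Alive: TCP check every 30 seconds after a 15 second delay.
          livenessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 15
            periodSeconds: 30
            failureThreshold: 3
          # Ready: same cadence; 3 failures remove the Pod from Services.
          readinessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 15
            periodSeconds: 30
            failureThreshold: 3
          volumeMounts:
            - name: redis-data
              mountPath: /var/lib/redis
      volumes:
        - name: redis-data
          persistentVolumeClaim:
            claimName: edgex-redis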

Lastly, for edgex-redis, we will need to establish network access to our Pod within the Kubernetes cluster. This is accomplished by defining a Service.

As for our Service requirements:

  • selector should only match the labels defined in the edgex-redis Deployment’s PodSpec.
  • Map external port 6379 to Container port 6379 with a name of port-6379. Each port requires a name.
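
A plain service.yaml sketch meeting these requirements (we will rework this same file with Helm templating later in this tutorial):

apiVersion: v1
kind: Service
metadata:
  name: edgex-redis
spec:
  selector:
    app.kubernetes.io/name: edgex-redis
  ports:
    - port: 6379
      name: port-6379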

On to the next application – edgex-core-metadata

EdgeX supports externalized configuration, a neat feature that enables the platform’s portability. For our Deployment, we will define our configuration as a Secret. Normally, configuration would be defined in a ConfigMap; however, since Redis credentials reside in our configuration file, we will stash the entire configuration file in a Secret.

We are going to leverage the default application configuration and make just a few modifications:

  • The Core Metadata Service Host property has been updated to 0.0.0.0, which ensures that the Container listens on all network interfaces.
  • The Core Data Service Host property has been updated to edgex-core-data. Later, when edgex-core-data is installed, a Service will expose edgex-core-data as its hostname.
  • The Database Host property has been updated to edgex-redis, the name of the Service defined earlier.

Feel free to refer to the reference project configuration files here.

Let’s create the Secret by performing the following steps:

  • Download https://github.com/jbonafide623/edgex-lf-k8s/blob/master/secrets/edgex-core-metadata-secret or create your own.
  • Execute $ kubectl create secret generic edgex-core-metadata --from-file=configuration.toml=edgex-core-metadata-secret (where edgex-core-metadata-secret points to the file downloaded in the previous step).

Now for the edgex-core-metadata Deployment. Let’s list out some of the requirements:

  • We will create only 1 instance (replica).
  • When a new version is rolled out, we want to kill all of the old pods before bringing up new pods. This is referred to as a Recreate deployment strategy.
  • The Deployment consists of a single process (Container).
  • The Container image will be Docker Hub’s edgexfoundry/docker-core-metadata-go image with the 1.2.1 tag.
  • Override the Dockerfile’s Entrypoint so that --confdir points to /config.
  • Disable security via the EDGEX_SECURITY_STORE environment variable. This can be done by setting the flag to "false".
  • The Container will listen for HTTP requests on port 48081.
  • The application is in a Started state when an HTTP GET request to /api/v1/ping returns a 200 response. This check will be tried every 10 seconds. On the first success, the Container will be in a Started state; however, on the 5th consecutive failure, the Container will be killed. While this StartupProbe is enabled, the LivenessProbe and ReadinessProbe are disabled until the StartupProbe succeeds.
  • Every 30 seconds, after an initial 15 second delay, the Deployment will ensure that the Container is alive by sending an HTTP GET request to /api/v1/ping. On the first success, the Container will be in a Running state; however, on 3 failures, the Container will be restarted.
  • Every 30 seconds, after an initial 15 second delay, the Deployment will determine whether the Container is ready to accept traffic by sending an HTTP GET request to /api/v1/ping. On 3 failures, the Pod is removed from Service load balancers.
  • Establish a limit on CPU usage of 1 and request a starting allocation of 0.5 CPUs.
  • Establish a limit on memory usage of 512Mi and request a starting allocation of 256Mi of memory.
  • Mount the configuration Secret named edgex-core-metadata to the Container’s path /config. Recall /config is the path that we supply as a --confdir override to the application’s image.

Given the above requirements, we can define our Deployment definition file at templates/edgex-core-metadata/deployment.yaml, and it should look like this.
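
A deployment.yaml sketch based on the list above (the probe, resource, and Secret settings come from those requirements; the Entrypoint binary path inside the image is an assumption, so verify it against the image’s Dockerfile):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: edgex-core-metadata
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: edgex-core-metadata
  template:
    metadata:
      labels:
        app.kubernetes.io/name: edgex-core-metadata
    spec:
      containers:
        - name: edgex-core-metadata
          image: edgexfoundry/docker-core-metadata-go:1.2.1
          # The binary path is an assumption; check the image's Dockerfile.
          command: ["/core-metadata", "--confdir=/config"]
          env:
            - name: EDGEX_SECURITY_STORE
              value: "false"
          ports:
            - containerPort: 48081
          # Started: HTTP ping every 10 seconds; killed on the 5th failure.
          startupProbe:
            httpGet:
              path: /api/v1/ping
              port: 48081
            periodSeconds: 10
            failureThreshold: 5
          livenessProbe:
            httpGet:
              path: /api/v1/ping
              port: 48081
            initialDelaySeconds: 15
            periodSeconds: 30
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /api/v1/ping
              port: 48081
            initialDelaySeconds: 15
            periodSeconds: 30
            failureThreshold: 3
          resources:
            limits:
              cpu: 1
              memory: 512Mi
            requests:
              cpu: 0.5
              memory: 256Mi
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          secret:
            secretName: edgex-core-metadata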

Lastly, let’s expose the application so that it can be accessed from within and outside of the cluster. For our Service, let’s define some requirements:

  • selector should only match the labels defined in the edgex-core-metadata Deployment’s PodSpec.
  • Map external port 48081 to Container port 48081 with a name of port-48081. Each port requires a name.
  • Expose the application outside of the cluster via the NodePort type on port 30801.

Given the above requirements, we can define our Service definition file at templates/edgex-core-metadata/service.yaml, and it should look like this.
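
A service.yaml sketch meeting those requirements:

apiVersion: v1
kind: Service
metadata:
  name: edgex-core-metadata
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: edgex-core-metadata
  ports:
    - port: 48081
      targetPort: 48081
      nodePort: 30801
      name: port-48081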

Incorporating Helm

The decision to include Helm in the container-orchestration stack is based on the following Helm features:

  • Helm facilitates a single responsibility: managing Kubernetes resources.
  • Enables easy application installation and rollback.
  • Supports ordered creation of resources via hooks.
  • Provides access to installations of popular applications via Helm Hub.
  • Encourages code-reuse.

Each Helm chart contains values.yaml and Chart.yaml files. In the root of the chart directory (edgex/ in the structure above), let’s go ahead and create these files by executing:

touch values.yaml
touch Chart.yaml

Chart.yaml contains information about the chart. With that in mind, let’s add the following content to Chart.yaml:

apiVersion: v2
name: edgex
description: A Helm chart for Kubernetes

# A chart can be either an ‘application’ or a ‘library’ chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They’re included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.2.1

values.yaml allows you to create a YAML object which can be referenced in your chart’s template files.

Let’s make use of what values.yaml can offer us.

For edgex-redis, let’s list out some properties that might be beneficial to abstract as configurable options:

  • Application name: almost every resource uses the name edgex-redis, and these resources are referenced by other applications. For instance, edgex-core-metadata references the Service defined for edgex-redis. If we abstract the name out to values.yaml, the application name only needs to change in a single place.
  • Deployment strategy: in an environment where you are concerned with high availability, you may want to leverage a RollingUpdate strategy. In smaller environments, you may not care about a RollingUpdate strategy and would choose Recreate in an effort to conserve computing resources.
  • Image name and tag: there exist scenarios where artifacts/dependencies are vetted and kept “in-house”, with artifact/image repositories of their own. With that in mind, having the ability to switch the image registry during installation can be a huge benefit.
  • Port: the port on which the Container listens is referenced in many places within a single chart. As you define various network components such as Ingresses, Services, and the Container port itself, it is nice to refer back to a single place.
  • Replicas: depending on the environment’s resources, the number of replicas may change.
  • StorageClassName: the StorageClass may vary from cluster to cluster, or even within a single cluster.

Considering the above, we can define our edgex-redis configuration object like this:

edgex:
  redis:
    name: edgex-redis
    deployment:
      strategy: Recreate
    image:
      name: redis
      tag: latest
    port: 6379
    replicas: 1
    storageClassName: hostpath

We can apply the same pattern with a few adjustments for edgex-core-metadata:

edgex:
  metadata:
    name: edgex-core-metadata
    deployment:
      strategy: Recreate
    image:
      name: edgexfoundry/docker-core-metadata-go
      tag: 1.2.1
    port: 48081
    replicas: 1
    resources:
      limits:
        cpu: 1
        memory: 512Mi
      requests:
        cpu: 0.5
        memory: 256Mi

In the configuration object for edgex-core-metadata we added a resources object which allows us to adjust limits and requests during installation.

Remember earlier when we talked about recommended labels and how those labels apply to each of our resources? With Helm, we can create a named-template and include it in each place where the labels are referenced.

Here, in the file templates/_labels.tpl, we create a named-template called edgex.labels. This named-template takes two arguments: ctx (the chart context) and AppName (the application’s name).

{{/*
Define a standard set of resource labels.

params:
  (context) ctx - Chart context (scope).
  (string) AppName - Name of the application.
*/}}
{{ define "edgex.labels" }}
app.kubernetes.io/name: {{ .AppName }}
app.kubernetes.io/instance: {{ .ctx.Release.Name }}
app.kubernetes.io/version: {{ .ctx.Chart.AppVersion }}
app.kubernetes.io/component: api
app.kubernetes.io/part-of: edgex-foundry
app.kubernetes.io/managed-by: {{ .ctx.Release.Service }}
helm.sh/chart: {{ .ctx.Chart.Name }}-{{ .ctx.Chart.Version | replace "+" "_" }}
{{ end }}

Now that we’ve defined configuration in values.yaml and created the edgex.labels named-template, let’s apply them to a simple definition file, templates/edgex-redis/service.yaml.

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.edgex.redis.name }}
  labels:
{{- include "edgex.labels" (dict "ctx" . "AppName" $.Values.edgex.redis.name) | indent 4 }}
spec:
  selector:
{{- include "edgex.labels" (dict "ctx" . "AppName" $.Values.edgex.redis.name) | indent 4 }}
  ports:
    - port: {{ .Values.edgex.redis.port }}
      name: "port-{{ .Values.edgex.redis.port }}"

When the template is rendered, placeholder values are interpolated from either the values file or the --set flag. When helm install is executed with no -f flag, the values.yaml in the root of the chart is used by default. If we want to override values during installation, a property can be overridden using the --set flag.
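
For example, assuming the values structure shown above, overriding the Redis image tag at install time (the tag value here is purely illustrative) would look something like:

$ helm install edgex --name-template edgex --set edgex.redis.image.tag=6.0.5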

For an application as portable as EdgeX is, tying in configuration overrides can tremendously speed up deployments.

Finalizing the chart using a reference project

Up to this point, we have:

  • Created raw Kubernetes YAML files for edgex-redis and edgex-core-metadata.
  • Explained the usage of values.yaml in a Helm chart.
  • Adjusted edgex-redis Service definition file to reference Helm values and named-templates.

This is by no means an end-to-end deployment, but the patterns described above give you enough to apply to the remaining pieces you wish to include in your deployment. You can refer to the reference project here, which contains a finalized Helm chart responsible for deploying edgex-redis, edgex-core-metadata, edgex-core-data, and edgex-core-command.

Following the pattern above, edgex-core-data and edgex-core-command will also require configuration files mounted as Secrets. You can create them by executing:

$ kubectl create secret generic edgex-core-data --from-file=configuration.toml=[path to edgex-core-data’s configuration.toml]

$ kubectl create secret generic edgex-core-command --from-file=configuration.toml=[path to edgex-core-command’s configuration.toml]

All of the Secrets can be accessed here in the reference project.

When your Helm chart is finalized, you can install it by executing:

# From the root of the project

$ helm install edgex --name-template edgex

In a sense, you get free verification from the Container probes for each application. If your applications successfully start up and remain in a Running state, that is a good sign!
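
One quick way to check is to list the Pods using the shared part-of label emitted by the edgex.labels named-template:

$ kubectl get pod -l app.kubernetes.io/part-of=edgex-foundry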

Each application can be accessed at [cluster IP]:[application’s node port]. For example, let’s say the cluster IP is 127.0.0.1 and we want to access the /api/v1/ping endpoint of edgex-core-metadata; we can invoke the following curl request:

$ curl -i 127.0.0.1:30801/api/v1/ping

where nodePort 30801 is defined in the edgex-core-metadata Service definition.

In closing

In this tutorial, we’ve only scratched the surface with respect to the potential that EdgeX Foundry has to offer. With the components we’ve deployed, the foundation is in place to continuously deploy EdgeX in Kubernetes clusters.

As a member of this amazing community, I highly recommend checking out the official EdgeX Foundry documentation. From there you will have access to many more details about the platform.

As for next steps, I highly recommend connecting a Device Service to your EdgeX Core Services environment within Kubernetes. With the plug-and-play nature of a Device Service’s configuration, you can have a device up and running within EdgeX quickly.

This tutorial was built on a foundational project worked on at Dell. I would like to acknowledge Trevor Conn, Jeremy Phelps, Eric Cotter, and Michael Estrin at Dell for their contributions to the original project. I would also like to acknowledge the EdgeX Foundry community as a whole. With so many talented and amazing members, the EdgeX Foundry project is a great representation of the community that keeps it growing!

Visit the EdgeX Foundry website for more information or join Slack to ask questions and engage with community members. If you are not already a member of the community, it is really easy to join. Simply visit the wiki page and/or check out the EdgeX Foundry GitHub.

Enabling a More Sustainable Energy Model with Edge Computing

By Blog, Open Glossary of Edge Computing, Trend

Written by Molly Wojcik, Chair of the State of the Edge Landscape Working Group and Marketing Director at Section

This blog previously ran on the Section website. For more content like this, click here.

The tech sector has been under mounting scrutiny over the last decade by environmental groups, businesses, and customers for its increasing energy consumption levels. Attention has recently been turning to the ways in which edge computing can help build more sustainable solutions.

The Impact of Coronavirus Lockdowns on Internet Usage

While worldwide energy consumption has significantly dropped as a result of global lockdown restrictions, Internet usage has seen a huge spike.

NETSCOUT, a provider of network and application performance monitoring tools, saw a 25-35% increase in worldwide Internet traffic patterns in mid-March, the time when the shift to remote work and online learning happened for much of the world’s population. That number has stayed pretty consistent. There has been particularly increased use of bandwidth-intensive applications like video streaming, Zoom and online gaming. The spike in Internet use has implications for the sector’s sustainability targets and makes the urgency of a call for change even louder.

Before we look specifically at sustainability in the edge computing field, let’s pull back to take a look at the wider sector.

The Energy Challenge

The information and communication technology (ICT) ecosystem accounts for over 2% of global emissions, putting it on a par with the aviation industry’s carbon footprint, at least according to a 2018 study. Did you know that data centers use an estimated 200 terawatt-hours (TWh) of electricity each year? This represents around 1% of global electricity demand, and data centers contribute around 0.3% to overall carbon emissions.

Where these figures could lie in the future is harder to predict. Widely cited forecasts say the ICT sector and data centers will take a larger slice of overall electricity demand. Some experts predict this could rise as high as 8% by 2030, while the most aggressive models predict that electricity usage by ICT could surpass 20% of the global total by the time a child born today hits his or her teens, with data centers using over one-third of that.

Keeping Future Energy Demand in Check

However, improvements in energy efficiency in the data center field are already making a difference. While the amount of computing performed in data centers more than quintupled between 2010 and 2018, the amount of energy consumed by data centers worldwide only rose by 6% across the same period.

The ICT sector has been hard at work to keep future energy demand in check in various ways. These include:

Streamlining Computer Processes

The shift away from legacy enterprise data centers operated by traditional enterprises such as banks, retailers, and insurance companies, to newer commercially operated cloud data centers has been making a big difference to overall energy consumption.

In a recent company blog, Urs Hölzle, Google’s senior VP technical infrastructure, pointed to a study in Science showing the gains in energy efficiency. Hölzle wrote, “a Google data center is twice as energy efficient as a typical enterprise data center. And compared with five years ago, we now deliver around seven times as much computing power with the same amount of electrical power.”

The study shows how an overall shift to hyperscale data centers has helped reduce the amount of traditional enterprise data center capacity and subsequently reduced overall energy consumption. One of the authors is Jonathan Koomey, a former Lawrence Berkeley National Laboratory scientist, who has been studying the subject of data center energy usage for over two decades. Koomey says that forecasts that focus on data growth projections alone ignore the energy efficiency gains the sector has been making.

Facebook has also been exploring ways to maximise efficiency over the last decade. The social media giant started the Open Compute Project in 2011 to share hardware and software solutions aimed at making computing more energy-efficient.

One recent novel idea is that data centers could act as energy suppliers and sell excess electricity to the grid instead of keeping it as an insurance policy. Kevin Hagen, Iron Mountain’s VP of environment, social and governance strategy, describes backup generators and UPS batteries as “useless capital.”

Instead, he asks, “What would it look like if all that money was actually invested in energy systems that we got to use when we were trying to arbitrage energy during the day and night on a regular basis? What if we share control with the utility, so they can see there’s an asset and use it back and forth?”

New Ways to Cool Data Centers

According to Global Market Insights, cooling systems represent on average 40% of entire data center energy consumption. Data center design specialists have been looking into different ways to approach reducing energy needs specifically for cooling, which have been largely approached in the same way for the last three decades. These include locating data centers in cooler areas, using AI to regulate the data center’s cooling systems to match the weather, warm-water cooling, immersion cooling, and rear door cooling systems.

It’s an important problem to be focused on since data centers worldwide contribute to industry’s consumption of 45% of all available clean water.

A Focus on Sustainables

Greenpeace has been putting data centers under the spotlight for over a decade, calling for them and other digital infrastructure to become 100% renewably powered. There has been great progress since 2010, when Greenpeace started its ClickClean campaign and IT companies were negligible contributors to renewable-power purchase agreements; today Google is the world’s largest corporate purchaser of renewable energy.

Apple was at the top of Greenpeace’s list of the top greenest tech companies last year. It already boasts 100% renewable energy to power all its global data centers, and is currently focused on cleaning up its supply chain. Greenpeace has recently called out AWS, however, for a lack of renewables in Virginia’s “data center alley”, which it says is powered by “dirty energy.”

Corporate data centers have traditionally been part of the “dirty energy” problem, but they are beginning to seek out renewable energy. Some of the initiatives in this space include Digital Realty’s announcement that it will use wind energy in its 13 Dallas, TX data centers, and Iron Mountain’s Green Power Pass, which lets enterprise customers participate in renewable energy purchases for its growing number of data centers.

The Role of Edge Computing in Working Towards More Efficient Solutions

Edge computing is also increasingly being looked to as an area that can help work towards more efficient solutions. These include:

Energy Resource Efficiencies

A technical paper published on Arxiv.org earlier this month lifted the hood on Autoscale, Facebook’s energy-sensitive load balancer. Autoscale reduces the number of servers that need to be on during low-traffic hours. The paper focuses specifically on AI workloads, which run in abundance on smartphones and can lead to decreased battery life and performance issues through energy drain.

The Autoscale technology leverages AI to enable energy-efficient inference on smartphones and other edge devices. The intention is for Autoscale to automate deployment decisions and decide whether AI should run on-device, in the cloud, or on a private cloud. Autoscale could result in both cost and efficiency savings.

Other energy resource efficiency programs are also underway in the edge computing space.

Reducing High Bandwidth Energy Consumption

Another area where edge can assist is in reducing the amount of data traversing the network. This is especially important for high-bandwidth applications like YouTube and Netflix, whose usage has skyrocketed in recent years, and in recent months in particular. This is partly because each stream is composed of a large file, but also because of how video-on-demand content is distributed in a one-to-one model. High bandwidth consumption is linked to high energy usage and high carbon emissions, since it uses the network more heavily and demands greater power.

Edge computing could help optimize energy usage by reducing the amount of data traversing the network. By running applications at the user edge, data can be stored and processed close to the end user and their devices instead of relying on centralized data centers that are often hundreds of miles away. This will lead to lower latency for the end user and could lead to a significant reduction in energy consumption.

Smart Grids and Monitoring Can Lead to Better Management of Energy Consumption

Edge computing can also play a key role as an enabler of solutions that help enterprises better monitor and manage their energy consumption. Edge compute already supports many smart grid applications, such as grid optimization and demand management. By allowing enterprises to track and monitor energy usage in real time and visualize it through dashboards, these solutions let enterprises better manage their energy usage and put preventative measures in place to limit it where possible. This kind of real-time assessment of supply and demand can be particularly useful for managing limited renewable energy resources, such as wind and solar power.

Examples in the Wild

Schneider Electric

Schneider Electric, a specialist in energy management and automation, is beginning to see clients seek more efficient infrastructure and edge computing solutions. This includes helping organizations to digitize their manufacturing processes by using data to produce real-time feedback and alerts to gain efficiencies across the supply chain, including the reduction of waste and carbon emissions. Natalya Makarochkina, Senior VP, International, Secure Power at Schneider Electric, says edge computing is allowing this to happen in a scalable and sustainable way.

Makarochkina highlights two Schneider customers that have used edge computing to improve sustainability and make efficiency savings. The first is a specialty grocer that used edge computing to upgrade IT infrastructure and reduce its physical footprint and consumption requirements, which led to a 35% reduction in engineering cost and a 50% increase in deployment speed for the end user. The second is a co-working space provider that used Schneider’s edge infrastructure options to launch an on-site data center that offers IT infrastructure on demand, offering tenants greater operational efficiencies.

“Edge computing and hybrid cloud play a pivotal role in sustainability because they are designed specifically to put applications and data closer to devices and their users. This helps organisations to better react to changes in consumer demands and improve processes to create more sustainable products.”– Natalya Makarochkina, Senior VP, International, Secure Power at Schneider Electric

Section

At Section, we’re working to provide more accessible options for developers to build edge strategies that deliver on sustainability targets. Our patent-pending Adaptive Edge Engine technology allows developers to piece together bespoke edge networks that are optimized across various attributes, with carbon-neutral underlying infrastructure included in the portfolio of options. Other attributes include cost, performance, and security/regulatory requirements.

Similar to Facebook’s Autoscale, Section’s Adaptive Edge Engine tunes the required compute resources at the edge in response not only to the application’s required compute attributes (e.g., PCI compliance or carbon neutrality), but also, continuously, to the traffic served and performance observed. Our ML forecasting engine sits behind the workload placement and decision engine, which continuously places workloads and scales the underlying infrastructure (up and down) so that we use the least amount of compute while providing maximum availability and performance.

Summary

Few would argue against the expectation that technology’s immersion in our day-to-day lives will only expand. Technology providers must work together to create sustainable solutions to meet these growing demands. While not an end-all solution, edge computing has the potential to play a key role in helping to control the negative impacts that accompany rising energy demands.

Section is a LF Edge Member. To learn more about LF Edge or any of our members, visit https://www.lfedge.org/members/.

EdgeX Foundry Welcomes New Contributors!

By Blog, EdgeX Foundry

Written by Aaron Williams, LF Edge Developer Advocate

The EdgeX Foundry community has been blessed with a large number of active contributors over the last three years, many of whom have been with us since launch. But all communities need new members and different perspectives to continue to improve and grow. We also know that it can be intimidating to post your first contribution to an open source project. As such, we want to take a moment to thank the following people, who posted their first contributions to EdgeX Foundry in Q1 of 2020.

Q1 New Contributors:

  • jluous
  • Aricg
  • jinlinGuan
  • kaisawind
  • venkata-subbareddyK
  • kurokobo
  • venkata-lakshmi-penna
  • worldmaomao
  • DaveZLB

You can find these contributors on GitHub and see what other projects they are working on. We encourage our new contributors to keep up the great work, and we look forward to their next contributions. You are helping to improve and grow EdgeX and our community. Thank you for all your help!

If you would like to learn more about EdgeX Foundry, the world’s first plug-and-play, ecosystem-enabled open platform for the IoT edge, visit the Getting Started page. You’ll learn more about the project and get step-by-step directions for starting your EdgeX journey.

EdgeX Foundry has global industry support from LF Edge members and open source contributors. EdgeX offers the opportunity to collaborate on IoT solutions built using existing connectivity standards combined with members’ own proprietary innovations.

In fact, EdgeX Foundry recently announced the sixth release in its roadmap: the Geneva release offers simplified deployment, optimized analytics, secure connectivity for multiple devices and more robust security. Learn more about the Geneva release.

Visit the EdgeX Foundry website for more information or join our Slack to ask questions and engage with community members. If you are not already a member of our community, it is really easy to join. Simply visit our wiki page and/or check out our GitHub and help us get to the next 6 million downloads and more!

Private LTE/5G ICN Akraino Blueprint

By Akraino Edge Stack, Blog

Written by Amar Kapadia, member of the Akraino Edge Stack community, co-founder at Aarna Networks, Inc. and an ONAP specialist. This blog originally ran on the Aarna Networks blog. You can find more content like this here.

As one of the main contributors, I’m thrilled to state that the Private LTE/5G Integrated Cloud Native NFV/App Stack blueprint in LF Edge’s Akraino Edge Stack project got approved last month.

Given the opening up of unlicensed/licensed private spectrum all around the world (e.g. CBRS in the US), Private LTE/5G promises to be a very exciting market. Six end users (Airbus, Globe, Orange, Tata Communications, T-Mobile, and Verizon), a number of vendors (such as us), and individuals are collaborating on this blueprint demo, which will be created in a completely open manner and will contain, to the degree possible, open source components.

The key components of this blueprint are:

Private LTE/5G ICN Blueprint Software Stack

  • NFVI hardware: Standard server, switch, storage components

  • NFVI software: Kubernetes with OVN (SDN), Virtlet (to run VMs), Multus (for multiple CNI), Istio, and SD-EWAN (to connect an app across clouds). A main component in the NFVI software will also be an open source 5G UPF CNF.

  • Orchestrator: ONAP with AF integration, OpenNESS

  • Workloads (CNFs): Facebook Magma for vEPC, TIP OCN and Polaris for 5GC

  • Workloads (CNAs): We are starting with the applications in the original ICN blueprint, viz. 360° video, EdgeX Foundry, video AI/ML. However, we might change things around to collaborate with other Akraino blueprints such as the 5G MEC blueprint that is working on cloud gaming, HD video, and live broadcasting.

We will first start with Private LTE over CBRS but then quickly move over to Private 5G and edge computing.

As an open source effort, we could always use more help. Please join us if this is interesting!

The LF Edge Interactive Landscape

By Blog, Landscape, LF Edge, State of the Edge

New tool aims to help users understand and navigate the expansive edge computing ecosystem, requesting collaboration from the edge community.

Written by Molly Wojcik, Chair of the State of the Edge Landscape Working Group

A few years ago, the Cloud Native Computing Foundation (CNCF) introduced their CNCF Cloud Native Interactive Landscape, which quickly became a go-to resource for the cloud-native ecosystem. Using this as a guide and framework, the State of the Edge project has been building the LF Edge Interactive Landscape.

The LF Edge Interactive Landscape is dynamically generated from data maintained in a community-supported GitHub account. Based on user inputs and overseen by the State of the Edge Landscape Working Group, the map categorizes LF Edge projects alongside edge-related organizations and technologies to provide a comprehensive overview of the edge ecosystem.

The State of the Edge Landscape Working Group needs help from the larger edge community to continue to build out and improve this resource. Pull requests and issue submissions are welcome and encouraged, whether for new additions or for edits to existing listings.

How to Add a New Listing to the LF Edge Interactive Landscape

To add a new listing to the LF Edge Interactive Landscape, follow the steps using one of the options below:

Option 1: Submit a PR

  1. Visit the community GitHub repository at https://github.com/State-of-the-Edge/lfedge-landscape
  2. Open a pull request to add your listing to landscape.yml. Follow formatting of peer listings, making sure to include all required information and logo file:
    1. Name of organization or technology
    2. Homepage url
    3. .svg logo (Important: Only .svg formatted logos are accepted.) – see https://github.com/State-of-the-Edge/lfedge-landscape#logos for help converting/creating proper SVGs
    4. Twitter url (if applicable)
    5. Crunchbase url
    6. Assigned category (Descriptions for categories can be found in the README.md)

Full instructions available at https://github.com/State-of-the-Edge/lfedge-landscape#new-entries
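
For illustration, a landscape.yml entry generally takes the following shape (the field names mirror the CNCF landscape format this project is based on; verify against peer listings before submitting, and note that "Example Edge Co" and its URLs are placeholders):

      - item:
        name: Example Edge Co
        homepage_url: https://example.com
        logo: example-edge-co.svg
        twitter: https://twitter.com/example
        crunchbase: https://www.crunchbase.com/organization/example-edge-co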

Option 2: Open an issue

  1. Visit the community GitHub repository at https://github.com/State-of-the-Edge/lfedge-landscape
  2. Open an issue that includes all required information and logo file (reference Option 1).

Option 3: Email

  1. Email glossary-wg-landscape@lists.lfedge.org with all of the required information and logo (reference Option 1).

How to Modify a Listing in the LF Edge Interactive Landscape

To modify or make suggestions on an existing listing in the LF Edge Interactive Landscape, open an issue in the GitHub repository and be sure to include the following information:

  • Name of organization or technology, as listed in the landscape.
  • Detailed description of the modifications that you are requesting.

For more detailed information and instructions, you may refer to the README.md in the GitHub repository.

About State of the Edge

Founded in 2017, State of the Edge (recently acquired by LF Edge) provides a vendor-neutral, community-driven platform for open research on edge computing while also seeking to align the market on what edge computing truly is and what’s needed to implement it. State of the Edge publishes free research on Edge Computing, maintains the Open Glossary of Edge Computing and oversees the LF Edge Interactive Landscape. Follow State of the Edge on Twitter via @StateoftheEdge.

Molly Wojcik was recently appointed Chair of the State of the Edge Landscape Working Group. She is the Director of Education & Awareness at Section, an edge compute platform technology provider and an LF Edge member organization. Molly has been involved as an active contributor and facilitator within the Landscape Working Group since its beginnings with LF Edge in early 2019. If you have questions or would like to be involved in the LF Edge Landscape, feel free to email Molly at molly@section.io.

EdgeX Foundry Use Case: Wipro’s Surface Quality Inspection for a Manufacturing Plant

By Blog, EdgeX Foundry, Use Cases

Written by LF Edge members from Wipro. For more information about Wipro, visit their website.

The use case is “Automated surface quality inspection” for a manufacturing plant that produces “Piston Rods” used in different applications like Utility, Mining, Construction and Earth Moving.

In the manufacturing plant, the production of piston rods goes through multiple stages, such as induction hardening, friction welding, threading & polishing. Each of these stages has a quality checkpoint to detect defects early and to ensure that the rods produced are in line with design specifications & function properly. After going through this rigorous multi-stage process, a rod reaches the final station, where surface-level quality inspection happens. At this stage, the inspection happens manually, requiring a highly skilled & experienced human inspector to look for different types of defects, like scratches, material defects & handling defects, on the surface of the rods. Based on the final quality inspection results, a rod goes either to the packaging & shipment area or to the rework or scrap area.

Quality check at every stage is extremely critical to prevent quality problems down the line that lead to recalls & reputational damage. The problem with manual inspections is that they are costly, time consuming and heavily dependent on human intelligence & judgement. “Automated surface quality inspection” is an image analytics solution based on AI/ML that helps overcome these challenges. The solution identifies and classifies defects based on image analytics to enable quality assessment, and provides real-time visibility into Key Performance Indicators through dashboards to monitor plant performance.

EdgeX’s architectural tenets, its production-grade and readily available edge software stack, the visibility of long-term support with a bi-annual release roadmap, and user-friendly licensing for commercial deployments made us adopt EdgeX as the base software stack.

The readily available IoT gateway functionalities helped us focus more on building the business-application-specific components than on the core software stack needed for an edge gateway. This enabled rapid development of a proof of concept for the use case we envisioned.

Key components of the solution include:

  • Edge Gateway hardware, Industrial grade camera & a sensor to detect piston rods placed in final quality inspection station
  • Software components:
    • Gateway powered by EdgeX software stack with enhancements to support the use case
    • Deep learning model trained with previously analyzed & classified defects
    • Intel’s OpenVino toolkit for maximizing inference performance on the edge gateway
    • EdgeX Device Service Layer enhancements to enable missing southbound connectivity
    • Automated decision making in the edge by leveraging EdgeX provided local analytics capability
    • EdgeX AppSDK customizations to connect to business applications hosted on cloud
    • Manufacturing Performance Dashboard to define, create and monitor Key Performance Indicators

Once a rod reaches the inspection station, the gateway triggers the camera to take surface pictures. The captured image is fed into the inference engine running on the gateway, which looks for the presence of different types of surface defects. The inference output is fed into the business application hosted on the cloud to provide actionable insights with rich visual aids.

Analysis of historical data for similar defect patterns, correlation of data from OT systems with inference output for a given time period, and feedback to OT systems for potential corrective actions are possible future enhancements.

Benefits:

  • Reduced inspection cost: Automated quality inspections, reduced need for manual inspection
  • Centralized management: Real time visibility to plant operations from remote locations
  • Consistent performance: Less dependency on human judgement, which varies from one person to another
  • Reduced inspection time: Consistent & reliable results in a much shorter time
  • Learning on the go: Continuous retraining of deep learning models make automated inspections accurate & closer to reality
  • Available 24 x 7

If you have questions or would like more information about this use case or LF Edge member Wipro, please email naga.shanmugam@wipro.com.

What is Baetyl?

By Baetyl, Blog

Baetyl is an open edge computing framework of LF Edge that extends cloud computing, data, and services seamlessly to edge devices. It can provide temporary offline and low-latency computing services, including device connection, message routing, remote synchronization, function computing, video access pre-processing, AI inference, device resource reporting, and more.

In terms of architectural design, Baetyl takes a modular and containerized approach. Based on the modular design pattern, Baetyl splits the product into multiple modules and makes sure each of them is a separate, independent module. In general, Baetyl can fully meet users’ needs for on-demand deployment. Baetyl also adopts containerization to build images; the cross-platform nature of Docker ensures a consistent running environment on each operating system. In addition, Baetyl isolates and limits the resources of containers, allocating the CPU, memory, and other resources of each running instance precisely to improve the efficiency of resource utilization.

Advantages

  • Shielding Computing Framework: Baetyl provides two official computing modules (the Local Function Module and the Python Runtime Module) and also supports custom modules, which can be written in any programming language or any machine learning framework.
  • Simplify Application Production: Baetyl combines with the Cloud Management Suite of BIE and many other Baidu Cloud products (such as CFC, Infinite, EasyEdge, TSDB, and IoT Visualization) to provide data calculation, storage, visual display, model training and many more capabilities.
  • Service Deployment on Demand: Baetyl adopts containerization and modularization; each module runs independently and in isolation. Developers can choose which modules to deploy based on their own needs.
  • Support for Multiple Platforms: Baetyl supports multiple hardware and software platforms, such as x86 and ARM CPUs, and the Linux and Darwin operating systems.

Components

As an edge computing platform, Baetyl not only provides features such as underlying service management, but also provides some basic functional modules, as follows:

  • Baetyl Master is responsible for the management of service instances (start, stop, supervise, etc.) and consists of the Engine, API, and Command Line. It supports two modes of running services: native process mode and docker container mode.
  • The official module baetyl-agent is responsible for communication with the BIE cloud management suite, which can be used for application delivery, device information reporting, etc. Certificate authentication is mandatory to ensure transmission security.
  • The official module baetyl-hub provides message subscription and publishing functions based on the MQTT protocol, and supports four access methods: TCP, SSL, WS, and WSS.
  • The official module baetyl-remote-mqtt is used to bridge two MQTT Servers for message synchronization, and supports the configuration of multiple message route rules.
  • The official module baetyl-function-manager provides computing power based on the MQTT message mechanism: flexible, highly available, scalable, and fast to respond.
  • The official module baetyl-function-python27 provides the Python 2.7 function runtime, which can be dynamically started by baetyl-function-manager.
  • The official module baetyl-function-python36 provides the Python 3.6 function runtime, which can be dynamically started by baetyl-function-manager.
  • The official module baetyl-function-node85 provides the Node 8.5 function runtime, which can be dynamically started by baetyl-function-manager.
  • The SDK (Golang) can be used to develop custom modules.

Architecture

[Architecture diagram: design_overview.png]

Contributing

If you are passionate about contributing to the open source community, Baetyl welcomes both code and documentation contributions. For more details, please see: How to contribute code or documentation to Baetyl.

Contact us

As the first open edge computing framework in China, Baetyl aims to create a lightweight, secure, reliable, and scalable edge computing community with a good ecological environment. To help Baetyl develop even better, if you have advice or suggestions about Baetyl, please contact us: