LF Edge Demos at Open Networking & Edge Summit

Open Networking & Edge Summit, which takes place virtually on September 28-30, is co-sponsored by LF Edge, the Linux Foundation and LF Networking. With thousands expected to attend, ONES will be the epicenter of edge, networking, cloud and IoT. If you aren’t registered yet – it takes two minutes to register for US$50 – click here.

Several LF Edge members will be at the conference leading discussions about trends, presenting use cases and sharing best practices. For a list of LF Edge-focused sessions, click here and add them to your schedule. LF Edge will also host a pavilion – in partnership with our sister organization LF Networking – that will showcase demos, including the debut of two new ones featuring collaborations between Project EVE and Fledge, and between Open Horizon and Secure Device Onboard. Check out the sneak peek of the demos below:

Managing Industrial IoT Data Using LF Edge (Fledge, EVE)

Presented by FLIR, Dianomic, OSIsoft and ZEDEDA, and making its debut at ONES, this demo showcases the strengths of Project EVE and Fledge. It will show how the two open source projects work together to securely manage, connect, aggregate, process, buffer and forward data from any sensor, machine or PLC to existing OT systems and any cloud. Specifically, it will show video and data feeds from a FLIR IR camera being managed as described.

Real-Time Sensor Fusion for Loss Detection (EdgeX Foundry)

Presented by LF Edge members HP, Intel and IOTech, this demo showcases the strengths of the Open Retail Initiative and EdgeX Foundry. Learn how different sensor devices can use LF Edge’s EdgeX Foundry open middleware framework to optimize retail operations and detect loss at checkout. The sensor fusion is implemented using a modular approach, combining point-of-sale, computer vision, RFID and scale data into a POC for loss prevention.

This demo was featured at the National Retail Federation Show in January. More details about the demo can be found in HP’s blog and Intel’s blog.

Low-touch automated onboarding and application delivery with Open Horizon and Secure Device Onboard

Presented by IBM and Intel, this demo features two of the newest projects accepted into the LF Edge ecosystem – Secure Device Onboard was announced in July while Open Horizon was announced in April.

An OEM or ODM can use SDO utilities to generate a voucher that is tied to a specific device and, upon purchase, send the voucher to the purchaser. With LF Edge’s Open Horizon and Secure Device Onboard integration, an administrator can load the voucher into Open Horizon and pre-register the device. Once the device is powered on and connected to the network, it automatically authenticates, downloads and installs the Open Horizon agent, and begins negotiating to receive and run relevant workloads.

For more information about ONES, visit the main website: https://events.linuxfoundation.org/open-networking-edge-summit-north-america/. 

Exploration and Practices of Edge Computing: Cloud Managing Containerized Devices

Written by Gavin Lu, LF Edge member, EdgeX Foundry China Project Lead and R&D Director in the VMware Office of the CTO

As an industry leader with vast experience and knowledge, Gavin has been writing a series of articles focused on edge computing. These articles are posted on his personal blog and are posted here with his permission. To read more content from Gavin, visit his website.

Introduction

The previous article introduced the cloud-managed virtualization device solution. This article describes Project Nebula, which unifies the management of containerized devices, edge applications, and data-analysis cloud services.

Nebula Architecture

Project Nebula is designed based on the following key ideas:

  • Agnostic to device CPU architecture, supporting both x86 and ARM;
  • Agnostic to edge application frameworks, supporting EdgeX Foundry and other frameworks that can be packaged and run;
  • Agnostic to data analytics services, supporting on-premise and cloud deployment;
  • Support small to large scale deployment;
  • Support an end-to-end multi-tenant operation model from device to cloud.

EdgeX Foundry Architecture

Nebula supports the EdgeX Foundry framework, and we have already published a live test bed at https://18.189.42.126/. Those interested in Nebula can contact yixingj@vmware.com to register for a trial and obtain installation and user guides with detailed information.

Nebula Demo

Installation

Nebula is built on a containerized microservice architecture and is distributed by default in OVA format. As with the Pallas architecture introduced in the previous article, although the Nebula package is encapsulated in an OVA, it does not depend on any specific virtualization infrastructure or cloud platform. Technically, it could be converted to other formats, or installed on any cloud platform that supports the OVA format.

The basic resource requirements of Nebula are:

  • CPU: 2 virtual CPU cores
  • Memory: 8GB
  • Storage: 150GB

Its installation process is similar to that of any other OVA, and users can log in as the administrator after completion.

Nebula Service Console

Vendor Portal

After the installation is complete, users can log in to the vendor portal as an administrator at the address prompted in the VM console, as shown above, and perform user management.

Nebula Management Portal

In Nebula, edge application services are defined as follows: a Service can contain multiple Versions, and a Version contains multiple Service Components.

Edge Service Hierarchy
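
As a mental model only, the containment hierarchy could be sketched as below in Python; the field names are illustrative assumptions, not Nebula’s actual schema.

from dataclasses import dataclass, field
from typing import List

# Illustrative containment model of Nebula's edge service hierarchy.
# Field names are assumptions, not Nebula's actual data model.
@dataclass
class ServiceComponent:
    name: str          # one container in the application
    image: str         # its container image reference

@dataclass
class Version:
    number: str        # e.g. "1.0.0"
    components: List[ServiceComponent] = field(default_factory=list)

@dataclass
class Service:
    name: str          # the edge application service
    versions: List[Version] = field(default_factory=list)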

For each service created, it is necessary to specify parameters and resource requirements such as version, CPU platform, memory, storage and network, to facilitate verification across full life-cycle management.

Vendors can upload a set of EdgeX Foundry applications packaged in container images, and define categories, dependencies between containers, resource parameters, startup order, and parameters of connected data analysis cloud services.

After the release, users can see and deploy these edge services.

Device Registration

Before users actually deploy EdgeX Foundry applications, they must first register the devices they will use under their Nebula accounts.

Users need to download the Nebula agent package, nebulacli.tar, and run it on the device to complete the registration. This registration step can be performed manually, or it can be automated in batch operations for OEMs:

./install.sh init -u user-account -p user-account-password -n user-device-name
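
For batch registration, one option is to script the same command over an inventory of devices. The sketch below is hypothetical: the devices.csv file, its columns, the SSH access, and the credentials are all placeholders, not part of the Nebula product.

import csv
import subprocess

# Hypothetical OEM batch registration: SSH into each device and run the
# Nebula agent installer once per inventory entry.
with open("devices.csv") as f:  # assumed columns: host,device_name
    for row in csv.DictReader(f):
        subprocess.run(
            ["ssh", f"root@{row['host']}",
             "./install.sh", "init",
             "-u", "user-account",
             "-p", "user-account-password",
             "-n", row["device_name"]],
            check=True,
        )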

User Portal

After completing the device registration, users can install and manage EdgeX Foundry or other edge applications released in advance on the Nebula service by vendors. Users can find suitable applications in the catalog.

After selection, users can further specify parameter settings for the deployment in the drag-and-drop wizard; these map to the parameter values defined earlier by the vendor.

After all parameters are set, the actual deployment can be carried out, either in batch or multiple times to multiple devices. After deploying EdgeX Foundry applications, users can monitor device resources and application runtime status in real time.

Nebula provides complete RESTful API documentation, with which users can automate operations to deploy EdgeX Foundry applications at scale.
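
As a rough illustration of such automation, the sketch below deploys a released service to every registered device. The base URL, endpoint paths, payload fields and authentication scheme are hypothetical placeholders; consult the Nebula API documentation for the real interface.

import requests

# Hypothetical automation against Nebula's RESTful API; endpoint paths,
# payload fields and the bearer-token scheme are illustrative assumptions.
BASE = "https://nebula.example.com/api/v1"
HEADERS = {"Authorization": "Bearer user-api-token"}

# Deploy one pre-released EdgeX service to every registered device.
devices = requests.get(f"{BASE}/devices", headers=HEADERS).json()
for device in devices:
    resp = requests.post(
        f"{BASE}/deployments",
        headers=HEADERS,
        json={"deviceId": device["id"],
              "service": "edgex-foundry",
              "version": "1.0.0"},
    )
    resp.raise_for_status()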

Next

From the second article through this one, I have introduced the basic methods of building and managing virtualized and containerized devices from the cloud. But I have not yet answered the question of how to deal with single-point device failures. In contrast to traditionally inflexible and inefficient full-redundancy or external NAS solutions, the next article will introduce device clusters on a hyperconverged architecture.

EdgeX Foundry Welcomes New Contributors for Q2

Written by Aaron Williams, LF Edge Developer Advocate

The second quarter has been really busy for the EdgeX community. We released Geneva and are working hard on Hanoi, our fall release. This release was made possible through the hard work of 52 community members contributing code on GitHub over the past three months. Over the past three years, EdgeX has enjoyed 117 unique contributors, and the community is continuously growing. We want to welcome and recognize our four first-time contributors from Q2.

We encourage our new contributors to keep up the great work and we look forward to their next contribution.  You are helping to improve and grow EdgeX and our community.

Q2 New Contributors’ Usernames:

nbfhscl

bill-mahoney

charles-knox-intel

wogsland

You can find these contributors on GitHub and see what other projects they are working on.

We would be remiss if we didn’t thank our other contributors who posted code, helped with documentation, or answered questions on our Slack workspace in Q2. We had over 80k lines of code committed by 50 unique developers (66 YTD) making 665 commits (1.3k YTD). Here are our top ten committers for the second quarter:

lenny-intel
tonyespy
ernestojeda
rsdmike
cherrycl
difince
lranjbar
iain-anderson
hahattan
jamesrgregg

You can find most of them on our Slack workspace (edgexfoundry.slack.com), where we have had over 2,000 messages from 101 members! On our Slack channels, you can ask questions and get help, or follow our working groups’ channels.

Do you want to get involved with EdgeX Foundry, the world’s first plug-and-play, ecosystem-enabled open platform for the IoT edge, or just learn more about the project and how to get started? Either way, visit our Getting Started page and you will find everything you need to get going. We don’t just need developers; we could use tech writers, translators, and many other disciplines.

EdgeX Foundry is an open source project hosted by LF Edge that is building a common open platform for IoT Edge computing. The interoperable platform enables an ecosystem of plug-and-play components that unifies the marketplace and accelerates the deployment of IoT solutions across a wide variety of industrial and enterprise use cases.

EdgeX is unique in its scope, broad industry support, credibility, investment, vendor-neutrality, and Apache 2.0 open source licensing model. As such, EdgeX is a key enabler of digital transformation for IoT use cases and businesses across many different vertical markets.

EdgeX offers all interested developers or companies the opportunity to collaborate on IoT solutions built using existing connectivity standards combined with their own proprietary innovations.

Visit the EdgeX Foundry website for more information, or join our Slack to ask questions and engage with community members. If you are not already a member of our community, it is really easy to join. Simply visit our wiki page and/or check out our GitHub and help us get to the next 6 million downloads and more!

LF Edge Member Spotlight: Mocana

The LF Edge community comprises a diverse set of member companies and people representing the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sat down with Dave Smith, President of Mocana, to discuss the importance of open source, collaboration with industry leaders in edge computing, security, how Mocana leverages the EdgeX Foundry framework, and the impact of being part of the LF Edge ecosystem.

Can you tell us a little about your organization?

Mocana revolutionizes OT and IoT with cyber protection as a service for trustworthy systems. The company helps device operators bridge the adoption challenge between vendors and service providers, and delivers key cybersecurity benefits to the emerging 5G network, edge computing applications, and SD-WAN enterprise networks. Mocana protects the content delivery supply chain and device lifecycle for tamper-resistance from manufacture to end of life, with root-of-trust and chain-of-trust anchors. Mocana measures devices for sustained integrity and the trustworthiness of operations and data to power artificial intelligence/machine learning analytics. The Mocana team of security professionals works with semiconductor vendors and certificate authorities to integrate with emerging technologies to comply with data privacy and protection standards. The goal of cyber protection as a service is to eliminate the initial cost of modernization for device vendors and empower service providers to offer subscription-based services for the effective and efficient expansion of corporate and industrial digital transformation strategies.

Mocana’s core technology protects more than 100 million devices today, and is trusted by more than 200 of the largest energy, government, healthcare, manufacturing, IoT, telecommunications & networking, and transportation companies globally.

Why is your organization adopting an open-source approach?

Mocana is eager to support the global body of customers adopting the EdgeX Foundry open source solution. OpenSSL is by far the most broadly integrated and implemented open source security stack. It comes freely available and is distributed as part of the LF Edge distributions. However, in recent years OpenSSL has come under scrutiny because of critical security vulnerabilities and the resulting issuance of CVEs. The Heartbleed vulnerability from 2014 was a notable exploit, and there are several other recent CVEs that have generated concern in the information security community. The strategy of taking a defensive position through ongoing patching of vulnerabilities continues to challenge efforts to protect mission-critical OT environments.

Since the founding of the LF Edge projects, the goal has been to pull together a body of code to standardize microservices delivery and orchestration for edge computing systems and devices. The projects continue to advance commercial third-party solutions that address key functional areas, especially for mission-critical and vertical industry applications. Mocana’s solution is based upon a commercially supported, NIST FIPS 140-2 certified cryptographic module. Many of the company’s Fortune 500 customers have realized significant benefits from the ability to quickly migrate from default products integrated with OpenSSL to Mocana’s offering, leveraging its OpenSSL connector.

Why did you join LF Edge, and what sort of impact do you think LF Edge has on edge computing, networking, and IoT industries?

Developing, deploying, operating, and managing IoT and edge computing requires a community of key, forward-looking technology innovators. The IoT-edge ecosystem spans a wide supply chain from first silicon to the cloud, and includes system integrators, end-user operators and asset owners. Mocana was one of the first 50 founding members of EdgeX Foundry in 2017. Early on, the company took an industry leadership position by driving adoption through off-the-shelf solutions developed through stakeholder collaboration. This approach addressed a variety of common use cases delivered by new edge computing technologies and applications, and required much more than a reference architecture. Mocana recognized the need for the user community and developing ecosystem to leverage community-developed code (e.g., on GitHub) to reduce feature and code duplication and enable the broadest possible market adoption. The benefit to customers is reduced implementation risk for such new technologies and accelerated time to market for community stakeholders.

What do you see as the top benefits of being part of the LF Edge community?

Mocana values LF Edge’s ecosystem breadth and depth of community members and stakeholders, which includes chip companies, device ODMs, OEMs, carrier service providers, and asset owner/operators. Each contributes key use case challenges that have been invaluable for ensuring that LF Edge can support key technology developments and marketplace challenges.

What sort of contributions has your team made to the community, ecosystem through LF Edge participation?

As a key contributor to the community, Mocana worked with the EdgeX Foundry Security Working Group and offered insights and guidance on vital security use cases. The company ensured there was always a path to address developing cybersecurity mandates and best practices from the NIST Cybersecurity Framework and ISA/IEC 62443. As a result, the community has delivered a number of key security functions: it added a reverse proxy, provided a method to secure the key store with the ability to manage it, and integrated session-based access security into the microservices.

Perhaps most important, Mocana has enabled the community to incorporate a scalable, robust, and commercially supported cybersecurity offering for EdgeX Foundry production development and deployments.

Mocana developed its OpenSSL connector to ease migration from default project configurations with OpenSSL to Mocana’s TrustCenter and TrustPoint offerings. This solution aligns well with the project’s objectives to accelerate adoption and deployments of standardized implementations addressing key edge computing use cases with microservices.

What do you think sets LF Edge apart from other industry alliances?

Delivering actual code that organizations can download, compile, run, and then operate is a tremendous benefit compared to most other industry alliances. It is a major differentiator in comparison to groups that only suggest frameworks and prescriptions of possible features, implementations, and suggested “best practices.”

How will LF Edge help your business?

Demand is growing for edge computing solutions. Hitting 5 million downloads of the EdgeX Foundry SDK in May is proof of that. Mocana is also beginning to see initial commercial success and adoption in the innovation and R&D centers of key community members. The company’s ability to offer its fully integrated TrustCenter and TrustPoint solutions, leveraging an OpenSSL connector, provides a clear and rapid path to EdgeX device security lifecycle management and supply chain provenance. Plus, it will increase adoption of Mocana’s latest edge device offerings within the community.

What advice would you give to someone considering joining LF Edge?

Find your niche in one of LF Edge’s nine collaborative projects where your offering can deliver the most value and contribute. There has never been a better time to participate in this open source community, which is looking for complementary solutions and ways to deepen the ecosystem.

To learn more about EdgeX Foundry, click here. To find out more about our members or how to join LF Edge, click here.

Additionally, if you have questions or comments, visit the LF Edge Slack or the EdgeX Foundry Slack to share your thoughts and engage with community members.

Exploration and Practices of Edge Computing: Cloud Managing Virtualized Devices

Written by Gavin Lu, LF Edge member, EdgeX Foundry China Project Lead and R&D Director in the VMware Office of the CTO

As an industry leader with vast experience and knowledge, Gavin has been writing a series of articles focused on edge computing. These articles are posted on his personal blog and are posted here with his permission. To read more content from Gavin, visit his website.

Introduction to the Architecture of Pallas

The previous article introduced how to build and install virtualized devices, but did not cover how to manage large-scale virtualized devices from the cloud. This article introduces the architecture of Pallas to achieve that goal.

Pallas is the second milestone of project Asteroid, after Ceres. This release solves the basic problems of managing virtualized devices from the cloud:

  • The device-cloud connection is narrow and unstable
  • Devices exist at large scale
  • Security concerns are serious, often forbidding any open ports

The main functions of the Pallas architecture are implemented by the device manager and the agent virtual machine. Its key design points are:

  • Adopt the MQTT protocol, which suits narrow-band, unstable network connections
  • The device manager in the cloud adopts a typical Internet-architecture design, with multiple layers, microservices, multiple buffers, and read-write separation
  • The device registers automatically with the device manager; the device initiates all connections, and the cloud only responds (see the sketch after this list)
  • All ports on the device are closed; it runs across public networks without VPN or SD-WAN
  • Each device has a randomly created, globally unique and permanent ID
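
To make the connection model concrete, here is a minimal sketch of the device-initiated pattern using the paho-mqtt Python client. The broker address, topic names and payloads are illustrative assumptions, not Pallas internals.

import json
import time
import uuid

import paho.mqtt.client as mqtt

# The device dials out to the broker, so no inbound port is ever opened.
DEVICE_ID = str(uuid.uuid4())          # randomly created, globally unique ID
BROKER_HOST = "manager.example.com"    # assumed device-manager broker
BROKER_PORT = 1883

client = mqtt.Client(client_id=DEVICE_ID)

def on_connect(client, userdata, flags, rc):
    # Register with the device manager, then listen for its responses.
    client.subscribe(f"devices/{DEVICE_ID}/commands")
    client.publish("devices/register", json.dumps({"id": DEVICE_ID}))

client.on_connect = on_connect
client.connect(BROKER_HOST, BROKER_PORT, keepalive=60)
client.loop_start()

# Periodically report status; all traffic is outbound from the device.
while True:
    client.publish(f"devices/{DEVICE_ID}/status", json.dumps({"ok": True}))
    time.sleep(30)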

Installation requirements

  • vSphere Hypervisor is installed on the device.
    • Device agent
      • CPU: 1 x86-64 vCore
      • Memory: 512MB
      • Storage: 5GB
  • Device manager
    • CPU: 2 vCores
    • Memory: 8GB
    • Storage: 100GB
    • Network: Open ports 443 and 1883, the device is visible

Note: For the agent to work properly, the device requires at least a vSphere Essentials Kit license; alternatively, apply for the Enterprise Edition 60-day trial using the method described in the previous article.

Download and Installation

Download Pallas

The Pallas installation package can be downloaded from the Flings website of the VMware CTO office. You need to register a VMware community account in advance. The download package contains the device manager OVA, the agent VM OVA and the user guide.

Note: The download package provided on the Flings website is a technical preview and does not include commercial support. It is recommended that you carefully read its installation and user guide before initializing the installation.

Install device manager

Installing the device manager is the same as the typical installation of a virtual machine OVA, and it can be completed by following the steps in the Pallas installation guide in sequence.

Note: Although the device manager is packaged as an OVA, it does not depend on any specific virtualization infrastructure or cloud platform. Alternatively, it can be converted from OVA to other formats, or installed on any cloud platform that supports the OVA format.

Install device agent

In order to simplify the process of installing device agent, it is recommended to use the virtual machine OVA instead of the binary package.

 
You can use OVF Tool to install the OVA remotely, or leverage the ESXi UI to install it directly, as below.

ovftool --acceptAllEulas --name=pallas_agent --datastore=DATASTORENAME -dm=thin --X:injectOvfEnv --powerOn pallas_agent_ubuntu.ova 'vi://USERNAME:PASSWORD@ESXIHOST'

Configuration and Use

Configuration

Before you start using it, it is critical to configure the device agent. To ensure communication security, remember to modify the /etc/vmware/pallas_agent/pallas_agent.conf file and then encrypt the password in the following manner:

python3 /root/agent/install/encrypte_password.py YOUR-PASSWORD

If the device connects to the cloud via WiFi or a telco carrier mobile network, the corresponding PCIe or USB NIC needs to be passed through to the device agent virtual machine. In this way, the device agent virtual machine can register itself with the device manager.

Use

After the device is registered with the device manager, you can perform CRUD-like operations on users, devices, and virtual machines, as in other ordinary management tools.

It should be noted that because the metadata communication protocol between the device manager and the device is based on MQTT, status updates of the device and the virtual machines on top of it complete asynchronously. If a task does not appear to complete in “real time”, you may need to wait a while or refresh the status.

For tasks such as deploying a virtual machine or patching a device, large files are downloaded via HTTPS with automatic resumption of interrupted transfers, as sketched below.
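
Such resumable downloads are conventionally built on HTTP Range requests. The sketch below illustrates the mechanism under that assumption; the URL and file name are placeholders.

import os
import requests

# Resume an interrupted large-file download using an HTTP Range request.
URL = "https://manager.example.com/images/edge-vm.ova"  # placeholder URL
DEST = "edge-vm.ova"

# Ask the server to continue from however many bytes we already have.
offset = os.path.getsize(DEST) if os.path.exists(DEST) else 0
headers = {"Range": f"bytes={offset}-"} if offset else {}

with requests.get(URL, headers=headers, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    # 206 means the server honored the Range header; 200 means a full restart.
    mode = "ab" if resp.status_code == 206 else "wb"
    with open(DEST, mode) as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)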
 
To complete all the functions above, there is no need for mutual IP visibility between the manager and the device, nor for the installation of any VPN or SD-WAN. The device can also sit safely behind a firewall, NAT, or gateway.

With the support of these basic functions, it is easy to scale out, manage large numbers of virtualized devices from the cloud, and deploy edge applications such as the EdgeX Foundry framework on them. In a workshop at EdgeX Foundry China Day in December 2019, we demonstrated the deployment of EdgeX Foundry-based edge applications with cloud management of virtualized devices.

Next

The previous two articles described how to build and install virtualized devices, and how to manage virtualized devices from the cloud. 

As noted in the preface, in addition to virtualized devices, another solution is containerized devices.

In fact, the use of containerized devices is very common, and it often implies local orchestration plus cloud-based operation and maintenance:

  • Local orchestration means that single-point device failures cannot be handled well. Even the most common deployment approach of EdgeX Foundry places the core microservice container instances on one single device.
  • Cloud operation and maintenance means managing containerized devices from the cloud. Most manufacturers have their own specialized solutions, which brings another problem: technology fragmentation.

The solutions to these two problems will be discussed in subsequent chapters. The next article will introduce a cloud-managed containerized device solution to solve the fragmentation problem.

EdgeX Foundry Device Actuation from the Cloud

Written by Jason Bonafide, EdgeX Foundry Contributor and Principal Software Engineer at Dell Technologies

EdgeX Foundry is a platform that serves as middleware in edge computing, sitting between physical sensing and actuating devices and information technology (IT) systems. EdgeX’s interoperability enables communication with real-world devices, with simplicity in mind.

EdgeX Foundry is composed of several interoperable service layers. The layer we will focus on in this tutorial is the Device Services Layer. The role of a Device Service is to connect “things” (sensors and devices) into EdgeX. If you are looking to establish communication with devices and perform actions on conditional system state, this tutorial is for you!

At Dell Technologies, we wanted to establish bi-directional communication between simulated devices on the edge (in an isolated network) and EdgeX Foundry. This enabled us to deploy a single up-to-date EdgeX Foundry cluster in VMware’s One Cloud, and it speeds up the process of bootstrapping systems on the edge that showcase the variety of protocols supported by EdgeX. Demonstrating this flow is the goal of this tutorial.

Tools and Technologies

While this tutorial aims to suit the needs of a variety of setups, the following tools and technologies are used in this tutorial:

System Overview

Figure A contains the following components:

  • MQTT Device Service: Application which connects our Temperature Sensor to EdgeX.
  • Temperature Sensor: Simulated MQTT device on the edge.
  • EdgeX Foundry: EdgeX Core Services which handle the device readings, data, and actuation.

Device Overview

EdgeX Foundry’s MQTT Device Service leverages three topics when integrating a device with the platform:

  • Device-list Protocol Topic: used by the MQTT Device Service to invoke commands on the device. In our example, we refer to this topic as CommandTopic. This configuration can be modified in configuration.toml.
  • Driver IncomingTopic: used by the MQTT Device Service to receive data and publish it to core-data for persistence. This configuration can be modified in configuration.toml.
  • Driver ResponseTopic: used by core-command to receive responses from device actuation. This configuration can be modified in configuration.toml.

Simulated Temperature Sensor Controller Device

  • Publishes a temperature reading every 5 seconds to DataTopic.
  • Subscribes to CommandTopic, which carries commands from edgex-core-command.
  • Publishes a response for each inbound command to ResponseTopic.

The application used to simulate the temperature sensor can be found here.
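
For reference, a minimal Python sketch of that behavior is shown below, using the paho-mqtt client. The topic names come from this tutorial; the payload fields and the value range are illustrative assumptions rather than the linked application’s exact format.

import json
import random
import time

import paho.mqtt.client as mqtt

# Sketch of the simulated sensor: topic names follow the tutorial;
# payload fields and value ranges are assumptions.
BROKER_HOST = "k8s.cluster"   # your MQTT broker host
BROKER_PORT = 1883

client = mqtt.Client(client_id="TemperatureSensor")
client.username_pw_set("admin", "public")

def on_connect(client, userdata, flags, rc):
    # Listen for actuation commands from edgex-core-command.
    client.subscribe("CommandTopic")

def on_message(client, userdata, msg):
    # Echo the request back on ResponseTopic with a reading attached,
    # so core-command can correlate the response with its request.
    request = json.loads(msg.payload)
    request["temperature"] = str(round(random.uniform(100.0, 120.0), 6))
    client.publish("ResponseTopic", json.dumps(request))

client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, BROKER_PORT)
client.loop_start()

# Publish a temperature reading to DataTopic every 5 seconds.
while True:
    reading = {"name": "TemperatureSensor",
               "cmd": "temperature",
               "temperature": str(round(random.uniform(100.0, 120.0), 6))}
    client.publish("DataTopic", json.dumps(reading))
    time.sleep(5)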

Defining Device Profiles

As a pre-requisite to configuring the MQTT Device Service, we will need to define the Device Profiles for the devices.

A Device Profile defines the device’s values and operation methods (read or write). Together, these are essential in defining “what” the device exposes and “how” EdgeX Foundry can interact with it.

Each Device Profile contains the following sections:

  • Identification: fields which pertain to the identification of a Device Profile.
  • DeviceResources: specification of sensor values within a device which may be read from or written to either individually or as part of a deviceCommand.
  • DeviceCommands: defines access for reads and writes across multiple simultaneous device resources.
  • CoreCommands: specifies the commands which are available via the core-command microservice.

Temperature Sensor Device Profile

# Identification properties
name: "TemperatureSensor"
manufacturer: "Generic"
model: "MQTT-123"

# Labels that we can use for filtering especially when invoking endpoints throughout the microservices.
labels:
  - "temperature"
  - "sensor"
  - "controller"
  - "mqtt"
description: "Simulated temperature sensor controller device"

# The Sensor is a simple one which only provides read-only access to its constantly updating temperature value.
deviceResources:
  - name: temperature
    description: "Sensor temperature value"
    properties:
      value:
        { type: "string", readWrite: "R" }
      units:
        { type: "string", readWrite: "R" }

# Commands that we are concerned with in regards to interacting with the device.
deviceCommands:
  - name: temperature
    get:
      - { index: "1", operation: "get", deviceResource: "temperature" }

# Defining of endpoints within core-command that we can use to interact with the sensor.
coreCommands:
  - name: temperature
    get:
      path: "/api/v1/device/{deviceId}/temperature"
      # 200 response with the temperature value in the response.
      responses:
        - code: "200"
          description: "Get the temperature"
          expectedValues: [ "temperature" ]
        - code: "500"
          description: "internal server error"
          expectedValues: []

Configuring MQTT Device Service

EdgeX Foundry offers an MQTT Device Service, which we will configure for our devices.

Host Configuration

While we intend to communicate with our device on a local network, we will still want to configure the Host for device-mqtt-go to be localhost. Later on in this tutorial, we will talk about port-forwarding to this process.

Each system may look different as to where you can find EdgeX and how you may connect to it. Be sure to update the Host and Port configuration properties for the clients. The device service will need to talk to some Core EdgeX Services.

Device Configuration

We leverage [[DeviceList]] to create our pre-defined Temperature Sensor.

In this section, we flesh out the device’s metadata.

  • Name: TemperatureSensor
  • Profile: TemperatureSensor
  • Description: 'Simulated MQTT Temperature Sensor device'
  • Labels: [ 'MQTT', 'Temperature', 'Sensor' ]

Next we define MQTT protocol configuration:

  • Schema: tcp
  • Host: [MQTT Broker Host] – in my case, this points to our eclipse-mosquitto instance in the Kubernetes cluster.
  • Port: [MQTT Broker Port] – in my case, this points to the exposed NodePort for eclipse-mosquitto.

Driver Configuration

Other than the general MQTT configuration, which will need to point to the eclipse-mosquitto instance, you may or may not need to update the IncomingTopic and ResponseTopic. The example we use in this tutorial uses the default configuration values provided with the Device Service example.

  • IncomingTopic: DataTopic – the topic to which our device pushes its temperature values.
  • ResponseTopic: ResponseTopic – the topic on which the response from a command invocation is published.

Finalized configuration.toml

IP addresses have been redacted and should be modified to fit your scenario.

# Turning the log-level to DEBUG so that we can capture publish/subscribe actions
[Writable]
LogLevel = 'DEBUG'

# Configuration for the device MQTT service:
# Set the Host to my machine's IP address.
# Leaving the defaults for the remaining configurations.
[Service]
BootTimeout = 30000
CheckInterval = '10s'
ClientMonitor = 15000
Host = 'localhost'
Port = 49982
Protocol = 'http'
StartupMsg = 'device mqtt started'
Timeout = 5000
ConnectRetries = 10
Labels = []
EnableAsyncReadings = true
AsyncBufferSize = 16

# We are not using the Registry within this example
[Registry]
Host = 'localhost'
Port = 8500
Type = 'consul'

# Using defaults
[Logging]
EnableRemote = false
File = ''

# Configure clients for the other microservices:
# Pointing at the EdgeX deployments in the VMware One Cloud.
[Clients]
  [Clients.Data]
  Protocol = 'http'
  Host = 'k8s.cluster'
  Port = 48080

  [Clients.Metadata]
  Protocol = 'http'
  Host = 'k8s.cluster'
  Port = 48081

  [Clients.Logging]
  Protocol = 'http'
  Host = 'k8s.cluster'
  Port = 48061

# Configuration for the device:
# Ensure that ProfilesDir points to the directory where the device profile configurations reside.
[Device]
DataTransform = true
InitCmd = ''
InitCmdArgs = ''
MaxCmdOps = 128
MaxCmdValueLen = 256
RemoveCmd = ''
RemoveCmdArgs = ''
ProfilesDir = './res/lf-edge'
UpdateLastConnected = false

# Pre-define devices:
# Temperature Sensor and AC Fan devices.
# Configured to point at the MQTT broker in the cloud.
# Using CommandTopic as the topic which will handle actuating the devices.
# Configure automated events on the temperature device resource.
[[DeviceList]]
Name = 'Temperature Sensor'
Profile = 'TemperatureSensorProfile'
Description = 'Simulated MQTT Temperature Sensor device'
Labels = [ 'MQTT', 'Temperature', 'Sensor' ]
  [DeviceList.Protocols]
    [DeviceList.Protocols.mqtt]
    Schema = 'tcp'
    Host = 'k8s.cluster'
    Port = '1883'
    ClientId = 'CommandPublisher'
    User = 'admin'
    Password = 'public'
    Topic = 'CommandTopic'
  [[DeviceList.AutoEvents]]
  Frequency = '20s'
  OnChange = false
  Resource = 'temperature'

# Driver configs:
# DataTopic is the topic the Device Service will use to provide data to Core Data.
# References the MQTT broker in the cloud.
# ResponseTopic is the topic which the device scripts will use to communicate responses to command invocations.
[Driver]
IncomingSchema = 'tcp'
IncomingHost = 'k8s.cluster'
IncomingPort = '1883'
IncomingUser = 'admin'
IncomingPassword = 'public'
IncomingQos = '0'
IncomingKeepAlive = '3600'
IncomingClientId = 'IncomingDataSubscriber'
IncomingTopic = 'DataTopic'
ResponseSchema = 'tcp'
ResponseHost = 'k8s.cluster'
ResponsePort = '1883'
ResponseUser = 'admin'
ResponsePassword = 'public'
ResponseQos = '0'
ResponseKeepAlive = '3600'
ResponseClientId = 'CommandResponseSubscriber'
ResponseTopic = 'ResponseTopic'
ConnEstablishingRetry = '10'
ConnRetryWaitTime = '5'

Router Port-Forwarding

Given a scenario where EdgeX Foundry is deployed in the cloud and our devices are on a local network, one option is to leverage port-forwarding on your router.

In my scenario, EdgeX Foundry exists in VMware’s One Cloud while my MQTT Device Service and device simulations reside on my local home network. While this setup is not recommended on a home network (for security reasons), I am going to use my home network and configure port-forwarding on my router to demonstrate this flow for similar environments. This allows me to register my MQTT Device Service within EdgeX Foundry using my public IP address.

With the help of port-forwarding, I can configure my router to forward all requests to http://[PUBLIC IP]:49982 to the machine which is running the MQTT Device Service on port 49982.

The flow demonstrated in Figure B assumes ingress/egress routing supports communication between both entities.

Actuating MQTT Devices from the cloud

Before diving in, here are the host and port configurations I will use to access EdgeX:

  • edgex-core-data: k8s.cluster:30800
  • edgex-core-metadata: k8s.cluster:30801
  • edgex-core-command: k8s.cluster:30802

Earlier, we configured the MQTT Device Service to run on localhost on a machine within a local network. If your setup is similar to mine, this will not be accessible from a cloud environment. Fortunately, we can use core-metadata’s API to modify the URLs at which the Device Service can be accessed from the cloud. Since device-mqtt-go is still running on localhost with port-forwarding configured, the Device Service can then be accessed by EdgeX in the cloud.

Let’s get the Device Service’s Addressable details, which contain the information we will need when updating its routing.

# Invoking core-metadata

curl --location --request GET 'k8s.cluster:30801/api/v1/addressable'

This will yield a response similar to:

[
  {
    "created": 1594771598214,
    "modified": 1594771598214,
    "origin": 1594771598226,
    "id": "0da6eda1-fad9-41b6-a1c3-c50aab4679f8",
    "name": "edgex-device-mqtt",
    "protocol": "HTTP",
    "method": "POST",
    "address": "localhost",
    "port": 49982,
    "path": "/api/v1/callback",
    "baseURL": "http://localhost:49982",
    "url": "http://localhost:49982/api/v1/callback"
  }
]

Now let’s take the previous response body and update the following fields to reference our network’s public IP: address, baseURL, and url.

Note that [PUBLIC IP] stands in for the public IP address that you will use when routing to your network.

# Invoking core-metadata

curl --location --request PUT 'k8s.cluster:30801/api/v1/addressable' \
--header 'Content-Type: application/json' \
--data-raw '{
  "id": "0da6eda1-fad9-41b6-a1c3-c50aab4679f8",
  "name": "edgex-device-mqtt",
  "protocol": "HTTP",
  "method": "POST",
  "address": "[PUBLIC IP]",
  "port": 49982,
  "path": "/api/v1/callback",
  "baseURL": "http://[PUBLIC IP]:49982",
  "url": "http://[PUBLIC IP]:49982/api/v1/callback"
}'

At this point, the Device Service is routable from the cloud. Next, let’s get the TemperatureSensor device ID:

# Invoking core-command

curl --location --request GET 'k8s.cluster:30802/api/v1/device/name/TemperatureSensor'

This will result in a response like this:

{
  "id": "7af6bad7-7c4f-408b-9a62-eb909e0aad4f",
  "name": "TemperatureSensor",
  "adminState": "UNLOCKED",
  "operatingState": "ENABLED",
  "labels": [
    "MQTT",
    "Temperature",
    "Sensor"
  ],
  "commands": [
    {
      "created": 1594771641016,
      "modified": 1594771641016,
      "id": "118cb3c7-3c1b-4088-bd88-7c27d304aba9",
      "name": "temperature",
      "get": {
        "path": "/api/v1/device/{deviceId}/temperature",
        "responses": [
          {
            "code": "200",
            "description": "Get the temperature",
            "expectedValues": [
              "temperature"
            ]
          },
          {
            "code": "500",
            "description": "internal server error"
          }
        ],
        "url": "http://localhost:48082/api/v1/device/7af6bad7-7c4f-408b-9a62-eb909e0aad4f/command/118cb3c7-3c1b-4088-bd88-7c27d304aba9"
      },
      "put": {
        "url": "http://localhost:48082/api/v1/device/7af6bad7-7c4f-408b-9a62-eb909e0aad4f/command/118cb3c7-3c1b-4088-bd88-7c27d304aba9"
      }
    }
  ]
}

We can extract the get command from the response for TemperatureSensor. This URL can be used to get the temperature value from the device itself.

Something worth noting is that the URL generated within the response above uses the configured Host and Port values from core-command. Depending on your setup, you may need to invoke a different URL to access core-command. In my particular instance, I am exposing core-command via the NodePort Service type. This means that core-command is accessible only through a port range allowed by Kubernetes; the port assigned to core-command is not the same as the port used within the cluster. With a load-balancer, an ingress, or some proper network configuration, it could certainly be set up so that the internal DNS names can be resolved.

# Invoking core-command

curl --location --request GET 'http://k8s.cluster:30802/api/v1/device/7af6bad7-7c4f-408b-9a62-eb909e0aad4f/command/118cb3c7-3c1b-4088-bd88-7c27d304aba9'

The response:

{
  "device": "TemperatureSensor",
  "origin": 1594773351741563527,
  "readings": [
    {
      "origin": 1594773351741189868,
      "device": "TemperatureSensor",
      "name": "temperature",
      "value": "119.017532",
      "valueType": "String"
    }
  ],
  "EncodedEvent": null
}

We have now demonstrated bi-directional communication between devices on a local network and EdgeX Foundry in the cloud. While this is a simple demonstration, this flow makes it possible to quickly provision edge-device clusters that interact with EdgeX Foundry. With an EdgeX instance deployed in the cloud, this flow (depending on networking) is portable enough to experiment with the different protocols EdgeX Foundry supports.

This tutorial is a start on what can be accomplished with EdgeX Foundry. We’ve walked through a simple flow that establishes bi-directional communication between edge devices and EdgeX Foundry in the cloud. With all the supported protocols, it is rather simple to get EdgeX up and running with the Device Service of your choice.

Visit the EdgeX Foundry website for more information, or join Slack to ask questions and engage with community members. If you are not already a member of the community, it is really easy to join. Simply visit the wiki page and/or check out the EdgeX Foundry GitHub.

Calling all innovators! You’re invited to compete in the EdgeX Foundry Challenge Shanghai 2020

Written by Intel’s Ying J Lu, EdgeX Foundry Community Member

On July 3, the EdgeX Foundry Challenge Shanghai 2020 was successfully kicked off as a joint effort among EdgeX community members Intel, VMware, Dell Technologies, HP, Thundersoft, IOTech, Tencent, and Shanghai startup incubator Innospace.

Leaders from the Linux Foundation and LF Edge welcomed community members for a virtual kick-off of the EdgeX Challenge, aimed at building a learning platform for EdgeX Foundry ecosystem partners in China. This initiative enables development of open, scaled industry solutions. It brings together participants with in-depth industry knowledge and developers to identify use cases to solve top industry problems and build demonstrations that move toward an integrated commercial solution. So far, more than 16K people have tuned in to view the challenge kick-off.

Why is edge important?

In today’s data-driven landscape, the Internet of Things comprises numerous sensors and terminal devices speaking a variety of protocols, which creates complexity when integrating them into a unified platform. To address the increasing demand to introduce new workloads such as artificial intelligence at the edge, fuse data on customers’ premises, and interwork with different cloud service providers, this and future EdgeX challenges will help accelerate a community of practice for architecting modern solutions.

As the world’s leading open edge computing framework, EdgeX Foundry is well positioned to address edge opportunities as well as their complexity. “Intel is committed to supporting the development of EdgeX Foundry,” says Wei Chen, Vice President of the Intel IoT Group, “including active code contribution, strong evangelism, and active development within the open source EdgeX ecosystem. Intel is also supporting the EdgeX China Project, which specifically supports the EdgeX Foundry community in China.”

Intel has also been actively collaborating in the greater open edge ecosystem by introducing OpenVINO™ toolkits to accelerate AI at the edge, growing the Open Retail Initiative to create and drive adoption of data-rich retail solutions leveraging open source ingredients, and launching edge reference implementations for retail and industrial use cases, available at Intel’s Edge Software Hub. All of these efforts focus on nurturing a healthy and strong edge compute ecosystem.

The Challenge Highlights

The EdgeX Challenge Shanghai is co-hosted by the Linux Foundation and Science & Technology Commission of Shanghai Municipality (STCSM). The goal is to create a space for collaboration, using state-of-the-art technology frameworks, such as EdgeX Foundry which is part of the Linux Foundation, to address business challenges related to commerce and industrial verticals.

There are two major tracks covering the following industries:

  • Commerce: Retail, banking, hospitality, education, smart home, healthcare, cities and campus
  • Industrial: Manufacturing, power, oil and gas, utilities

The Challenge consists of two rounds before winners are selected. In the preliminary round, all participants are required to submit an ideation document describing how EdgeX-based technical frameworks are used in conjunction with a relevant industry use case. Judges will then select ten use cases to compete in the final round, in which each team must demonstrate and showcase its solution and be interviewed by the judges’ review committee. Three levels of prizes will be awarded to winning teams.

Challenge Timeline:

Join the Competition

Registration for the EdgeX Challenge Shanghai is open until July 20, 2020. All are welcome to join the challenge: system integrators (SIs), independent solution vendors (ISVs), original equipment manufacturers (OEMs), developers, universities and research institutes.

Register here by July 20, to join the EdgeX Challenge Shanghai.

To see the kickoff from July 3, watch the EdgeX Challenge Shanghai 2020 video here.

Collaborate with the EdgeX Foundry community

EdgeX Foundry – Delivering choice, flexibility, and collaboration at the IOT Edge

Written by Keith Steele, CEO of IOTech Systems

After 3 years, I step down as the chair of the EdgeX Foundry Technical Steering Committee later this month. When I took on the role, I believed Open Software Platforms were fundamental to the success of edge computing and supported that view by backing EdgeX Foundry and starting IOTech. Today, with more than 6 million downloads, and many customer deployments later, both EdgeX and IOTech are strong. As we enter the next phase, here are my reflections and thoughts about where we go next…

From little acorns… EdgeX Foundry

EdgeX Foundry was launched at Hannover Messe in April 2017 as an open source project hosted by The Linux Foundation. With this as a backdrop, we held our first project meeting in Boston in June 2017 and companies were invited to participate in creating EdgeX as an industrial IoT global edge software platform standard.

There were about 60 of us who showed up in person from all around the world, and even more joined by phone. We set our vision at this meeting and formed the EdgeX Technical Steering Committee to deliver that vision.

For the first two years, the team set about turning what was effectively a platform demonstrator project into a deployable product, including a complete rewrite of the code from Java into Go and C to reduce footprint and improve latency, critical requirements for edge-based systems. This culminated in the Edinburgh release in June 2019, when we announced to the world that we were satisfied the quality was sufficient to support deployed systems. Up to that point we had promoted EdgeX for pilot implementations only.

The Fundamentals

There are several fundamental technical and business tenets we set out to address with EdgeX Foundry:

Open vs. Proprietary – The idea behind EdgeX is to maximize choice so users do not have to lock themselves into proprietary technologies that, by design, limit choice. Given the implicit heterogeneity at the edge, ‘open’ at a minimum means the EdgeX platform must be silicon, hardware, operating system, software application and cloud agnostic.

Secure, Pluggable and Extensible Software Architecture – To offer choice and flexibility to our users the EdgeX team chose a modern, distributed, microservices based software architecture, which we believe supports the inherent complexities at the edge.

Edge Software Application ‘Plug and Play’ – EdgeX provides a standard open framework around which an ecosystem can emerge. It facilitates interoperable plug-and-play software applications and value-add services, providing users with real choice rather than siloed applications that may require huge system integration efforts to deliver an end-to-end IoT edge solution.

Time-Critical Performance and Scalability – Edge applications need access to ‘real-time’ data, e.g. millisecond or even microsecond response times, often with absolute real-time predictability requirements. Access to real-time data is a fundamental differentiator between the edge and cloud computing worlds. EdgeX addresses round-trip response-time requirements in the milliseconds, with target operating environments being server- and gateway-class computers running standard Windows or Linux operating systems.

The EdgeX project decided to leave it to the ecosystem to address time-critical edge systems, which require ultra-low footprint, microsecond performance and even hard real-time predictability. IOTech, like many other EdgeX Foundry contributors, has created a commercial solution for EdgeX Foundry, called Edge XRT. It is important that real-time requirements are understood in full, as the decisions taken can significantly impact the success or failure of edge projects.

Connectivity and Interoperability – A major difference between the edge and the cloud is the inherent heterogeneity and complexity at the edge. The edge is where the IT computer meets the OT ‘thing’, and there is a multitude of ‘connectivity’ protocols at or close to real time. EdgeX provides reference implementations of some key protocols, along with SDKs that readily allow users to add new protocols. The commercial ecosystem also provides many additional connectors, making connectivity a configuration task rather than a programming task. Likewise, northbound there are multiple cloud and other IT endpoints, and EdgeX provides flexible connectivity to and from these different environments. EdgeX is cloud agnostic and comes with many standard integrations.

LF Edge launches

Equally as important as the technical deliverables for an open source project is the ecosystem of companies which support it. The EdgeX ecosystem was greatly enhanced in January 2019 when EdgeX Foundry became one of five founding projects in LF Edge, an umbrella organization created by The Linux Foundation that “aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system.” LF Edge’s objective is to bring together complementary open source edge technologies.

As one of the Stage 3 projects under LF Edge, EdgeX Foundry momentum increased with additional global collaboration. The increased amplification and support across LF Edge projects, community and members has helped turn EdgeX into a high velocity project.

Delivering complementary products and solutions

EdgeX is a framework around which a global ecosystem of vendors can emerge to offer complementary edge products and services, including commercial support, training and customer pilot programs, which add value to the baseline product.

Here are a few examples:

EdgeX in Retail

A great example of how companies deliver solutions around EdgeX is Intel’s Open Retail Initiative (ORI), a collaborative effort led by Intel and top technology companies. The goal of ORI is to accelerate the scalable deployment of data-rich solutions optimized for in-store retail, from the edge to the cloud. ORI leverages EdgeX, alongside vendor proprietary solutions, to deliver use cases based on ecosystem components (or “ingredients”) sharing a common, open framework, EdgeX, which enables an ecosystem of interchangeable components and accessible data: a future-ready platform for innovation. Members of the ORI community include some of the industry’s leading device makers, independent software vendors, system integrators, and consultants.

Remote monitoring telemetry of energy supply networks

In another application, one of the world’s leading energy infrastructure operators collaborated with a Global SI on several Industrial IoT projects aimed at digitalizing its main assets and energy distribution processes.

The SI helped the customer to implement a real-time monitoring system for several gas compression and storage injection stations, using the EdgeX platform for data collection and aggregation, and the at-the-edge calculation of relevant performance metrics.

This application provides the customer with a real-time overview of station operational conditions and a historical view of data trends, enabling a deeper understanding of the critical information needed by future analytics to improve operational and management activities.

The business value to the client included maintenance activities improved by real-time remote monitoring; data from different machines analyzed at the edge and gathered in a central platform; and remote monitoring enabling new users’ education and system knowledge.

Underlying software infrastructure for an AI Edge Platform

Accenture, the world’s largest systems integrator, has based the edge offering of its Applied Intelligence Platform+ (AIP+) on EdgeX. AIP+ is marketed globally and includes a collection of modular, pre-integrated AI services and capabilities to accelerate and scale new outcomes. The AIP+ Edge Agent includes tools and software to support the deployment and management of machine-learning and analytics-based intelligence on devices.

Accenture was particularly attracted by the EdgeX open ecosystem: its open source and container-based pedigree, no lock-in (cloud/chipset/OS agnostic), and the potential of strategic partnerships to complement Accenture’s AIP+ focus.

IOTech’s Contribution to the Ecosystem

In each of the above situations, the participating companies all benefited from the choice, flexibility, and collaboration that EdgeX enables at the IoT edge. All of the above examples also exploited IOTech’s contributions to the EdgeX ecosystem via our hardened, extended and fully supported licensed version of EdgeX, Edge Xpert.

As mentioned earlier, IOTech is also delivering a time-critical version of the platform named XRT, and an extensive range of industrial-grade north- and southbound connectors.

The scale, complexity, and variability of systems at the edge mean that the management of edge-based systems can be challenging. Today’s enterprise management and deployment systems work very well in enterprise/cloud environments but are not well suited to edge deployments. Resource constraints, intermittent connectivity at the edge, and the large variety of OT protocols are just some of the issues that present additional challenges to managing and deploying edge solutions. Also, local security constraints may mean that access to and from the cloud is severely curtailed or not permitted, so cloud-based management will not be possible.

What’s Next

The project has 170 code contributors. EdgeX Foundry just completed its sixth release (Geneva) and, in just twelve months since the first official ‘ready for primetime’ V1 release back in June 2019, has achieved millions of container downloads worldwide.

The EdgeX Foundry project is going strong, with huge momentum behind its V1 release. We are now moving forward with EdgeX 2.0, which will see an increased focus on outreach, including greater attention given to EdgeX users as well as developers. Indeed, this is where I will be concentrating my efforts going forward after stepping down as TSC Chair in June.

I am very proud of what the team has collectively achieved with EdgeX in the three years since our first meeting in Boston. We have taken EdgeX from a Dell CTO-led project to the premier open edge software platform in the world.

From a personal standpoint, I would offer these key observations and takeaways as to why EdgeX has been successful:

  • Create a real problem to solve that can unify a community around solving it and provide real business value to those who contribute.
  • Collaboration is key with consistency of participation, maintaining key technical community leadership and contributors for at least a couple of years is very important. Projects like EdgeX take time, the problem is complex and there is no short cut to delivering quality.
  • Do not boil the ocean, as the diversity of your project increases look for alignment and complementary projects that can move your forward. EdgeX security is a prime example of this.
  • Understand your objectives and design the project accordingly, our objective in EdgeX was to grow a global standard. We took a decision in the early days to focus on quality, testing and API consistency to ensure what we delivered was of product quality, this has ensured we have a supportable and evolvable product that users can rely on.
  • Set the expectations of your target users. The team worked hard to make the 4th release (Edinburgh) ‘deployment ready’ and, once we announced this, our downloads increased; we are now heading toward mass adoption on a global scale.

Most importantly, EdgeX Foundry has been a great team effort and a lot of fun! I am very much looking forward to the next phase of success.

Some Closing Thoughts

The full promise of IoT will be achieved when you combine the power of cloud and edge computing: delivering real value that allows businesses to analyze and act on their data with incredible agility and precision, giving them a critical advantage over their competitors.

The key challenges at the edge related to latency, network bandwidth, reliability, security, and OT heterogeneity cannot be addressed in cloud-only models – the edge needs its own answers.

EdgeX Foundry and the LF Edge ecosystem offer an answer, maximizing user choice and flexibility and enabling effective collaboration across multiple vertical markets at the edge, helping to power the next wave of business transformation.

To learn more, please visit the EdgeX website and LF Edge website and get involved!

 

New Training Course Aims to Make it Easy to Get Started with EdgeX Foundry

By Announcement, EdgeX Foundry, Training

Course explains what EdgeX Foundry is, how it works, how to use it in your edge solutions, leveraging the support of LF Edge’s large ecosystem 

SAN FRANCISCO, July 1, 2020 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the availability of a new training course, LFD213 – Getting Started with EdgeX Foundry.

LFD213 was developed in conjunction with LF Edge, an umbrella organization under The Linux Foundation that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system. The course is designed for IoT and/or edge software engineers, system administrators, and operational technology technicians who want to assemble an edge solution.

The course covers how EdgeX Foundry is architected, how to download and run it, and how to configure and extend the EdgeX framework when needed. The four chapters of the course, which take approximately 15 hours to complete, provide a basic overview; a discussion of device services, which connect physical sensors and devices to the rest of the platform; a look at application services, which send data from EdgeX to enterprise applications, cloud systems, external databases, or even analytics packages; and more.

Hands-on labs enable students to get and run EdgeX and play with some of its important APIs, as well as create a simple service (either device or application service) and integrate it into the rest of EdgeX.

EdgeX Foundry is an open-source, vendor-neutral, hardware- and OS-agnostic IoT/edge computing software platform that is a Stage 3 (Impact) project under LF Edge. In the simplest terms, it is edge middleware that sits between operational technology, the physical sensing “things”, and information technology systems. It facilitates getting sensor data from any “thing” protocol to any enterprise application, cloud system or on-premises database. At the same time, the EdgeX platform offers local/edge analytics to enable low-latency decision making at the edge that can actuate back down onto sensors and devices. Its microservice architecture and open APIs allow third parties to provide their own replacement or augmenting components and add additional value to the platform. In short, EdgeX Foundry provides the means to build edge solutions more quickly and to leverage the support of a large ecosystem of companies that participate in edge computing.

“EdgeX Foundry is on a phenomenal growth trajectory with multiple releases and millions of container downloads,” said Jim White, EdgeX Foundry Chair of the Technical Steering Committee and CTO of IOTech Systems.  “Given the scale of the adopting community and ecosystem, it is critical that there is proper training available to allow new adopters and prospective users to learn how to get started. The new training, created by the architects of EdgeX Foundry and managed by The Linux Foundation, will allow developers exploring EdgeX a faster and better path to understand and work with EdgeX while also accelerating our project’s adoption at scale.”

The course is available to begin immediately. The $299 course fee provides unlimited access to the course for one year including all content and labs. Interested individuals may enroll here.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

# # #

EdgeX Foundry Kubernetes Installation

By Blog, EdgeX Foundry

Written by Jason Bonafide, EdgeX Foundry Contributor and Principal Software Engineer at Dell Technologies

In an ever-growing world of connected devices, there is plenty of opportunity in edge computing. While devices are getting smaller and smarter, there is always a need to share data. With that, I have just the platform for you!

EdgeX Foundry is a vendor-neutral open source project hosted by LF Edge building a common open framework for IoT edge computing. EdgeX Foundry offers a set of interoperable plug-and-play components which aim to satisfy IoT solutions of all variations.

The goal of this blog is to walk through techniques that can be used to deploy EdgeX Foundry to a Kubernetes cluster. Establishing a foundation for deploying EdgeX in Kubernetes is the main takeaway from this tutorial.

Tools and technologies used

This tutorial assumes a running Kubernetes cluster and uses kubectl, Helm, the Docker Hub redis image, and the EdgeX Foundry Geneva (1.2.1) service images.

Why Deploy EdgeX to a Kubernetes cluster?

Kubernetes provides the following feature set:

  • Service discovery and load balancing
  • Storage orchestration
  • Automated roll-outs and rollbacks
  • Automatic bin packing
  • Self-healing
  • Secret and configuration management

EdgeX Foundry is built on a microservice architecture. Microservices are powerful, but they can make deploying and managing an application more complex because of the number of components involved. Kubernetes makes deploying microservice applications more manageable.

Glossary

Kubernetes

  • Affinity: Act of constraining a Pod to a particular Node.
  • ConfigMap: An API object used to store non-confidential data in key-value pairs.
  • Deployment: Provides declarative updates for Pods and ReplicaSets.
  • Ingress: Exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster.
  • kubelet: Primary “node agent” that runs on each Node. It works in terms of a PodSpec.
  • LivenessProbe: The means used in a Kubernetes Deployment to check that a Container is still working and to determine whether or not it needs to be restarted.
  • Node: Virtual or physical machine in which Pods are run.
  • PersistentVolume: A piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using StorageClasses.
  • PersistentVolumeClaim: A request for storage by a user.
  • Pod: The smallest deployable unit of computing that can be created and managed in Kubernetes.
  • PodSpec: A YAML or JSON object that describes a Pod.
  • ReadinessProbe: The means used in a Kubernetes Deployment to check when a Container is ready to start accepting traffic.
  • Secret: An object that contains a small amount of sensitive data such as a password, a token or a key.
  • Service: An abstract way to expose an application running on a set of Pods as a network service.
  • StartupProbe: The means used in a Kubernetes Deployment to check whether or not the Container’s application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don’t interfere with the application startup.
  • StorageClass: Provides a way for administrators to describe the “classes” of storage they offer.
  • Volume: A directory, possibly containing some data which can be accessed by a Container.
  • Volume Mount: A property of a Container by which a Volume is bound to the Container.

Helm

  • Chart: An artifact which contains a collection of Kubernetes resource files.
  • Named Template: A template which has an identifying name. Named-templates are also referred to as “partials”. A named-template is similar to a function in a programming language: it enables re-use of code (or, in this case, YAML configuration).
  • Template: A file that can hold placeholders ({{ }}) which are interpolated with specified values.
  • values.yaml: A configuration file within a Helm chart which enables abstraction of configurable items which can be applied to templates.

Setting up manifests directory

Create the directory structure below on your machine. The folders will be used and populated throughout this tutorial.

project-root/
  edgex/
    templates/
      edgex-redis/
      edgex-core-metadata/
      edgex-core-data/
      edgex-core-command/
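
On a Unix-like shell with brace expansion, one way to create this layout in a single command (purely a convenience):

$ mkdir -p project-root/edgex/templates/{edgex-redis,edgex-core-metadata,edgex-core-data,edgex-core-command}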

Before we create definition files

Kubernetes provides a recommended set of labels. These labels provide a grouping mechanism which facilitate management of Kubernetes resources which are bound to an application. Below is Kubernetes’ recommended set of labels:

app.kubernetes.io/name: <application name>
app.kubernetes.io/instance: <installation>
app.kubernetes.io/version: <application version>
app.kubernetes.io/component: <application component>
app.kubernetes.io/part-of: <organization>
app.kubernetes.io/managed-by: <orchestrator>

A concrete example of this would be:

app.kubernetes.io/name: edgex-core-metadata
app.kubernetes.io/instance: edgex
app.kubernetes.io/version: 1.2.1
app.kubernetes.io/component: api
app.kubernetes.io/part-of: edgex-foundry
app.kubernetes.io/managed-by: Kubernetes

With the above label structure, and kubectl's support for filtering resources by labels, we have enough labels to give us plenty of flexibility when searching for specific resources. An example of this would look like:

$ kubectl get pod -l app.kubernetes.io/name=edgex-core-metadata

In a cluster with many applications, the above command will result in selecting only edgex-core-metadata pods.

On to our first application – edgex-redis

EdgeX Foundry supports Redis as a persistent datastore. Because Containers are ephemeral, data created within a Container lives only as long as the Container itself. In order to keep data around past the life of its Container, we will need to use a PersistentVolume.

Let's say that we want to create a PersistentVolume with the following in mind:

  • Our cluster contains a StorageClass named hostpath.
  • Our use-case requires 10Gi of storage (not to be confused with GB – Kubernetes has its own Resource Model).
  • Redis data needs to be stored on a cluster node's file-system at /mnt/redis-volume. It is worth noting that the hostPath storage plugin is not recommended in production. If your cluster contains multiple nodes, you will need to ensure that the edgex-redis Pod is always scheduled on the same node, so that it references the same hostPath storage each time it is scheduled. Please refer to the Assigning Pods to Nodes documentation for more information.
  • We anticipate many nodes having Read and Write access to the PersistentVolume.

With the above requirements, our PersistentVolume definition in templates/edgex-redis/pv.yaml should look like the sketch below.
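
Here is a minimal sketch meeting those requirements (resource names and labels are illustrative; the reference project linked at the end of this post holds the definitive version):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: edgex-redis
spec:
  storageClassName: hostpath    # assumes a StorageClass named hostpath exists
  capacity:
    storage: 10Gi               # Kubernetes Resource Model units (Gi, not GB)
  accessModes:
    - ReadWriteMany             # read and write access from many nodes
  hostPath:
    path: /mnt/redis-volume     # node-local path; not recommended in production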

Although the PersistentVolume has not been created yet, we've defined a resource that will make the storage described in that definition available for binding. Next, we will need to create a resource that will assign a PersistentVolume to an application. This is accomplished using a PersistentVolumeClaim.

The pattern used in this tutorial establishes a one-to-one relationship between an application and a PersistentVolume. That is, for each PersistentVolume there will be exactly one PersistentVolumeClaim belonging to an application. With that in mind, our claim's storage capacity, access modes, and StorageClass should match the PersistentVolume we defined earlier.

Let's go over the requirements for our storage use-case:

  • 10Gi storage.
  • Read and Write access by many nodes.
  • A StorageClass named hostpath exists in our cluster.

Given the above requirements, the PersistentVolumeClaim definition file at templates/edgex-redis/pvc.yaml should look like the sketch below.
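
A minimal sketch mirroring the PersistentVolume above (the name edgex-redis is illustrative and is referenced later by the Deployment):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edgex-redis
spec:
  storageClassName: hostpath    # must match the PersistentVolume's StorageClass
  accessModes:
    - ReadWriteMany             # must match the PersistentVolume's access modes
  resources:
    requests:
      storage: 10Gi             # request the full 10Gi provisioned above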

Now that the PersistentVolume and PersistentVolumeClaim have been defined, let's move on to the Deployment.

Similar to what was done with the PersistentVolume and PersistentVolumeClaim, let's list the things that describe the Redis application:

  • For simplicity's sake, let's say we only create 1 instance (replica).
  • When a new version is rolled out, we want to kill all of the old pods before bringing up new pods. This is referred to as a Recreate deployment strategy.
  • The Deployment consists of a single process (Container).
  • The Container image will come from Docker Hub's redis image with the latest tag.
  • The Container will listen to connections on port 6379.
  • The application will write data to /var/lib/redis. The Container's data directory will be mounted to the PersistentVolume via the PersistentVolumeClaim named edgex-redis which we created earlier. The Volume mapped to the PersistentVolumeClaim will be named redis-data and can be mounted as a Volume for the Container.
  • The application is in a Started state when a connection can be successfully established over TCP on port 6379. This check is tried every 10 seconds. On the first success the Container is considered Started; on the 5th consecutive failure to obtain a connection, the Container will be killed. While this StartupProbe is enabled, the LivenessProbe and ReadinessProbe are disabled until it succeeds.
  • Every 30 seconds, only after a 15 second delay, the Deployment will ensure that the Container is alive by establishing a connection over TCP on port 6379. On the first success, the Container will be in a Running state, however, on 3 failures, the Container will be restarted.
  • Every 30 seconds, only after a 15 second delay, the Deployment will try to determine if the Container is ready to accept traffic. Over TCP, the Deployment will attempt to establish a connection on port 6379. On 3 failures, the pod is removed from Service load balancers.

Given the above requirements, our Deployment definition in templates/edgex-redis/deployment.yaml should look like the sketch below.
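
A sketch of such a Deployment under those assumptions (labels are trimmed to app.kubernetes.io/name for brevity; the full recommended label set from earlier also works):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: edgex-redis
spec:
  replicas: 1                    # a single instance
  strategy:
    type: Recreate               # kill old Pods before creating new ones
  selector:
    matchLabels:
      app.kubernetes.io/name: edgex-redis
  template:
    metadata:
      labels:
        app.kubernetes.io/name: edgex-redis
    spec:
      containers:
        - name: edgex-redis
          image: redis:latest    # Docker Hub redis image, latest tag
          ports:
            - containerPort: 6379
          startupProbe:          # tried every 10s; killed on the 5th failure
            tcpSocket:
              port: 6379
            periodSeconds: 10
            failureThreshold: 5
          livenessProbe:         # every 30s after a 15s delay; restart on 3 failures
            tcpSocket:
              port: 6379
            initialDelaySeconds: 15
            periodSeconds: 30
            failureThreshold: 3
          readinessProbe:        # every 30s after a 15s delay; unready on 3 failures
            tcpSocket:
              port: 6379
            initialDelaySeconds: 15
            periodSeconds: 30
            failureThreshold: 3
          volumeMounts:
            - name: redis-data
              mountPath: /var/lib/redis   # the application's data directory
      volumes:
        - name: redis-data
          persistentVolumeClaim:
            claimName: edgex-redis        # the claim defined earlier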

Lastly, for edgex-redis, we will need to establish network access to our Pod within the Kubernetes cluster. This is accomplished by defining a Service.

As for our Service requirements:

  • selector should only match the labels defined in the edgex-redis Deployment's PodSpec.
  • Map external port 6379 to Container port 6379 with a name of port-6379. Each port requires a name.
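
Given those requirements, a minimal sketch of templates/edgex-redis/service.yaml in raw form (we will re-express it with Helm values later) might be:

apiVersion: v1
kind: Service
metadata:
  name: edgex-redis
spec:
  selector:
    app.kubernetes.io/name: edgex-redis   # match the Deployment's Pod labels
  ports:
    - port: 6379
      targetPort: 6379
      name: port-6379                     # each port requires a name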

On to the next application – edgex-core-metadata

EdgeX applications support the concept of externalized configuration, a neat feature that enables the platform's portability. For our Deployment, we will define our configuration as a Secret. Normally configuration would be defined in a ConfigMap; however, since Redis credentials reside in our configuration file, we will stash the entire configuration file in a Secret.

We are going to leverage the default application configuration and make just a few modifications:

  • The Core Metadata Service Host property has been updated to 0.0.0.0, which ensures that the Container listens on all network interfaces.
  • The Core Data Service Host property has been updated to edgex-core-data. Later, when edgex-core-data is installed, a Service will expose it under the hostname edgex-core-data.
  • The Database Host property has been updated to edgex-redis, the name of the Service we defined for Redis.

Feel free to refer to the reference project configuration files here.

Let's create the Secret by performing the following steps:

  • Download https://github.com/jbonafide623/edgex-lf-k8s/blob/master/secrets/edgex-core-metadata-secret or create your own.
  • Execute $ kubectl create secret generic edgex-core-metadata --from-file=configuration.toml=edgex-core-metadata-secret (where edgex-core-metadata-secret points to the file downloaded in the previous step).
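
To confirm the Secret was created, you can run:

$ kubectl get secret edgex-core-metadata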

Now for the edgex-core-metadata Deployment. Let's list out some of the requirements:

  • We will create only 1 instance (replica).
  • When a new version is rolled out, we want to kill all of the old pods before bringing up new pods. This is referred to as a Recreate deployment strategy.
  • The Deployment consists of a single process (Container).
  • The Container image will come from Docker Hub's edgexfoundry/docker-core-metadata-go image with the 1.2.1 tag.
  • Override the Dockerfile's Entrypoint so that --confdir points to /config.
  • Disable security via EDGEX_SECURITY_STORE environment variable. This can be done by setting the flag to “false”.
  • The Container will listen to HTTP requests on port 48081.
  • The application is in a Started state when an HTTP GET request to /api/v1/ping results in a 200 response. This check is tried every 10 seconds. On the first success the Container is considered Started; on the 5th consecutive failure, the Container will be killed. While this StartupProbe is enabled, the LivenessProbe and ReadinessProbe are disabled until it succeeds.
  • Every 30 seconds, only after a 15 second delay, the Deployment will ensure that the Container is alive by sending an HTTP GET request to /api/v1/ping. On the first success, the Container will be in a Running state, however, on 3 failures, the Container will be restarted.
  • Every 30 seconds, only after an initial 15-second delay, the Deployment will try to determine if the Container is ready to accept traffic by sending an HTTP GET request to /api/v1/ping. On 3 failures, the Pod is removed from Service load balancers.
  • Establish a limit on CPU usage of 1 and request a starting allocation of 0.5 CPUs.
  • Establish a limit on memory usage of 512Mi and request a starting allocation of 256Mi memory.
  • Mount the configuration Secret named edgex-core-metadata to the Container's path /config. Recall that /config is the path we supply as the --confdir override to the application's image.

Given the above requirements, we can define our Deployment definition file at templates/edgex-core-metadata/deployment.yaml; it should look like the sketch below.
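
A sketch under those assumptions; note that the container binary path used to override the Entrypoint (/core-metadata) is an assumption about the image's internals, and the EDGEX_SECURITY_STORE variable name is taken from this post, so double-check both against the image you actually pull:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: edgex-core-metadata
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: edgex-core-metadata
  template:
    metadata:
      labels:
        app.kubernetes.io/name: edgex-core-metadata
    spec:
      containers:
        - name: edgex-core-metadata
          image: edgexfoundry/docker-core-metadata-go:1.2.1
          command: ["/core-metadata"]     # assumed binary path inside the image
          args: ["--confdir=/config"]     # point the service at the mounted config
          env:
            - name: EDGEX_SECURITY_STORE  # variable name as given in this post
              value: "false"
          ports:
            - containerPort: 48081
          startupProbe:                   # tried every 10s; killed on the 5th failure
            httpGet:
              path: /api/v1/ping
              port: 48081
            periodSeconds: 10
            failureThreshold: 5
          livenessProbe:                  # every 30s after a 15s delay
            httpGet:
              path: /api/v1/ping
              port: 48081
            initialDelaySeconds: 15
            periodSeconds: 30
            failureThreshold: 3
          readinessProbe:                 # every 30s after a 15s delay
            httpGet:
              path: /api/v1/ping
              port: 48081
            initialDelaySeconds: 15
            periodSeconds: 30
            failureThreshold: 3
          resources:
            limits:
              cpu: 1
              memory: 512Mi
            requests:
              cpu: 0.5
              memory: 256Mi
          volumeMounts:
            - name: config
              mountPath: /config          # matches the --confdir override
      volumes:
        - name: config
          secret:
            secretName: edgex-core-metadata   # the Secret created above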

Lastly, let's expose the application so that it can be accessed from within and outside of the cluster. For our Service, let's define some requirements:

  • selector should only match the labels defined in the edgex-core-metadata Deployment's PodSpec.
  • Map external port 48081 to Container port 48081 with a name of port-48081. Each port requires a name.
  • Expose the application outside of the cluster via the NodePort Service type on port 30801.

Given the above requirements, we can define our Service definition file at templates/edgex-core-metadata/service.yaml; it should look like the sketch below.
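
A minimal sketch meeting those requirements:

apiVersion: v1
kind: Service
metadata:
  name: edgex-core-metadata
spec:
  type: NodePort                    # reachable from outside the cluster
  selector:
    app.kubernetes.io/name: edgex-core-metadata
  ports:
    - port: 48081
      targetPort: 48081
      nodePort: 30801               # within the default 30000-32767 NodePort range
      name: port-48081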

Incorporating Helm

The decision to include Helm in the container-orchestration stack is based on the following Helm features:

  • Helm focuses on a single responsibility, which is to manage Kubernetes resources.
  • Enables easy application installation and rollback.
  • Supports ordered creation of resources via hooks.
  • Provides access to installations of popular applications via Helm Hub.
  • Encourages code-reuse.

Each Helm chart contains values.yaml and Chart.yaml files. In the root of the project directory, let's go ahead and create these files by executing:

touch values.yaml
touch Chart.yaml

Chart.yaml contains information about the chart. With that in mind, let's add the following content to Chart.yaml:

apiVersion: v2
name: edgex
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.2.1

values.yaml allows you to create a YAML object which can be referenced in your chart’s template files.

Let's make use of what values.yaml can offer us.

For edgex-redis, let's list some properties that might be beneficial to abstract into configurable options:

  • Application name: If you noticed, almost every resource uses the name edgex-redis, and these resources are referenced by other applications; for instance, edgex-core-metadata references the Service defined for edgex-redis. If we abstract the name out to values.yaml, the application name will only need to change in a single place (values.yaml).
  • Deployment strategy: If there is a particular environment where you are concerned with high availability, you may want to leverage a RollingUpdate strategy. In smaller environments, you may not care about a RollingUpdate strategy and would choose Recreate in an effort to conserve computing resources.
  • Image name and tag: There exist scenarios where artifacts/dependencies are vetted and kept “in-house”. Scenarios like these include artifact/image repositories of their own. With that in mind, having the ability to switch the image registry during installation can be a huge benefit.
  • Port: The port on which the Container listens is something that can easily be referenced in many places within a single chart. As you define various network components such as Ingresses, Services, and even the Container port itself, it is nice to be able to refer back to a single place.
  • Replicas: Depending on the environment’s resources, the number of replicas may change.
  • StorageClassName: The available StorageClasses may vary completely from cluster to cluster, or even within a single cluster.

Considering the above, we can define our edgex-redis configuration object like this:

edgex:
  redis:
    name: edgex-redis
    deployment:
      strategy: Recreate
    image:
      name: redis
      tag: latest
    port: 6379
    replicas: 1
    storageClassName: hostpath

We can apply the same pattern with a few adjustments for edgex-core-metadata:

edgex:
  metadata:
    name: edgex-core-metadata
    deployment:
      strategy: Recreate
    image:
      name: edgexfoundry/docker-core-metadata-go
      tag: 1.2.1
    port: 48081
    replicas: 1
    resources:
      limits:
        cpu: 1
        memory: 512Mi
      requests:
        cpu: 0.5
        memory: 256Mi

In the configuration object for edgex-core-metadata we added a resources object which allows us to adjust limits and requests during installation.

Remember earlier when we talked about the recommended labels and how they apply to each of our resources? With Helm, we can create a named-template and include it in each place where the labels are needed.

Here, in the file templates/_labels.tpl, we create a named-template called edgex.labels. This named-template takes two arguments: ctx (the chart context) and AppName (the application's name).

{{/*
Define a standard set of resource labels.

params:
(context) ctx - Chart context (scope).
(string) AppName - Name of the application.
*/}}
{{ define "edgex.labels" }}
app.kubernetes.io/name: {{ .AppName }}
app.kubernetes.io/instance: {{ .ctx.Release.Name }}
app.kubernetes.io/version: {{ .ctx.Chart.AppVersion }}
app.kubernetes.io/component: api
app.kubernetes.io/part-of: edgex-foundry
app.kubernetes.io/managed-by: {{ .ctx.Release.Service }}
helm.sh/chart: {{ .ctx.Chart.Name }}-{{ .ctx.Chart.Version | replace "+" "_" }}
{{ end }}

Now that we've defined configuration in values.yaml and created the edgex.labels named-template, let's apply them to a simple definition file in templates/edgex-redis/service.yaml.

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.edgex.redis.name }}
  labels:
    {{- include "edgex.labels" (dict "ctx" . "AppName" $.Values.edgex.redis.name) | indent 4 }}
spec:
  selector:
    {{- include "edgex.labels" (dict "ctx" . "AppName" $.Values.edgex.redis.name) | indent 4 }}
  ports:
    - port: {{ .Values.edgex.redis.port }}
      name: "port-{{ .Values.edgex.redis.port }}"

When the template is rendered, placeholder values are interpolated using either properties specified in the values file or values passed via the --set flag. When helm install is executed with no -f flag, the values.yaml in the root of the chart is used by default. If we want to override values during installation, a property can be overridden using the --set flag.
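
For example, overriding the Redis image tag at install time (a hypothetical value) might look like:

$ helm install edgex --name-template edgex --set edgex.redis.image.tag=6.0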

For an application as portable as EdgeX is, tying in configuration overrides can tremendously speed up deployments.

Finalizing the chart using a reference project

Up to this point, we have:

  • Created raw Kubernetes YAML files for edgex-redis and edgex-core-metadata.
  • Explained the usage of values.yaml in a Helm chart.
  • Adjusted edgex-redis Service definition file to reference Helm values and named-templates.

This is by no means an end-to-end deployment, but the patterns described above give you enough to apply to the remaining pieces you wish to include in your deployment. Here you can refer to the reference project, which contains a finalized Helm chart responsible for deploying:

  • edgex-redis
  • edgex-core-metadata
  • edgex-core-data
  • edgex-core-command

Following the pattern above, edgex-core-data and edgex-core-command will also require configuration files mounted as Secrets. You can create them by executing:

$ kubectl create secret generic edgex-core-data --from-file=configuration.toml=[path to edgex-core-data's configuration.toml]

$ kubectl create secret generic edgex-core-command --from-file=configuration.toml=[path to edgex-core-command's configuration.toml]

All of the Secrets can be accessed here in the reference project.

When your Helm chart is finalized, you can install it by executing:

# From the root of the project

$ helm install edgex --name-template edgex
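
Assuming the labels named-template from earlier, you can then inspect the release and the Pods it created:

$ helm list
$ kubectl get pods -l app.kubernetes.io/instance=edgex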

In a sense, you get a free verification from the Container probes for each application. If your applications successfully start up and remain in a Running state, that is a good sign!

Each application can be accessed at [cluster IP]:[application's node port]. For example, let's say the cluster IP is 127.0.0.1 and we want to access the /api/v1/ping endpoint of edgex-core-metadata; we can invoke the following curl request:

$ curl -i 127.0.0.1:30801/api/v1/ping

where nodePort 30801 is defined in edgex-core-metadata Service definition.
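
If the service is healthy you should get a 200 response; the EdgeX 1.x ping endpoints return a small "pong"-style body, so the output should look roughly like:

HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8

pong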

In closing

In this tutorial, we’ve only scratched the surface with respect to the potential that EdgeX Foundry has to offer. With the components we’ve deployed, the foundation is in place to continuously deploy EdgeX in Kubernetes clusters.

As a member of this amazing community, I highly recommend checking out the official EdgeX Foundry documentation. From there you will have access to many more details about the platform.

As for next steps, I highly recommend connecting a Device Service to your EdgeX Core Services environment within Kubernetes. With the plug-and-play nature of the Device Service's configuration, you can quickly have a device up and running within EdgeX.

This tutorial was built on a foundational project worked on at Dell. I would like to acknowledge Trevor Conn, Jeremy Phelps, Eric Cotter, and Michael Estrin at Dell for their contributions to the original project. I would also like to acknowledge the EdgeX Foundry community as a whole. With so many talented and amazing members, the EdgeX Foundry project is a great representation of the community that keeps it growing!

Visit the EdgeX Foundry website for more information or join Slack to ask questions and engage with community members. If you are not already a member of the community, it is really easy to join. Simply visit the wiki page and/or check out the EdgeX Foundry GitHub.