Monthly Archives

May 2021

AgStack joins with LF Edge and EdgeX Foundry to Create Agriculture IoT Solutions


This post originally published on the AgStack website

The newest Linux Foundation project, AgStack, announces a liaison with LF Edge and EdgeX Foundry to dock existing EdgeX software for IoT into AgStack’s IoT Extension. AgStack has also agreed to participate in the EdgeX Foundry Vertical Solutions Working Group to explore the adoption and use of the EdgeX IoT/edge platform as an extension to its design of digital infrastructure for agriculture stakeholders.  Through mutual LF project cooperation, the two projects hope to create a complete edge-to-enterprise open-source solution as part of AgStack that can help the global agricultural ecosystem.

The goals of the cooperation between the two projects are to:

  • Utilize the existing EdgeX Foundry framework to accelerate AgStack’s reach into the agriculture edge – providing a universal platform for communicating with the ag industry’s sensors, devices, and gateways.
  • Extend the EdgeX framework to handle agricultural edge use cases and unique ag ecosystem protocols, models, data formats, etc.
  • Jointly work on edge-to-enterprise market ready solutions that can be easily demonstrated and used as the foundation for real world products benefiting ag industry creators and consumers.
  • Setup an exchange (as fellow LF projects) to mutually assist and share lessons learned in areas such as project governance, devops, software testing, security, etc.

“We are in the early stages of defining and building the AgStack platform, and we prefer not to start from scratch or reinvent the wheel as we build our industry-leading open-source platform,” said Sumer Johal, Executive Director of AgStack. “EdgeX Foundry gives us the opportunity to leapfrog our IoT/edge efforts by several years and take advantage of the ecosystem, edge expertise, and lessons learned that EdgeX has acquired in the IoT space.”

“As a versatile and horizontal IoT/edge platform, we are excited to partner with AgStack, who can help to highlight how EdgeX can be used in agriculture IoT use cases and how the AgStack and EdgeX communities can collaborate to scale digital transformation for the agriculture and food industries,” said Jim White, Technical Steering Committee Chairman of EdgeX Foundry. “Even though it is a fellow LF project, we view AgStack as one of our vertical customers – applying EdgeX to solve real-world problems – and what better place to demonstrate that than in an industry that feeds the world.”

About AgStack

AgStack was launched by the Linux Foundation in May 2021. AgStack seeks to improve global agriculture efficiency through the creation, maintenance, and enhancement of free, re-usable, open, and specialized digital infrastructure for data and applications. Founding members of AgStack include Hewlett Packard Enterprise, Our Sci, bscompany, axilab, Digital Green, Farm Foundation, Open Team, NIAB, and the Produce Marketing Association. To learn more, visit https://agstack.org.

 

Summary: Akraino’s Internet Edge Cloud (IEC) Type 4 AR/VR Blueprint Issues 4th Update


By Bart Dong

IEC Type 4 focuses on AR/VR applications running at the edge. In general, the architecture consists of three layers:

  • We use IEC as the IaaS layer.
  • We deploy the TARS framework as the PaaS layer to govern backend services.
  • For the SaaS layer, an AR/VR application backend is needed.

There are multiple use cases for AR/VR. For Release 4, we focused on building the infrastructure and the virtual classroom application, which I highlight here.

Let’s have a quick look at the Virtual Classroom app. Virtual Classroom is a basic app that lets you experience a virtual-reality simulation of a classroom with teachers and students.

Generally, it has two modes: Teacher mode and Student mode.

  • In Teacher mode
    • You will see the classroom from the teacher’s view.
    • You can see some students in the classroom listening to your presentation.
  • In Student mode
    • You will see the classroom from a student’s view.
    • You can see the teacher and other students on the remote side.

The whole architecture consists of three nodes: a Jenkins Master, a TARS Master, and a TARS Agent with the AR/VR Blueprint.

  • For the Jenkins Master, we deployed a Jenkins master in our private lab for testing.
  • For the TARS Master, we deployed the TARS platform for serverless use-case integration.
  • For the TARS Agent, we deployed the Virtual Classroom back end on this node, plus two front-end clients (the Virtual Classroom teacher and student) on KVM.

It’s not a very difficult architecture. As I mentioned before, the TARS framework plays an important role as the PaaS layer to govern backend services. Let’s go a little further with TARS.

TARS is a high-performance microservice framework based on name service and the TARS protocol, with an integrated administration platform and hosted services scheduled flexibly. TARS supports Arm and x86, and multiple platforms, including macOS, Linux, and Windows.

TARS can quickly build systems and automatically generate code, balancing ease of use and high performance. At the same time, TARS supports multiple programming languages, including C++, Golang, Java, Node.js, PHP, and Python. TARS helps developers and enterprises quickly build stable, reliable distributed applications in a microservices manner, so they can focus on business logic and improve operational efficiency.

TARS works at the PaaS layer. It can run on physical machines, virtual machines, and containers, including Docker and Kubernetes. You can store TARS service data in a cache, a database, or a file system.

The TARS framework supports the TARS protocol, TUP, SSL, HTTP/1 and HTTP/2, Protocol Buffers, and other customized protocols. The TARS protocol is IDL-based, binary, extensible, and cross-platform. These features allow servers written in different languages to communicate with each other using RPC. TARS supports different RPC methods: synchronous, asynchronous, and one-way requests.
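The three RPC styles can be illustrated with a plain-Python sketch. Note that this is not the TARS API: remote_echo stands in for a generated client stub, and a thread pool stands in for the framework’s asynchronous machinery.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a TARS-generated client stub; a real stub
# would serialize the request with the TARS protocol and send it over RPC.
def remote_echo(payload: str) -> str:
    return f"echo:{payload}"

executor = ThreadPoolExecutor(max_workers=2)

# Synchronous: the caller blocks until the response arrives.
sync_result = remote_echo("hello")

# Asynchronous: the caller receives a future immediately and collects
# the result later.
future = executor.submit(remote_echo, "world")
async_result = future.result()

# One-way: fire-and-forget; the caller never waits for a response.
executor.submit(remote_echo, "ping")

executor.shutdown(wait=True)
print(sync_result, async_result)  # echo:hello echo:world
```

The distinction matters at the edge: synchronous calls are simplest, asynchronous calls hide network latency, and one-way requests suit telemetry where no reply is needed.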

TARS has multiple functions to govern services and can integrate with other DevOps tools, such as Jenkins in IEC Type 4.

For Release 4, we updated TARS to version 2.4.13, which supports multiple new features, such as an API gateway (named TarsGateway) that supports translating HTTP to the TARS protocol.

We provided a series of TARS APIs to the Akraino API map, including application-management APIs and service-management APIs.

And last year, on March 10th, TARS evolved into a non-profit microservices foundation under the Linux Foundation umbrella.

The TARS Foundation is a neutral home for open-source microservices projects that empower any industry to quickly turn ideas into applications at scale. It’s not only TARS but a microservices ecosystem. It aims to solve microservices problems with TARS characteristics, such as:

  • Agile Development with DevOps best practices
  • Built-in comprehensive service governance
  • Multiple languages supported
  • High performance with Scalability

Here is a summary of what we have done in release 4:

  • We focused on building the infrastructure and virtual classroom application, which I mentioned before.
  • We deployed the AR/VR Blueprint in Parserlabs and used Jenkins to make CI/CD available in the Blueprint.
  • We updated TARS to version 2.4.13, which supports multiple new features, such as an API gateway (named TarsGateway) that supports translating HTTP to the TARS protocol.
  • We then passed the security scan and the validation lab check. 

For the coming Release 5, we are planning to deploy IEC Type 4 AR/VR on Kubernetes via K8STARS.

K8STARS is a convenient solution for running TARS services in Kubernetes.

It maintains the native development capability of TARS and automatically registers services with, and removes them from, the TARS name service.

It supports the smooth migration of existing TARS services to Kubernetes and other container platforms.

K8STARS is non-intrusive by design, with no coupling to the operating environment.

A service can be run with a single command:

kubectl apply -f simpleserver.yaml
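For illustration, a minimal simpleserver.yaml might look like the sketch below. This is an assumption for demonstration only: the image name, labels, and port are hypothetical, not the actual K8STARS example manifest.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simpleserver
  template:
    metadata:
      labels:
        app: simpleserver
    spec:
      containers:
        - name: simpleserver
          image: tarscloud/simpleserver:latest   # hypothetical image name
          ports:
            - containerPort: 10014               # hypothetical service port
```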

If you are interested in the TARS Foundation or TARS, please feel free to contact us via email at tars@tarscloud.org.

Anyone is welcome to join us in building the IEC Type 4 AR/VR blueprint and microservices ecosystem!  

Innovations and Key Implementations of Kubernetes for Edge: A summary of Kubernetes on Edge Day at KubeCon Europe 2021


By Sagar Nangare

Kubernetes is a key component in data centers that are modernizing and adopting cloud-native development architecture to deliver applications using containers. Capabilities like orchestrating VMs and containers together make Kubernetes the go-to platform for modern application infrastructure adopters. Telecom operators are also using Kubernetes to orchestrate their applications in distributed environments involving many edge nodes.

But due to the large scale of telco networks, which include disparate cloud systems, Kubernetes adoption requires different architectures for different use cases. Specifically, where Kubernetes is used to orchestrate edge workloads, various frameworks and public-cloud managed Kubernetes solutions are available, each offering different benefits and giving telecom operators the choice to select the best fit.

At the recent Kubernetes on Edge Day sessions at KubeCon Europe 2021, many new use cases of Kubernetes for the edge were discussed, along with showcases of cross-platform integration that may help enterprises adopting 5G edge, and telecom operators, scale to a high level.

 

Here is a high-level overview of some of the key sessions.

The Edge Concept

There are different concepts of the edge discussed so far by different communities and technology experts. But when Kubernetes comes into the infrastructure, IT operators need to clearly understand the key pillars on which a Kubernetes deployment will seamlessly deliver low-latency performance in telco or private 5G use cases. First, there should be a strong implementation of Kubernetes management at scale. Second, operators need to choose a lightweight K8s-for-edge solution, preferably one certified by the CNCF. And third, there should be a lightweight OS deployed at every node from the cloud to the far edge.

Microsoft’s Akri Project: Microsoft’s Akri project is an innovation that will surely break into multiple Kubernetes-based edge implementations. It discovers and monitors far-edge brownfield devices that cannot run their own compute, so that they can become part of a Kubernetes cluster. The Akri platform exposes these devices to the Kubernetes cluster.

AI/ML with TensorFlow: TensorFlow is a machine learning platform that takes inputs to generate insights. It can be deployed on cloud, on-premises, or edge nodes where ML operations need to be performed. One session showed that Kubernetes clusters deployed in the cloud and at the edge can host an analytics tool set (Prometheus, EnMasse/MQTT, Apache Camel, AlertManager, Jupyter, etc.) to process ML requests with the lowest latency.

Architectures for Kubernetes on the edge: When deploying Kubernetes for the edge, there are many architecture choices, which vary per use case, and each architecture poses new challenges. The bottom line is that there is no one-size-fits-all solution: various workloads have different requirements, and IT teams are focusing on the connection between network nodes. So the overall architecture needs to support both centralized and distributed control planes.

Robotics: Kubernetes has also been implemented in robotics. Sony engineers showcased how K8s cluster systems can be used for distributed system integration of robots that perform specific tasks collaboratively.

Laser-based Manufacturing: Another interesting use case, discussed by Moritz Kröger, a researcher at the RWTH Chair for Lasertechnology, leverages a Kubernetes-based distributed system. Kubernetes features like automated configuration management and the flexibility to move workloads between clusters give operational benefits to laser manufacturing machines.

OpenYurt + EdgeX Foundry: OpenYurt is yet another open-source framework that extends the orchestration features of upstream Kubernetes to the edge. It was showcased integrating with EdgeX Foundry in 5G IoT edge use cases, where EdgeX Foundry is used to manage the IoT devices and OpenYurt is used to handle server environments through its plugin set.

Using GitOps: Kubernetes supports cloud-native application orchestration as well as declarative orchestration. It is possible to apply the GitOps approach to achieve zero-touch provisioning at multiple edges from the central data center.

Hong Kong-Zhuhai-Macao Bridge: In another use case discussed, Kubernetes is implemented in edge infrastructure to manage the applications that manage sensors at the Hong Kong-Zhuhai-Macao Bridge. The use case is unique in its focus on how to define the sensor devices on the bridge as CRDs in Kubernetes, how to associate each device with CI/CD, and how to manage and operate the applications deployed on edge nodes.
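As an illustration of the CRD approach (the actual schema used for the bridge is not public, so the group, kind, and fields below are hypothetical), a sensor device could be modeled like this:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: sensors.bridge.example.com
spec:
  group: bridge.example.com
  scope: Namespaced
  names:
    plural: sensors
    singular: sensor
    kind: Sensor
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                location:
                  type: string   # e.g. a span or pylon identifier
                sensorType:
                  type: string   # e.g. vibration, wind, strain
```

Once such a CRD is registered, each physical sensor can be declared as a Sensor object and managed with the same kubectl and CI/CD tooling as any other Kubernetes resource.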

Node Feature Discovery: A vast number of end devices can be part of the thousands of edge nodes connected to data centers. Similar to the Akri project, the Node Feature Discovery (NFD) add-on can be used to detect hardware features and advertise them to Kubernetes clusters, enabling orchestration across edge servers as well as cloud systems.

Kuiper and KubeEdge: EMQ’s Kuiper is open-source data analytics/streaming software that runs on edge devices with low resource requirements. It can integrate with KubeEdge to produce a combined solution that pairs KubeEdge’s application orchestration capabilities with streaming analytics. The combined solution delivers low latency, saves bandwidth costs, eases the implementation of business logic, and lets operators manage and deploy Kuiper applications from the cloud.

EdgeX Foundry Reaches Four Years!


In four short years, EdgeX has become the global standard at the IoT edge; this blog examines why and looks to the future.

By Keith Steele, CEO of IOTech and member of the LF Edge Board and the EdgeX Technical Steering Group.

The emergence of Edge Computing

Back in 2017, “edge computing” was a forecast in a Gartner report that called it “the mother of all infrastructures.” A new computing model, edge computing promised to alter how businesses interact with the physical world.

With edge computing, data from physical devices—whether it be a drone, sensor, robot, HVAC unit, autonomous car, or other intelligent device—is acquired, processed, analyzed, and acted upon by dedicated edge computing platforms. The processed data can be acted upon locally and then sent to the cloud as required for further action and analysis.

Edge computing helps businesses very rapidly and inexpensively store, process, and analyze portions of their data closer to where it is needed, reducing security risks and reaction times, and making it an important complement to cloud computing. It is, however, a complex problem.

As more devices generate more data, existing infrastructure, bandwidth restrictions, and organizational roadblocks can stand in the way of extracting beneficial insights. What is more, there is no one-size solution that fits everyone. Different applications require different types of compute and connectivity and need to meet a variety of compliance and technology standards.

EdgeX – A Strategic Imperative

This inherent complexity was recognized as a major barrier to market take-up at the edge; in the same way that common standards and platforms are applied across most of the IT stack, there was recognition that a common ‘horizontal’ software foundation at the edge was needed.

In June 2017, in Boston, MA, under the auspices of the Linux Foundation, around 60 people from many technology companies gathered from around the world to constitute the EdgeX Foundry open-source project; the attendees had one aim: to create EdgeX Foundry as the global open edge software platform standard!


At the outset, the EdgeX team saw an open edge software platform as a strategic imperative.  EdgeX enables successful edge implementation strategies and makes the IT/OT boundary a key value-add in building new end-to-end IoT systems. Platforms like EdgeX support heterogeneous systems and ‘real time’ performance requirements on both sides of the boundary, promote choice and flexibility, and enable collaboration across multiple vendors.

Four years later, with literally millions of downloads, thousands of deployments across multiple vertical markets, and a truly global ecosystem of technology and commercialization partners, we can justifiably claim to have achieved our goal.

Over the years the project has had something like 200 code contributors, from companies as diverse as Dell, Intel, HP, IOTech, VMware, Samsung, Beechwoods, Canonical, Cavium Networks, and Jiangxing Intelligence. Some made small contributions, while others have stayed for the whole journey. This blog is a tribute to all who have contributed to the project’s continued success.

The Importance of Ecosystem

Open-source projects without a global ecosystem of partners to develop, productize, test, and even deploy stand little chance of success.

The EdgeX ecosystem was greatly enhanced in January 2019 when EdgeX, along with project Akraino, became one of two founding projects in LF Edge, an umbrella organization created by the Linux Foundation that “aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system.” LF Edge aims to bring together complementary open-source technologies.

EdgeX standalone, pre-2019, was already of great value. However, as a founding project under the umbrella of LF Edge, it proved even more valuable as momentum increased with additional global collaboration. The additional amplification and support across LF Edge projects, community, and members helped turn EdgeX into a real high-velocity project.

     

EdgeX Fundamentals

There are several fundamental technical and business challenges we set out to address with EdgeX:

Leveraging the power of Edge and Cloud Computing

Our starting point is that edge and cloud complement one another.

Cloud computing and data centers offer excellent support for large data storage and computing workloads that do not demand real-time insights. For example, a company may choose cloud computing to offload long-term data processing and analysis, or resource-intensive workloads like AI training.

However, the latency and potential costs of connections between the cloud and edge devices makes cloud computing impractical for some scenarios – particularly those in which enterprises need faster or real-time insights, or the edge produces massive amounts of data that should first be filtered to reduce cloud costs. In these cases, an edge computing strategy offers unique value.

Open vs. Proprietary

The idea behind EdgeX was to maximize choice so users did not have to lock themselves into proprietary technologies that, by design, limit choice.

Given the implicit heterogeneity at the edge, ‘open’ at a minimum means the EdgeX platform had to be agnostic to silicon, hardware, operating system, analytics engine, software application, and cloud. It would seem odd to embrace IoT diversity at the edge but then be tied to a single cloud vendor, hardware supplier, application vendor, or chip supplier.

Secure, Pluggable and Extensible Software Architecture

To offer choice and flexibility, we chose a modern, distributed, microservices-based software architecture, which we believe supports the inherent complexities at the edge. The other really big thing we defined was a set of open standard APIs that enable ‘plug and play’ of any software application at the edge. Coming back to ‘product quality’, we also wanted these maintained in a way that changes to the APIs did not mean huge rewrites for application providers.

Edge Software Application ‘Plug and Play’

A key promise of EdgeX is that it provides a standard open framework around which an ecosystem can emerge, offering interoperable plug-and-play software applications and value add services providing users with real choice, rather than having to deal with siloed applications, which may potentially require huge systems integration efforts.

EdgeX ensures that any application can be deployed and run on any hardware or operating system, and with any combination of application environments. This enables real flexibility to deliver interoperability between connected devices, applications, and services across a wide range of use cases.

Time-Critical Performance and Scalability

Many of the applications we want to run at the edge, including specialist AI and analytics applications, need access to ‘real-time’ data. These can be very challenging performance constraints, e.g., millisecond or even microsecond response times, often with absolute real-time predictability requirements. Edge systems can also be very large-scale and highly distributed.

The hardware available to run time-critical Edge applications is often highly constrained in terms of memory availability or the need to run at low power. This means edge computing software may need to be highly optimized and have a very small ‘footprint’.

Access to real time data is a fundamental differentiator between the edge and cloud computing worlds. With EdgeX we decided to focus on applications that required round trip response times in the milliseconds rather than microseconds.

Our target operating environments are server- and gateway-class computers running standard Windows or Linux operating systems. We decided to leave it to the ecosystem to address time-critical edge systems, which require ultra-low footprint, microsecond performance, and even hard real-time predictability. (My company, IOTech, just filled that gap with a product called Edge XRT.) It is important that real-time requirements are understood in full, as the decisions taken can significantly impact the success or failure of edge projects.

Connectivity and Interoperability

A major difference between the edge and cloud is inherent heterogeneity and complexity at the edge. This is best illustrated in relation to connectivity and interoperability requirements, south and northbound:

  • Southbound: The edge is where the IT computer meets the OT ‘thing’, and there is a multitude of ‘things’ with which we will want to communicate, using a range of different ‘connectivity’ protocols at or close to real time. Many of these ‘things’ are legacy devices deployed with older systems (brownfield). EdgeX provides reference implementations of some key protocols, north- and southbound, along with SDKs that readily allow users to add new protocols where they do not already exist, ensuring acquired data is interoperable despite the differences in device protocols. The commercial ecosystem also provides many additional connectors, making connectivity a configuration task rather than a programming task.
  • Northbound: Across industry, we also have multiple cloud and other IT endpoints; therefore, EdgeX provides flexible connectivity to and from these different environments. In fact, many organizations today use multi-cloud approaches to manage risk, take advantage of technology advances, avoid obsolescence, obtain leverage over cloud price increases, and support organizational and supply-chain integration. EdgeX software provides for this choice by being cloud agnostic.

How does the EdgeX Vendor Ecosystem deliver customer solutions?

There are many companies offering value-add products and services on top of the baseline open-source product, including mine, IOTech. There are also many examples of live deployments in vertical markets such as manufacturing and process automation, retail, transportation, and smart venues and cities. See the EdgeX Adopter Series presentations for some examples.

Where Next for EdgeX?

The EdgeX project goes from strength to strength, with huge momentum behind its V1 release, and we will soon release EdgeX 2.0, a major release that includes an all-new and improved API set (eliminating technical debt that has accrued over four years), more message-bus communication between services (replacing REST communication where more quality of service and asynchronous behavior are needed), enhanced security services, and new device/sensor connectors. The EdgeX 2.0 release will also emphasize outreach, including much more of a focus on users as well as developers. With this release, the community launches the EdgeX Ready program, a means for organizations and individuals to demonstrate their ability to work with EdgeX.

Some Closing Thoughts

The full promise of IoT will be achieved when you combine the power of cloud and edge computing: delivering real value that allows businesses to analyze and act on their data with incredible agility and precision, giving them a critical advantage against their competitors.

The key challenges at the edge related to latency, network bandwidth, reliability, security, and OT heterogeneity cannot be addressed in cloud-only models – the edge needs its own answers.

EdgeX and the LF Edge ecosystem maximize user choice and flexibility and enable effective collaboration across multiple vertical markets at the edge, helping to power the next wave of business transformation. Avoid the risk of getting left behind. To learn more, please visit the EdgeX website and the LF Edge website and get involved!