Building Automation: A Sweet Spot for EdgeX Foundry


Written by Andy Foster, active member of the EdgeX Foundry technical community and Product Director at IOTech

The key objective for LF Edge’s EdgeX Foundry is to create a flexible and open edge computing IoT platform that can support a range of different vertical use cases across markets such as manufacturing, oil & gas and smart energy. A key vertical market that has emerged for the community, and in which IOTech in particular has been involved in multiple projects, is Building Automation.

Why is Building Automation proving to be such a sweet spot for EdgeX? In our experience, the requirements of the next generation of Building Automation applications align perfectly with the capabilities of EdgeX.  In particular, a modern building is a sophisticated and heterogeneous environment consisting of diverse subsystems including lighting, HVAC and access control. Data can be generated from multiple sources using a range of different device/sensor technologies and protocols (e.g. Modbus, BACnet, DALI, MQTT and others).

The sensor data must be normalized by the IoT platform running on an Edge node (e.g. an industrial gateway or on-premise server). The data can then be fed into the building management analytics, which can make smart control decisions in “near” real-time on how to automatically optimize the building environment for the benefit of the users and also the building owners. EdgeX Foundry’s openness as an open source project (it can support any “Southbound” OT protocol or “Northbound” Cloud endpoint) and its platform independence (it is not tied to a specific operating system or hardware/silicon) are key to its suitability for Building Automation use cases. Most importantly, EdgeX also provides “out-of-the-box” connectivity for a number of the protocols most commonly required in a smart building system. As of the most recent EdgeX Edinburgh release, the list includes Modbus, BACnet, MQTT and OPC UA Device Services.

The video below showcases the demo created by IOTech on behalf of the Linux Foundation to highlight the key capabilities that make EdgeX a great choice of Edge IoT Platform for use in Building Automation applications. It shows the integration of multiple devices commonly found in commercial buildings (light and temperature sensors, an HVAC controller, DALI lighting controllers, a power meter and an access control system). Based on occupancy data, the Edge Analytics automatically regulates the building environment, controlling the light levels and HVAC settings in multiple independent zones based on the ambient light and temperature readings from each zone. Overall power consumption of the building is also monitored, and key data streams are exported to a Big Data application running on the AWS Cloud.

For more information about EdgeX Foundry, visit the project website. To learn more about the other edge computing projects under the LF Edge umbrella, visit the LF Edge website.

Using DockerHub in Akraino Edge Stack & Other Linux Foundation Projects


By Kaly Xin, Eric Ball, and Cristina Pauna

DockerHub is the world’s largest library and community for container images. It offers a huge repository for storing container images and is available worldwide. It can automatically build container images from GitHub and Bitbucket and push them to Docker Hub. These are just a few of the features it provides, but maybe one of the best is its seamless support for multi-arch images through fat manifests.

Why Docker Hub is recommended for Multi-Arch

The Docker Hub registry is able to store manifest lists (or “fat manifests”). A manifest list acts as a pointer to other images, each built for a specific architecture, thus making it possible to use the same name for images that are built on hardware with different architectures.

Figure 1: Docker registry storing amd64, arm64 images and their fat manifest

In the picture above, akraino/validation:k8s-latest is the fat manifest, and its name can be used to reference both the akraino/validation:k8s-amd64-latest and akraino/validation:k8s-arm64-latest images. Inspecting the manifest shows which images it contains, and for each one the target hardware architecture and OS.

Figure 2: Docker fat manifest details
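The same details can be retrieved from the command line. Here is a minimal sketch, assuming the akraino/validation:k8s-latest manifest shown above and a Docker client with the manifest command available:

```bash
# On older Docker releases the manifest command is experimental and must
# be enabled first:
#   export DOCKER_CLI_EXPERIMENTAL=enabled

# Print the fat manifest: each entry lists an image digest together with
# its target architecture and OS.
docker manifest inspect akraino/validation:k8s-latest
```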

How does it work?

When building an image for a specific arch, the arch is added to the tag of the image (akraino/validation:k8s-amd64-latest and akraino/validation:k8s-arm64-latest).

After the images are pushed to the Docker Hub repo, the manifest can be created from the two images. Its name will be the same as that of the two images, but with the arch removed from the tag (akraino/validation:k8s-latest).
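As a rough sketch of that workflow (the image names match the example above; the Dockerfile location and the availability of the docker manifest command are assumptions that depend on your environment):

```bash
# Build and push an image on each architecture, encoding the arch in the tag.
# Run the first pair on an amd64 host and the second pair on an arm64 host.
docker build -t akraino/validation:k8s-amd64-latest .
docker push akraino/validation:k8s-amd64-latest

docker build -t akraino/validation:k8s-arm64-latest .
docker push akraino/validation:k8s-arm64-latest

# Create the fat manifest that points at both images, annotate the arm64
# entry with its platform, and push the manifest under the arch-less tag.
docker manifest create akraino/validation:k8s-latest \
    akraino/validation:k8s-amd64-latest \
    akraino/validation:k8s-arm64-latest
docker manifest annotate akraino/validation:k8s-latest \
    akraino/validation:k8s-arm64-latest --os linux --arch arm64
docker manifest push akraino/validation:k8s-latest
```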

To do this in CI with Jenkins, the Jenkins slave has to have docker and a couple of other LF tools installed. The connection to Docker Hub is done through LF scripts (see releng-global-jjb for more info) and all you need to do is define the JJB jobs.

The Akraino validation project is already pushing to Docker Hub, so if you would like to check out some template code, take a look at ci-management/jjb/validation. The docker images are pushed to the official repo and the docker build jobs run daily.

In the figure below, the main Jenkins job (validation-master-docker) triggers two parallel jobs that build and push the amd64 (akraino/validation:k8s-amd64-latest) and arm64 (akraino/validation:k8s-arm64-latest) images into the Docker Hub registry. At the end, the fat manifest (akraino/validation:k8s-latest) is created in a separate job.

When pulling the image, the name of the manifest is used (akraino/validation:k8s-latest); the correct image will be pulled based on the architecture of the host from which the pull is made.
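For example (again assuming the image names above), the same command works unchanged on both architectures:

```bash
# On an amd64 host this resolves to akraino/validation:k8s-amd64-latest;
# on an arm64 host it resolves to akraino/validation:k8s-arm64-latest.
docker pull akraino/validation:k8s-latest
```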

Figure 4: Pulling a docker image from two different hardware architecture servers using the same name

What’s next

Docker Hub has been integrated into LF projects like OPNFV from the beginning and is now integrated into Akraino too, so other open source projects can draw on this successful experience to integrate Docker Hub into their own pipelines.

What to Expect from the Open Glossary of Edge Computing in 2019


By Alex Marcham, LF Edge and State of the Edge Contributor and Technical Marketing Manager at Vapor IO, and Matt Trifiro, Co-Chair at State of the Edge; Chair at Open Glossary of Edge Computing and CMO at Vapor IO

The Open Glossary of Edge Computing began as a utilitarian appendix to the 2018 State of the Edge report. It had humble goals: to cut through the morass of vendor- and pundit-driven definitions around edge computing and, instead, deliver a crisply-defined common lexicon that would enhance understanding and accelerate conversations around all things edge.

Very quickly, it became clear that the Open Glossary could be a powerful and unifying force in the fledgling world of edge computing and that no one entity should own it. Instead, everybody should own it. Thus began our partnership with The Linux Foundation. We converted the glossary into a GitHub repo, placed it under a Creative Commons license, and created an open source project around it.

In January 2019, the Open Glossary became one of the founding projects of LF Edge, the Linux Foundation’s umbrella group for edge computing projects. The Open Glossary now plays a critical role that spans all of the LF Edge projects with its mission to collaborate around a single point of reference for edge computing terminology. This community-driven lexicon helps mitigate confusing marketing buzzwords by offering a foundation of clear and well-understood words and phrases that can be used by everyone.

In 2019, the momentum of the Open Glossary project will continue unabated. Open Glossary has four main projects this year, building on its successes in 2018:

Grow the Community

The Open Glossary project depends on open collaboration, and the community is actively building engagement by seeking out participation from key stakeholder groups in edge computing. Within The Linux Foundation itself, the Open Glossary team has sought input from not only all of the LF Edge projects but also adjacent projects, such as the CNCF’s Kubernetes IoT and Edge working group. In addition, the Open Glossary has formed alliances with other edge computing groups and foundations, including the TIA, iMasons, and the Open19 Foundation. Partnering with other non-profits and consortia will help the project grow its base of passionate collaborators who are dedicated to expanding and improving the Open Glossary.

The Taxonomy Project

Readers of the Open Glossary want more than mere definitions; they want to know how the constituent parts fit together as a whole. To answer this request, the Open Glossary has begun the Taxonomy Project, a working group that seeks to create a classification system for edge computing across three core areas:

  • Edge infrastructure
  • Edge devices
  • Edge software

The Taxonomy Project will draw upon the expertise of subject matter experts in each of the core areas to define and hopefully also visualize the complex relationships between the different components in edge computing. The Taxonomy Project WG is being led by Alex Marcham and the Open Glossary will publish the first taxonomies in Q3 2019.

The Edge Computing Landscape Map

At the request of The Linux Foundation, the State of the Edge project also contributed its Edge Computing Landscape Map, which is now a working group under the auspices of the Open Glossary. The Landscape WG is being led by Wesley Reisz and has just started out. Currently, the group holds regular weekly meetings (Tuesdays at noon Pacific Time) and is actively working on refining the categories of the LF Edge Landscape. The group seeks wider participation and will be asking for help to test and validate the proposed categories.

You can see the landscape map evolve at https://landscape.lfedge.org and join the mailing list here.

Glossary 2.0

Edge computing is a rapidly evolving area of development, deployment and discussion, and the Open Glossary contributors aim to keep the glossary up to date with the latest developments in industry and academia, driven by contributions from our project members. During 2019, the Open Glossary team aims to release its 2.0 version of the Open Glossary, which will be a timely update that continues to form the basis for clear and concise communication on the edge.

To see open issues and pull requests or to make contributions, visit the GitHub repo. Go here to join the mailing list.

In Summary

2019 is shaping up to be a pivotal year for edge technologies, their proponents and of course their end users. The Open Glossary project, as part of The Linux Foundation, is dedicated to continuing its unique mission of bringing a single, open and definitive lingua franca to the world of edge computing. We hope you’ll join us during 2019 as we focus on these goals for the project, and look forward to your contributions.

Contributors can get involved with the Open Glossary project by joining the mailing lists and contributing and commenting on the GitHub repository for the project.

Alex Marcham is a technical marketing manager at Vapor IO. He is one of the primary contributors to the Open Glossary and leads the Taxonomy Project working group. Matt Trifiro is CMO of Vapor IO and is the Chair of the Open Glossary project.

Your Path to Edge Computing: Akraino Edge Stack’s Release 1


By Kandan Kathirvel, Akraino Edge Stack TSC-Chair and Tina Tsou, Akraino Edge Stack TSC Co-Chair

The Akraino community was proud to announce the availability of its Release 1 on June 6th. The community has experienced extremely rapid growth over the past year, in terms of both membership and community activity: Akraino includes broad contributions from across LF Edge, with 60% of LF Edge’s 60+ members contributing to the project, as well as several other developers across the globe.

Before Akraino, developers had to download multiple open source software packages and integrate/test on deployable hardware, which prolonged innovation and increased cost. The Akraino community came up with a brilliant way to solve this integration challenge with the Blueprint model.

An Akraino Blueprint is not just a diagram; it’s real code that brings everything together so users can download and deploy the edge stack in their own environment to address a specific edge use case. Example use cases include IoT gateway, MEC for connected car, and a RAN intelligent controller that enables 5G infrastructure.

The Blueprints address interoperability, packaging, and testing under open standards, which reduces both overall deployment costs and integration time by users. The Akraino community will supply Blueprints across the LF Edge portfolio of projects, with plans to address 5G, IoT and a range of other edge use cases.

The key strength of the Akraino community is the well-defined process to welcome new Blueprints, new members, users and developers. The technical community comprises a Technical Steering Committee (TSC), which consists of representatives from across member companies. The TSC acts as a “watchdog” to set process, monitor the community, and ensure open collaboration. In addition to the TSC, the Akraino community has seven sub-committees focused on much-needed areas such as security, edge APIs, CI and validation labs, upstream collaborations, documentation, process and community. Regular meetings are scheduled to ensure broader collaboration and accelerate progress on the various projects. The community calendar can be found here. It is not necessary to be a member to join the community calls; we invite anyone interested in learning more to join!

The above picture illustrates the primary uses of the Akraino R1 Blueprints and their targeted deployment areas. The Release 1 Blueprints cover everything from a larger deployment in a telco-based edge cloud to a smaller deployment, such as in a public building like a stadium. Each Blueprint is validated via community standards on real physical lab hardware, hosted by either the community or the users.

Akraino Edge Stack prides itself on continuous refinement and development to ensure the success of Blueprints and projects. The community is already planning R2, which will include both new Blueprints and enhancements to existing Blueprints, tools for automated Blueprint validations, defined edge API’s, new community lab hardware, and much more. For future events and meetings please visit: https://wiki.akraino.org/display/AK/Akraino+TSC+Group+Calendar.

 

Hello System Management!


Written by Akram Ahmad, EdgeX Foundry contributor and Principal Software Engineer at Dell Technologies

For those of you not yet familiar with the canonical way of introducing new technology-centric stuff, at least the way we do it in the world of computer programming–and thinking here specifically of the “Hello World!” first-ever program introduced to the world by programming legends Kernighan and Ritchie with their C programming language–please allow me to clarify what may be an admittedly enigmatic title we’ve got for this blog post. Essentially, it was with the EdgeX Foundry Delhi Release that the team had the pleasure of introducing EdgeX System Management capability to the world! Hence, “Hello System Management!” (More on the Edinburgh Release in just a bit.)

It’s my ongoing privilege to be a part of helping design, implement, and shepherd System Management (or “SM” for short) to date, and going forward. With that in mind, I would like to give you a flavor of the capabilities that SM brings to the table.

You can think of the System Management Agent (SMA), in particular, as a brand-new service that serves as the coordinator for control plane information (i.e. status, configuration, and metrics for EdgeX services). The SMA also controls actions on EdgeX services (i.e. starting, stopping, and restarting services). Cloud or third-party systems can, in turn, call on the API provided by the SMA to trigger those actions or to get the control plane data they need. In a nutshell, the SMA can serve as a one-stop shop for managing a deployed instance of EdgeX.

Each EdgeX micro service has a corresponding management API that the SMA calls on to help control that service (e.g. to stop the service) or fetch its latest configuration or metrics. The SMA, along with the management API provided by each service, will be expanded in future releases of EdgeX and will one day offer control plane data and actions via alternate protocols (for example via the well-known protocol SNMP that is part of the TCP/IP suite that powers the Internet as we know it today).
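To make that concrete, here is a minimal sketch of what calling the SMA could look like from a shell. The host, port, endpoint path and JSON payload are illustrative assumptions rather than the documented EdgeX API, so check the SMA API reference for your release before relying on them:

```bash
# Illustrative only: ask the SMA to restart one of the EdgeX services.
# The port (48090), the /api/v1/operation path and the payload shape are
# assumptions made for this sketch.
SMA=http://localhost:48090/api/v1

curl -X POST "$SMA/operation" \
     -H "Content-Type: application/json" \
     -d '{"action": "restart", "services": ["edgex-core-data"]}'
```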

I invite you to hold on to the thought that, for the constellation of services that will be offered via EdgeX, there needs to be a “controller” of sorts…

Now, let’s turn to the truism that an IoT platform like EdgeX is used to collect the data from “Things.” Put another way, the platform ingests data that is physically sensed from IoT sensors and devices. Work associated with collecting, managing, and disbursing sensed data is exactly the kind of work associated with a “data plane.” On the other hand, the kind of work associated with operating and managing the IoT platform software and infrastructure is best described as “control plane” operations.

This includes getting the IoT platform and infrastructure running (or shut down), configuring the platform software for the particular use case, and understanding the health and status of the software platform (is it running, and what type of resources is the IoT software platform using?). Analysis of any control plane data may be used to take action as well, but that action revolves around the IoT platform itself–not the sensed or controlled world. For example, in the control plane, it may be determined that a service needs to be restarted because it is consuming too much memory.

This is where the SMA comes in!

The System Management (SM) service will assist in protecting EdgeX and reducing its API attack surface. Rather than opening up access to every service, the SM service serves as a single-point proxy to the control plane of all EdgeX services for the central management system. The SMA, therefore, reduces the number of access points to EdgeX and reduces potential security vulnerabilities. It also allows the central management system to be loosely coupled to all of EdgeX—requiring the central management system to have just one access address (the address of the SM service) that it needs to know about for any EdgeX deployment.

Before digging deeper, let’s recap what we’ve learned so far: System Management (SM) functionality, as determined by the EdgeX community, is generally associated with control plane data and operations.  The control plane (and System Management) is about managing the IoT platform and infrastructure. The data plane is all about managing and understanding the physical world that the IoT platform is there to observe and control. Think about it: Whether one is talking about towering skyscrapers or flimsy tents rigged on the grounds of a park, there remains, as ever, the crucial need for control. Without coordination, things can get chaotic in a heartbeat.

Also, and crucially, SM is about providing information—having retrieved that information in the first place—about the status of the services it manages. Eventually, building on this capability, SM will provide the means to reconfigure the services themselves. At this time, with the Edinburgh Release, SM can provide performance and memory metrics for requested services. Likewise, SM can provide detailed configuration information for the services requested by users of SM, as well as the health status of those services (whether given services are up or down).
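Continuing the illustrative sketch from earlier (with the same caveat: the paths below are assumptions patterned on the Edinburgh-era SMA, so confirm them against the official API documentation), retrieving that control plane data might look like this:

```bash
# Illustrative only: fetch metrics, configuration, and health status for two
# services through the SMA. Service names and endpoint paths are assumptions.
SMA=http://localhost:48090/api/v1

curl "$SMA/metrics/edgex-core-data,edgex-core-metadata"
curl "$SMA/config/edgex-core-data,edgex-core-metadata"
curl "$SMA/health/edgex-core-data,edgex-core-metadata"
```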

In other words, while control is a critical capability, SM is about more than just control. By the same token, we want to make it abundantly clear that we are building System Management (SM) capability to facilitate other central systems, and not be those central systems. In a nutshell, EdgeX SM is about helping promote interoperability—in this case, allowing you to manage EdgeX with your choice in central management system.

Let’s shift gears a bit now: When you look at a typical fog deployment, a larger management system will want to manage the control plane of the edge systems as well as all the intermediate and upper level nodes and resources of the overall deployment. Just as there is a management system to control all the nodes and infrastructure within a cloud data center, and across cloud data centers, so too there will likely be management systems that will manage and control all the nodes (from edge to cloud) and infrastructure of a complete fog or IoT deployment.

If you will be so kind as to allow me the use of just one more metaphor, it will be this one: Think to a team of workhorses ploughing the land (EdgeX services). Then think to the driver (System Management). Finally, and without going too crazy about the farming metaphor—all metaphors, including this one, can carry only so much water—I invite you to imagine two scenarios (1) First, the one without the other, and (2) Second, the two (i.e. the team of workhorses and the driver) working in unison. If you associated chaos with the first scenario, and clockwork unison with the second, you are in good company.

So with the Edinburgh Release, we will continue building SM capability to facilitate other central systems. Again, the goal is not to be those central systems, but rather to facilitate those systems. May your System Management (SM) learnings continue, and may the community be the better for it!

If you have questions or comments, visit the EdgeX Foundry Slack Channel and share your thoughts in the #community channel. Or, join the LF Edge Slack Channel and share your thoughts in the #EdgeX channel.

Project EVE Code Now Available


Project EVE (Edge Virtualization Engine), part of LF Edge since the organization’s inception, earlier this week marked an important milestone: the official handover of code from ZEDEDA. EVE provides an open standard for edge virtualization, helping make it as easy and secure to manage applications on edge devices as it is in the cloud. With EVE, enterprises can run a wide variety of applications on any edge-class gateway while enjoying the benefits of data center virtualization, like zero-touch provisioning and secure, one-click software update rollouts at IoT scale.

“Project EVE’s release under LF Edge is an important milestone for the edge computing industry,” said Melissa Evers-Hood, senior director of Google Operating Systems for Intel System Software Products, and chair of the LF Edge Governing Board. “An open approach to virtualization can help companies address the growth in diverse services and hardware configurations being deployed at the edge. Using virtualization to consolidate workloads provides companies with a more flexible and elastic infrastructure, allowing them to secure and manage these services while containing costs.”

Project EVE allows applications ranging from legacy software programs running in virtual machines (VMs) to the latest microservices architectures to operate in a secure and reliable way on smaller edge devices. This is accomplished through the use of a type-1 hypervisor, an Edge Container runtime, and a hardened root-of-trust implementation, enabling workloads to run in either a VM or standard container environment. By decoupling software from hardware, EVE also allows for multi-tenant deployments that can operate in complete isolation from each other, increasing security and decreasing complexity.

Key features of Project EVE include:

  • Compatibility with all major edge hardware and cloud providers—no vendor lock-in
  • Ability to support any application that can run in a VM or standard container
  • Simplified application management through standardized APIs
  • Smarter hardware usage through coordinated resource allocation and partitioning
  • Ability to create a zero-trust approach to security, leveraging a hardened root-of-trust implementation

As the number of IoT devices continues to skyrocket, it’s becoming more and more important for businesses to be able to process, analyze, and act on sensor data in real time via local edge gateway systems. Project EVE provides a key component of the technology stack needed for powerful computing at the edge. By contributing the code for Project EVE to LF Edge, ZEDEDA is furthering the organization’s mission to create an open framework for edge computing.

For more information about Project EVE, visit https://www.lfedge.org/projects/eve/.

 

LF Edge at IoT World


LF Edge will be at IoT World 2019 in Santa Clara, Calif. from May 13 to 16. The event is the leading IoT showcase, featuring the top technologies, strategies, and case studies for the key industries implementing IoT. This year, the LF Edge projects Akraino, EdgeX Foundry and Project EVE will be on the show floor to show off their latest demos in booth #610.

Akraino will be on-site to show off its latest line of blueprints, which are designed to support a wide variety of edge use cases. Akraino will show off its SDN Enabled Broadband Access (SEBA) blueprint, ELIOT (Edge Lightweight and IoT) Blueprint, Micro-MEC Blueprint and the Future Network Lab Connected Vehicle Blueprint. The Akraino community tests and validates the blueprints on real hardware labs supported by users and community members.

EdgeX Foundry will showcase its building automation demo, which highlights EdgeX’s ability to bring together a real-world, smart, flexible office space environment based on components from a variety of vendors leveraging numerous connectivity standards, operating systems and hardware types.

EdgeX will also be featured in a demo from Beechwoods Software that showcases the AMD Edge Gateway reference running EdgeX and supporting IBM Watson IoT for both the cloud and the analytics engine.

In addition, Project EVE will also be on the scene with its new wind turbine model. Based on the EdgeX framework and the Project EVE technology, the demo will showcase turbine operations analytics.

LF Edge members will also be on-site to provide background on any of the projects or walk you through an interactive demo. Stop by booth #610 to learn more or attend any of the following IoT presentations on Thursday, May 16:

  • Jason Shepherd, LF Edge Governing Board Member and CTO of IoT and Edge Computing at Dell Technologies, will present a session at 11:40 am – Noon in Grand Ballroom F. The presentation titled, “The Holy Grail for Digital & Data,” explores where companies are today with their data strategy and where they might be in five to ten years.
  • Arpit Joshipura, GM of Networking & Orchestration + Edge / IOT at the Linux Foundation, will present on a panel titled “The Fast and The Curious: Smart, Safe Ways to Accelerate Building and Deployment,” at 1:40 pm – 2:20 pm in Grand Ballroom G. He’ll be joined by Frederic Desbiens, IoT & Edge Program Manager at The Eclipse Foundation; Christopher Konopka, Developer Evangelist at Twilio; and Brian Buntz, Content Director at IoT World Today. Attendees of this session will learn about the latest methods for bringing a commercial or in-house IoT application or device into production more quickly, efficiently and securely.
  • Arpit Joshipura will then join Sue Troy, IoT World Today Executive Editor, and Alexander Olesen, Founder of Babylon Micro-Farms, for the last panel of the day, titled “What’s the deal with…?” From 4:20 – 5 pm in the Grand Ballroom G, the panel will discuss new IoT technologies, projects, hardware, software and services, and a wide range of other topics like edge and IoT, starting an IoT company, 5G, Wi-Fi 6, digital twins, challenges and concerns.

If you have questions or comments, visit the LF Edge Slack Channel and share your thoughts in the #community or project channels.

EdgeX Foundry Is a Finalist for The IoT World Awards


EdgeX Foundry, the vendor-neutral, operating system- and hardware-independent, open source, microservices-based edge computing software platform, is a finalist for the “Best Edge Computing Solution & Achievements in IoT Integration” award at the IoT World Awards. The awards celebrate success and outstanding contributions across the very best in the world of IoT.

Other finalists in this category include Dell Technologies and FogHorn – both are currently LF Edge members and are still very active in EdgeX Foundry – as well as Itron Inc and Lantronix Inc.

Not only will EdgeX Foundry be attending the award celebration, but it will also be on-site on the exhibition floor. To see interactive EdgeX Foundry demos, which include building automation and a wind turbine, visit the LF Edge booth (Booth 610). For more information about the activities planned for IoT World, visit https://www.lfedge.org/event/iot-world-2019/.

EdgeX Foundry focuses on the IoT Edge, and helps simplify the process to design, develop and deploy solutions across industrial, enterprise, and consumer applications. Since its launch in 2017, EdgeX has met several technical milestones in its roadmap, including the Barcelona, California, and Delhi releases.

In January 2019, EdgeX Foundry joined Akraino, Project EVE, The Open Glossary of Edge Computing and Home Edge to form LF Edge, an umbrella organization dedicated to establishing an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating systems.

The winner of the award will be announced at the IoT Awards Dinner & Gala in Santa Clara, CA on May 15. To learn more about the awards or any of the other categories, click here.

Arm at the Edge: Telco and IoT Akraino Blueprints debut at ONS 2019


By Tina Tsou, co-chair, Akraino Edge Stack Technical Steering Committee & Enterprise Architect, Arm. A version of this post also appeared on the Arm Community blog.

Last week at Open Networking Summit in San Jose, there was a lot of buzz about the Akraino Edge Stack Project. Launched in 2018, Akraino Edge Stack was developed to create an open source software stack that supports high-availability cloud services optimized for edge computing systems and applications.

Significant progress has been made in this community since the launch, and many member companies showcased their Akraino blueprints last week. Arm is very active in the Akraino community and is excited about the Akraino Edge Stack Blueprints.

Here are some additional details about the four blueprints that were shown:

1) SDN Enabled Broadband Access (SEBA) on Ampere-based servers

For users of virtual broadband access (XGS-PON, a higher-bandwidth, symmetric version of GPON), Ampere demonstrated SDN Enabled Broadband Access (SEBA) validation on Arm. The SEBA blueprint in the Akraino Edge Stack Project can run virtual broadband access applications – vOLT access and aggregation for 5,000 edge locations. There are three servers per POD, with x86 and Arm (with 8-16 cores each). The power consumption is restricted to less than 1 kW and includes NEBS compliance and 48V DC. The Ampere eMAG-based server delivers competitive performance per watt with 32 Armv8 CPU cores at 3+ GHz with Turbo.

The foundation of the SEBA validation on Arm demo is built on the Integrated Edge Cloud (IEC) blueprint family. The Integrated Edge Cloud (IEC) enables new functionality and business models on the network edge, with benefits such as lower latency for end users, less load on the network since more data can be processed locally, and full utilization of the computational power of the edge devices.

VMware proposed a multi-cloud xConnection to interconnect different kinds of IT and Telco clouds. The IEC has several deployment models that each support a different business case, such as a telco/enterprise edge cloud (e.g. MEC or branch office data center) or telco/enterprise remote edge locations (e.g. SD-WAN, IoT gateways). The demonstration included Ubuntu, Kubernetes, and Calico on Arm.

2) ELIOT (Edge Lightweight and IoT) Blueprint on Huawei IoT Gateway 

We were excited to partner with Huawei to demonstrate ELIOT: the Edge Lightweight and IoT Blueprint Family. The ELIOT blueprint was designed to serve the needs of many diverse business applications that require a converged IoT gateway, and enterprise WAN edge uses of SD-WAN solutions or universal CPE (uCPE). The IoT gateway can be deployed in smart cities, smart homes, connected farming, agriculture logistics, and industrial IoT. SD-WAN, WAN edge, and uCPE are designed to be used for hybrid WAN, hybrid cloud deployment, and BYOD. ELIOT is very scalable, from a single unit to 10K, 100K, 1,000K units, or more. ELIOT also supports diverse types of edge applications in many industries and market segments, including but not limited to: telcos, operators, service/cloud providers, medicine, smart cities, industrial IoT, home, and enterprise. The cloud/network infrastructure for ELIOT includes containers, Kubernetes, and the Kubernetes ecosystem. At the same time, the blueprint is designed to use lightweight operating systems and container runtime environments.

“The IoT gateway and enterprise edge SD-WAN gateway are two great examples of computing or power resource constrained edge nodes. The ELIOT project provides end-to-end light-weight open source blueprints for deploying and managing these use cases, built on any processor architecture, to foster a vibrant ecosystem around edge gateways in both hardware and software.”

– Bill Ren, Chief Open Source Liaison Officer, Huawei

3) Micro-MEC Blueprint from Nokia

Nokia, Arm, and other ecosystem partners within the Akraino/LF Edge community have formed an edge blueprint for a Smart Cities platform called Micro-MEC (uMEC), targeted at a range of use cases. Nokia is using an Arm-based Marvell CN83xx uMEC design to show a highway traffic monitoring application. The uMEC enables new functionalities and business models on the network edge. The benefits of running applications on the network edge include lower latencies for end users, less load on the network since more data can be processed locally, and better security/privacy since sensitive data need not be transferred to a centralized location.

All these new services support the business case for building new high-speed networks which in turn enable new things. The uMEC has several deployment models that each support different business cases including:

  • Fixed installation as part of 5G NR base station enabling new services that require low latency such as AR/VR.
  • As an extension of the previous, the “Smart City” deployments have additional functions such as weather stations, cameras, displays, or drone charging stations.  The control software for these functions would run on the uMEC.
  • In an Industry 4.0 use case set, the uMEC is deployed as part of a 5G network and would provide a platform for running services for the factory floor.
  • In a train, the uMEC could collect and store surveillance camera data for later uploading. 

“Nokia is very excited to demo the new edge blueprint for operators and smart cities called Micro Multiaccess Edge (uMEC). It demonstrates how even a very small Arm-based system can implement a Smart City use case, and complements the industry-standard Open Edge hardware that is used in the Radio Edge Cloud and the 5G Radio Access Network,” said Tapio Tallgren, Project Technical Leader for uMEC and Akraino Technical Community.

4) Tencent Future Network Lab Connected Vehicle Blueprint

The Connected Vehicle Blueprint focuses on the MEC platform, which is the backbone for V2X (Vehicle to Everything) applications. The blueprint can be used in multiple use cases, including but not limited to: 

  • Accurate Locations: The blueprint is designed to deliver more than 10X finer location granularity. GPS provides 5-10 meter accuracy, which can be improved to <1 meter.
  • Smart Navigation: Real-time traffic information updates reduce the latency from minutes to seconds, providing a more efficient path for drivers.
  • Safer Driving: Insight into potential risks that can’t be seen by drivers’ eyes.
  • Fewer Traffic Rule Violations: Helps the driver understand the traffic rules in a specific area. For example, changing lanes before an upcoming narrow street, avoiding driving the wrong way on a one-way road, and avoiding carpool lanes when driving alone.

The blueprint can be flexibly deployed in multiple environments, including bare metal, virtual machine, and container deployments based on commodity hardware (Arm/x86 servers). The major software component of this blueprint is Tars, a Linux Foundation microservice framework project. Tars can be deployed on Arm and x86 servers. For more detailed information on Tars, refer to the GitHub link.

Learn more about the Connected Vehicle Blueprint on Google Drive or read the blog post here.

“Tencent continuously promotes network innovation from various application perspectives. We believe that application-based network innovation promotes a stronger ecosystem, which brings tremendous benefits to our customers and stakeholders.” –– Zhang Yun Fei, Director of Future Network Lab, Tencent

“Open source is an important technical strategy for Tencent. As both a platinum member and board member of the Linux Foundation, Tencent continuously makes contributions to the Linux Foundation and its projects. After the Tars project contributed to the LF in 2018, and recent Akraino blueprint, Tencent will continue to contribute several new open source projects focused on cache and configuration. We welcome additional participation from more Linux Foundation member companies!” — Xin Liu, Linux Foundation Board member, and Tencent General Manager

We are excited about these great Blueprint demonstrations and the others shown last week. I want to acknowledge the Akraino Edge Stack Project Technical Steering Committee, PTLs, committers, and contributors for their support with our activities at the conference. It is truly a team effort! Special recognition goes to:

  • Aaron Byrd, AT&T
  • Matt Taylor, Ampere
  • Trevor Tao, Arm
  • Gabriel Yu, Huawei
  • Tapio Tallgren, Nokia
  • Robert Qiu, Tencent
  • Xinhui Li, VMware
  • Ken Yi, DiDi
  • Wenhui Zhang, PSU

Connected Vehicle Blueprint Debuts at ONS NA 2019


The Connected Vehicle Blueprint, established within the Akraino community by contributions from Tencent Future Network Lab, Arm, Intel, and Nokia, was demonstrated onsite at the Open Networking Summit this week in San Jose. The blueprint demo clearly depicts its features and architecture, as well as the potential benefits for customers. The Connected Vehicle Blueprint focuses on the MEC platform, which is the backbone for V2X (Vehicle to Everything) applications.

The blueprint can be used in multiple use cases, including, but not limited to:

  • Accurate Location: The blueprint brings more than 10X finer location granularity. GPS provides 5-10 meter accuracy, which can be improved to <1 meter – roughly the width of a typical street lane.
  • Smart Navigation: Real-time traffic information updates reduce the latency from minutes to seconds and identify the most efficient route for drivers.
  • Safer Driving: Helps the driver spot potential traffic risks that may not be visible to the driver.
  • Fewer Traffic Violations: Helps the driver understand local traffic rules. For instance, changing lanes before a narrow street, avoiding driving on the wrong side of a one-way road, avoiding carpool lanes as a single driver, etc.

“Tencent continuously promotes network innovation from various application perspectives. We believe that application-based network innovation promotes a stronger ecosystem, which brings tremendous benefits to our customers and stakeholders,” said Zhang Yun Fei, Director of Future Network Lab, Tencent.

“Open source is an important technical strategy for Tencent. As both a platinum member and board member of the Linux Foundation, Tencent continuously makes contributions to the Linux Foundation and its projects. After the Tars project contributed to the LF in 2018, and recent Akraino blueprint, Tencent will continue to contribute several new open source projects focused on cache and configuration. We welcome additional participation from more Linux Foundation member companies!” said Xin Liu, Linux Foundation Board Member and Tencent General Manager.

The blueprint can be flexibly deployed in multiple environments, including bare metal, virtual machine and container-based environments on commodity hardware. The major software component of this blueprint is Tars, a Linux Foundation microservice framework project. For more detailed information on Tars, refer to: https://github.com/TarsCloud/Tars

For more information regarding the connected vehicle blueprint, refer to: