Decentralized ID and Access Management (DIAM) for IoT Networks


Written by Nima Afraz, Connect Centre for Future Networks and Communications, Trinity College Dublin

The Hyperledger Telecom Special Interest Group, in collaboration with the Linux Foundation’s LF Edge initiative, has published a solution brief addressing the problems of centralized ID and access management (IAM) in IoT networks and introducing a decentralized alternative built on Hyperledger Fabric.

The ever-growing number of IoT devices means data vulnerability is an ongoing risk. Existing centralized IoT ecosystems have led to concerns about security, privacy, and data use. This solution brief shows that a decentralized ID and access management (DIAM) system for IoT devices provides the best solution for those concerns, and that Hyperledger offers the best technology for such a system.

The IoT is growing quickly. IDC predicts that by 2025 there will be 55.7 billion connected devices in the world. Scaling and securing a network of billions of IoT devices starts with a robust device identity. Data security also requires a strong access management framework that can integrate and interoperate with existing legacy systems. Each IoT device should carry a unique global identifier and have a profile that governs access to the device.

In this solution brief, we propose a decentralized approach to validate and verify the identity of IoT devices, data, and applications. In particular, we propose using two frameworks from the Linux Foundation: Hyperledger Fabric for the distributed ledger (DLT) and Hyperledger Indy for the decentralized device IDs. These two blockchain frameworks provide the core components to address end-to-end IoT device ID and access management (IAM).
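To make the proposal concrete, here is a minimal sketch, in Go, of what a Fabric chaincode for registering and resolving device identities could look like. This is illustrative only, not the architecture from the solution brief: the contract name, fields, and DID format are hypothetical, and a full DIAM design would verify Indy-issued credentials rather than store raw keys on the ledger.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// DeviceIdentity is a minimal on-ledger profile for an IoT device.
// Field names are illustrative, not from the solution brief.
type DeviceIdentity struct {
	DID       string `json:"did"`       // decentralized identifier, e.g. "did:example:device123"
	PublicKey string `json:"publicKey"` // key the device uses to prove ownership of the DID
	Role      string `json:"role"`      // access profile governing what the device may do
}

// DeviceContract manages device registration and lookup.
type DeviceContract struct {
	contractapi.Contract
}

// RegisterDevice stores a new device identity on the ledger.
func (c *DeviceContract) RegisterDevice(ctx contractapi.TransactionContextInterface, did, publicKey, role string) error {
	existing, err := ctx.GetStub().GetState(did)
	if err != nil {
		return fmt.Errorf("failed to read ledger: %w", err)
	}
	if existing != nil {
		return fmt.Errorf("device %s is already registered", did)
	}
	bytes, err := json.Marshal(DeviceIdentity{DID: did, PublicKey: publicKey, Role: role})
	if err != nil {
		return err
	}
	return ctx.GetStub().PutState(did, bytes)
}

// ResolveDevice returns the stored identity for a DID, or an error if unknown.
func (c *DeviceContract) ResolveDevice(ctx contractapi.TransactionContextInterface, did string) (*DeviceIdentity, error) {
	bytes, err := ctx.GetStub().GetState(did)
	if err != nil || bytes == nil {
		return nil, fmt.Errorf("device %s not found", did)
	}
	var dev DeviceIdentity
	if err := json.Unmarshal(bytes, &dev); err != nil {
		return nil, err
	}
	return &dev, nil
}

func main() {
	cc, err := contractapi.NewChaincode(&DeviceContract{})
	if err != nil {
		panic(err)
	}
	if err := cc.Start(); err != nil {
		panic(err)
	}
}
```

Once installed on a channel, `RegisterDevice` and `ResolveDevice` could be invoked through the standard Fabric SDKs, with the ledger providing the shared, tamper-evident registry that replaces a central IAM authority.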

The Problem: IoT Data Security

Ambitious IoT use cases such as smart transport imply a massive volume of vehicle-to-vehicle (V2V) and vehicle-to-road communications that must be safeguarded against malicious activity and against malfunctions caused by single points of failure.

The Solution: Decentralized Identity

IoT devices collect, handle, and act on data as proxies for a wide range of users, such as humans, government agencies, or multinational enterprises. With tens of billions of IoT devices to be connected over the next few years, numerous IoT devices may represent a single person or institution in multiple roles. And IoT devices may play roles that no one has yet envisioned.

A decentralized ID management system removes the need for any central governing authority and makes way for new models of trust among organizations. All this provides more transparency, improves communications, and saves costs.

The solution is to use Hyperledger technology to create a trusted platform for a telecom ecosystem that can support IoT devices throughout their entire lifecycle and guarantee a flawless customer experience. The solution brief includes a reference architecture and a high-level architectural view of the proof of concept (PoC) that IBM is working on with some enterprise clients. This PoC uses Hyperledger Fabric as described above.

Successful Implementations of Hyperledger-based IoT Networks

IBM and its partners have successfully developed several global supply-chain ecosystems using IoT devices, IoT network services, and Hyperledger blockchain software. Two examples of these implementations are Food Trust and TradeLens.

The full paper is available to read at: https://www.hyperledger.org/wp-content/uploads/2021/02/HL_LFEdge_WhitePaper_021121_3.pdf

Get Involved with the Group

To learn more about the Hyperledger Telecom Special Interest Group, check out the group’s wiki and mailing list. If you have questions, comments or suggestions, feel free to post messages to the list.  And you’re welcome to join any of the group’s upcoming calls to meet group members and learn how to get involved.

Acknowledgements

The Hyperledger Telecom Special Interest Group would like to thank the following people who contributed to this solution brief: Nima Afraz, David Boswell, Bret Michael Carpenter, Vinay Chaudhary, Dharmen Dhulla, Charlene Fu, Gordon Graham, Saurabh Malviya, Lam Duc Nguyen, Ahmad Sghaier Omar, Vipin Rathi, Bilal Saleh, Amandeep Singh, and Mathews Thomas.

Managing in the dark: facilities in COVID


Written by Scott Smith, Industry Principal for Facilities and Data Centers for OSIsoft, an LF Edge member company

This article originally ran on the OSIsoft website – click here for more content like this. 

The job of facilities managers used to be pretty straightforward: keep the lights on, keep people comfortable. It was all about availability, uptime, and comfort. However, the world is changing, and now facility management must respond to failures before people even notice them. They must predict maintenance based on conditions rather than time, and all this needs to be done with fewer people and less energy. With the critical role facilities play in the health and safety of occupants, there’s no room for missteps.

It seems like there are not enough hours in a day to accomplish all that we face.  So, what does an ideal scenario look like for facilities managers?

First, we need to step back and realize we will not be adding all the staff we require, so we need to find a substitute. The first place to look is the data we already collect: if we enable its use as our “eyes on” the facility, it can give us real-time situational awareness. If we then enhance that data with logic-based analytics, we can shift our resources to critical priorities and ensure the availability of our facilities.

Second, this operational information can notify the facilities team of problems and allow us to be proactive in solutions and communications instead of waiting for complaints to roll in. We can even go one step further: we can predict our facility performance and use that information to identify underlying issues we might never have found until it was too late.

This is not about implementing thousands of rules and getting overwhelmed with “alarm fatigue,” but about enhancing your own data, from your own facilities, based on your needs. It allows you to break down silos and use basic information from meters, building automation, and plant systems to provide integrated awareness.

University of California, Davis is a great example of this. In 2013, the university set the goal of net zero greenhouse gas emissions from the campus’s buildings and vehicle fleet by 2025. With more than 1,000 buildings comprising 11.3 million square feet, this was no small feat. Beyond the obvious projects, UC Davis turned to the PI System to drive more efficient use of resources. By looking at the right data, UC Davis has been able to better inform its strategy for creating a carbon-neutral campus.

UC Davis is a perfect example of simply starting and starting simple. The data will provide guidance on where the opportunities are to meet your objectives.

OSIsoft is an LF Edge member company. For more information about OSIsoft, visit their website.

New Training Program Helps You Set Up an Open Source Program Office


Written by Dan Brown, Senior Manager, Content & Social Media, LF Training

Is your organization considering setting up an open source program office (OSPO) but you aren’t sure where to start? Or do you already have an OSPO but want to learn best practices to make it more effective? Linux Foundation Training & Certification has released a new series of seven training courses designed to help.

The Open Source Management & Strategy program is designed to help executives, managers, software developers and engineers understand and articulate the basic concepts for building effective open source practices within their organization. Developed by Guy Martin, Executive Director of OASIS Open, an internationally recognized standards development and open source projects consortium, the series of online courses provides the foundational knowledge and skills to build out an effective OSPO.

The courses in the program are designed to be modular, so participants only need to take those of relevance to them. This enables organizations to let each member of the team train in the skills they need to be successful in their given roles. The courses included in the program are:

  • LFC202 – Open Source Introduction – covers the basic components of open source and open standards
  • LFC203 – Open Source Business Strategy – discusses the various open source business models and how to develop practical strategies and policies for each
  • LFC204 – Effective Open Source Program Management – explains how to build an effective OSPO and the different types of roles and responsibilities needed to run it successfully
  • LFC205 – Open Source Development Practices – talks about the role of continuous integration and testing in a healthy open source project
  • LFC206 – Open Source Compliance Programs – covers the importance of effective open source license compliance and how to build programs and processes to ensure safe and effective consumption of open source
  • LFC207 – Collaborating Effectively with Open Source Projects – discusses how to work effectively with upstream open source projects and how to get the maximum benefit from working with project communities
  • LFC208 – Creating Open Source Projects – explains the rationale and value for creating new open source projects as well as the required legal, business and development processes needed to launch new projects

The Open Source Management & Strategy program is available to begin immediately. The $499 enrollment fee provides unlimited access to all seven courses for one year, as well as a certificate upon completion. Interested individuals may enroll here. The program is also included in all corporate training subscriptions.

 

Kubernetes Is Paving the Path for Edge Computing Adoption


Written by Molly Wojcik, Chair of the State of the Edge Landscape Working Group and Marketing Director at Section

For more content like this, please visit the Section website

Since Kubernetes was released five years ago by Google, it has become the standard for container orchestration in the cloud and data center. Its popularity with developers stems from the flexibility, reliability, and scalability with which it schedules and runs containers on clusters of physical or virtual machines (VMs) for a diverse range of workloads.

When it comes to the Infrastructure (or Service Provider) Edge, Kubernetes is increasingly being adopted as a key component of edge computing. As in the cloud, Kubernetes allows organizations to efficiently run containers at the edge in a way that enables DevOps teams to move with greater dexterity and speed by maximizing resources (and spending less time integrating with heterogeneous operating environments), which is particularly important as organizations consume and analyze ever-increasing amounts of data.

A Shared Operational Paradigm

Edge nodes represent an additional layer of IT infrastructure available to enterprises and service providers alongside their cloud and on-premises data center architecture. It is important for admins to be able to manage workloads at the edge layer in the same dynamic and automated way that has become standard in the cloud environment.

As defined by the State of the Edge’s Open Glossary, an “edge-native application” is one which is impractical or undesirable to operate in a centralized data center. In a perfect world, developers would be able to deploy containerized workloads anywhere along the cloud-to-edge continuum to balance the attributes of distributed and centralized computing in areas such as cost efficiencies, latency, security, and scalability.

Ultimately, cloud and edge will work alongside one another, with workloads and applications at the edge being those that have low latency, high bandwidth, and strict privacy requirements. Other distributed workloads that benefit from edge acceleration include Augmented Reality (AR), Virtual Reality (VR), Massively Multiplayer Gaming (MMPG), etc.

There is a need for a shared operational paradigm to automate processing and execution of instructions as operations and data flow back and forth between cloud and edge devices. Kubernetes offers this shared paradigm for all network deployments, allowing policies and rulesets to be applied to the entire infrastructure. Policies can also be made more specific for certain channels or edge nodes that require bespoke configuration.
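As a small illustration of that shared paradigm, the sketch below (in Go, using the standard Kubernetes API types) builds one Deployment object that the same control plane could place in the cloud or at the edge; a nodeSelector is the only edge-specific detail. The names and the node label are hypothetical, and how nodes get labeled depends on the operator’s provisioning process.

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	dep := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "edge-cache"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "edge-cache"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "edge-cache"}},
				Spec: corev1.PodSpec{
					// The only edge-specific line: schedule onto nodes
					// carrying a (hypothetical) edge role label.
					NodeSelector: map[string]string{"node-role.kubernetes.io/edge": ""},
					Containers: []corev1.Container{{
						Name:  "cache",
						Image: "nginx:1.21",
					}},
				},
			},
		},
	}
	manifest, err := json.MarshalIndent(dep, "", "  ")
	if err != nil {
		panic(err)
	}
	// Apply with kubectl or a client, the same as any cloud workload.
	fmt.Println(string(manifest))
}
```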

Kubernetes-Based Edge Architecture

According to a presentation from the Kubernetes IoT Edge Working Group at KubeCon Europe 2019, there are three approaches to using Kubernetes in edge-based architecture to manage workloads and resource deployments.

A Kubernetes cluster involves a master and nodes. The master exposes the API to developers and schedules the deployment of workloads across the cluster’s nodes. Nodes contain the container runtime environment (such as Docker), a kubelet (which communicates with the master), and pods, which are a collection of one or multiple containers. Nodes can be virtual machines in the cloud.
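Everything in the cluster is driven through that master API. As a hedged sketch (assuming a standard kubeconfig at ~/.kube/config), the following Go program uses client-go to ask the master for its nodes and each node’s kubelet version:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load credentials the same way kubectl does.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Ask the master for every node it schedules onto.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node=%s kubelet=%s\n", n.Name, n.Status.NodeInfo.KubeletVersion)
	}
}
```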

The three approaches for edge-based scenarios can be summarized as follows:

  1. The whole Kubernetes cluster is deployed within edge nodes. This is useful for instances in which the edge node has low-capacity resources or is a single-server machine. K3s is the reference architecture for this solution.
  2. The next approach comes from KubeEdge, and involves the control plane residing in the cloud and managing the edge nodes, which contain the containers and resources. This architecture enables better utilization of edge resources because it supports heterogeneous hardware at the edge.
  3. The third approach is hierarchical cloud plus edge, using a virtual kubelet as the reference architecture. Virtual kubelets live in the cloud and contain the abstraction of the nodes and pods deployed at the edge. This approach allows for flexibility in resource consumption for edge-based architecture.

Section’s Migration to Kubernetes

Section migrated to Kubernetes from a legacy homegrown scheduler last year. Instead of building our own fixed hardware network, Section distributes Kubernetes clusters across a vendor-neutral worldwide network of leading infrastructure providers, including AWS, Google Cloud, Microsoft Azure, Packet, DigitalOcean, CenturyLink, and RackCorp. Kubernetes allows us to be infrastructure-agnostic and seamlessly manage a diverse set of workloads.

The benefits of Kubernetes at the edge that we have experienced first-hand include:

  • Flexible tooling, allowing our developers to interact with the edge as they need to;
  • Our users can run edge workloads anywhere along the edge continuum;
  • Scaling up and out as needed through our vendor-neutral worldwide network;
  • High availability of services;
  • Fewer service interruptions during upgrades;
  • Greater resource efficiency – in particular, we use the Kubernetes Horizontal Pod Autoscaler, which automatically scales the number of pods up or down according to defined latency or volume thresholds (a minimal sketch follows this list);
  • Full visibility into production workloads through the built-in monitoring system;
  • Improved performance.
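As a rough sketch of that autoscaling pattern (not Section’s actual configuration), the Go snippet below declares an autoscaling/v2 HorizontalPodAutoscaler that scales a hypothetical edge-proxy Deployment on a custom per-pod latency metric. The metric name and thresholds are invented, and feeding custom metrics to the HPA requires a metrics adapter, which is out of scope here.

```go
package main

import (
	"encoding/json"
	"fmt"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	minReplicas := int32(2)
	hpa := autoscalingv2.HorizontalPodAutoscaler{
		TypeMeta:   metav1.TypeMeta{APIVersion: "autoscaling/v2", Kind: "HorizontalPodAutoscaler"},
		ObjectMeta: metav1.ObjectMeta{Name: "edge-proxy-hpa"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			// Scale the (hypothetical) edge-proxy Deployment.
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: "edge-proxy",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 20,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.PodsMetricSourceType,
				Pods: &autoscalingv2.PodsMetricSource{
					// Invented custom metric: 95th-percentile request latency per pod.
					Metric: autoscalingv2.MetricIdentifier{Name: "request_latency_p95_ms"},
					Target: autoscalingv2.MetricTarget{
						// Add pods when the average across pods exceeds 50.
						Type:         autoscalingv2.AverageValueMetricType,
						AverageValue: resource.NewQuantity(50, resource.DecimalSI),
					},
				},
			}},
		},
	}
	manifest, err := json.MarshalIndent(hpa, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(manifest))
}
```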

Summary

As more organizations and operators continue to adopt and support Kubernetes-based cloud-edge patterns, the ecosystem will continue to mature. However, not every organization will have the resources and/or expertise to build these systems themselves. This is where edge platforms (like Section) bridge those gaps, offering DevOps teams familiar tooling to take advantage of the benefits that Kubernetes has to offer without the complexities that come along with it.

Interesting Developments In Edge Hypervisors


Written by Rex St. John, EGX Developer Relations at NVIDIA

This article originally ran on Rex’s LinkedIn page. For more content like this, connect with him on LinkedIn. 

After building Edge Computing ecosystems at Intel and Arm, I have recently made the switch to working on Edge Computing at NVIDIA. Several people have asked me to share my perspective and learnings, so I am starting this informal, personal series on the topic. All opinions shared here are my own personal opinions and not those of my employer.

Objective

In this article, I will share two reasons why some experts in the industry are investing in hypervisor technology, as well as two interesting open source edge hypervisor solutions to be aware of. For edge definition nitpickers (you know who you are), I am going to be referring to the “Device Edge” here. There are many other definitions of “Edge”; if you are curious, read this LF Edge white paper.

The Hovercraft Analogy

For those of you who are unfamiliar, a hypervisor is kind of like a hovercraft that your programs can sit inside. Like hovercraft, hypervisors provide protective cushions that allow your applications to smoothly transition from one device to another, shielding the occupants from the rugged nature of the terrain below. With hypervisors, the bumps in the terrain (differences between hardware devices) are minimized and the mobility of the application is increased.

Benefits of Hypervisors

Benefits of hypervisors include security, portability, and a reduced need to perform cumbersome customization to run on specific hardware. Hypervisors allow a device to concurrently run multiple, completely different operating systems. They can also help partition applications from one another for security and reliability purposes. You can read more about hypervisors here. They are frequently compared to, used together with, or even compete against containers for similar use cases, though they have historically required more processing overhead to run.

Two Reasons Why Some (Very Smart) Folks Are Choosing Hypervisors For The Edge

A core challenge in Edge Computing is the extreme diversity in hardware that applications are expected to run on. This, in turn, creates challenges in producing secure, maintainable, scalable applications capable of running across all possible targets.

Unlike their heavier datacenter-based predecessors, lightweight hypervisors offer the benefits of traditional hypervisors while also respecting the limited resources found at the device edge. Here are two reasons why some in the industry are taking a careful look at edge hypervisors.

Reason 1: Avoiding The Complexity And Overhead of Kubernetes

One potential reason for taking a hypervisor-based approach at the edge is that there may be downsides in pursuing Kubernetes for smaller clusters. These include the difficulty of building and managing a team that can properly set up and scale a real-world Kubernetes application, given the overhead and complexity of Kubernetes itself. In some cases, such as running a cluster of 4-5 nodes, it might be preferable to use a more streamlined approach involving a controller and lightweight hypervisors. This is the approach taken by EVE, described in more detail below.

Reason 2: Ease Of Modernizing Brown-Field Industrial IT

Another pressing reason for choosing edge hypervisors is that “brown-field” installations of existing edge hardware are extremely expensive to upgrade to modern IT “best practices.” Hypervisors provide a path forward that does not involve rewriting old systems from scratch: the code running on older machines can frequently be shifted into a hypervisor and neatly managed and secured from there (a process referred to as “workload consolidation”).

Let’s take a look at two interesting examples of edge hypervisors to understand further.

Hypervisor #1: Project ACRN


The first edge hypervisor we will look at is called ACRN, which is a project hosted by the Linux Foundation. ACRN has a well documented architecture and offers a wide range of capabilities and configurations depending on the situation or desired outcome.


ACRN seeks to support industrial use cases by offering a specific partitioning between high-reliability processes and those which do not need preferential security and processing priority. ACRN accomplishes this separation by specifying a regime for sandboxing the different instances running on the device, as shown above. I recommend keeping an eye on ACRN, as it seems to have significant industry support. Supported ACRN platforms currently tend to be strongly x86-based.

Hypervisor #2: EVE (part of LF Edge)

Also a project hosted by the Linux Foundation, EVE differs from ACRN in that it belongs to the LF Edge project family. Unlike ACRN, EVE also tends to be more agnostic about supported devices and architectures. Following the instructions hosted on the EVE GitHub page, I was able to build and run it on a Raspberry Pi 4 within the space of ten minutes, for example.


In terms of design philosophy, EVE is positioning itself as the “Android of edge devices.” You can learn more about EVE by watching this recent webinar featuring the Co-Founder of Zededa, Roman Shaposhnik. EVE makes use of a “Controller” structure which provisions and manages edge nodes to simplify the overhead of operating an edge cluster.

Wrapping Up

Expect to see more happening in the hypervisor space as the technology continues to evolve. Follow me to stay up to date with the latest developments in Edge Computing.

Cultivating Giants to Stand On: Extending Kubernetes to the Edge


Written by Roman Shaposhnik, Project EVE lead and Co-Founder & VP of Products and Strategy at ZEDEDA

This content originally ran on the ZEDEDA Medium Blog – click here for more content like this.

Kubernetes is more than just a buzzword. With Gartner predicting that by the end of 2025, 90% of applications at the edge will be containerized, it’s clear that organizations will be looking to leverage Kubernetes across their enterprises, but this isn’t a straightforward proposition. There’s much more involved than just repurposing the architecture we use in the data center in a smaller or more rugged form factor at the edge.

The edge environment has several major distinctions from the data center that must be addressed in order to successfully leverage Kubernetes.

Computers, whether massive data center machines or small nodes on the smart device edge, are essentially three parts — hardware, operating system (OS) and runtime — running in support of some sort of application. And within that, an operating system is just a program that allows the execution of other programs. We went through a time in the 1990s when it was believed that the OS was the only part that mattered, and the goal was to find the best one, like the OS was a titan with the entire world on its shoulders. The reality, though, is that it’s actually turtles all the way down! By this I mean that with virtualization, computers are not limited to just three parts. We can slice each individual section in many different ways, with hardware emulation, hypervisors, etc.


Split it or join it: either way you get something exciting

So how can we look at these building blocks in the most optimal way in 2020?

We first have to talk about where we find computers — the spectrum of computers being deployed today is vast, from giant machines in data centers doing big things all the way down to specialized computers that might be a smart light bulb or sensor. In the middle is the proverbial edge, which we call the Smart Device Edge.


Image courtesy of LF Edge

As we look at how to best run Kubernetes on the smart device edge, the answer is a triplet: K3s, some kind of operating system (or support for K3s), and some sort of hardware.

And so then, if we have K3s and we have hardware, what’s the best possible way to run K3s on hardware? The answer is a specialized operating system.

The team behind Docker did something similar when they had to run Docker on a MacBook Pro: they created Docker Desktop, a specialized engine — like an operating system — that’s only there in support of Docker. And so for the smart device edge, we’ve created EVE, a lightweight, secure, open, and universal operating system built to address the unique security and scale requirements of edge nodes deployed outside of the data center.

EVE is to Edge what Android is to Mobile

What makes EVE different? It’s the only OS that enables organizations to extend their cloud-like experience to edge deployments outside of the data center while also supporting legacy software investments. It provides an abstraction layer that decouples software from the diverse landscape of IoT edge hardware to make application development and deployment easier, more secure, and interoperable. The hosting of Project EVE under LF Edge ensures vendor-neutral governance and community-driven development.

Ready to learn more and to see EVE in action? Check out the full discussion.

Hitting Performance Targets at the IoT Edge


Written by James Butcher, EdgeX Foundry QA & Test Working Group Chair and Edge Xpert Product Manager at IOTech Systems

 

Edge computing is all about the premise of an organization processing its operational data at the edge of the network. That means performing data collection functions and running decision making and control algorithms as close to the edge sensors and OT devices as possible.

The EdgeX Foundry framework helps make that premise a reality and provides an open, vendor-neutral approach for doing so. However, we must never lose sight of the fact that businesses will want to understand the full implications, including the fully loaded cost, of rolling out an edge system.

The EdgeX QA & Test Working Group

The EdgeX QA & Test working group is responsible for ensuring that releases meet the high standards of reliability and robustness that are required at the Industrial IoT edge. Our remit includes verifying the framework is well tested and ensuring we have the processes in place to maintain and monitor these high standards.

Solid Testing as Standard

EdgeX Foundry is an open source community project under the LF Edge umbrella, and we take testing and reliability very seriously. We developed our own EdgeX testing harness on top of a test framework called TAF. You’ll hear us speak about TAF a lot – it stands for Test Automation Framework, and it supports the functional, integration, and performance testing needs we have across the project. A version of EdgeX is only released when TAF reports all tests passing, for example.

EdgeX is transitioning to the second version of its APIs, so there has been a big emphasis on ensuring the V2 API is accurately tested. TAF has been invaluable here because we have been able to easily update and add new tests for the new API. We are pushing ahead for the first release of the new APIs in the EdgeX Ireland release, due in the spring.
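TAF is where the project’s real tests live, but as a flavor of what a V2 API check involves, here is a hedged, stand-alone Go test against the common ping endpoint. It assumes a local deployment with core-data listening on port 59880 (adjust the URL for your setup); it is an illustration, not part of TAF.

```go
package smoke

import (
	"encoding/json"
	"net/http"
	"testing"
)

// TestCoreDataPing checks that core-data answers the V2 ping endpoint
// and reports the expected API version. The URL is an assumption about
// a local default deployment.
func TestCoreDataPing(t *testing.T) {
	resp, err := http.Get("http://localhost:59880/api/v2/ping")
	if err != nil {
		t.Fatalf("core-data unreachable: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200, got %d", resp.StatusCode)
	}
	var body struct {
		APIVersion string `json:"apiVersion"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		t.Fatalf("invalid JSON response: %v", err)
	}
	if body.APIVersion != "v2" {
		t.Errorf("expected apiVersion v2, got %q", body.APIVersion)
	}
}
```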

Gathering Performance Metrics

The QA & Test group also records and gathers performance metrics associated with usage of the EdgeX platform. A common question from users is what sort of hardware EdgeX is aimed at and how fast it can perform. EdgeX has, of course, been designed to be hardware- and operating-system independent, but key metrics like footprint, CPU usage, and latency of data flow do need to be provided so users can plan which IoT edge gateways and edge servers to use. Sometimes users have the luxury of being able to select brand-new hardware to deploy in their edge projects, but often it is necessary to re-use existing, already-deployed hardware.

Over the last release cycle, culminating in the EdgeX Hanoi release, the QA & Test group has put renewed focus on gathering and reporting these performance metrics. We have also improved the accuracy of the figures by increasing the number of tests run and stating metric averages along with minimum and maximum readings. Note that TAF can also monitor performance regressions: we’ve added threshold testing such that failures are reported if a code commit adversely affects any of the performance metrics. These are all tools to ensure that EdgeX quality and performance remain as high as possible as the project evolves.
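To illustrate the idea of thresholded performance checks (again a sketch, not the project’s actual tooling), this Go test times repeated calls to an assumed local endpoint, reports min/average/max, and fails if the average crosses an arbitrary budget, which is the same regression-guard pattern described above:

```go
package perf

import (
	"net/http"
	"testing"
	"time"
)

// TestPingLatency reports min/avg/max round-trip latency for an assumed
// local endpoint and fails when the average exceeds an invented budget.
func TestPingLatency(t *testing.T) {
	const url = "http://localhost:59880/api/v2/ping" // assumption: local core-data
	const runs = 100
	const budget = 50 * time.Millisecond // invented threshold

	var total, min, max time.Duration
	min = time.Hour
	for i := 0; i < runs; i++ {
		start := time.Now()
		resp, err := http.Get(url)
		if err != nil {
			t.Fatalf("request %d failed: %v", i, err)
		}
		resp.Body.Close()
		d := time.Since(start)
		total += d
		if d < min {
			min = d
		}
		if d > max {
			max = d
		}
	}
	avg := total / runs
	t.Logf("min=%v avg=%v max=%v over %d requests", min, avg, max, runs)
	if avg > budget {
		t.Errorf("average latency %v exceeds budget %v", avg, budget)
	}
}
```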

Presenting the Performance

We’ve decided to go beyond just providing the raw metrics and instead have created a report that presents the key numbers along with some comment and explanation. Please find the Hanoi Performance Report here.

I hope you find this information useful. We are aiming to provide an updated version of this document for each EdgeX release and will track the progression as we go.

Commercial Support is Key

EdgeX is a fantastic project for helping to simplify the challenges of edge computing. The open-source and collaborative nature of the project brings together ideas from edge experts around the world. However, to pick up the point about businesses needing to understand the full implications of selecting an edge solution, companies are usually reluctant to deploy open source code without having guaranteed help at hand and support contracts in place.

So, switching to my other role in the EdgeX ecosystem for a moment, let me mention how companies like IOTech provide products and services to mitigate that risk…

I am product manager of Edge Xpert which is IOTech’s value-add and commercially supported implementation of EdgeX. Our Edge Xpert 1.8 release is based on Hanoi and adds another level of features and services to help you. Head over to our website to read about how Edge Xpert can get your EdgeX systems to market much quicker and with less risk.

Future Plans

Obviously quality and robustness are very important to me in both my EdgeX and IOTech roles. Testing is never finished, of course, and we are always looking to add to our test and performance tracking capability. Future iterations of the performance testing will include additions for the security services, scalability metrics, and performance associated with the EdgeX device services – for example, how fast Modbus data can be collected, and how many data points can be received in a certain period. These factors are all important in understanding how the platform will scale as it is deployed and in estimating the number of deployments – and thus gateways and edge servers – that will be needed.

Open Discussions

Feel free to join the weekly EdgeX QA & Test calls, where we discuss progress and issues that need to be addressed. We meet every Tuesday at 8am PST. You can find the meeting links on our page here. To learn more about EdgeX Foundry’s Hanoi release, click here.

 

You Can Now Run Windows 10 on a Raspberry Pi using Project EVE!


Written by Aaron Williams, LF Edge Developer Advocate

Ever since Project EVE came under the Linux Foundation’s LF Edge umbrella, we have been asked about porting (and we wanted to port) EVE to the Raspberry Pi so that developers and hobbyists could test out EVE’s virtualization of hardware. Both were looking for an easy way to evaluate EVE by creating simple PoC projects, without having to buy a commercial-grade IoT gateway or another device. They wanted to just get started with something they already had on their desk. And we are excited to announce that we have completed the first part of the work needed to run Windows on a Raspberry Pi 4! We have posted the tutorial on our community wiki, and it takes less than an hour to get up and running.

The RPi

The Raspberry Pi was first released in 2012 with the goal of having a cheap and easy way to teach high school students how to code.  It had USB ports to attach a keyboard and mouse, HDMI to hook up to your TV, GPIO (General Purpose Input/Output) pins for IoT, and a networking cable for internet access.  Thus for $35 you had a great, cheap computer that ran Linux.  The Raspberry Pi Foundation sold a lot of these devices to schools, but the RPi really took off as developers and home hobbyists discovered them, thinking “Wow a $35 Linux computer, I wonder if I could do that home IoT project I have been planning?”  Plus, in many companies, the RPi became a great way to create “real” demos and PoCs cheaply.

Fast forward 7 years, and we knew that we wanted to port EVE to the RPi because it was such a large part of the IoT world, especially for demos explaining IoT concepts. (It is much easier to go to your manager or spouse and ask for a $50 RPi vs. $500+ for some other hardware.) But, according to Erik Nordmark, TSC Chair of Project EVE, “the GIC (Global Interrupt Controller) and the proprietary RPi boot code on the RPi3 (and earlier models) prevented it from booting into a Type 1 Hypervisor like Xen without hacking up strange emulation code.” Thus, while it was possible, it would take a lot of work and might not work well.

This changed with the release of the RPi 4. Roman Shaposhnik, also of Project EVE, and Stefano Stabellini, of the Xen Project, saw that the RPi 4 had a regular GIC-400 interrupt controller that Xen supports right out of the box. Thus, getting Xen to work on the RPi should be pretty easy, right? As they documented in their article on Linux.com, it wasn’t: “We were utterly oblivious that we were about to embark on an adventure deep in the belly of the Xen memory allocator and Linux address translation layers.” But soon their hard work paid off, and they were able to get Xen working and submitted a number of patches that will be part of the Linux 5.9 release. (To learn about Roman and Stefano’s adventure, see their article Xen on Raspberry Pi 4 Adventures at Linux.com.) With that done, the EVE team turned their attention to the work needed to complete the virtualization of the RPi, which went pretty smoothly.

Why Windows?

We have been saying since the start of Project EVE that EVE is to IoT as Android is to phones. Android allows you to write your code and push it to the device without having to worry about the hardware underneath. EVE works much the same way. It virtualizes the hardware, allowing you to push your code across devices. And since EVE is open source, everyone benefits from the “plumbing” being handled by the community. The plumbing in this case is the addition of a new device or family of devices; when a device is added, the whole community benefits. IoT devices do have an extra security concern: they are not usually housed in a locked building and are instead out in the open. Therefore, the physical security of the device cannot be guaranteed, and it must be assumed that there is none. Because of this, the default for EVE is to turn off all external ports, such as USB ports.

This brings us to the question: why Windows? Simply, why not? Windows doesn’t belong on a Raspberry Pi, so we figured it would be fun to see if it would work. And it worked right out of the box: we just needed to find a containerized version of Windows and then deploy it (it is really that easy). And it is a lot of fun using Windows knowing that it is running on a Raspberry Pi!

What work is left

We haven’t done a lot of testing, and for us this is a PoC, so we don’t have a full list of limitations. But here are a couple of things we have found. We will update our tutorial page as the community finds and fixes them.

When we asked Petr Fedchenkov and Vladimir Suvorov, lead developers on Project EVE, about issues they have run into, Petr mentioned, “The biggest issue is that there are no drivers for GPU and Windows 10 ARM64 doesn’t have virtio-gpu support. So, we are using ramfb (RAM Framebuffer), which is much more limited.” In layman’s terms, this means that today, if you plug a keyboard and monitor into the RPi, you will interact with EVE, not the Windows desktop. The easy workaround is to run RDP (Windows Remote Desktop) or VNC, but in our testing RDP works much better.

The native WiFi does work, but you need to turn it on via EVE. Remember, EVE is designed to give you control of your devices remotely yet securely, so the WiFi is turned off by default. As you build EVE for your RPi, you have the option of passing in an SSID, which turns on the WiFi.

Bluetooth is currently not working. USB ports should work, but some configuration is needed. We are also not sure about the GPIO pins; we haven’t tested them yet. (See below for how to help get these working and tested.)

Call for Help

What we have accomplished is pretty amazing, but we need a lot of help. If you have any interest in any part of this, please let us know. We need help getting the USB ports fully working, plus the items mentioned above. Is there a device that you would like to see EVE work on? Please help us port it to that device. We could also use tech writers, bloggers, or anyone who can help us improve our documentation and/or help us get the word out about EVE. If you are interested, please visit our GitHub, wiki, or Slack channel (#eve).

Kaloom’s Startup Journey (Part 1)


Written by Laurent Marchand, Founder and CEO of Kaloom, an LF Edge member company

This blog originally ran on the Kaloom website. To see more content like this, visit their website.

When we founded Kaloom, we embarked upon our journey with the vision of creating a truly P4-programmable, cloud- and open standards-based software defined networking fabric. Our name, Kaloom comes from the Latin word “caelum” which means “cloud.”

Armed solely with our intrepid crew of talented and experienced engineers, we left our good-paying, secure jobs at one of the world’s largest incumbent networking vendors and set off on our adventure. This two-part blog is the story of why we founded Kaloom and where we stand today.

In our previous blog, “SDN’s Struggle to Summit the Peak of Inflated Expectations,” Sonny outlined the challenges posed by the initial vision of SDN and the attempt to solve them with VM-based overlays. Upcoming 5G deployments will only amplify those challenges, disruptively creating an extreme need for lightweight container-based solutions that satisfy 5G’s cost, space and power requirements.

5G Amplifying SDN’s Challenges

With theoretical peak data rates up to 100 times faster than current 4G networks, 5G enables incredible new applications and use cases such as autonomous vehicles, AR/VR, IoT/IIoT and the ability to share real-time medical imaging – just to name a few. Telecom, cloud and data center providers alike look forward to 5G forming a robust edge “backbone” for the industrial internet to drive innovative new consumer apps, enterprise use cases and increased revenue streams.

These new 5G-enabled apps require extremely low latency, which demands a distributed edge architecture that puts applications close to their data source and end users. When we founded Kaloom, incumbents were still selling the same technology platforms and architectural approach as they had with 4G; however, the economics of 5G are dramatically different.

The Pain Points are the Main Points – Costs, Revenues and Latency

For example, service providers’ revenues from smartphone users running 3G/4G were about $50 per month. However, connected cars will only generate about $1 or $2 per month, and installing 5G requires extreme amounts of upfront investment (CAPEX), not only in antennas but also in the backend servers, storage, and networking switches required to support these apps. So, it was clear to us that the cost point for deploying 5G had to be much lower than for 4G. Even if 5G costs were the same as 4G rollouts, service providers still could not economically justify revenue models throttling down to $1 or $2 per month.

The low latency requirement is another pain point, mainly because 5G-enabled high-speed apps won’t work properly with low throughput and high latency. Many of these new applications require an end-to-end latency below 10 milliseconds; unfortunately, typical public clouds are unable to fulfill such requirements.

To take advantage of lower costs via centralization, most hyperscale cloud providers have massive regional deployments, with only about six to 10 major facilities serving all of North America. If your cloud hub is 1,000 miles away, even the speed of light in fiber optics does not transfer data fast enough to meet the low-latency requirements of 5G apps such as connected cars. For example, the latency from New York to Amazon Web Services or Microsoft Azure in Northern Virginia is greater than 20 milliseconds.

Today, the VM-based network overlay has dumb, or unmanaged, switches; all the upper-layer functions such as firewalling, load balancing, and 5G UPF run on VMs in servers. Because you couldn’t previously run these functions at Tbps speeds on switches, they were put on x86 servers. High-end servers cost about $25,000 and consume 1,000 watts of power each. With x86 servers’ costs and power consumption, we could see that the numbers were way off from what was needed to run a business-viable environment for 5G providers. Rather than have switching and routing run via VMs on x86, Kaloom’s vision was to run all networking functions at Tbps speeds to meet the power- and cost-efficiency points needed for 5G deployments to generate positive revenues. This is why Kaloom decided to use containers, specifically open source Kubernetes, and the programmability of P4-enabled chips as the basis for our solutions.

Solving the latency problem required a paradigm shift to the distributed edge, which places compute, storage and networking resources closer to the end user. Because of these constraints, we were convinced the edge would become far more important. As seen in the figure below, there are many edges.

Many Edges

Kaloom’s Green Logo and Telco’s Central Office Constraints

While large regional cloud facilities were not built for the new distributed edge paradigm, service providers’ legacy Central Office (CO) architectures are even more ill-suited for the shift. Containing an odd mishmash of old and new equipment from each decade going back at least 50 years, these facilities are also typically near the limits of their space, power, and cooling capacity.

There are about 25,000 legacy COs in the US, and they need to be converted, or updated in some way, to sustain the low-latency networks that support 5G’s shiny new apps. Today, perhaps 10 percent of data is processed outside the CO and data center infrastructure. In a decade, this will grow to 25 percent, which means telcos have their work cut out for them.

This major buildup of required new 5G-supporting edge infrastructure will have an extremely negative impact on the environment in terms of energy consumption. According to the Linux Foundation’s State of the Edge 2020 report, by 2028 it will consume 102,000 megawatts of power, and over $700 billion in cumulative CAPEX will be spent within the next decade on edge IT infrastructure and data center facilities. The power consumption of a single 5G base station alone is three times that of its 4G LTE predecessor; 5G needs three times the base stations for the same coverage as LTE due to higher frequencies; and 5G base stations cost four times the price of LTE, according to mobile executives and an analyst quoted in this article at Light Reading. Upgrading a CO’s power supply involves permits, closing and digging up streets, and running new high-capacity wiring to the building, which is very costly. Unless all of society’s power sources suddenly turn sustainable, our founding team could see we were going to have to create a technology that could dramatically and rapidly reduce the power needed to provide these new 5G services and apps to consumers and enterprises.

Have you noticed that Kaloom’s logo is green?

This is the reason why. We have set out to solve all the issues facing telcos as they work to bring their infrastructure into the 21st century, including power.

P4 and Containers vs. ASICs and VM-Overlays

Where we previously worked, we were using a lot of switches that were not programmable. We were changing hardware every three to four years to keep pace with changes in the mobile, cloud, and networking industries that constantly require more network, compute, and storage resources. So, we were periodically throwing away and replacing very expensive hardware. We thought: why not build a 100 percent programmable networking fabric? This, again, was driven by costs.

Before we left our jobs, we were part of a team running a 4G Evolved Packet Gateway (EPG) using virtual machines (VMs) on Red Hat’s software platforms. Fortunately, we were asked to investigate the difference in performance and other characteristics of containers versus VMs. That’s when we realized that, by using containers, the same physical servers could sustain several times more applications than those running VM-based overlay networks.

This meant that the cost per user and cost per Gbps could be lowered dramatically. Knowing the economics of 5G would require this vastly reduced cost point, containers were the clear choice. The fact that they were more lightweight in terms of code meant that we could use less storage to house the networking apps and less compute to run them. They’re also more easily portable and quicker to instantiate. When a VM crashes it can take four or five minutes to resume, but you can restore a container in a few seconds.

Additionally, VM-based overlays sit on top of the fabric, so they provide no direct interaction with the network underlay below in terms of visibility and control. If there is congestion or a bottleneck in the underlying fabric, the overlays may not know when, where, or how bad the problem is. Delivering Quality of Service (QoS) for service providers and their end users is a key consideration.

Kaloom’s Foundations

As a member of LF Edge, Kaloom has demonstrated innovative containerized Edge solutions for 5G along with IBM at the Open Networking and Edge Summit (ONES) 2020. Kaloom also presented further details of this optimized edge solution to the Akraino TSC as an invited presentation. Kaloom is participating in the Public Cloud Edge Interface Blueprint work as it is interesting and well aligned with our technology directions and customer requirements.

We remain committed to address the various pain points including networking’s costs, the need to reduce 5G’s growing energy footprint so it has a less negative impact on the environment, and to deliver much-needed software control and programmability. Looking to solve all the networking challenges facing service providers’ 5G deployments, we definitely had our work cut out for us.

EdgeX Foundry China Day 2020


Written by Ulia Sun, Project Specialist, Ecosystem & Communications, VMware China R&D

EdgeX Foundry, a Stage 3 project under LF Edge, is a leading edge computing framework with an open, vendor-neutral architecture. Led by VMware and Intel, the EdgeX Foundry China Project launched in late 2019. The group quickly gained attention and began hosting meetups and online meetings. Within the year, its efforts helped EdgeX become the largest edge computing community in China.

With the mission of increasing collaboration in the edge computing community and expanding the impact of edge computing technologies, the EdgeX Foundry China Project hosted EdgeX Foundry China Day 2020 last year on December 22-23.

The event was a huge success and included: 

  • 9 ecosystem partners: Intel, IBM, ThunderSoft, JiangXing Intelligence, IOTech, the Linux Foundation, EMQ, Agree Technology, and Celestone
  • 4 live broadcast channels: the official VMware live streaming platform, the EdgeX Foundry China channel on Bilibili, a featured live stream on the CSDN developer home page, and the OpenVINO community WeChat live stream
  • Broad reach: 10,000+ touch points across the country
  • New collaborators: 249 new members of the EdgeX Foundry China community
  • 4 interactive workshops led by LF Edge member companies
    • “AI/OpenVINO” by Intel
    • “EdgeX Coding Camp” by ThunderSoft & JiangXing Intelligence
    • “How to Use the Rule Engine in EdgeX” by EMQ
    • “Deep Dive into Open Horizon” by IBM
  • 2 sessions related to VMware:
    • Alan Ren, General Manager of VMware China R&D, gave the opening speech, recognizing VMware’s contributions to the EdgeX Foundry China community
    • Gavin Lu, Technical Director of VMware China R&D, gave the keynote speech, “EdgeX Foundry China 2020 Yearly Review and 2021 Plan”

Partners & Keynote speakers

Live Streaming Platforms

Highlights

Watch the video here: https://live.csdn.net/room/edgexfoundry/Wp2vsTYG 

For more information, visit the EdgeX Foundry China Project Wiki: https://wiki.edgexfoundry.org/display/FA/China+Project.