All Posts By LF Edge

Kubernetes Is Paving the Path for Edge Computing Adoption

Categories: Blog, State of the Edge, Trend

Written by Molly Wojcik, Chair of the State of the Edge Landscape Working Group and Marketing Director at Section

For more content like this, please visit the Section website

Since Google released Kubernetes five years ago, it has become the standard for container orchestration in the cloud and data center. Its popularity with developers stems from its flexibility, reliability, and scalability in scheduling and running containers on clusters of physical or virtual machines (VMs) for a diverse range of workloads.

When it comes to the Infrastructure (or Service Provider) Edge, Kubernetes is increasingly being adopted as a key component of edge computing. As in the cloud, Kubernetes enables organizations to run containers efficiently at the edge, maximizing resources and letting DevOps teams move with greater dexterity and speed while spending less time integrating with heterogeneous operating environments. This is particularly important as organizations consume and analyze ever-increasing amounts of data.

A Shared Operational Paradigm

Edge nodes represent an additional layer of IT infrastructure available to enterprises and service providers alongside their cloud and on-premises data center architecture. It is important for admins to be able to manage workloads at the edge layer in the same dynamic and automated way that has become standard in the cloud environment.

As defined by the State of the Edge’s Open Glossary, an “edge-native application” is one which is impractical or undesirable to operate in a centralized data center. In a perfect world, developers would be able to deploy containerized workloads anywhere along the cloud-to-edge continuum to balance the attributes of distributed and centralized computing in areas such as cost efficiencies, latency, security, and scalability.

Ultimately, cloud and edge will work alongside one another, with the workloads and applications at the edge being those with low-latency, high-bandwidth, and strict privacy requirements. Other distributed workloads that benefit from edge acceleration include Augmented Reality (AR), Virtual Reality (VR), and Massively Multiplayer Gaming (MMPG).

There is a need for a shared operational paradigm to automate processing and execution of instructions as operations and data flow back and forth between cloud and edge devices. Kubernetes offers this shared paradigm for all network deployments, allowing policies and rulesets to be applied to the entire infrastructure. Policies can also be made more specific for certain channels or edge nodes that require bespoke configuration, as the sketch below illustrates.
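To make that concrete, here is a minimal sketch using the Python kubernetes client. The node name, label key, and image are hypothetical, and this illustrates the shared workflow rather than a production pattern: it labels one node as an edge node, then schedules a pod onto edge nodes only. The same API-driven approach works identically against a cloud cluster.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at your cluster
v1 = client.CoreV1Api()

# Label a node so policies and workloads can target the edge layer specifically
# ("edge-node-1" and the label key are hypothetical names for illustration).
v1.patch_node(
    "edge-node-1",
    {"metadata": {"labels": {"node-role.example.com/edge": "true"}}},
)

# Schedule a pod only onto nodes carrying that label.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="edge-workload"),
    spec=client.V1PodSpec(
        node_selector={"node-role.example.com/edge": "true"},
        containers=[client.V1Container(name="app", image="example/app:latest")],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```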

Kubernetes-Based Edge Architecture

According to a presentation from the Kubernetes IoT Edge Working Group at KubeCon Europe 2019, there are three approaches to using Kubernetes in edge-based architecture to manage workloads and resource deployments.

A Kubernetes cluster consists of a master and nodes. The master exposes the API to developers and schedules the deployment of workloads onto the cluster's nodes. Nodes contain the container runtime environment (such as Docker), a kubelet (which communicates with the master), and pods, which are collections of one or more containers. A node can be a virtual machine in the cloud.
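To make that anatomy concrete, here is a minimal sketch using the Python kubernetes client (it assumes a running cluster and a local kubeconfig) that asks the master's API server for the cluster's nodes and the pods scheduled onto them:

```python
from kubernetes import client, config

config.load_kube_config()          # connect to the master's API server
v1 = client.CoreV1Api()

for node in v1.list_node().items:  # machines (VMs or physical) running a kubelet
    print("node:", node.metadata.name)

for pod in v1.list_pod_for_all_namespaces().items:  # one or more containers each
    print("pod:", pod.metadata.namespace, pod.metadata.name,
          "on", pod.spec.node_name)
```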

The three approaches for edge-based scenarios can be summarized as follows:

  1. The whole Kubernetes cluster is deployed within edge nodes. This is useful when the edge node has low-capacity resources or is a single-server machine. K3s is the reference architecture for this solution.
  2. The next approach comes from KubeEdge and involves the control plane residing in the cloud, managing the edge nodes that contain the containers and resources. This architecture optimizes edge resource utilization because it supports different hardware resources at the edge.
  3. The third approach is hierarchical cloud plus edge, using a virtual kubelet as the reference architecture. Virtual kubelets live in the cloud and hold abstractions of the nodes and pods deployed at the edge. This approach allows for flexibility in resource consumption for edge-based architecture.

Section’s Migration to Kubernetes

Section migrated to Kubernetes from a legacy homegrown scheduler last year. Instead of building our own fixed hardware network, Section distributes Kubernetes clusters across a vendor-neutral worldwide network of leading infrastructure providers, including AWS, Google Cloud, Microsoft Azure, Packet, DigitalOcean, CenturyLink, and RackCorp. Kubernetes allows us to be infrastructure-agnostic and seamlessly manage a diverse set of workloads.

The benefits of Kubernetes at the edge that we have experienced first-hand include:

  • Flexible tooling, allowing our developers to interact with the edge as they need to;
  • The ability for our users to run edge workloads anywhere along the edge continuum;
  • Scaling up and out as needed through our vendor-neutral worldwide network;
  • High availability of services;
  • Fewer service interruptions during upgrades;
  • Greater resource efficiency – in particular, we use the Kubernetes Horizontal Pod Autoscaler, which automatically scales the number of pods up or down according to defined latency or volume thresholds (see the sketch after this list);
  • Full visibility into production workloads through the built-in monitoring system;
  • Improved performance.
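As a sketch of the autoscaling mechanism mentioned above: out of the box, the Horizontal Pod Autoscaler scales on CPU utilization, while latency- or volume-based thresholds like ours require custom metrics through the v2 autoscaling API. Below is a minimal CPU-based example using the Python kubernetes client; the deployment name is hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

# Keep between 1 and 10 replicas of a (hypothetical) "edge-app" Deployment,
# adding pods when average CPU utilization rises above 70%.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="edge-app-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="edge-app"),
        min_replicas=1,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```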

Summary

As more organizations and operators continue to adopt and support Kubernetes-based cloud-edge patterns, the ecosystem will continue to mature. However, not every organization will have the resources and/or expertise to build these systems themselves. This is where edge platforms (like Section) bridge those gaps, offering DevOps teams familiar tooling to take advantage of the benefits that Kubernetes has to offer without the complexities that come along with it.

Interesting Developments In Edge Hypervisors

Categories: Blog, Industry Article, Project EVE, State of the Edge

Written by Rex St. John, EGX Developer Relations at NVIDIA

This article originally ran on Rex’s LinkedIn page. For more content like this, connect with him on LinkedIn. 

After building Edge Computing ecosystems at Intel and Arm, I have recently made the switch to working on Edge Computing at NVIDIA. Several people have asked me to share my perspective and learnings, so I am starting this informal, personal series on the topic. All opinions shared here are my own personal opinions and not those of my employer.

Objective

In this article, I will share two reasons why some experts in the industry are investing in hypervisor technology, as well as two interesting open source edge hypervisor solutions to be aware of. For edge definition nitpickers (you know who you are), I am going to be referring to the “Device Edge” here. There are many other definitions of “Edge”; if you are curious, read this LF Edge white paper.

The Hovercraft Analogy

For those of you who are unfamiliar, a hypervisor is kind of like a hovercraft that your programs can sit inside. Like a hovercraft, a hypervisor provides a protective cushion that allows your applications to transition smoothly from one device to another, shielding the occupants from the rugged nature of the terrain below. With hypervisors, the bumps in the terrain (differences between hardware devices) are minimized and the mobility of the application is increased.

Benefits of Hypervisors

Benefits of hypervisors include security, portability, and a reduced need for cumbersome customization to run on specific hardware. Hypervisors allow a device to concurrently run multiple, completely different operating systems, and they can help partition applications from one another for security and reliability purposes. You can read more about hypervisors here. They are frequently compared to, used together with, or even compete against containers for similar use cases, though they have historically required more processing overhead to run.

Two Reasons Why Some (Very Smart) Folks Are Choosing Hypervisors For The Edge

A core challenge in Edge Computing is the extreme diversity in hardware that applications are expected to run on. This, in turn, creates challenges in producing secure, maintainable, scalable applications capable of running across all possible targets.

Unlike their heavier datacenter-based predecessors, lightweight hypervisors offer the benefits of traditional hypervisors while also respecting the limited resources found on the device edge. Here are two reasons why some in the industry are taking a careful look at edge hypervisors.

Reason 1: Avoiding The Complexity And Overhead of Kubernetes

One potential reason for taking a hypervisor-based approach at the edge is that there may be downsides to pursuing Kubernetes for smaller clusters. These include the difficulty of building and managing a team that can properly set up and scale a real-world Kubernetes application, given the overhead and complexity of Kubernetes itself. In some cases, such as running a cluster of 4-5 nodes, it might be desirable to use a more streamlined approach involving a controller and lightweight hypervisors. This is the approach taken by EVE, described in more detail below.

Reason 2: Ease Of Modernizing Brown-Field Industrial IT

Another pressing reason for choosing edge hypervisors is that “brown-field” installations of existing edge hardware are extremely expensive to upgrade to follow modern IT “best practices.” Hypervisors provide a path forward that does not involve rewriting old systems from scratch, as the code running on older machines can frequently be shifted into a hypervisor and neatly managed and secured from there (a process referred to as “workload consolidation”).

Let’s take a look at two interesting examples of edge hypervisors to understand further.

Hypervisor #1: Project ACRN


The first edge hypervisor we will look at is called ACRN, a project hosted by the Linux Foundation. ACRN has a well-documented architecture and offers a wide range of capabilities and configurations depending on the situation or desired outcome.


ACRN seeks to support industrial use cases by offering specific partitioning between high-reliability processes and those that do not need preferential security and processing priority. ACRN accomplishes this separation by specifying a regime for sandboxing the different hypervisor instances running on the device. I recommend keeping an eye on ACRN as it seems to have significant industry support. ACRN-supported platforms currently tend to be strongly x86-based.

Hypervisor #2: EVE (part of LF Edge)

Also hosted by the Linux Foundation, EVE differs from ACRN in that it belongs to the LF Edge project family. Unlike ACRN, EVE also tends to be more agnostic about supported devices and architectures. Following the instructions hosted on the EVE GitHub page, I was able to build and run it on a Raspberry Pi 4 within the space of ten minutes, for example.


In terms of design philosophy, EVE is positioning itself as the “Android of edge devices.” You can learn more about EVE by watching this recent webinar featuring the Co-Founder of Zededa, Roman Shaposhnik. EVE makes use of a “Controller” structure which provisions and manages edge nodes to simplify the overhead of operating an edge cluster.

Wrapping Up

Expect to see more happening in the hypervisor space as the technology continues to evolve. Follow me to stay up to date with the latest developments in Edge Computing.

Cultivating Giants to Stand On: Extending Kubernetes to the Edge

Categories: Blog, Project EVE

Written by Roman Shaposhnik, Project EVE lead and Co-Founder & VP of Products and Strategy at ZEDEDA

This content originally ran on the ZEDEDA Medium Blog – click here for more content like this.

Kubernetes is more than just a buzzword. With Gartner predicting that by the end of 2025, 90% of applications at the edge will be containerized, it’s clear that organizations will be looking to leverage Kubernetes across their enterprises, but this isn’t a straightforward proposition. There’s much more involved than just repurposing the architecture we use in the data center in a smaller or more rugged form factor at the edge.

The edge environment has several major distinctions from data centers that must be addressed in order to successfully leverage Kubernetes.

Computers, whether massive data center machines or small nodes on the smart device edge, essentially consist of three parts: hardware, an operating system (OS), and a runtime, all running in support of some sort of application. And within that, an operating system is just a program that allows the execution of other programs. We went through a time in the 1990s when it was believed that the OS was the only part that mattered, and the goal was to find the best one, as if the OS were a titan with the entire world on its shoulders. The reality, though, is that it's actually turtles all the way down! By this I mean that with virtualization, computers are not limited to just three parts. We can slice each individual section in many different ways, with hardware emulation, hypervisors, and so on.

[Figure: Split it or join it: either way you get something exciting]

So how can we look at these building blocks in the most optimal way in 2020?

We first have to talk about where we find computers. The spectrum of computers being deployed today is vast, from giant machines in data centers doing big things all the way down to specialized computers that might be a smart light bulb or sensor. In the middle is the proverbial edge, which we call the Smart Device Edge.

[Figure: image courtesy of LF Edge]

As we look at how to best run Kubernetes on the smart device edge, the answer is a triplet: K3s, some kind of operating system in support of K3s, and some sort of hardware.

And so then, if we have K3s and we have hardware, what’s the best possible way to run K3s on hardware? The answer is a specialized operating system.

It is just like how the team behind Docker created a specialized operating system when they had to run Docker on a MacBook Pro: Docker Desktop, a specialized engine, like an operating system, that is only there in support of Docker. And so for the smart device edge, we've created EVE, a lightweight, secure, open, and universal operating system built to address the unique security and scale requirements of edge nodes deployed outside of the data center.

[Figure: EVE is to Edge what Android is to Mobile]

What makes EVE different? It's the only OS that enables organizations to extend their cloud-like experience to edge deployments outside of the data center while also supporting legacy software investments. It provides an abstraction layer that decouples software from the diverse landscape of IoT edge hardware to make application development and deployment easier, more secure, and interoperable. The hosting of Project EVE under LF Edge ensures vendor-neutral governance and community-driven development.

Ready to learn more and to see EVE in action? Check out the full discussion.

Hitting Performance Targets at the IoT Edge

Categories: Blog, EdgeX Foundry

Written by James Butcher, EdgeX Foundry QA & Test Working Group Chair and Edge Xpert Product Manager at IOTech Systems

 

Edge computing is all about the premise of an organization processing its operational data at the edge of the network. That means performing data collection functions and running decision making and control algorithms as close to the edge sensors and OT devices as possible.

The EdgeX Foundry framework helps make that premise a reality and provides an open, vendor-neutral approach for doing so. However, we must never lose sight of the fact that businesses will want to understand the full implications, including the fully loaded cost, of rolling out an edge system.

The EdgeX QA & Test Working Group

The EdgeX QA & Test working group is responsible for ensuring that releases meet the high standards of reliability and robustness that are required at the Industrial IoT edge. Our remit includes verifying the framework is well tested and ensuring we have the processes in place to maintain and monitor these high standards.

Solid Testing as Standard

EdgeX Foundry is an open source community project under the LF Edge umbrella, and we take testing and reliability very seriously. We developed our own EdgeX testing harness on top of a test framework called TAF. You'll hear us speak about TAF a lot – it stands for Test Automation Framework and supports the functional, integration and performance testing needs that we have across the project. A version of EdgeX is only released when the TAF reports all tests passing, for example.

EdgeX is transitioning to a second version of its APIs so there has been a big emphasis on ensuring the V2 API is accurately tested. TAF has been invaluable here because we have been able to easily update and add new tests for the new API. We are pushing ahead for the first release of the new APIs in the EdgeX Ireland release due in the spring.

Gathering Performance Metrics

The QA & Test group also records and gathers performance metrics associated with the usage of the EdgeX platform. A common question from users is: what sort of hardware is EdgeX aimed at, and how fast can it perform? Of course, EdgeX has been designed to be hardware and operating system independent, but key metrics like footprint, CPU usage and latency of data flow do need to be provided so users can plan which IoT edge gateways and edge servers to use. Sometimes users have the luxury of being able to select brand-new hardware to deploy in their edge projects, but often it is necessary to re-use existing, already-deployed hardware.

Over the last release cycle, culminating in the EdgeX Hanoi release, the QA & Test group has put renewed focus on gathering and reporting these performance metrics. We have also improved the accuracy of the figures by increasing the number of tests run and stating metric averages along with minimum and maximum readings. Note that TAF can also monitor performance regressions: we've added threshold testing such that failures are reported if a code commit adversely affects any of the performance metrics. These are all tools to ensure that EdgeX quality and performance remain as high as possible as the project evolves.
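To illustrate the idea of threshold testing, here is a sketch in Python. This is illustrative only, not actual TAF code; the endpoint, port, and threshold are assumptions rather than official EdgeX figures, and they vary by release and configuration.

```python
import statistics
import time

import requests

# Hypothetical probe of a local EdgeX service's ping endpoint; the port and
# path are assumptions for illustration and differ across releases.
URL = "http://localhost:48080/api/v1/ping"
THRESHOLD_MS = 50  # illustrative regression threshold, not an official figure

samples = []
for _ in range(100):
    start = time.perf_counter()
    requests.get(URL, timeout=5)
    samples.append((time.perf_counter() - start) * 1000)

print(f"min={min(samples):.2f} ms  "
      f"mean={statistics.mean(samples):.2f} ms  "
      f"max={max(samples):.2f} ms")

# Fail the test run if average latency regresses beyond the threshold.
assert statistics.mean(samples) < THRESHOLD_MS, "latency regression detected"
```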

Presenting the Performance

We’ve decided to go beyond just providing the raw metrics and instead have created a report that presents the key numbers along with some comment and explanation. Please find the Hanoi Performance Report here.

I hope you find this information useful. We are aiming to provide an updated version of this document for each EdgeX release and will track the progression as we go.

Commercial Support is Key

EdgeX is a fantastic project for helping to simplify the challenges of edge computing. The open-source and collaborative nature of the project brings together ideas from edge experts around the world. However, to pick up the point about businesses needing to understand the full implications of selecting an edge solution, companies are usually reluctant to deploy open source code without having guaranteed help at hand and support contracts in place.

So, switching to my other role in the EdgeX ecosystem for a moment, let me mention how companies like IOTech provide products and services to mitigate that risk…

I am product manager of Edge Xpert which is IOTech’s value-add and commercially supported implementation of EdgeX. Our Edge Xpert 1.8 release is based on Hanoi and adds another level of features and services to help you. Head over to our website to read about how Edge Xpert can get your EdgeX systems to market much quicker and with less risk.

Future Plans

Obviously, quality and robustness are very important to me in both my EdgeX and IOTech roles. Testing is never finished, of course, and we are always looking to add to our test and performance tracking capability. Future iterations of the performance testing will include additions for the security services, scalability metrics, and performance associated with the EdgeX Device Services – for example, how fast Modbus data can be collected and how many data points can be received in a certain period. These factors are all important in understanding how the platform will scale as it is deployed, and in understanding the number of deployments – and thus gateways and edge servers – that will be needed.

Open Discussions

Feel free to join the weekly EdgeX QA & Test calls where we discuss progress and issues that need to be addressed. We meet every Tuesday at 8am PST. You can find the meeting links on our page here. To learn more about EdgeX Foundry's Hanoi Release, click here.

 

You Can Now Run Windows 10 on a Raspberry Pi using Project EVE!

Categories: Blog, Project EVE

Written by Aaron Williams, LF Edge Developer Advocate

Ever since Project EVE came under the Linux Foundation's LF Edge umbrella, we have been asked about porting (and we wanted to port) EVE to the Raspberry Pi so that developers and hobbyists could test out EVE's virtualization of hardware. Both groups were looking for an easy way to evaluate EVE by creating simple PoC projects, without having to buy a commercial-grade IoT gateway or another device. They wanted to just get started with something they already had on their desk. And we are excited to announce that we have completed the first part of the work needed to run Windows on a Raspberry Pi 4! We have posted the tutorial on our community wiki, and it takes less than an hour to get up and running.

The RPi

The Raspberry Pi was first released in 2012 with the goal of having a cheap and easy way to teach high school students how to code.  It had USB ports to attach a keyboard and mouse, HDMI to hook up to your TV, GPIO (General Purpose Input/Output) pins for IoT, and a networking cable for internet access.  Thus for $35 you had a great, cheap computer that ran Linux.  The Raspberry Pi Foundation sold a lot of these devices to schools, but the RPi really took off as developers and home hobbyists discovered them, thinking “Wow a $35 Linux computer, I wonder if I could do that home IoT project I have been planning?”  Plus, in many companies, the RPi became a great way to create “real” demos and PoCs cheaply.

Fast forward seven years, and we knew that we wanted to port EVE to the RPi because it was such a large part of the IoT world, especially for demos explaining IoT concepts. (It is much easier to go to your manager or spouse and ask for a $50 RPi vs. $500+ for some other hardware.) But, according to Erik Nordmark, TSC Chair of Project EVE, “the GIC (Global Interrupt Controller) and the proprietary RPi boot code on the RPi3 (and earlier models) prevented it from booting into a Type 1 Hypervisor like Xen without hacking up strange emulation code.” Thus, while it was possible, it would take a lot of work and might not work well.

This changed with the release of the RPi 4. Roman Shaposhnik, also of Project EVE, and Stefano Stabellini, of the Xen Project, saw that the RPi 4 had a regular GIC-400 interrupt controller that Xen supports right out of the box. Thus, getting Xen to work on the RPi should be pretty easy, right? As they documented in their article on Linux.com, it wasn't: “We were utterly oblivious that we were about to embark on an adventure deep in the belly of the Xen memory allocator and Linux address translation layers.” But soon their hard work paid off, and they were able to get Xen working and submitted a number of patches that will be part of the Linux 5.9 release. (To learn about Roman and Stefano's adventure, see their article Xen on Raspberry Pi 4 Adventures at Linux.com.) With that done, the EVE team turned their attention to the work needed to complete the virtualization of the RPi, which went pretty smoothly.

Why Windows?

We have been saying since the start of Project EVE that EVE is to IoT as Android is to phones. Android allows you to write your code and push it to the device without having to worry about the hardware underneath. EVE works much the same way: it virtualizes the hardware, allowing you to push your code across devices. And since EVE is open source, everyone benefits from the “plumbing” being handled by the community; the plumbing in this case is the addition of a new device or family of devices. Thus, when a device is added, the whole community benefits. IoT devices do have an extra security concern, namely that they are not usually housed in a locked building and instead are out in the open. Therefore, the physical security of the device cannot be guaranteed, and it must be assumed that there is no physical security. Because of this, the default for EVE is to turn off all external ports, such as USB ports.

This brings us to the question: why Windows? Simply, why not? Windows doesn't belong on a Raspberry Pi, so we figured that it would be fun to see if it would work. And it worked right out of the box: we just needed to find a containerized version of Windows and then deploy it (it is really that easy). And it is a lot of fun using Windows knowing that it is running on a Raspberry Pi!

What work is left

We haven’t done a lot of testing and for us this is a PoC, so we won’t have a full list of limitations.  But here are a couple of things that we have found.  We will update our tutorial page as the community finds and fixes them.

Asking Petr Fedchenkov and Vladimir Suvorov, lead developers on Project EVE, about issues they have run into, Petr mentioned, “The biggest issue is that there are no drivers for GPU and Windows 10 ARM64 doesn't have virtio-gpu support. So, we are using ramfb (RAM Framebuffer), which is much more limited.” In layman's terms, this means that today, if you plug a keyboard and monitor into the RPi, you will interact with EVE, not the Windows desktop. The easy workaround is to run RDP (Windows Remote Desktop) or VNC; in our testing, RDP works much better.

The native WiFi does work, but you do need to turn it on via EVE. Remember, EVE is designed to give you remote yet secure control of your devices, so the WiFi is turned off by default. As you build EVE for your RPi, you have the option of passing in an SSID, which turns on the WiFi.

Bluetooth is currently not working. USB ports should work, but some configuration is needed. We are also not sure about the GPIO pins; we haven't tested them yet. (See below for how to help get these working and tested.)

Call for Help

It is pretty amazing what we have accomplished, but we need a lot of help. If you have any interest in any part of this, please let us know. We need help getting the USB ports fully working, plus the items mentioned above. Is there a device that you would like to see EVE work on? Please help us port it to that device. We could also use tech writers, bloggers, or anyone who can help us improve our documentation and/or help us get the word out about EVE. If you are interested, please visit our GitHub, Wiki, or Slack channel (#eve).

Kaloom’s Startup Journey (Part 1)

Categories: Akraino, Blog

Written by Laurent Marchand, Founder and CEO of Kaloom, an LF Edge member company

This blog originally ran on the Kaloom website. To see more content like this, visit their website.

When we founded Kaloom, we embarked upon our journey with the vision of creating a truly P4-programmable, cloud- and open standards-based software defined networking fabric. Our name, Kaloom comes from the Latin word “caelum” which means “cloud.”

Armed solely with our intrepid crew of talented and experienced engineers, we left our good-paying, secure jobs at one of the world’s largest incumbent networking vendors and set off on our adventure. This two-part blog is the story of why we founded Kaloom and where we stand today.

In our previous blog, “SDN's Struggle to Summit the Peak of Inflated Expectations,” Sonny outlined the challenges posed by the initial vision of SDN and the attempt to solve them with VM-based overlays. Upcoming 5G deployments will only amplify those challenges, creating an extreme need for lightweight container-based solutions that satisfy 5G's cost, space, and power requirements.

5G Amplifying SDN’s Challenges

With theoretical peak data rates up to 100 times faster than current 4G networks, 5G enables incredible new applications and use cases such as autonomous vehicles, AR/VR, IoT/IIoT and the ability to share real-time medical imaging – just to name a few. Telecom, cloud and data center providers alike look forward to 5G forming a robust edge “backbone” for the industrial internet to drive innovative new consumer apps, enterprise use cases and increased revenue streams.

These new 5G-enabled apps require extremely low latency, which demands a distributed edge architecture that puts applications close to their data sources and end users. When we founded Kaloom, incumbents were still selling the same technology platforms and architectural approach as they had with 4G; however, the economics for 5G are dramatically different.

The Pain Points are the Main Points – Costs, Revenues and Latency

For example, service providers' revenues from smartphone users running 3G/4G were about $50 per month. However, connected cars will only generate about $1 or $2 per month, and installing 5G requires extreme amounts of upfront investment (CAPEX), not only in antennas but also in the backend servers, storage and networking switches required to support these apps. So, it was clear to us that the cost point for deploying 5G had to be much lower than for 4G. Even if 5G costs were the same as 4G rollouts, service providers still could not economically justify their revenue models throttling down to $1 or $2 per month.

The low-latency requirement is another pain point, mainly because 5G-enabled high-speed apps won't work properly with low throughput and high latency. Many of these new applications require an end-to-end latency below 10 milliseconds; unfortunately, typical public clouds are unable to fulfill such requirements.

To take advantage of lower costs via centralization, most hyperscale cloud providers have massive regional deployments, with only about six to 10 major facilities serving all of North America. If your cloud hub is 1,000 miles away, even the speed of light in fiber optics does not transfer data fast enough to meet the low-latency requirements of 5G apps such as connected cars. For example, the latency from New York to Amazon Web Services or Microsoft Azure in Northern Virginia is greater than 20 milliseconds.
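The arithmetic bears this out. As a back-of-the-envelope sketch (the fiber speed and distance are round, illustrative numbers):

```python
# Rough propagation-delay estimate for a cloud hub 1,000 miles away.
# Light in optical fiber travels at roughly two-thirds of c, about 200 km/ms.
distance_km = 1000 * 1.609        # ~1,000 miles, one way
fiber_km_per_ms = 200.0           # approximate signal speed in fiber
round_trip_ms = 2 * distance_km / fiber_km_per_ms
print(f"~{round_trip_ms:.0f} ms") # ~16 ms from propagation alone, before any
                                  # routing, queuing, or processing overhead
```

Propagation alone already exceeds a 10-millisecond end-to-end budget, which is why observed latencies above 20 milliseconds rule out distant public cloud regions for these apps.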

Today, the VM-based network overlay has dumb, or unmanaged, switches; then all the upper layer functions such as firewalling, load balancing and 5G UPF run on VMs in servers. Because you couldn’t previously run these functions at Tbps speeds on switches, they were put on x86 servers. High-end servers cost about $25,000 and consume 1,000 watts of power each. With x86 servers’ costs and power consumption, we could see that the numbers were way off from what was needed to run a business-viable environment for 5G providers. Rather than have switching and routing run via VMs on x86, Kaloom’s vision was to be able to run all networking functions at Tbps speeds to meet the power- and cost-efficient points needed for 5G deployments to generate positive revenues. This is why Kaloom decided to use containers, specifically open source Kubernetes, and the programmability of P4-enabled chips as the basis for our solutions.

Solving the latency problem required a paradigm shift to the distributed edge, which places compute, storage and networking resources closer to the end user. Because of these constraints, we were convinced the edge would become far more important. As seen in the figure below, there are many edges.

[Figure: Many Edges]

Kaloom’s Green Logo and Telco’s Central Office Constraints

While large regional cloud facilities were not built for the new distributed edge paradigm, service providers' legacy Central Office (CO) architectures are even more ill-suited for the shift. Containing an odd mishmash of old and new equipment from each decade going back at least 50 years, these facilities are also typically near the limits of their space, power and cooling capacity.

There are about 25,000 legacy COs in the US, and they need to be converted, or updated in some way, to sustain the low-latency networks that support 5G's shiny new apps. Today, perhaps 10 percent of data is processed outside the CO and data center infrastructure. In a decade, this will grow to 25 percent, which means telcos have their work cut out for them.

This major buildup of required new 5G-supporting edge infrastructure will have an extremely negative impact on the environment in terms of energy consumption. According to the Linux Foundation's State of the Edge 2020 report, edge IT infrastructure and data center facilities will consume 102,000 megawatts of power by 2028, and over $700 billion in cumulative CAPEX will be spent on them within the next decade. The power consumption of a single 5G base station alone is three times that of its 4G LTE predecessor; 5G needs three times the base stations for the same coverage as LTE due to higher frequencies; and 5G base stations cost four times the price of LTE, according to mobile executives and an analyst quoted in this article at Light Reading. Upgrading a CO's power supply involves permits, closing and digging up streets, and running new high-capacity wiring to the building, which is very costly. Unless all of society's power sources suddenly turn sustainable, our founding team could see we were going to have to create a technology that could dramatically and rapidly reduce the power needed to provide these new 5G services and apps to consumers and enterprises.

Have you noticed that Kaloom’s logo is green?

This is the reason why. We have set out to solve all the issues facing telcos as they work to bring their infrastructure into the 21st century, including power.

P4 and Containers vs. ASICs and VM-Overlays

Where we previously worked, we were using a lot of switches that were not programmable. We were changing hardware every three to four years to keep pace with changes in the mobile, cloud and networking industries, which constantly require more network, compute and storage resources. So, we were periodically throwing away and replacing very expensive hardware. We thought: why not bring the same flexibility to networking by building a 100 percent programmable fabric? This, again, was driven by costs.

Before we left our jobs, we were part of a team running 4G Evolved Packet Gateway (EPG) using virtual machines (VMs) on Red Hat’s software platforms. Fortunately, we were asked to investigate the difference in the performance and other characteristics of containers versus VMs. That’s when we realized that, by using containers, the same physical servers could sustain several times more applications as compared to those running VM-based overlay networks.

This meant that the cost per user and cost per Gbps could be lowered dramatically. Knowing the economics of 5G would require this vastly reduced cost point, containers were the clear choice. The fact that they were more lightweight in terms of code meant that we could use less storage to house the networking apps and less compute to run them. They’re also more easily portable and quicker to instantiate. When a VM crashes it can take four or five minutes to resume, but you can restore a container in a few seconds.

Additionally, because VM-based overlays sit on top of the fabric, they provide no direct interaction with the network underlay below in terms of visibility and control. If there is congestion or a bottleneck in the lower hardware's fabric, the overlays may not know when, where, or how bad the problem is. Delivering Quality of Service (QoS) for service providers and their end users is a key consideration.

Kaloom’s Foundations

As a member of LF Edge, Kaloom has demonstrated innovative containerized Edge solutions for 5G along with IBM at the Open Networking and Edge Summit (ONES) 2020. Kaloom also presented further details of this optimized edge solution to the Akraino TSC as an invited presentation. Kaloom is participating in the Public Cloud Edge Interface Blueprint work as it is interesting and well aligned with our technology directions and customer requirements.

We remain committed to addressing the various pain points, including networking costs, the need to reduce 5G's growing energy footprint so it has a less negative impact on the environment, and the delivery of much-needed software control and programmability. Looking to solve all the networking challenges facing service providers' 5G deployments, we definitely had our work cut out for us.

EdgeX Foundry China Day 2020

Categories: Blog, EdgeX Foundry

Written by Ulia Sun, Project Specialist, Ecosystem & Communications, VMware China R&D

EdgeX Foundry, a Stage 3 project under LF Edge, is a leading edge computing framework with an open, vendor-neutral architecture. Led by VMware and Intel, the EdgeX Foundry China Project launched in late 2019. The group quickly gained attention and began hosting meetups and online meetings. Within the year, its efforts helped EdgeX become the largest edge computing community in China.

With the mission of increasing collaboration in the edge computing community and expanding the impact of edge computing technologies, the EdgeX Foundry China Project hosted EdgeX Foundry China Day 2020 last year on December 22-23.

The event was a huge success and included: 

  • 9 ecosystem partners: Intel 英特尔, IBM, ThunderSoft 中科创达, JiangXing Intelligence 江行智能, IOTech, Linux Foundation, EMQ 杭州映云科技, Agree Technology 赞同科技, Celestone 北京天石易通信息技术有限公司
  • 4 live broadcast channels: VMware Official Live Streaming Platform (VMware大会官方直播间), Bilibili – EdgeX Foundry China (EdgeX中国社区B站官方直播间), CSDN Website Home Page Recommendation (开发者专区官网首页推荐直播间), OpenVINO Community WeChat Live (OpenVINO社区直播)
  • A wide reach: 10,000+ touch points across the country
  • New collaborators: 249 new members of the EdgeX Foundry China community
  • 4 interactive workshops led by LF Edge member companies:
    • “AI/OpenVINO” by Intel
    • “EdgeX Coding Camp” by ThunderSoft & JiangXing Intelligence
    • “How to Use the Rule Engine in EdgeX” by EMQ
    • “Deep Dive into Open Horizon” by IBM
  • 2 sessions related to VMware:
    • Alan Ren, General Manager of VMware China R&D, gave the opening speech with a recognition of VMware's contributions to the EdgeX Foundry China community
    • Gavin Lu, Technical Director of VMware China R&D, gave the keynote speech on the theme of “EdgeX Foundry China 2020 Yearly Review and 2021 Plan”

[Photo galleries: Partners & Keynote Speakers; Live Streaming Platforms]

Highlights

Watch the video here: https://live.csdn.net/room/edgexfoundry/Wp2vsTYG 

For more information, visit the EdgeX Foundry China Project Wiki: https://wiki.edgexfoundry.org/display/FA/China+Project.

LF Edge Ecosystem Grows, Adds New Members and New Cross-Community Integrations

Categories: Announcement

  • LF Edge welcomes Foxconn Industrial Internet (FII), HCL Technologies, OpenNebula, Robin.io, and Shanghai Open Source Information Technology Association to its robust roster of community members working to advance open source at the edge
  • FII sponsors Smart Agriculture SIG within the Open Horizon Project 
  • Home Edge’s Coconut release leverages EdgeX Foundry to enable more data storage options across different devices

SAN FRANCISCO – January 14, 2021 – LF Edge, an umbrella organization within the Linux Foundation that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced continued ecosystem momentum with the addition of four new general members (FII, HCL, OpenNebula, and Robin.io) and one new Associate member (Shanghai Open Source Information Technology Association). Additionally, Home Edge has released its third platform update with new Data Storage and Multi-NAT Edge Device Communications (MNDEC) features.

“We are pleased to see continued growth among the LF Edge ecosystem,” said Arpit Joshipura, general manager, Networking, Edge and IOT, the Linux Foundation. “Open edge technology crosses verticals and is poised to become larger than the cloud native ecosystem. We are honored to be at the forefront of this budding industry and welcome all of our new members.”

Approaching it’s two-year mark as an umbrella project, LF Edge is currently comprised of nine projects – including Akraino, Baetyl, FledgeEdgeX Foundry, Home Edge, Open Horizon, Project EVE, Secure Device Onboard (SDO) and State of the Edge – that support emerging edge applications across areas such as non-traditional video and connected things that require lower latency, and faster processing and mobility. LF Edge helps  unify a fragmented edge market around a common, open vision for the future of the industry.

One of the project’s newest general members, FII, plans to sponsor a Smart Agriculture Special Interest Group (SIG) within the Open Horizon project and soon, an LF Edge Smart Agriculture End User Solution Group (EUSG).  Open Horizon SIGs are designed to identify use cases within an industry or vertical, develop end-to-end reference solutions, and promote the solutions, use cases, and the group itself.  LF Edge EUSGs gather user requirements through cross-project discussion with end users and generate demos, white papers, and messaging in order to prioritize project features and inform project roadmaps.

Home Edge Issues Coconut Release

Home Edge’s  third release, Coconut,  is now available. Coconut includes new features such as Multi-NAT communications (which enables discovery of devices in different NATs) and Data Storage.  In collaboration with EdgeX Foundry, a centralized device can be designated as a primary device to store the data from different devices. Home Edge expects its next release will be available in 2021 and will include real time data analytics features. More information on Home Edge Coconut is available here: https://wiki.lfedge.org/display/HOME/Release+Notes+for+Coconut .

Open Networking and Edge Summit Sessions, Project Webinars Available On-Demand

LF Edge sessions from the 2020 Open Networking and Edge Summit (ONES) virtual experience are available on-demand: https://bit.ly/2JPAyOd

LF Edge’s “On the Edge with LF Edge” webinar series, launched this year, has produced seven episodes to date, with overview discussions of each of the projects. Episodes are available for viewing on-demand here: https://www.lfedge.org/news-events/webinars/ 

More details on LF Edge, including how to join as a member, details on specific projects and other resources, are available here: www.lfedge.org.

Support From New LF Edge Members

“HCL Technologies is excited to join the Linux Foundation Edge community. We look forward to collaborating with industry leaders to help create and promote Edge Computing frameworks for wider adoption, leveraging our expertise and investments in IoT, 5G and Cloud technologies,” said GH Rao, President, Engineering R&D Services, HCL Technologies.

“We are delighted to join the LF Edge community as part of our ONEedge initiative,” said Constantino Vazquez, Chief Operations Officer, OpenNebula. “OpenNebula, which has traditionally been used for Private Clouds, has now become a True Hybrid Cloud platform with a number of powerful features for making deployments at the edge much easier for organizations that want to rely on open source technologies and retain the freedom of being able to use on-demand the providers, locations and resources that they really need.”

“Robin.io is excited to join the Linux Foundation Edge,” said Partha Seetala, CEO and founder of Robin.io. “Open edge technology is a large and exciting opportunity for Robin's technology and the cloud-native ecosystem; we are honored to be at the forefront of enabling and promoting Kubernetes-based edge computing frameworks for wider adoption.”

“The Shanghai Open Source Information Technology Association has joined LF Edge to implement open source innovation in China, promote the establishment of an open source ideological and cultural system that is compatible with the innovative application of the Digital Economy, and promote the design of laws, regulations and institutional frameworks that are conducive to the ecological development of open source.”

About The Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Open Horizon Mentorships with LFX

Categories: Blog, Linux Foundation News, Open Horizon

Written by Joe Pearson, Open Horizon Technical Steering Committee Chair and Edge Computing and Technology Strategist at IBM

Late this summer, a representative from the Linux Foundation's LFX Mentorship program attended the LF Edge Technical Advisory Committee (TAC) bi-weekly meeting and gave a presentation about the mentorship program. It potentially gives open-source projects turn-key mentorship with minimal work needed to get started, which sounded great to me! At the end of the presentation, they offered to speak with each project's leadership team if they would like to learn more.

The Open Horizon project eagerly accepted the offer, since we had plenty of opportunities for interesting work and few volunteers. We were told that once we began accepting applications, we would be flooded with more than enough well-qualified mentee candidates.

Easy onboarding experience

The project sign-up process was ridiculously easy. We filled out the forms and specified that we wanted to see each candidate’s CV/resume, a link to their GitHub repo, and a cover letter explaining in a single paragraph why they wanted to be a mentee of our project.

Incredible applicant pool

The applications began to come in: at first two (pretty easy to handle), then two more (great!), then four more (OK, this is quite a few). Within two weeks, we had over 15! By the end of our open application period, we had over 30, which was more than enough to handle. In fact, we ended up identifying both a primary and a secondary applicant for each open slot in our initial fall term.

Choosing our mentors

To select our project’s mentors, we looked for maintainers in our project with a natural teaching ability, approachability, maturity, and availability. We also assembled a pool of potential items for our mentees to work on. The four mentors selected were:

  • David Booz (Dave) – Chief Architect and founder of the project, Chair of the Agent Working Group, and a member of the team from the start almost six years ago. Dave understands the project code from both the macro level as well as the specific details about how the client software — the Agent — functions.
  • Liang Wang (Los) – Technology Advocate and Chair of the China Manufacturing Special Interest Group (SIG). Los is building up the SIG, overseeing translation of the project materials, and creating a community around manufacturing and Industry 4.0 use cases.
  • Troy Fine – Software Engineer and Chair of the Developer Examples Working Group. Troy oversees creation and maintenance of all code and services that demonstrate all project solution functionality from the simple to the complex.
  • Joseph Pearson (Joe) – Project Chair and Chair of the Documentation Working Group. Joe ensures that people looking to use Open Horizon, develop it, extend or port it, and to build solutions with it have the information they need.

Choosing our mentees

We created a set of filtering criteria to shorten the list down to candidates who were not only qualified, but also exhibited a natural curiosity about our project and were eager to get started. We also took the candidates' geographic locations and linguistic abilities into account, matching them approximately with mentors.

The mentors then interviewed those candidates on the short list over Slack, email, and Zoom or WebEx. It was not an easy task, and we wish that we could have accepted double our limit of four mentees.

For the work the mentees would be completing, the mentors identified tasks that fit each candidate's natural abilities and strengths and yet would stretch them a bit. We also aimed for tasks that could be completed within the timeframe of the fall term.

Anukriti Jain

Dave selected Anukriti Jain as the mentee for his Working Group. Anukriti is a Computer Science Engineering student in the last year of her undergraduate program. She is based in India.

Han Gao

Los chose Han Gao as his mentee. Han Gao is pursuing his doctorate in the Netherlands. His goals are:

  • Become an effective contributor to the Open Horizon project with a deep understanding at the code level of its architecture and key policy-based management flow.
  • Contribute by translating Open Horizon technical documentation into Chinese, as well as creating new hands-on guidance for getting started with an end-to-end example.
  • Broaden his view and build insight across other LF Edge projects.

Clement Ng

Troy went with Clement Ng for the Examples Working Group. Clement, like Troy, is based on the US west coast.

Edidiong Etuk

Joe tapped Edidiong Etuk to assist with the Documentation Working Group. Eddie is from Nigeria and is pursuing a degree in Computer Science.  He has natural leadership qualities and is an eager self-starter with a gift for intuition.

Lessons learned from the mentorship program

Now that we’re about 2/3 of the way through the fall term, we have gained some perspective through the benefit of hindsight.  It turns out that four mentees is just about the right number for our small project to handle.

The tasks have been adjusted to ensure that each mentee can complete the work in the allotted time without too much stress. Several of the mentees have already gained such a sense of belonging and ownership that they plan to continue working with the project after their mentorship term is complete. We highly recommend that other projects take advantage of this opportunity.

For more details about Open Horizon, click here. Stay tuned here for updates about the next mentorship.

EdgeX Foundry and Forbes

Categories: Blog, EdgeX Foundry

Written by Andrew Foster, EdgeX Foundry contributor, Product Director and Co-Founder at IOTech

I recently read a very interesting article in Forbes on the future of industrial IoT gateways. The thrust of the piece is that IoT gateways play a key role as the main integration point between the OT and IT worlds in an industrial IoT system. They provide important functions such as protocol translation for the myriad industrial protocols that users often have to deal with, enable edge processing for latency-sensitive applications, and provide a secure firewall between the open internet and possibly vulnerable devices/sensors.

However, unlike more general IT environments, the problem with IoT gateways is that they typically require a lot of customization and bespoke software configuration to support each specific use case, and this is slowing down IoT adoption.

The article identifies the solution to this problem as service-oriented gateway frameworks in the form of plug-and-play microservices that are not tied to any specific hardware or operating system. The key benefit is that these gateways can be delivered with very little customization, and it is much easier for the platform to be configured using IT-native techniques (e.g., Docker, Kubernetes) or extended with new microservices (e.g., to support a new device protocol) written in cloud-friendly languages, instead of relying heavily on specialized embedded development skills. It also facilitates an ecosystem of reusable components, many of which are open source, while encouraging a thriving commercial marketplace for new microservices.

In particular, it was great to see that alongside ARM's Pelion open source initiative, the Linux Foundation's EdgeX Foundry is highlighted. EdgeX is an open-source, vendor-neutral project hosted by the Linux Foundation under the LF Edge umbrella, focused on the development of a service-oriented edge software framework that is being adopted by a number of the major gateway vendors. The project was established in 2017, and six releases and over seven million downloads later, adoption is still growing fast.
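To give a flavor of the service-oriented model, the sketch below drives an EdgeX microservice over plain REST from Python. This is an illustrative, assumption-laden example rather than official EdgeX documentation: the core-command port (48082 in v1-era releases) and the device and command names are placeholders, and paths differ between API versions.

```python
import requests

# Hypothetical interaction with an EdgeX core-command microservice. The port,
# paths, and names below are assumptions for illustration; consult the API
# reference for your EdgeX release.
BASE = "http://localhost:48082/api/v1"

# Ask core-command which commands a named device exposes...
info = requests.get(f"{BASE}/device/name/my-sensor").json()
print(info)

# ...then issue a GET command against one of them (e.g. read a temperature).
reading = requests.get(f"{BASE}/device/name/my-sensor/command/Temperature")
print(reading.json())
```

Because each capability is just another microservice behind a REST endpoint, swapping in a new protocol or analytics service becomes a deployment change rather than an embedded-firmware rebuild, which is exactly the point the article makes.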

In fact, EdgeX is already incorporated into a number of IoT gateway products from Accenture, HP, Jiangxing, ThunderSoft, TIBCO and Home Edge (another LF Edge project). Dell also provides IoT solutions with EdgeX for Dell gateway hardware. My company, IOTech, develops a commercial implementation of EdgeX called Edge Xpert that is used on some of the gateways mentioned, including the Dell 3000 and 5000 series ruggedized Industrial IoT gateways.

Check out the article here.  For additional information on EdgeX, visit the new EdgeX Foundry website.