Xen on Raspberry Pi 4 with Project EVE

Categories: Blog, Project EVE

Written by Roman Shaposhnik, Project EVE lead and Co-Founder & VP of Products and Strategy at ZEDEDA, and Stefano Stabellini, Principal Engineer at Xilinx

Originally posted on Linux.com. The Xen Project is excited to share that the Xen hypervisor now runs on Raspberry Pi. This is an exciting step for both hobbyists and industry. Read on to learn how Xen was brought up on the RPi and how to get started.

Raspberry Pi (RPi) has been a key enabling device for the Arm community for years, given its low price and widespread adoption. According to the RPi Foundation, over 35 million units have been sold, with 44% of those going into industry. We have always been eager to get the Xen hypervisor running on it, but technical differences between the RPi and other Arm platforms made it impractical for the longest time: specifically, a non-standard interrupt controller without virtualization support.

Then the Raspberry Pi 4 came along, together with a regular GIC-400 interrupt controller that Xen supports out of the box. Finally, we could run Xen on an RPi device. Soon Roman Shaposhnik of Project EVE and a few other community members started asking about it on the xen-devel mailing list. “It should be easy,” we answered. “It might even work out of the box,” we wrote in our reply. We had no idea we were about to embark on an adventure deep in the belly of the Xen memory allocator and the Linux address translation layers.

The first hurdle was the availability of low memory addresses. The RPi4 has devices that can only access the first 1GB of RAM, and the amount of memory allocated below 1GB for Dom0 was not enough. Julien Grall solved this problem with a simple one-line fix that increases the memory allocation below 1GB for Dom0 on the RPi4. The patch is now present in Xen 4.14.

“This lower-than-1GB limitation is uncommon, but now that it is fixed, it is just going to work,” we thought. We were wrong again. The Xen subsystem in Linux uses virt_to_phys to convert virtual addresses to physical addresses, which works for most virtual addresses but not all. It turns out that the RPi4 Linux kernel would sometimes pass virtual addresses that cannot be translated to physical addresses using virt_to_phys, and doing so would result in serious errors. The fix was to use a different address translation function when appropriate. The patch is now present in Linux’s master branch.

We felt confident that we had finally reached the end of the line. “Memory allocations – check. Memory translations – check. We are good to go!” No, not yet. It turns out that the most significant issue was yet to be discovered. The Linux kernel has always had the concept of physical addresses and DMA addresses, where DMA addresses are used to program devices and could be different from physical addresses. In practice, none of the x86, ARM, and ARM64 platforms where Xen could run had DMA addresses different from physical addresses. The Xen subsystem in Linux exploits the DMA/physical address duality for its own address translations: it uses it to convert physical addresses, as seen by the guest, to physical addresses, as seen by Xen.

To our surprise, the Raspberry Pi 4 turned out to be the very first platform supported by Xen to have physical addresses different from DMA addresses, causing the Xen subsystem in Linux to break. It wasn’t easy to narrow down the issue, but once we understood the problem, a dozen patches later we had full support for handling DMA/physical address conversions in Linux. The Linux patches are in master and will be available in Linux 5.9.

Solving the address translation issue was the end of our fun hacking adventure. With the Xen and Linux patches applied, Xen and Dom0 work flawlessly. Once Linux 5.9 is out, we will have Xen working on RPi4 out of the box.

We will show you how to run Xen on the RPi4 in two ways: the real Xen hacker way, and as part of a downstream distribution for a much easier end-user experience.

HACKING XEN ON RASPBERRY PI 4

If you intend to hack on Xen on ARM and would like to use the RPi4 to do it, here is what you need to do to get Xen up and running using U-Boot and TFTP. I like to use TFTP because it makes it extremely fast to update any binary during development. See this tutorial on how to set up and configure a TFTP server. You also need a UART connection to get early output from Xen and Linux; please refer to this article.
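If you do not already have a TFTP server, a minimal setup on a Debian-based machine looks roughly like this (a sketch, assuming tftpd-hpa with its Debian/Ubuntu default of serving /srv/tftp; adjust paths for your distro):

    sudo apt install tftpd-hpa
    # place the boot artifacts where the daemon can serve them
    # (xen, Image, the dtb and boot2.scr are produced in the steps below)
    sudo cp xen Image bcm2711-rpi-4-b.dtb boot2.scr /srv/tftp/
    sudo systemctl restart tftpd-hpa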

Use rpi-imager to write the default Raspberry Pi OS to an SD card. Mount the first SD card partition and edit config.txt. Make sure to add the following:

    kernel=u-boot.bin
    enable_uart=1
    arm_64bit=1

Download a suitable U-Boot binary for the RPi4 (u-boot.bin) from any distro, for instance openSUSE: download the JeOS image, then open it and extract u-boot.bin:

    xz -d openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw.xz
    kpartx -a ./openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw
    mount /dev/mapper/loop0p1 /mnt
    cp /mnt/u-boot.bin /tmp
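When you are done copying, clean up by unmounting the image and detaching the loop device (the mirror image of the kpartx call above):

    umount /mnt
    kpartx -d ./openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw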

Place u-boot.bin in the first SD card partition together with config.txt. The next time the system boots, you will get a U-Boot prompt that allows you to load Xen, the Linux kernel for Dom0, the Dom0 rootfs, and the device tree from a TFTP server over the network. I automated the loading steps by placing a U-Boot boot.scr script on the SD card:

    setenv serverip 192.168.0.1
    setenv ipaddr 192.168.0.2
    tftpb 0xC00000 boot2.scr
    source 0xC00000

Where:
– serverip is the IP of your TFTP server
– ipaddr is the IP of the RPi4

Use mkimage to generate boot.scr and place it next to config.txt and u-boot.bin:

    mkimage -T script -A arm64 -C none -a 0x2400000 -e 0x2400000 -d boot.source boot.scr

Where:
– boot.source is the input
– boot.scr is the output

U-Boot will automatically execute the provided boot.scr, which sets up the network and fetches a second script (boot2.scr) from the TFTP server. boot2.scr should come with all the instructions to load Xen and the other required binaries. You can generate boot2.scr using ImageBuilder; a hand-written sketch of its general shape follows.
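The boot2.source sketch below is illustrative only: ImageBuilder computes the real load addresses, sizes, and device-tree edits for your specific binaries, so treat every address, size, and node name here as a placeholder rather than canon:

    tftpb 0x00200000 bcm2711-rpi-4-b.dtb
    tftpb 0x00600000 xen
    tftpb 0x01000000 Image
    fdt addr 0x00200000
    fdt resize 1024
    # Xen command line
    fdt set /chosen xen,xen-bootargs "console=dtuart dtuart=serial1 sync_console"
    # register the Dom0 kernel as a Xen multiboot module (size is a placeholder)
    fdt mknod /chosen dom0
    fdt set /chosen/dom0 compatible "xen,linux-zimage" "xen,multiboot-module"
    fdt set /chosen/dom0 reg <0x01000000 0x1400000>
    fdt set /chosen xen,dom0-bootargs "console=hvc0 earlycon=xen"
    # boot Xen (arm64 Image format) with the patched device tree
    booti 0x00600000 - 0x00200000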

Make sure to use Xen 4.14 or later. The Linux kernel should be master (or 5.9 when it is out; 5.9-rc4 works). The Linux ARM64 default config works fine as the kernel config. Any 64-bit rootfs should work for Dom0. Use the device tree that comes with upstream Linux for the RPi4 (arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb). The RPi4 has two UARTs; the default is bcm2835-aux-uart at address 0x7e215040, which is specified as “serial1” in the device tree, not serial0. You can tell Xen to use serial1 by adding the following to the Xen command line:

  console=dtuart dtuart=serial1 sync_console

The Xen command line is provided by the boot2.scr script generated by ImageBuilder as “xen,xen-bootargs”. After editing boot2.source, you can regenerate boot2.scr with mkimage:

    mkimage -A arm64 -T script -C none -a 0xC00000 -e 0xC00000 -d boot2.source boot2.scr
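If you would rather build the Dom0 kernel and device tree yourself instead of reusing a distro build, a minimal cross-compile looks like this (a sketch, assuming an aarch64 cross-toolchain such as gcc-aarch64-linux-gnu is installed):

    # in the Linux source tree (master, or 5.9 once released)
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
    make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j$(nproc) Image dtbs
    # the RPi4 device tree lands in arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb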

XEN ON RASPBERRY PI 4: AN EASY BUTTON

Getting your hands dirty by building and booting Xen on Raspberry Pi 4 from scratch can be not only deeply satisfying but can also give you a lot of insight into how everything fits together on ARM. Sometimes, however, you just want to get a quick taste of what it would feel like to have Xen on this board. This is typically not a problem for Xen, since pretty much every Linux distribution provides Xen packages, and having a fully functional Xen running on your system is a mere “apt” or “zypper” invocation away. However, given that Raspberry Pi 4 support is only a few months old, the integration work hasn’t been done yet. The only operating system with fully integrated and tested support for Xen on Raspberry Pi 4 is LF Edge’s Project EVE.

Project EVE is a secure-by-design operating system that supports running Edge Containers on compute devices deployed in the field. These devices can be IoT gateways, Industrial PCs, or general-purpose ruggedized computers. All applications running on EVE are represented as Edge Containers and are subject to container orchestration policies driven by k3s. Edge containers themselves can encapsulate Virtual Machines, Containers, or Unikernels.

You can find more about EVE on the project’s website at http://projecteve.dev and in its GitHub repo at https://github.com/lf-edge/eve. The latest instructions for creating bootable media for Raspberry Pi 4 are available in the documentation:

https://github.com/lf-edge/eve/blob/master/docs/README.md

Because EVE publishes fully baked downloadable binaries, using it to give Xen on Raspberry Pi 4 a try is as simple as:

    $ docker pull lfedge/eve:5.9.0-rpi-xen-arm64   # you can pick a different 5.x.y release if you like
    $ docker run lfedge/eve:5.9.0-rpi-xen-arm64 live > live.raw

This is followed by flashing the resulting live.raw image onto an SD card using your favorite tool.
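If you do not have a favorite tool, plain dd works; a sketch, where /dev/sdX is a placeholder for your SD card device (double-check the device name first, as dd will happily overwrite the wrong disk):

    $ sudo dd if=live.raw of=/dev/sdX bs=4M conv=fsync status=progress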

Once those steps are done, you can insert the card into your Raspberry Pi 4, connect a keyboard and a monitor, and enjoy a minimalistic Linux distribution (based on Alpine Linux and Linuxkit) that is Project EVE running as Dom0 under Xen.

As far as Linux distributions go, EVE presents a somewhat novel design for an operating system, but at the same time, it is heavily inspired by ideas from Qubes OS, ChromeOS, CoreOS, and SmartOS. If you want to take it beyond simple console tasks and explore how to run user domains on it, we recommend heading over to EVE’s sister project Eden (https://github.com/lf-edge/eden#raspberry-pi-4-support) and following the short tutorial there; the overall flow is sketched below.
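The sketch below is a hypothetical condensation of that flow, not a substitute for the tutorial; the command names and arguments are assumptions based on Eden’s README and may have changed:

    git clone https://github.com/lf-edge/eden.git
    cd eden
    make build                          # build the eden CLI
    ./eden setup                        # generate certs and an EVE image
    ./eden start                        # start the controller and services
    ./eden pod deploy docker://nginx    # deploy a test workload to the node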

If anything goes wrong, you can always find an active community of EVE and Eden users on LF Edge’s Slack channels starting with #eve over at http://lfedge.slack.com/ — we’d love to hear your feedback.

In the meantime – happy hacking!

Pushing AI to the Edge (Part Two): Edge AI in Practice and What’s Next

Categories: Blog, LF Edge, Project EVE, Trend

Q&A with Jason Shepherd, LF Edge Governing Board member, Project EVE leader and VP of Ecosystem at ZEDEDA

This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

In Part One of this two-part Q&A series we highlighted the new LF Edge taxonomy that publishes next week and some key considerations for edge AI deployments. In this installment our questions turn to emerging use cases and key trends for the future.

To discuss more on this budding space, we sat down with our Vice President of ecosystem development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice and some trends for the future.

What do you see as the most promising use cases for edge AI?

As highlighted in Part One, the reasons for deploying AI at the edge include balancing needs across the vectors of scalability, latency, bandwidth, autonomy, security and privacy. In a perfect world all processing would be centralized; however, this breaks down in practice, and the need for AI (and ML) at the edge will only continue to grow with the explosion of devices and data.

Hands down, computer vision is the killer app for edge AI today due to the bandwidth associated with streaming video. The ability to apply AI to “see” events in the physical world enables immense opportunity for innovation in areas such as object recognition, safety and security, quality control, predictive maintenance and compliance monitoring.

Considering retail — computer vision solutions will usher in a new wave of personalized services in brick and mortar stores that provide associates with real-time insights on current customers in addition to better informing longer term marketing decisions. Due to privacy concerns, the initial focus will be primarily around assessing shopper demographics (e.g., age, gender) and location but increasingly we’ll see personalized shopping experiences based on individual identity with proper opt-in (often triggered through customer loyalty programs). This includes a trend for new “experiential” shopping centers, for which customers expect to give up some privacy when they walk in the door in exchange for a better experience.

While Amazon Go stores have led the trend for autonomous shopping environments, the use of computer-vision enabled self-service kiosks for grab-and-go checkout is growing rapidly overall. Given the recent health concerns with COVID-19, providers are rapidly shifting to making these solutions contactless by leveraging gesture control, instead of requiring interaction with a keypad or touch screen.

Computer vision use cases will often leverage sensor fusion, for example with barcode scans or radio-frequency identification (RFID) technology providing additional context for decision making in retail inventory management and point of sale (POS) systems. A camera can tell the difference between a T-shirt and a TV, but not the difference between large and medium sizes of the same shirt design. Still, perhaps eventually an AI model will be able to tell you if you have bad taste in clothing!

Another key vertical that will benefit from computer vision at the edge is healthcare. At ZEDEDA, we’ve worked with a global provider that leverages AI models in their medical imaging machines, located within hospitals and provided as a managed service. In this instance, the service provider doesn’t own the network on which their machines are deployed so they need a zero-trust security model in addition to the right tools to orchestrate their hardware and software updates.

Another example where bandwidth drives a need for deploying AI at the IoT Edge is vibration analysis as part of a use case like predictive maintenance. Here, sampling rates of at least 1 kHz are common and can increase to 8–10 kHz and beyond, because these higher resolutions improve visibility into impending machine failures. This represents a significant amount of continuously streaming data that is cost-prohibitive to send directly to a centralized data center for analysis. Instead, inferencing models will commonly be deployed on compute hardware proximal to the machines to analyze the vibration data in real time, backhauling only the events that highlight an impending failure.

Analysis for predictive maintenance will also commonly leverage sensor fusion by combining this vibration data with measurements for temperature and power (voltage and current). Computer vision is also increasingly being used for this use case, for example the subtle wobble of a spinning motor shaft can be detected with a sufficient camera resolution, plus heat can be measured with thermal imaging sensors. Meanwhile, last I checked voltage and current can’t be checked with a camera!

An example of edge AI served up by the Service Provider Edge is for cellular vehicle-to-everything (C-V2X) use cases. While latency-critical workloads such as controlling steering and braking will always be run inside of a vehicle, service providers will leverage AI models deployed on compute proximal to small cells in a 5G network within public infrastructure to serve up infotainment, Augmented Reality for vehicle heads-up displays and coordinating traffic. For the latter, these AI models can warn two cars that they are approaching a potentially dangerous situation at an intersection and even alert nearby pedestrians via their smartphones. As we continue to collaborate on foundational frameworks that support interoperability it will open up possibilities to leverage more and more sensor fusion that bridges intelligence across different edge nodes to help drive even more informed decisions.

We’re also turning to processing at the edge to minimize data movement and preserve privacy. When AI inferencing models are shrunk for use in constrained connected products or healthcare wearables, we can train local inferencing models to redact PII before data is sent to centralized locations for deeper analysis.

Who are the different stakeholders involved in accelerating adoption of edge AI?

Edge AI requires the efforts of a number of different industry players to come together. We need hardware OEMs and silicon providers for processing; cloud scalers to provide tools and datasets; telcos to manage the connectivity piece; software vendors to help productize frameworks and AI models; domain expert system integrators to develop industry-specific models, and security providers to ensure the process is secure.

In addition to having the right stakeholders, it’s about building an ecosystem based on common, open frameworks for interoperability, with investment focused on the value added on top. Today there is a plethora of choices for platforms and AI tool sets, which is confusing, but that is more a state of the market than a necessity. A key point of efforts like LF Edge is to work in the open source community to build more open, interoperable, consistent and trusted infrastructure and application frameworks so developers and end users can focus on the surrounding value add. Throughout the history of technology, open interoperability has always won out over proprietary strategies when it comes to scale.

In the long run, the most successful digital transformation efforts will be led by organizations that have the best domain knowledge, algorithms, applications and services, not those that reinvent foundational plumbing. This is why open source software has become such a critical enabler across enterprises of all sizes — facilitating the creation of de-facto standards and minimizing “undifferentiated heavy lifting” through a shared technology investment. It also drives interoperability which is key for realizing maximum business potential in the long term through interconnecting ecosystems… but that’s another blog for the near future!

How do people differentiate with AI in the long term?

Over time, AI software frameworks will become more standardized as part of foundational infrastructure, and the algorithms, domain knowledge and services on top will be where developers continue to meaningfully differentiate. We’ll see AI models for common tasks — for example assessing the demographics of people in a room, detecting license plate numbers, recognizing common objects like people, trees, bicycles and water bottles — become commodities over time. Meanwhile, programming to specific industry contexts (e.g. a specific part geometry for manufacturing quality control) will be where value is continually added. Domain knowledge will always be one of the most important aspects of any provider’s offering.

What are some additional prerequisites for making edge AI viable at scale?

In addition to having the right ecosystem including domain experts that can pull solutions together, a key factor for edge AI success is having a consistent delivery or orchestration mechanism for both compute and AI tools. The reality is that to date many edge AI solutions have been lab experiments or limited field trials, not yet deployed and tested at scale. PoC, party of one, your table is ready!

Meanwhile, as organizations start to scale their solutions in the field they quickly realize the challenges. From our experience at ZEDEDA, we consistently see that manual deployment of edge computing using brute-force scripting and command-line interface (CLI) interaction becomes cost-prohibitive for customers at around 50 distributed nodes. In order to scale, enterprises need to build on an orchestration solution that takes into account the unique needs of the distributed IoT edge in terms of diversity, resource constraints and security, and helps admins, developers and data scientists alike keep tabs on their deployments in the field. This includes having visibility into any potential issues that could lead to inaccurate analyses or total failure. Further, it’s important that this foundation is based on an open model to maximize potential in the long run.

Where is edge AI headed?

To date, much of the exploration involving AI at the edge has been focused on inferencing models — deployed after these algorithms have been trained with the scalable compute of the cloud. (P.S. for those of you who enjoy a good sports reference, think of training vs. inference as analogous to coaching vs. playing).

Meanwhile, we’re starting to see training and even federated learning selectively moving to the Service Provider and User Edges. Federated learning is an evolving space that seeks to balance the benefits of decentralization for reasons of privacy, autonomy, data sovereignty and bandwidth savings, while centralizing results from distributed data zones to eliminate regional bias.

The industry is also increasingly developing purpose-built silicon that can increase efficiencies amid power and thermal constraints in small devices and even support either training or inference; this corresponds with the shift toward pushing more and more AI workloads onto edge devices. Because of this, it’s important to leverage device and application orchestration tools that are completely agnostic to silicon, compared to offerings from silicon makers that have a vested interest in locking you into their ecosystem.

Finally, we’ll see the lower boundary for edge AI increasingly extend into the Constrained Device Edge with the rise of “Tiny ML” — the practice of deploying small inferencing models optimized for highly constrained, microcontroller-based devices. An example of this is the “Hey Alexa” of an Amazon Echo that is recognized locally and subsequently opens the pipe to the cloud-based servers for a session. These Tiny ML algorithms will increasingly be used for localized analysis of simple voice and gesture commands, common sounds such as a gunshot or a baby crying, assessing location and orientation, environmental conditions, vital signs, and so forth.

To manage all of this complexity at scale, we’ll lean heavily on industry standardization, which will help us focus on value on top of common building blocks. Open source AI interoperability projects, such as ONNX, show great promise in helping the industry coalesce around a format so that others can focus on developing and moving models across frameworks and from cloud to edge. The Linux Foundation’s Trust over IP effort and emerging Project Alvarium will also help ease the process of transporting trusted data from devices to applications. This notion of pervasive data trust will lead to what I call the “Holy Grail of Digital” — selling and/or sharing data resources and services to/with people you don’t even know. Now this is scale!

In Closing

As the edge AI space develops, it’s important to avoid being locked into a particular tool set, instead opting to build a future-proofed infrastructure that accommodates a rapidly changing technology landscape and that can scale as you interconnect your business with other ecosystems. Here at ZEDEDA, our mission is to provide enterprises with an optimal solution for deploying workloads at the IoT Edge where traditional data center solutions aren’t applicable, and we’re doing it based on an open, vendor-neutral model that provides freedom of choice for hardware, AI framework, apps and clouds. We’re even integrating with major cloud platforms such as Microsoft Azure to augment their data services.

Reach out if you’re interested in learning more about how ZEDEDA’s orchestration solution can help you deploy AI at the IoT Edge today while keeping your options open for the future. We also welcome you to join us in contributing to Project EVE within LF Edge which is the open source foundation for our commercial cloud offering. The goal of the EVE community is to build the “Android of the IoT Edge” that can serve as a universal abstraction layer for IoT Edge computing — the only foundation you need to securely deploy any workload on distributed compute resources. To this end, a key next step for Project EVE is to extend Kubernetes to the IoT Edge, while taking into account the unique needs of compute resources deployed outside of secure data centers.

The success of AI overall — and especially edge AI — will require our concerted collaboration and alignment to move the industry forward while protecting us from potential misuse along the way. The future of technology is about open collaboration on undifferentiated plumbing so we can focus on value and build increasingly interconnected ecosystems that drive new outcomes and revenue streams. As one political figure famously said — “it takes a village!”

If you have questions or would like to chat with leaders in the project, join us on the LF Edge Slack  (#eve or #eve-help) or subscribe to the email list. You can check out the documentation here.

On the “Edge” of Something Great

Categories: Akraino, Announcement, Baetyl, Blog, EdgeX Foundry, Fledge, Home Edge, LF Edge, Open Horizon, Project EVE, Secure Device Onboard, State of the Edge

As we kick off Open Networking and Edge Summit today, we are celebrating the edge by sharing the results of our first-ever LF Edge Member Survey and insight into our focus areas for next year.

LF Edge, which will celebrate its 2nd birthday in January 2021, sent the survey to our more than 75 member companies and liaisons. The survey featured about 15 questions that collected details about open source and edge computing, how members of the LF Edge community are using edge computing and what project resources are most valuable. 

Why did you choose to participate in LF Edge?

The Results Are In

The Top 3 reasons to participate in LF Edge are market creation and adoption acceleration, collaboration with peers and industry influence. 

  • More than 71% joined LF Edge for market creation and adoption acceleration
  • More than 57% indicated they joined LF Edge for business development
  • More than 62% have either deployed products or services based on LF Edge Projects or have them planned for later this year, next year or within the next 3-5 years

Have you deployed products or services based on LF Edge Projects?

This feedback corresponds with what we’re seeing in some of the LF Edge projects. For example, our Stage 3 Projects Akraino and EdgeX Foundry are already being deployed. Earlier this summer, Akraino launched its Release 3 (R3), which delivers a fully functional open source edge stack that enables a diversity of edge platforms across the globe. With R3, Akraino brings deployments and PoCs from a swath of global organizations including Aarna Networks, China Mobile, Equinix, Futurewei, Huawei, Intel, Juniper, Nokia, NVIDIA, Tencent, WeBank, Wipro, and more.

Additionally, EdgeX Foundry hit more than 7 million container downloads last month, and its global ecosystem of complementary products and services continues to grow. As a result, EdgeX Foundry is seeing more end-user case studies from big companies like Accenture, ThunderSoft and Jiangxing Intelligence.

Have you gained insight into end user requirements through open collaboration?


Collaboration with peers

The edge today is a solution-specific story. Equipment and architectures are purpose-built for specific use cases, such as 5G and network function virtualization, next-generation CDNs and cloud, and streaming games. That is why collaboration is key, and why more than 70% of respondents said they joined LF Edge to collaborate with peers. Here are a few activities at ONES that showcase cross-project and member collaboration.

Additionally, LF Edge created an LF Edge Vertical Solutions Group that is working to enable easily customized deployments based on market/vertical requirements. In fact, we are hosting an LF Edge End User Community Event on October 1 that provides a platform for discussing the utilization of LF Edge Projects in real-world applications. The goal of these sessions is to educate the LF Edge community (both new and existing members) to make sure we appropriately tailor the output of our project collaborations to meet end user needs. Learn more.

Industry Influence

More than 85% of members indicated they have gained insights into end user requirements through open collaboration. A common definition of the edge is gaining momentum. Community efforts such as LF Edge and State of the Edge’s assets, the Open Glossary of Edge Computing, and the Edge Computing Landscape are providing cohesion and unifying the industry. In fact, LF Edge members across all nine projects collaborated to create an industry roadmap that is being supported by global tech giants and start-ups alike.

Where do we go from here? 

When asked, LF Edge members didn’t hold back. They want more. They want to see more of everything: cross-project collaboration, end user events and communication, use cases, and open source collaboration with other liaisons. As we head into 2021, LF Edge will continue to lay the groundwork in markets like cloud native, 5G, and edge for more open deployments and collaboration.

Introduction to Project EVE, an open-source edge node architecture

Categories: Blog, Project EVE
Written by Brad Corrion, active member of the Project EVE community and Strategic Architect at Intel 

This article originally ran as a LinkedIn article last month. 

At the inaugural August ZEDEDA Transform conference, I participated in a panel discussion entitled “Edge OSS Landscape, Intro to Project EVE and Bridging to Kubernetes”. Project EVE, an LF Edge project, delivers an open-source edge node, on which applications are deployed as either containers or VMs. As some audience members noted, they were pleasantly surprised that we didn’t spend an hour talking explicitly about the Project EVE architecture. Instead, we considered many aspects of a container-first edge, community open-source investments, and whether technologies like Kubernetes can be useful for IoT applications.

The Edge Buzz

There’s a lot of buzz around the edge, and many see it as the next big thing since the adoption of cloud computing. Just as the working definition of the cloud has morphed over time, coming to be conceptualized as highly scalable, micro-service-based applications hosted on computing platforms around the globe, so has the edge come to represent a computing style. To embrace the edge means to place your computing power as close as possible to the data it is processing, balancing the cost and latency of moving high volumes of data across the network. Most have also used the transition to edge computing to adopt the same “cloud-native” technologies and processes to manage their compute workloads in a similar fashion regardless of where the compute is deployed, be it cloud, data center, or some remote edge environment.

Turning the Problem on its Head

That last part is what enthuses me about the shift to edge computing: we move away from highly curated enterprise IT devices (whose management tends to prevent change and modification) and move towards cloud-like, dynamic, scalable assets (whose management technologies are designed for innovation and responding to ever-changing circumstances). This is the classic example of “pets vs. cattle,” sprinkled with IoT challenges like systems distributed at extreme ends of the world, with costly trip charges required to dispatch an on-site technician. The solution turns the problem on its head. It requires organizations, from IT to line of business, to adopt practices of agility and innovation, so that they can manage and deploy solutions to the edge as nimbly as they experiment and innovate in the cloud.

Next up, we discussed the benefits of utilizing open source software for projects like Project EVE. Making an edge node easy to deploy, secure to boot, and remotely manageable is not trivial, and it is not worth competing over. The community, including competitors, can create a solid, open-source edge node architecture, such as Project EVE, and all parties can benefit from the group investment. With a well-accepted, common edge architecture, innovators can focus instead on the applications, the orchestration, and the usability of the edge. Even using the same basic edge node architecture, there is more than ample surface area left to compete for service value, elegance, and solution innovation. Just as the typical investment around Kubernetes is allowing masses of projects and companies to improve on nearly every aspect of orchestration, without making each project re-invent the basics of orchestration, we don’t need tens of companies re-inventing secure device deployments, secure boot, and container protection. Get the basics done (the “boring stuff” as I called it) and focus on the specialization around it.

Can container-first concepts, and projects like Kubernetes, be effective at the edge and at solving IoT problems? Yes. No doubt, there are differences between using Kubernetes in the cloud and using it to manage edge nodes, and some of those challenges include limited power infrastructure, communications and connectivity, and cost considerations. However, these technologies are very adaptable. A container will run as happily on an IoT device, an IoT gateway, or an edge server. How you connect to and orchestrate the containers on those devices will vary. Most edges won’t need a local Kubernetes cluster; distant Kubernetes infrastructures could remotely orchestrate a local edge node or local edge cluster.

Infrastructure as Code

A common theme in edge orchestration architectures is local orchestration, which gives a small orchestrator running at the edge enough power to make decisions while offline from a central orchestrator. Open Horizon, a project which IBM recently open-sourced to LF Edge, is designed to bridge traditional cloud orchestration to edge devices with a novel distributed policy engine that keeps executing and responding to changing conditions even when disconnected. Adopting an “infrastructure as code” mentality provides administrators a high degree of configuration and control of the target environment, even over remote networks. There is high confidence in the resulting infrastructure configurations, but with variability as to “when” the changes are received, due to bandwidth considerations. Could this be used on oil and gas infrastructure in the jungles of Ecuador? Yes. However, the challenge is in deciding which of the philosophies and projects are best suited to the needs of the situation.

If you find yourself architecting your own edge node, or using a simple Linux installation while kicking the security can down the road, or otherwise bogged down by remote manageability challenges when you’d rather be innovating and solving domain-specific problems, look to the open-source community, and specifically to projects like Project EVE, to give you a leg up for your edge architecture.

Please feel free to connect with me and start a conversation about this article. Here are some additional resources:

  • Explore how Intel is connecting open source communities with the retail market through the Open Retail Initiative
  • Be sure to explore Project EVE, Open Horizon, EdgeX Foundry and more projects at the LF Edge

LF Edge Demos at Open Networking & Edge Summit

Categories: Blog, EdgeX Foundry, Event, Fledge, LF Edge, Open Horizon, Project EVE, Secure Device Onboard

Open Networking & Edge Summit, which takes place virtually on September 28-30, is co-sponsored by LF Edge, the Linux Foundation and LF Networking. With thousands expected to attend, ONES will be the epicenter of edge, networking, cloud and IoT. If you aren’t registered yet – it takes two minutes to register for US$50 – click here.

Several LF Edge members will be at the conference leading discussions about trends, presenting use cases and sharing best practices. For a list of LF Edge-focused sessions, click here and add them to your schedule. LF Edge will also host a pavilion – in partnership with our sister organization LF Networking – that will showcase demos, including the debut of two new ones featuring collaborations between Project EVE and Fledge, and between Open Horizon and Secure Device Onboard. Check out the sneak peek of the demos below:

Managing Industrial IoT Data Using LF Edge (Fledge, EVE)

Presented by FLIR, Dianomic, OSIsoft and ZEDEDA, and making its debut at ONES, this demo showcases the strength of Project EVE and Fledge. It will show how the two open source projects work together to securely manage, connect, aggregate, process, buffer and forward any sensor, machine or PLC data to existing OT systems and any cloud. Specifically, it will show a FLIR IR camera’s video and data feeds being managed as described.

Real-Time Sensor Fusion for Loss Detection (EdgeX Foundry):

Presented by LF Edge members HP, Intel and IOTech, this demo showcases the strength of the Open Retail Initiative and EdgeX Foundry. Learn how different sensor devices can use LF Edge’s EdgeX Foundry open middleware framework to optimize retail operations and detect loss at checkout. The sensor fusion is implemented using a modular approach, combining point-of-sale, computer vision, RFID and scale data into a POC for loss prevention.

This demo was featured at the National Retail Federation Show in January. More details about the demo can be found in HP’s blog and Intel’s blog.

Low-touch automated onboarding and application delivery with Open Horizon and Secure Device Onboard

Presented by IBM and Intel, this demo features two of the newest projects accepted into the LF Edge ecosystem – Secure Device Onboard was announced in July while Open Horizon was announced in April.

An OEM or ODM can generate a voucher with SDO utilities that is tied to a specific device. Upon purchase, they can send the voucher to the purchaser. With LF Edge’s Open Horizon and Secure Device Onboard integration, an administrator can load the voucher into Open Horizon and pre-register the device. Once the device is powered on and connected to the network, it will automatically authenticate, download and install the Open Horizon agent, and begin negotiating to receive and run relevant workloads.

For more information about ONES, visit the main website: https://events.linuxfoundation.org/open-networking-edge-summit-north-america/. 

Pushing AI to the Edge (Part One): Key Considerations for AI at the Edge

Categories: Blog, LF Edge, Project EVE, State of the Edge, Trend

Q&A with Jason Shepherd, LF Edge Governing Board member and VP of Ecosystem at ZEDEDA

This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

This two-part blog provides more insights into what’s becoming a hot topic in the AI market — the edge. To discuss more on this budding space, we sat down with our Vice President of ecosystem development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice and some trends for the future.


Chart defining the categories within the edge, as defined by LF Edge

Image courtesy of LF Edge

LF Edge Update: Taxonomy, SIGs and Project EVE

Categories: Blog, LF Edge, Project EVE

Written by Jason Shepherd, LF Edge Governing Board Member and VP of Ecosystem at ZEDEDA

I hope you and yours are doing well in these crazy times. Things are going great within the LF Edge community as we increase alignment across projects, fine-tune our processes and recently welcomed two new projects: Open Horizon and Secure Device Onboard. Net-net, more and more people by the day are joining in to grow an inclusive, structured community focused on developing an open framework for edge computing.

As a board member who is also involved in a number of the LF Edge working groups, I wanted to take this opportunity to provide an update on several fronts: the recently released LF Edge taxonomy white paper, the emerging Special Interest Groups (SIGs), and Project EVE.

LF Edge Taxonomy White Paper

Released last week, the community-created white paper outlining the LF Edge taxonomy and framework is an important piece that we believe will help clear up a lot of market confusion by providing a balanced view of the edge landscape, based on inherent technical and logistical trade-offs spanning the edge-to-cloud continuum. This is in contrast to many existing edge taxonomies that break down the continuum using ambiguous terms that can be interpreted in different ways.

The goal was to provide a universal framework for various industries to apply their preferred terminology on top of. We’ve received universally positive feedback in previews with key analysts and encourage you to check out both the paper and this related webinar that presents key insights.

LF Edge Vertical Solutions Focus Groups

With nine projects and the new taxonomy serving as a solid foundation, we’re now increasing focus on spinning up Vertical Solutions Focus Groups within the LF Edge community. These focus groups follow a precedent set by CNCF and other Linux Foundation projects. The purpose is to document unique requirements for specific markets and feed desired features back into the LF Edge project working groups for consideration as part of their roadmaps. Doing this across as many use cases as possible, spanning Industrial, Enterprise and Consumer markets, will ensure that each project is maximizing impact while also recognizing inherent tradeoffs.

Both member and non-member companies will be welcome to volunteer to lead a vertical, or join one already in flight. It’s a low time commitment and a great way to demonstrate thought leadership as an end user while making sure LF Edge projects are developing the right features and specific extensions for the verticals and use cases that matter the most to your company. Stay tuned for more details, including a virtual launch event in the late summer!

Project EVE Update

LF Edge projects are growing across the board, and Project EVE is no exception. The EVE community is now approaching 50 unique contributors from organizations including ZEDEDA, Xilinx, Intel, GlobalLogic, Atomic, GE Research and Timesys, and contributions are also growing at a steady pace.

In terms of market focus, EVE is optimized for supporting IoT edge computing workloads at the “Smart Device Edge,” as defined in the LF Edge taxonomy. Edge compute nodes in this subcategory are characterized by two attributes: 1) they are deployed outside of a physically secure data center, but 2) they still have enough memory (approximately 256MB) to support application abstraction in the form of virtualization and/or containerization.

Devices at the IoT Edge are constrained, heterogeneous, and physically accessible, so special attention needs to be paid to optimizing footprint, simplifying support for diverse hardware, enabling zero-touch provisioning, and establishing a zero-trust security model that eliminates the guesswork in securing distributed edge computing nodes at scale. EVE builds on the principles of traditional virtualization tools designed for the data center, but is optimized for the unique needs of the IoT Edge.

The chart below explains how EVE takes a balance of architectural approaches to serve as a holistic, open engine for supporting any IoT edge computing workload. The net is that the bare metal foundation enables deep security and networking capabilities, supports both containers and virtual machines to provide options for both modern and legacy workloads, and mitigates lock-in with its open API. 

Since launching as a founding LF Edge project in early 2019, the EVE community has been working hard to modularize the EVE foundation so developers can choose preferred components, ultimately wrapped up into a de-facto standard interface in the form of the open EVE API. 

As part of the community’s effort to increase modularity, support for the ACRN and KVM hypervisors has been added as alternatives to the original Xen baseline. The project has adopted containerd, given that it is the most common container runtime; a general tenet is to integrate leading OSS projects and standards wherever possible rather than reinventing them.

The community’s goals for the balance of 2020 include further increasing modularity and reducing footprint, and adding support for Kubernetes via K3S. Regarding the latter, this will be done by integrating the right features from the Kubernetes paradigm rather than simply trying to cut and paste the same functionality from centralized data centers to the necessarily different IoT edge. Join in if you’d like to help the community shape this important bridge from the IoT Edge to the data center paradigm! 

In terms of adoption, EVE is being leveraged as the edge computing foundation for deployments in several market verticals including oil and gas, renewable energy, manufacturing and healthcare. Check out the “EVE in the Market” page for a growing list of community-supported images for hardware models from Advantech, Dell, HPE, IEI, Intel, Kontron, Lanner, NEXCOM, Raspberry Pi, Siemens, Supermicro and more! This page also has links to the OSS Adam controller and commercial controller offerings that leverage the open EVE API.

Finally, thanks to the great work of Stefano Stabellini at Xilinx, among others, an image is now available for the Raspberry Pi 4 to make it even easier and more cost-effective to get started with EVE! This image includes GPU support and can be used with the OSS Adam controller today. Stay tuned to the “EVE in the Market” page for other controller options, or contribute your own!

The mission of the EVE community is to do for the IoT Edge what Android did for the smartphone. Learn more about EVE through the project page on the LF Edge site and in this EVE webinar in which I also highlight the importance of an open edge for realizing the true business potential of digital transformation. We welcome you to join the community to make EVE the one foundational stack needed to scale IoT edge computing deployments with choice of hardware, applications and cloud!

In closing, we have a lot of great things going on within the LF Edge community and we’re just scratching the surface of the opportunity ahead. Our future is bright and we encourage you to get involved, whether it be providing key market input through a SIG or diving straight in and contributing code. After all, LF Edge is a technical meritocracy and the best way to vote on the direction of a project is with fingers on your keyboard!

ZEDEDA is an LF Edge member and leader in Project EVE. For more details about LF Edge members, visit here. For more details about Project EVE, visit the project page.

Extending Kubernetes to the IoT Edge

Categories: Blog, Project EVE

Written by Jason Shepherd, LF Edge member, VP of Ecosystem at ZEDEDA and active leader in Project EVE

This post originally ran on the ZEDEDA Medium blog. Click here for more articles like this one.

When Kubernetes was first released onto the open source scene by Google in 2014, it marked an important milestone in application development for cloud environments. Docker had already begun to show the world the value of containers as a means to create modular, nimble applications that could be built using a microservices architecture. But containers were difficult to deploy at scale — particularly at the massive size of many cloud-native applications. Kubernetes filled that need by providing a comprehensive container orchestration solution.

Now, as the edge is emerging as the next frontier of computing, many businesses and developers are looking to replicate the benefits of cloud-native development closer to the data’s source. Kubernetes will play an important role in that process, but, for reasons we’ll explain in this post, it isn’t as easy as simply copy/pasting the traditional version of Kubernetes across the board.

Which edge are we talking about?
Of course, there are many different “edges” in edge computing, and it’s been easier to extend cloud-native principles to some more than others. For instance, for a service provider that creates a micro data center comprised of a rack of servers at the base of one of their cell towers, it’s fairly straightforward to use the same version of Kubernetes that runs in the cloud. These deployments are very similar to those found in a traditional data center, and so developing and running applications is also similar, including the way containers are used and orchestrated.

At the other extreme of the edge spectrum are devices like microcontrollers, which use deeply embedded code with tight coupling between hardware and software due to resource constraints in areas like memory and processing. These classes of devices also often leverage a Real-time Operating System (RTOS) to ensure determinism in responses. Given these constraints, it’s not technically feasible for these devices to leverage container-based architectures, regardless of whether there’s a benefit for modularity and portability.

In between these two ends of the edge computing spectrum are a large number of devices that make up what we consider to be the “IoT edge”: small- and medium-sized compute nodes that can run Linux and have sufficient memory (>512MB) to be able to support the overhead of virtualization. Compared to embedded devices that tend to be fixed in function, these compute nodes are used to aggregate business-critical data for enterprises and need to be able to run an evolving set of apps that process and make decisions based on that data. As app development for edge use cases continues to evolve and grow, companies will see the most benefit from making these IoT devices as flexible as possible, meaning they’ll want the capability to update software continuously and reconfigure applications as needed. This idea of being able to evolve software on a daily basis is a central tenet of cloud-native development practices, which is why we often talk about extending cloud-native benefits to the IoT edge.

Challenges of extending Kubernetes to the IoT edge
While Docker containers have been widely used in IoT edge deployments for several years, extending Kubernetes-based cloud-native development to the IoT edge isn’t a straightforward process. One reason why is that the footprint of the traditional Kubernetes that runs on servers in the data center is too large for constrained devices. Another challenge is the fact that many industrial operators are running legacy applications that are business-critical but not compatible with containerization. Companies in this situation need a hybrid solution that can still support their legacy apps on virtual machines (VMs), even as they introduce new containerized apps in parallel. There are data center solutions that support both VMs and containers today; however, footprint is still an issue for the IoT edge. Further, IoT edge compute nodes are highly distributed, deployed at scale outside of the confines of traditional data centers and even secure micro data centers, hence they require specific security considerations.

In addition to the technical challenges, there’s also a cultural shift that comes with the move toward a more cloud-native development mindset. Many industrial operators are used to control systems that are isolated from the internet and therefore only updated if absolutely necessary, because an update could bring down an entire process. Others are still accustomed to a waterfall model of development, where apps are updated only periodically, and the entire stack is taken down to do so. In either case, switching to a system of modularized applications and continuous delivery is a big transition, if not premature. It’s not always an easy change to make, and it’s common for organizations to find themselves stuck in a transitory state — especially for those that leverage a combination of partner offerings instead of controlling all aspects of app development and technology management in-house.

Net-net, there’s a lot of buzz today around solutions that promise to make cloud-native development and orchestration at the edge as easy as flipping on a switch, but the reality is that most enterprises aren’t ready to “rip and replace,” and therefore need a solution that provides a transition path.

Enter: Project EVE and Kubernetes
Simplifying the transition to a cloud-native IoT edge is the philosophy that underpins Project EVE, an open source project originally donated by ZEDEDA to LF Edge. EVE serves as an edge computing engine that sits directly on top of resource-constrained hardware and makes it possible to run both VMs and containers on the same device — thereby supporting both legacy and cloud-native apps. Additionally, EVE creates a foundation that abstracts hardware complexities, providing a zero-trust security environment and enabling zero-touch provisioning. The goal of EVE is to serve as a comprehensive and flexible foundation for IoT edge orchestration, especially when compared to other approaches which are either too resource intensive, only support containers, or are lacking security measures necessary for distributed deployments.

With EVE as a foundation, the next step is to bridge the gap between Kubernetes from the data center and the needs of the IoT edge. Companies such as Rancher have been doing great work with K3S, taking a “top-down” approach to shrink Kubernetes and extend the benefits to more constrained compute clusters at the edge. At ZEDEDA, we’ve been taking a “bottom-up” approach, working in the open source community to meet operations customers where they’re at today with the goal of then bridging Project EVE to Kubernetes. Through open source collaboration these two approaches will meet in the middle, resulting in a version of Kubernetes that’s lightweight enough to power compute clusters at the IoT edge while still taking the unique challenges around security and scale into account. Kubernetes will then successfully span from the cloud to the IoT edge.

An exciting road ahead, but it will be a journey
Kubernetes will play an important role in the future of edge computing, including for more constrained devices at the IoT edge. As companies look to continually update and reconfigure the applications running on their edge devices, they’ll want the flexibility and portability benefits of containerized app development, scalability, clustering for availability, and so forth. But for now, many companies also need to continue supporting their legacy applications that cannot be run in containers, which is why a hybrid approach between “traditional” and cloud-native development is necessary.

We at ZEDEDA recognize this need for transition, which is why we developed EVE to make sure that enterprises could support both technology approaches. This flexibility affords companies that are still working through the technical and cultural shift to cloud-native development the room to manage that transition.

As cloud-native applications begin to make up a greater percentage of workloads at the edge, though, we’ll need the tools to manage and orchestrate them effectively. This is why we feel it’s important to put a strong effort now behind developing a version of Kubernetes that works for the IoT edge. With EVE, we’ve created a secure, flexible foundation from which to bridge the gap, and we’re excited to continue working with the open source community to extend Kubernetes to this space. For more information on EVE or to get involved, visit the project page on the LF Edge site.

ZEDEDA is an LF Edge member and active leader in Project EVE. For more details about LF Edge members, visit here. Join the LF Edge Slack channel and interact with the project at #eve.

LF Edge Expands Ecosystem with Open Horizon, adds Seven New Members and Reaches Critical Deployment Milestones

Categories: Akraino Edge Stack, Announcement, Baetyl, EdgeX Foundry, Fledge, Home Edge, LF Edge, Open Horizon, Project EVE, State of the Edge

  • Open Horizon, an application and metadata delivery platform, is now part of LF Edge as a Stage 1 (At-Large) Project.
  • New members bring R&D expertise in Telco, Enterprise and Cloud Edge Infrastructure.
  • EdgeX Foundry hits 4.3 million downloads and Akraino R2 delivers 14 validated deployment-ready blueprints.
  • Fledge shares a race car use case optimizing car and driver operations using Google Cloud, Machine Learning and state-of-the-art digital twins and simulators.

SAN FRANCISCO – April 30, 2020 – LF Edge, an umbrella organization under The Linux Foundation that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced continued project momentum with the addition of a new project and several technical milestones for EdgeX Foundry, Akraino Edge Stack and Fledge. Additionally, the project welcomes seven new members to its ecosystem: CloudBrink, Federated Wireless, Industrial Technology Research Institute (ITRI), Kaloom, Ori Industries, Tensor Networks and VoerEir.

Open Horizon, an existing project contributed by IBM, is a platform for managing the service software lifecycle of containerized workloads and related machine learning assets. It enables autonomous management of applications deployed to distributed, web-scale fleets of edge computing nodes and devices without requiring on-premises administrators.

Edge computing brings computation and data storage closer to where data is created by people, places, and things. Open Horizon simplifies the job of getting the right applications and machine learning onto the right compute devices, and keeps those applications running and updated. It also enables the autonomous management of more than 10,000 edge devices simultaneously – that’s 20 times as many endpoints as in traditional solutions.

“We are thrilled to welcome Open Horizon and new members to the LF Edge ecosystem,” said Arpit Joshipura, general manager, Networking, Edge & IoT, the Linux Foundation. “These additions complement our deployment ready LF Edge open source projects and our growing global ecosystem.”

“LF Edge is bringing together some of the most significant open source efforts in the industry,” said Todd Moore, IBM VP Open Technology. “We are excited to contribute the Open Horizon project, as this will expand the work with the other projects and companies to create shared approaches, open standards, and common interfaces and APIs.”

Open Horizon joins LF Edge’s other projects including: Akraino Edge Stack, Baetyl, EdgeX Foundry, Fledge, Home Edge, Project EVE and State of the Edge. These projects support emerging edge applications across areas such as non-traditional video and connected things that require lower latency, faster processing, and mobility. By forming a software stack that brings the best of cloud, enterprise and telecom, LF Edge helps to unify a fragmented edge market around a common, open vision for the future of the industry.

Since its launch last year, LF Edge projects have met significant milestones including:

  • EdgeX Foundry has hit 4.3 million docker downloads.
  • Akraino Edge Stack (Release 2) has 14 specific Blueprints that have all been tested and validated in hardware labs and can be deployed immediately in various industries, including Connected Vehicle, AR/VR, Integrated Cloud Native NFV, Network Cloud and Tungsten Fabric, and SDN-Enabled Broadband Access.
  • Fledge shared a race car use case that optimizes car and driver operations using Google Cloud, machine learning, and state-of-the-art digital twins and simulators.
  • State of the Edge merged under LF Edge earlier this month and will continue to pave the path as the industry’s first open research program on edge computing. Under the umbrella, State of the Edge will continue to produce its assets, including the State of the Edge Reports, the Open Glossary of Edge Computing, and the Edge Computing Landscape.

Support from the Expanding LF Edge Ecosystem

Federated Wireless:

“LF Edge has become a critical point of collaboration for network and enterprise edge innovators in this new cloud-driven IT landscape,” said Kurt Schaubach, CTO, Federated Wireless. “We joined LF Edge to apply our connectivity and spectrum expertise to helping define the State of the Edge, and are energized by the opportunity to contribute to the establishment of next-generation edge compute for the myriad of low-latency applications that will soon be part of private 5G networks.”

Industrial Technology Research Institute (ITRI):

“ITRI is one of the world’s leading technology R&D institutions aiming to innovate a better future for society. Founded in 1973, ITRI has played a vital role in transforming Taiwan’s industries from labor-intensive into innovation-driven. We focus on the fields of Smart Living, Quality Health, and Sustainable Environment. Over the years, we also added a focus on 5G, AI, and Edge Computing related research and development. We joined LF Edge to leverage its leadership in these areas and to collaborate with the more than 75 member companies on projects like Akraino Edge Stack.”

Kaloom:

“Kaloom is pleased to join LF Edge to collaborate with the community on developing open, cloud-native networking, management and orchestration for edge deployments,” said Suresh Krishnan, chief technology officer, Kaloom. “We are working on a unified edge solution in order to optimize the use of resources while meeting the exacting performance, space and energy efficiency needs that are posed by edge deployments. We look forward to contributing our expertise in this space and to collaborating with the other members in LF Edge in accelerating the adoption of open source software, hardware and standards that speed up innovation and reduce TCO.”

Ori Industries:

“At Ori, we are fundamentally changing how software interacts with the distributed hardware on mobile operator networks,” said Mahdi Yahya, Founder and CEO, Ori Industries. “We also know that developers can’t provision, deploy and run applications seamlessly on telco infrastructure. We’re looking forward to working closely with the LF Edge community and the wider open source ecosystem this year, as we turn our attention to developers and opening up access to the distributed telco edge.”

Tensor Networks:

“Tensor Networks believes in and supports open source. Having an arena free from the risks of IP infringement in which to collaborate and develop value that is accessible to more people and organizations is essential to our efforts. Tensor runs its organization and develops products on top of Linux. The visions of LF Edge, where networks and latency are part of open, software-based service composition and delivery, align with our vision of open, fast, smart, secure, connected, and customer-driven opportunities across all industry boundaries.” – Bill Walker, Chief Technology Officer.

VoerEir:

“In our extensive work with industry leaders for NFVI/VIM test and benchmarking, a need to standardize infrastructure KPIs in Edge computing has gradually become more important,” said Arif Khan, Co-Founder of VoerEir AB. “This need has made it essential for us to join LF Edge and to initiate the new Feature Project “Kontour” under the Akraino umbrella. We are excited to collaborate with various industry leaders to define, standardize and measure Edge KPIs.”

About The Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###

Securing the IoT Edge (Part 2)

By Blog, Project EVE

Written by Jason Shepherd, LF Edge member, VP of Ecosystem for Zededa and active leader in Project EVE

This post originally ran on the Zededa Medium blog. Click here for more articles like this one. 

The computing landscape has long observed a swing between centralized and distributed architectures, from the mainframe to client-server to the cloud. The next generation of computing is now upon us, representing both a return to the familiar distributed model and a breakthrough in rethinking how we handle data. Many of the security lessons we’ve learned from past paradigms are applicable, yet the edge also brings unique challenges. In part 1 of this blog series, we covered some of the characteristics that make security different at the edge compared to the cloud. In this blog, we’ll be going over ten baseline recommendations for securing IoT edge deployments.

Coined by former Forrester analyst John Kindervag, the “zero trust” mindset is rooted in the assumption that the network is hostile. This means that every individual or device — inside or outside of the network perimeter — trying to access the network must be authenticated and all downloaded updates verified, because nothing can be trusted.

Key principles of zero trust security

At the foundation of your security approach should be a trust anchor in your edge devices based on a root of trust at the silicon level (e.g., Trusted Platform Module, or TPM). Due to fragmentation in edge hardware, as much support as possible for this trust anchor should be abstracted into the software layer and exposed to your applications through APIs. This trust anchor should be the foundation for key functions such as device identification and authentication, secure and measured boot, encryption, application updates, and so forth.
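
To make the measured-boot idea concrete, here is a minimal, purely illustrative Python sketch of how a chain of trust accumulates measurements the way a TPM PCR does. A real device would use actual TPM hardware (for example, via tpm2-tools or a TPM library); the register here is simulated in software and the stage names are invented for the example.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """Mimic a TPM PCR extend: new = SHA-256(old_pcr || SHA-256(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# PCRs start at a well-known all-zero value.
pcr = bytes(32)

# Each boot stage measures the next component before handing off control.
for stage in (b"bootloader image", b"hypervisor image", b"kernel image"):
    pcr = extend(pcr, stage)

# The final value attests the exact software stack that booted; a verifier
# compares it against an expected "golden" measurement.
print(pcr.hex())
```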

The massive distributed denial of service (DDoS) attack that leveraged the Mirai botnet and took down a portion of the internet in 2016 involved millions of cameras that shared a very small number of common credentials. During the setup of these devices, their credentials either could not be changed or were not changed because it was easier to use the factory default. What can we take away from this incident? Rather than relying on field technicians or end users to change and manage countless edge device passwords, leverage solutions that automatically create and store credentials in the trust anchor based on a unique device ID during a zero-touch provisioning process. Field technicians should then only be able to access the device through a central controller. Additionally, establish the ability to set policies in your network that allow you to remotely disable any unused physical ports on edge devices in order to prevent unauthorized installation of software.
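
As a rough illustration of that provisioning flow, the hypothetical Python sketch below (using the `cryptography` package) generates a per-device key pair, derives a unique device ID from the public key, and enrolls it with a controller. In a real system the private key would be generated inside the trust anchor and never leave it, and `register_with_controller` is a placeholder, not a real API.

```python
import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real device this key pair is generated inside the TPM and the private
# half never leaves it; here it lives in memory purely for illustration.
private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# The device identity is derived from the public key, so no factory-default
# password ever exists to be left unchanged.
device_id = hashlib.sha256(public_bytes).hexdigest()

def register_with_controller(device_id: str, public_key: bytes) -> None:
    """Placeholder for enrolling the device with the central controller over TLS."""
    print(f"registering {device_id[:16]}... ({len(public_key)}-byte key)")

register_with_controller(device_id, public_bytes)
```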

Leveraging the key provided by your trust anchor, encrypt data both at rest on your edge devices and in motion across the network. Deploy compute immediately upstream of resource-constrained edge devices and legacy systems to encrypt data when those devices can’t do it themselves.
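
A minimal sketch of the at-rest half of this, assuming the `cryptography` package is available: in practice the key would be unsealed from the device’s trust anchor rather than generated in memory, and data in motion would additionally travel over TLS/mTLS.

```python
from cryptography.fernet import Fernet

# In production the key would be unsealed from the device's trust anchor,
# never generated and held in plain memory like this.
key = Fernet.generate_key()
cipher = Fernet(key)

reading = b'{"sensor": "pump-7", "psi": 82.4}'
token = cipher.encrypt(reading)   # store the ciphertext on disk, not the plaintext
assert cipher.decrypt(token) == reading
```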

With a growing number of devices at the periphery of your network, it’s more important than ever that you have full visibility into user activity, device location and status, and the routes your data is traveling between devices and your on-prem and cloud systems. Be sure to regularly review role-based access to make sure only the users who need access have it, and that this access is based on real-time context as part of your zero-trust strategy.

[Figure: Network flow log in ZEDEDA’s controller]

The U.S. Department of Homeland Security estimates that as many as 85 percent of targeted attacks are preventable because they exploit unpatched software. These updates need to be signed by a trusted authority and verified on the device against the corresponding public keys stored in its trust anchor. Given the implications of downtime in an operational technology (OT) environment, it’s important to enable the scheduling of vulnerability updates during maintenance windows. Also key is to have rollback capabilities in the event of failed updates, so that devices aren’t bricked in the field, which can take down a mission-critical process or result in an expensive trip to a remote location. Software should have extended support, offering the ability to patch applications and the underlying runtime for 5 to 7 (or more) years.
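
To illustrate the verify-before-apply flow (a generic sketch, not EVE’s actual update mechanism), here is a small Python example using Ed25519 signatures from the `cryptography` package; both keys are generated inline so the example is self-contained, and the A/B-slot rollback is only narrated in comments.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The signing key stays with the trusted authority's build servers; only the
# matching public key is provisioned onto devices.
vendor_key = Ed25519PrivateKey.generate()
device_trusted_pubkey = vendor_key.public_key()

update_image = b"firmware v2.1 contents..."
signature = vendor_key.sign(update_image)  # produced at release time

def apply_update(image: bytes, sig: bytes) -> None:
    try:
        device_trusted_pubkey.verify(sig, image)  # raises if tampered with
    except InvalidSignature:
        print("rejecting update; keeping the current slot so rollback is trivial")
        return
    print("signature OK; writing to the inactive A/B slot, then switching over")

apply_update(update_image, signature)
```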

Consider solutions that leverage machine learning to assess the steady state of your deployments and alert for anomalies, whether it be unusual network activity, signs of malware, or other indicators. For example, had active threat analytics been applied at the edge in the 2016 Mirai attack, the unusual network traffic could have been addressed at the source rather than snowballing into a much bigger problem. Consult with experts that understand the unique needs of OT-specific protocols — this includes defining what normal behaviors are and how to gracefully shut down processes in the case of any detected attack.
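
As a toy example of this kind of steady-state baselining, the sketch below trains an Isolation Forest (scikit-learn) on invented flow features and flags a Mirai-style traffic burst; real deployments would use far richer features and OT-aware models.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Steady-state training data: [packets/sec, bytes/sec, distinct destination ports]
baseline = np.array([
    [12, 900, 2], [15, 1100, 2], [11, 870, 3], [14, 1000, 2], [13, 950, 2],
])
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A Mirai-style burst: a huge packet rate fanning out to many destinations.
flows = np.array([[13, 940, 2], [5000, 400000, 120]])
for flow, label in zip(flows, model.predict(flows)):
    print(flow, "anomalous" if label == -1 else "normal")
```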

It takes a village to develop and deploy IoT and edge computing solutions, with multiple parties coming together to span the necessary technologies and domain expertise. It’s key to invest in tools for securing and managing your infrastructure that are consistent regardless of the applications and domain expertise applied on top. Leveraging purpose-built, open edge orchestration frameworks that support cloud-native development and have clearly defined APIs provides a transparent mechanism for getting all stakeholders on the same page, regardless of the combination of ingredients used in a given deployment.

It’s important to strike a balance between locking a solution down and making it usable across the various stakeholders involved. Many of the breaches we hear of in the consumer space happen because developers prioritized instant gratification and usability over security. This is where capabilities such as zero-touch provisioning are key, eliminating the need for expertise and awareness to securely onboard devices.

Security is about defense in depth, applying the right tools in layers based on security posture and risk. This includes utilizing segmentation when possible — while a zero-trust mindset eliminates a perimeter-based focus, micro-segmentation is still important to isolate critical networks and devices, especially legacy systems. Further augment your zero trust model with distributed firewall software to govern access across nodes on internal networks.
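
A deliberately simplified sketch of the default-deny, allow-list style of micro-segmentation policy described above; the segment names and ports are invented for illustration.

```python
# Default-deny: a flow is permitted only if an explicit rule allows the
# (source segment, destination segment, port) tuple.
ALLOWED = {
    ("sensors", "historian", 8883),   # MQTT over TLS to the local historian
    ("historian", "cloud-gw", 443),   # only the historian syncs upstream
}

def permit(src: str, dst: str, port: int) -> bool:
    return (src, dst, port) in ALLOWED

print(permit("sensors", "historian", 8883))  # True
print(permit("sensors", "cloud-gw", 443))    # False: no lateral path to the cloud
```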

Not all edges are created equal; for organizations looking to implement edge computing, it’s important to first understand the unique challenges of securing and managing compute located outside of the confines of a traditional data center. However, adopting a distributed model for compute efficiency doesn’t need to bring tradeoffs in security. Being aware of the considerations that exist at the edge will help organizations be better equipped to protect field deployments and reap the benefits of edge computing. At ZEDEDA, we build on a foundation that considers all the points above to enable enterprises to securely orchestrate IoT edge deployments with their preferred devices, applications and clouds.

ZEDEDA is an LF Edge member and an active leader in Project EVE. For more details about LF Edge members, visit here. For more details about Project EVE, visit the project page.