Xen on Raspberry Pi 4 with Project EVE

By Blog, Project EVE

Written by Roman Shaposhnik, Project EVE lead and Co-Founder & VP of Products and Strategy at ZEDEDA, and Stefano Stabellini, Principal Engineer for Xilinx

Originally posted on Linux.com, this article shares the Xen Project's announcement that the Xen hypervisor now runs on the Raspberry Pi. This is an exciting step for both hobbyists and industry. Read on to learn how Xen came to run on the RPi and how to get started.

Raspberry Pi (RPi) has been a key enabling device for the Arm community for years, given its low price and widespread adoption. According to the RPi Foundation, over 35 million have been sold, with 44% of these sold into industry. We have always been eager to get the Xen hypervisor running on it, but technical differences between the RPi and other Arm platforms made it impractical for the longest time: specifically, a non-standard interrupt controller with no virtualization support.

Then the Raspberry Pi 4 came along, together with a regular GIC-400 interrupt controller that Xen supports out of the box. Finally, we could run Xen on an RPi device. Soon Roman Shaposhnik of Project EVE and a few other community members started asking about it on the xen-devel mailing list. “It should be easy,” we answered. “It might even work out of the box,” we wrote in our reply. Little did we know that we were about to embark on an adventure deep in the belly of the Xen memory allocator and the Linux address translation layers.

The first hurdle was the availability of low memory addresses. RPi4 has devices that can only access the first 1GB of RAM. The amount of memory below 1GB in Dom0 was not enough. Julien Grall solved this problem with a simple one-line fix to increase the memory allocation below 1GB for Dom0 on RPi4. The patch is now present in Xen 4.14.

“This lower-than-1GB limitation is uncommon, but now that it is fixed, it is just going to work.” We were wrong again. The Xen subsystem in Linux uses virt_to_phys to convert virtual addresses to physical addresses, which works for most virtual addresses but not all. It turns out that the RPi4 Linux kernel would sometimes pass virtual addresses that cannot be translated to physical addresses using virt_to_phys, and doing so would result in serious errors. The fix was to use a different address translation function when appropriate. The patch is now present in Linux’s master branch.

We felt confident that we had finally reached the end of the line. “Memory allocations – check. Memory translations — check. We are good to go!” No, not yet. It turns out that the most significant issue was yet to be discovered. The Linux kernel has always had the concept of physical addresses and DMA addresses, where DMA addresses are used to program devices and could be different from physical addresses. In practice, none of the x86, ARM, and ARM64 platforms where Xen could run had DMA addresses different from physical addresses. The Xen subsystem in Linux exploits the DMA/physical address duality for its own address translations: it uses it to convert physical addresses, as seen by the guest, to physical addresses, as seen by Xen.

To our surprise and astonishment, the Raspberry Pi 4 was the very first platform to have physical addresses different from DMA addresses, causing the Xen subsystem in Linux to break. It wasn’t easy to narrow down the issue. Once we understood the problem, a dozen patches later, we had full support for handling DMA/physical address conversions in Linux. The Linux patches are in master and will be available in Linux 5.9.

Solving the address translation issue was the end of our fun hacking adventure. With the Xen and Linux patches applied, Xen and Dom0 work flawlessly. Once Linux 5.9 is out, we will have Xen working on RPi4 out of the box.

We will show you how to run Xen on the RPi4 in two ways: the real Xen-hacker way, and as part of a downstream distribution for a much easier end-user experience.

HACKING XEN ON RASPBERRY PI 4

If you intend to hack on Xen on ARM and would like to use the RPi4 to do it, here is what you need to do to get Xen up and running using UBoot and TFTP. I like to use TFTP because it makes it extremely fast to update any binary during development. See this tutorial on how to set up and configure a TFTP server. You also need a UART connection to get early output from Xen and Linux; please refer to this article.
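As a rough illustration of the TFTP server side (assuming a Debian-style host running dnsmasq; the config location and serving directory are conventions, not requirements), a minimal configuration could look like:

```
# /etc/dnsmasq.d/tftp.conf -- hypothetical minimal TFTP-only setup
enable-tftp
tftp-root=/srv/tftp
```

Restart dnsmasq after creating the file, and place the binaries you want to serve (Xen, the kernel, the device tree, boot2.scr) under /srv/tftp.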

Use the rpi-imager to format an SD card with the regular default Raspberry Pi OS. Mount the first SD card partition and edit config.txt. Make sure to add the following:

 

    kernel=u-boot.bin
    enable_uart=1
    arm_64bit=1

Download a suitable UBoot binary for the RPi4 (u-boot.bin) from any distro, for instance openSUSE. Download the JeOS image, then open it and extract u-boot.bin:

    xz -d openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw.xz
    kpartx -a ./openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw
    mount /dev/mapper/loop0p1 /mnt
    cp /mnt/u-boot.bin /tmp
    umount /mnt
    kpartx -d ./openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw

Place u-boot.bin in the first SD card partition together with config.txt. Next time the system boots, you will get a UBoot prompt that allows you to load Xen, the Linux kernel for Dom0, the Dom0 rootfs, and the device tree from a TFTP server over the network. I automated the loading steps by placing a UBoot boot.scr script on the SD card:

    setenv serverip 192.168.0.1
    setenv ipaddr 192.168.0.2
    tftpb 0xC00000 boot2.scr
    source 0xC00000

Where:

– serverip is the IP address of your TFTP server

– ipaddr is the IP address of the RPi4

Use mkimage to generate boot.scr and place it next to config.txt and u-boot.bin:

   mkimage -T script -A arm64 -C none -a 0x2400000 -e 0x2400000 -d boot.source boot.scr

Where:

– boot.source is the input

– boot.scr is the output

UBoot will automatically execute the provided boot.scr, which sets up the network and fetches a second script (boot2.scr) from the TFTP server. boot2.scr should come with all the instructions to load Xen and the other required binaries. You can generate boot2.scr using ImageBuilder.
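For reference, ImageBuilder is driven by a small config file; a sketch of what one might contain for this setup follows (the binary file names here are illustrative placeholders, and the exact variable names should be checked against the ImageBuilder documentation):

```
# Illustrative ImageBuilder config -- binary names are placeholders
MEMORY_START="0x0"
MEMORY_END="0x40000000"
XEN="xen"
DOM0_KERNEL="Image"
DOM0_RAMDISK="dom0-rootfs.cpio.gz"
DEVICE_TREE="bcm2711-rpi-4-b.dtb"
UBOOT_SOURCE="boot2.source"
UBOOT_SCRIPT="boot2.scr"
```

ImageBuilder's uboot-script-gen script then turns a config like this into a boot2.scr that loads each listed binary over TFTP.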

Make sure to use Xen 4.14 or later. The Linux kernel should be master (or 5.9 once it is out; 5.9-rc4 works). The Linux ARM64 default config works fine as the kernel config. Any 64-bit rootfs should work for Dom0. Use the device tree that comes with upstream Linux for the RPi4 (arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb). The RPi4 has two UARTs; the default is bcm2835-aux-uart at address 0x7e215040. It is specified as “serial1” in the device tree instead of serial0. You can tell Xen to use serial1 by specifying the following on the Xen command line:

  console=dtuart dtuart=serial1 sync_console

The Xen command line is provided by the boot2.scr script generated by ImageBuilder as “xen,xen-bootargs”. After editing boot2.source, you can regenerate boot2.scr with mkimage:

 mkimage -A arm64 -T script -C none -a 0xC00000 -e 0xC00000 -d boot2.source boot2.scr
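For context, the generated boot2.source passes the command line to Xen through the /chosen node of the device tree using UBoot's fdt commands; the relevant portion looks roughly like this (addresses and variable names are illustrative):

```
# Sketch: set Xen's and Dom0's command lines in the device tree, then boot
fdt addr ${fdt_addr}
fdt resize 1024
fdt set /chosen xen,xen-bootargs "console=dtuart dtuart=serial1 sync_console"
fdt set /chosen xen,dom0-bootargs "console=hvc0"
booti ${xen_addr} - ${fdt_addr}
```

The xen,xen-bootargs and xen,dom0-bootargs properties are how Xen's ARM device tree boot protocol distinguishes the hypervisor's command line from Dom0's.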

XEN ON RASPBERRY PI 4: AN EASY BUTTON

Getting your hands dirty by building and booting Xen on Raspberry Pi 4 from scratch can be not only deeply satisfying but also a great way to gain insight into how everything fits together on ARM. Sometimes, however, you just want to get a quick taste of what it would feel like to have Xen on this board. This is typically not a problem for Xen, since pretty much every Linux distribution provides Xen packages, and having a fully functional Xen running on your system is a mere “apt” or “zypper” invocation away. However, given that Raspberry Pi 4 support is only a few months old, the integration work hasn’t been done yet. The only operating system with fully integrated and tested support for Xen on Raspberry Pi 4 is LF Edge’s Project EVE.

Project EVE is a secure-by-design operating system that supports running Edge Containers on compute devices deployed in the field. These devices can be IoT gateways, Industrial PCs, or general-purpose ruggedized computers. All applications running on EVE are represented as Edge Containers and are subject to container orchestration policies driven by k3s. Edge containers themselves can encapsulate Virtual Machines, Containers, or Unikernels.

You can find more about EVE on the project’s website at http://projecteve.dev and in its GitHub repo. The latest instructions for creating bootable media for Raspberry Pi 4 are available at:

https://github.com/lf-edge/eve/blob/master/docs/README.md

Because EVE publishes fully baked downloadable binaries, using it to give Xen on Raspberry Pi 4 a try is as simple as:

$ docker pull lfedge/eve:5.9.0-rpi-xen-arm64 # you can pick a different 5.x.y release if you like

$ docker run lfedge/eve:5.9.0-rpi-xen-arm64 live > live.raw

This is followed by flashing the resulting live.raw binary onto an SD card using your favorite tool.
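As a sketch of that flashing step (assuming GNU dd is available; /dev/sdX below is a placeholder for your SD card's device node, so double-check it with lsblk before writing):

```shell
# Copy a raw image to a target device or file.
# WARNING: dd overwrites the target completely -- verify the device name first.
flash_image() {
    dd if="$1" of="$2" bs=4M conv=fsync status=none
}

# Example (replace /dev/sdX with your actual SD card device):
# flash_image live.raw /dev/sdX
```

conv=fsync forces the data to be flushed to the card before dd exits, so it is safe to remove the card once the command returns.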

Once those steps are done, you can insert the card into your Raspberry Pi 4, connect the keyboard and the monitor and enjoy a minimalistic Linux distribution (based on Alpine Linux and Linuxkit) that is Project EVE running as Dom0 under Xen.

As far as Linux distributions go, EVE presents a somewhat novel design for an operating system, but at the same time, it is heavily inspired by ideas from Qubes OS, ChromeOS, CoreOS, and SmartOS. If you want to take it beyond simple console tasks and explore how to run user domains on it, we recommend heading over to EVE’s sister project Eden: https://github.com/lf-edge/eden#raspberry-pi-4-support and following the short tutorial over there.

If anything goes wrong, you can always find an active community of EVE and Eden users on LF Edge’s Slack channels starting with #eve over at http://lfedge.slack.com/ — we’d love to hear your feedback.

In the meantime – happy hacking!

Pushing AI to the Edge (Part Two): Edge AI in Practice and What’s Next

By Blog, LF Edge, Project EVE, Trend

Q&A with Jason Shepherd, LF Edge Governing Board member, Project EVE leader and VP of Ecosystem at ZEDEDA


This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

In Part One of this two-part Q&A series we highlighted the new LF Edge taxonomy that publishes next week and some key considerations for edge AI deployments. In this installment our questions turn to emerging use cases and key trends for the future.

To discuss more on this budding space, we sat down with our Vice President of ecosystem development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice and some trends for the future.

What do you see as the most promising use cases for edge AI?

As highlighted in Part One, the reasons for deploying AI at the edge include balancing needs across the vectors of scalability, latency, bandwidth, autonomy, security and privacy. In a perfect world all processing would be centralized; however, this breaks down in practice, and the need for AI (and ML) at the edge will only continue to grow with the explosion of devices and data.

Hands down, computer vision is the killer app for edge AI today due to the bandwidth associated with streaming video. The ability to apply AI to “see” events in the physical world enables immense opportunity for innovation in areas such as object recognition, safety and security, quality control, predictive maintenance and compliance monitoring.

Considering retail — computer vision solutions will usher in a new wave of personalized services in brick and mortar stores that provide associates with real-time insights on current customers in addition to better informing longer term marketing decisions. Due to privacy concerns, the initial focus will be primarily around assessing shopper demographics (e.g., age, gender) and location but increasingly we’ll see personalized shopping experiences based on individual identity with proper opt-in (often triggered through customer loyalty programs). This includes a trend for new “experiential” shopping centers, for which customers expect to give up some privacy when they walk in the door in exchange for a better experience.

While Amazon Go stores have led the trend for autonomous shopping environments, the use of computer-vision enabled self-service kiosks for grab-and-go checkout is growing rapidly overall. Given the recent health concerns with COVID-19, providers are rapidly shifting to making these solutions contactless by leveraging gesture control, instead of requiring interaction with a keypad or touch screen.

Computer vision use cases will often leverage sensor fusion, for example with barcode scans or radio-frequency identification (RFID) technology providing additional context for decision making in retail inventory management and point of sale (POS) systems. A camera can tell the difference between a T-shirt and a TV, but not the difference between large and medium sizes of the same shirt design. Still, perhaps eventually an AI model will be able to tell you if you have bad taste in clothing!

Another key vertical that will benefit from computer vision at the edge is healthcare. At ZEDEDA, we’ve worked with a global provider that leverages AI models in their medical imaging machines, located within hospitals and provided as a managed service. In this instance, the service provider doesn’t own the network on which their machines are deployed so they need a zero-trust security model in addition to the right tools to orchestrate their hardware and software updates.

Another example where bandwidth drives a need for deploying AI at the IoT Edge is vibration analysis as part of a use case like predictive maintenance. Here sampling rates of at least 1 kHz are common and can increase to 8–10 kHz and beyond, because these higher resolutions improve visibility into impending machine failures. This represents a significant amount of continuously streaming data that is cost-prohibitive to send directly to a centralized data center for analysis. Instead, inferencing models will commonly be deployed on compute hardware proximal to machines to analyze the vibration data in real time, backhauling only the events that highlight an impending failure.

Analysis for predictive maintenance will also commonly leverage sensor fusion by combining this vibration data with measurements for temperature and power (voltage and current). Computer vision is also increasingly being used for this use case, for example the subtle wobble of a spinning motor shaft can be detected with a sufficient camera resolution, plus heat can be measured with thermal imaging sensors. Meanwhile, last I checked voltage and current can’t be checked with a camera!

An example of edge AI served up by the Service Provider Edge is for cellular vehicle-to-everything (C-V2X) use cases. While latency-critical workloads such as controlling steering and braking will always run inside a vehicle, service providers will leverage AI models deployed on compute proximal to small cells in a 5G network within public infrastructure to serve up infotainment, Augmented Reality for vehicle heads-up displays, and traffic coordination. For the latter, these AI models can warn two cars that they are approaching a potentially dangerous situation at an intersection, and even alert nearby pedestrians via their smartphones. As we continue to collaborate on foundational frameworks that support interoperability, it will open up possibilities to leverage more and more sensor fusion that bridges intelligence across different edge nodes, helping drive even more informed decisions.

We’re also turning to processing at the edge to minimize data movement and preserve privacy. When AI inferencing models are shrunk for use in constrained connected products or healthcare wearables, we can train local inferencing models to redact PII before data is sent to centralized locations for deeper analysis.

Who are the different stakeholders involved in accelerating adoption of edge AI?

Edge AI requires the efforts of a number of different industry players to come together. We need hardware OEMs and silicon providers for processing; cloud scalers to provide tools and datasets; telcos to manage the connectivity piece; software vendors to help productize frameworks and AI models; domain expert system integrators to develop industry-specific models, and security providers to ensure the process is secure.

In addition to having the right stakeholders, it’s about building an ecosystem based on common, open frameworks for interoperability, with investment focused on the value add on top. Today there is a plethora of choices for platforms and AI tool sets, which is confusing, but this reflects the state of the market more than necessity. A key point of efforts like LF Edge is to work in the open source community to build more open, interoperable, consistent and trusted infrastructure and application frameworks so developers and end users can focus on the surrounding value add. Throughout the history of technology, open interoperability has always won out over proprietary strategies when it comes to scale.

In the long run, the most successful digital transformation efforts will be led by organizations that have the best domain knowledge, algorithms, applications and services, not those that reinvent foundational plumbing. This is why open source software has become such a critical enabler across enterprises of all sizes — facilitating the creation of de-facto standards and minimizing “undifferentiated heavy lifting” through a shared technology investment. It also drives interoperability which is key for realizing maximum business potential in the long term through interconnecting ecosystems… but that’s another blog for the near future!

How do people differentiate with AI in the long term?

Over time, AI software frameworks will become more standardized as part of foundational infrastructure, and the algorithms, domain knowledge and services on top will be where developers continue to meaningfully differentiate. We’ll see AI models for common tasks — for example assessing the demographics of people in a room, detecting license plate numbers, recognizing common objects like people, trees, bicycles and water bottles — become commodities over time. Meanwhile, programming to specific industry contexts (e.g. a specific part geometry for manufacturing quality control) will be where value is continually added. Domain knowledge will always be one of the most important aspects of any provider’s offering.

What are some additional prerequisites for making edge AI viable at scale?

In addition to having the right ecosystem including domain experts that can pull solutions together, a key factor for edge AI success is having a consistent delivery or orchestration mechanism for both compute and AI tools. The reality is that to date many edge AI solutions have been lab experiments or limited field trials, not yet deployed and tested at scale. PoC, party of one, your table is ready!

Meanwhile, as organizations start to scale their solutions in the field they quickly realize the challenges. From our experience at ZEDEDA, we consistently see that manual deployment of edge computing using brute-force scripting and command-line interface (CLI) interaction becomes cost-prohibitive for customers at around 50 distributed nodes. In order to scale, enterprises need to build on an orchestration solution that takes into account the unique needs of the distributed IoT edge in terms of diversity, resource constraints and security, and helps admins, developers and data scientists alike keep tabs on their deployments in the field. This includes having visibility into any potential issues that could lead to inaccurate analyses or total failure. Further, it’s important that this foundation is based on an open model to maximize potential in the long run.

Where is edge AI headed?

To date, much of the exploration involving AI at the edge has been focused on inferencing models — deployed after these algorithms have been trained with the scalable compute of the cloud. (P.S. for those of you who enjoy a good sports reference, think of training vs. inference as analogous to coaching vs. playing).

Meanwhile, we’re starting to see training and even federated learning selectively moving to the Service Provider and User Edges. Federated learning is an evolving space that seeks to balance the benefits of decentralization for reasons of privacy, autonomy, data sovereignty and bandwidth savings, while centralizing results from distributed data zones to eliminate regional bias.

The industry is also increasingly developing purpose-built silicon that can increase efficiencies amid the power and thermal constraints of small devices and even support either training or inference; this corresponds with the shift towards pushing more and more AI workloads onto edge devices. Because of this, it’s important to leverage device and application orchestration tools that are completely agnostic to silicon, compared to offerings from silicon makers that have a vested interest in locking you into their ecosystem.

Finally, we’ll see the lower boundary for edge AI increasingly extend into the Constrained Device Edge with the rise of “Tiny ML” — the practice of deploying small inferencing models optimized for highly constrained, microcontroller-based devices. An example of this is the “Hey Alexa” of an Amazon Echo that is recognized locally and subsequently opens the pipe to the cloud-based servers for a session. These Tiny ML algorithms will increasingly be used for localized analysis of simple voice and gesture commands, common sounds such as a gunshot or a baby crying, assessing location and orientation, environmental conditions, vital signs, and so forth.

To manage all of this complexity at scale, we’ll lean heavily on industry standardization, which will help us focus on value on top of common building blocks. Open source AI interoperability projects, such as ONNX, show great promise in helping the industry coalesce around a format so that others can focus on developing and moving models across frameworks and from cloud to edge. The Linux Foundation’s Trust over IP effort and emerging Project Alvarium will also help ease the process of transporting trusted data from devices to applications. This notion of pervasive data trust will lead to what I call the “Holy Grail of Digital” — selling and/or sharing data resources and services to/with people you don’t even know. Now this is scale!

In Closing

As the edge AI space develops, it’s important to avoid being locked into a particular tool set, instead opting to build a future-proofed infrastructure that accommodates a rapidly changing technology landscape and that can scale as you interconnect your business with other ecosystems. Here at ZEDEDA, our mission is to provide enterprises with an optimal solution for deploying workloads at the IoT Edge where traditional data center solutions aren’t applicable, and we’re doing it based on an open, vendor-neutral model that provides freedom of choice for hardware, AI framework, apps and clouds. We’re even integrating with major cloud platforms such as Microsoft Azure to augment their data services.

Reach out if you’re interested in learning more about how ZEDEDA’s orchestration solution can help you deploy AI at the IoT Edge today while keeping your options open for the future. We also welcome you to join us in contributing to Project EVE within LF Edge which is the open source foundation for our commercial cloud offering. The goal of the EVE community is to build the “Android of the IoT Edge” that can serve as a universal abstraction layer for IoT Edge computing — the only foundation you need to securely deploy any workload on distributed compute resources. To this end, a key next step for Project EVE is to extend Kubernetes to the IoT Edge, while taking into account the unique needs of compute resources deployed outside of secure data centers.

The success of AI overall — and especially edge AI — will require our concerted collaboration and alignment to move the industry forward while protecting us from potential misuse along the way. The future of technology is about open collaboration on undifferentiated plumbing so we can focus on value and build increasingly interconnected ecosystems that drive new outcomes and revenue streams. As one political figure famously said — “it takes a village!”

If you have questions or would like to chat with leaders in the project, join us on the LF Edge Slack (#eve or #eve-help) or subscribe to the email list. You can check out the documentation here.

Onboard edge computing devices with Secure Device Onboard and Open Horizon

By Blog, Open Horizon, Secure Device Onboard

Written by Joe Pearson, Chair of the Open Horizon TSC and Technology Strategist for IBM

For many companies, setting up heterogeneous fleets of edge devices across remote sites has traditionally been a time-consuming and sometimes difficult process. At the Linux Foundation’s Open Networking & Edge Summit conference last month, IBM announced that Intel’s Secure Device Onboard (SDO) solution is now fully integrated into Open Horizon and IBM Edge Application Manager and available to developers as a tech preview.

The Intel-developed SDO enables low-touch bootstrapping of the required software at a device’s initial power-on. For the Open Horizon project, this enables the agent software to be installed and configured automatically and autonomously. SDO technology is now being incorporated into a new industry onboarding standard being developed by the FIDO Alliance.

Developers can try this out by using the all-in-one version of Open Horizon. Simply run a one-line script on a target edge compute device or VM and simulate powering-up an SDO-enabled device and its onboarding.

Both Open Horizon and SDO recently joined the LF Edge umbrella, which aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system. Thanks to IBM’s participation in the LF Edge open source community, contributors in the community are helping advance the future of open edge computing solutions.


Simplifying edge device onboarding

Our team uses the term “device onboarding” to describe the initial bootstrapping process of installing and configuring required software on an edge computing device. In the case of Open Horizon, that includes connecting it to the Horizon management hub services. We have simplified the software installation process by providing a one-liner script, so that a person can install and run a development version of Open Horizon and SDO on a laptop or in a small virtual machine.

Before SDO was available, the typical installation process required a person to open a secure connection to the device (sometimes on premises), manually install all of the software pre-requisites, then install the Horizon agent, configure it, and register it with the management hub. With SDO support enabled, an administrator simply loads the voucher into the management hub when the device is purchased and then associates the configuration. When a technician powers on the device and connects it to the network, the device automatically finds the SDO services, presents the voucher, and downloads and installs the software automatically.

Integrating SDO into Open Horizon

The Open Horizon project has created a repository specifically for integrating the SDO project components into the Open Horizon management hub services and CLI. The SDO rendezvous service runs alongside the management hub and provides a simple interface for bulk-loading import vouchers.

LF Edge and open source leadership

LF Edge continues to strive to ensure that edge computing solutions remain open. In May 2020, IBM contributed Open Horizon to LF Edge. With Intel also contributing SDO to LF Edge, this ensures that another vital component of a complete edge computing framework is also open source.

We’re excited to collaborate with Intel to expand the deployment of applications from open hybrid cloud environments down to the edge, making them accessible, secure, and scalable for the developer ecosystem and community. For more videos about Open Horizon, please visit LF Edge’s YouTube channel or view the LF Edge Open Horizon Playlist. If you have questions or would like to chat with leaders in the project, join us on the LF Edge Slack (#open-horizon, #open-horizon-docs, #sdo-general or #sdo-tsc).

EdgeX Foundry Challenge Shanghai 2020 was a success!

By Blog, EdgeX Foundry, Event

Written by the co-chairs of the EdgeX Foundry China Project: Melvin Sun, Senior Marketing Manager for the IoT Group at Intel, and Gavin Lu, R&D Director, OCTO at VMware

On Thursday, September 24, after more than 2 months of fierce competition, the EdgeX Foundry Challenge Shanghai 2020 culminated in the final roadshow. Companies, colleges, hacker teams and enthusiastic audiences from around the world participated in the live video broadcast.

Yingjie Lu, Senior Director of the IoT Group at Intel Corporation, and Jim White, CTO of IOTech Systems and Chairman of the EdgeX Foundry Technical Steering Committee, delivered the opening and closing speeches for the final roadshow, respectively. Mr. Peng Jianzhen, Secretary General of CCFA (China Chain-store & Franchise Association), and Mr. Shu Bin, an expert from China State Grid, made summarizing remarks on the commerce and industrial track teams, respectively.

The EdgeX Challenge Shanghai 2020 is an international hackathon event based on EdgeX Foundry. It was successfully kicked off as a joint effort among EdgeX community members Canonical, Dell Technologies, HP, Intel, IOTech, Tencent, Thundersoft, and VMware, together with the Shanghai startup incubator Innospace.

In the age of the Internet of Things, collaboration is the foundation of all technological and business innovation. And such a spirit can be spread and practiced through the EdgeX community. Today, the EdgeX community members have collaborated extensively across the globe to drive technological development and business prosperity in the global IoT industry.

Forty teams submitted proposals covering both the commerce and industrial tracks, including retail, asset tracking, manufacturing, environmental protection and energy conservation, utilities, and building automation. The cutting-edge technologies adopted in the entries, such as edge computing, artificial intelligence, 5G, sensors, blockchain, satellite communications, the Internet of Things, and big data, fully demonstrate the strength of innovation and foreshadow the development and transformation of future life.

After careful selection by the judges, a total of 15 teams were named finalists on July 24. Using devkits provided by Intel and the EdgeX challenge committee, these teams built use cases in various verticals, solving industry pain points and developing deployable applications. The expert judges followed the teams’ development process closely, supporting them with daily Q&As, weekly discussions, and mid-term checkpoint meetings.

The hackathon ran smoothly in an intense yet methodical atmosphere, backed by strong support. By mid-September, the 15 teams had submitted their final projects, culminating in the finals. Click here to watch a snapshot video of the EdgeX Challenge finalists preparing for the competition.

List of teams for roadshow:

Commerce Track

Sequence | Team code | Team name | Use case | Time
1 | c18 | HYD Miracle Innovation | Apparel store edge computing | 9:12
2 | c27 | DreamTech | Mortgage assets monitoring based on blockchain + edge computing | 9:24
3 | c16 | USST,YES! | Retail store edge | 9:36
4 | c19 | Breakthrough Studio | Scenic intelligent navigation | 9:48
5 | c26 | i-Design | Edge-cloud based far-ocean intelligent logistics/monitoring | 10:00
6 | c07 | Starlink team from JD.com | Retail store heatmap & surveillance | 10:12
7 | c03 | AI Cooling | Building automation | 10:24
8 | c08 | Sugarinc Intelligence | Smart building & office management | 10:36
9 | c10 | Five Card Stud fighting force | Automobile 4S (sales, service, spare parts, survey) store | 10:48

Industrial Track

| Sequence | Team Code | Team Name | Use Case | Time |
|----------|-----------|-----------|----------|------|
| 1 | i07 | VSE | Drivers school surveillance & analytics | 11:00 |
| 2 | i09 | Smart eyes recognize energy | CV-based intelligent system for meter recognition | 11:12 |
| 3 | i01 | Baowu Steel | Defect detection in steel manufacturing | 11:24 |
| 4 | i08 | CyberIOT | Guardians of electrical and mechanical equipment | 11:36 |
| 5 | i10 | Power Blazers | Autotuning system for orbital wind power system | 11:48 |
| 6 | i06 | Jiangxing Intelligence | Intelligent Edge Gateway for Industry | 12:00 |


Note: The sequence of the roadshow above was determined by a drawing of lots by representatives of each participating team.

The live roadshow was successfully broadcast on September 24; all finalist teams presented their proposals and interacted with the judges. To ensure a comprehensive and professional assessment, the organizer invited expert judges with strong, diverse backgrounds, including:

– Business experts: Gao Ge, General Manager of Bailian Group’s Science and Innovation Center; Violet Lu, IT director of Fast Retailing LTD; and Yang Feng, Head of Wireless and Internet of Things of Tencent’s Network Platform Department

– EdgeX & AIoT experts: Henry Lau, Distinguished Technical Expert on Retail Solutions at HP Inc.; Gavin Lu, Director at the VMware (China) R&D Center; Jim Wang, Senior IoT Expert at Intel Corporation; Chris Wang, Director of Artificial Intelligence at Intel; Jack Xu and Mi Wu, IoT experts at Intel; Xiaojing Xu, Senior Expert at Thundersoft Corporation

– Representatives from the investment community: Riel Capital, Intel Capital, and VMware Capital. In addition, the quality of the teams and their proposals attracted a large number of investment institutions, including Qiming Venture Capital, Morningside Capital, ZGC Qihang Investment, and other professional investors, who made up the observer group for this live roadshow.

During the live-streamed roadshow, the teams' work demonstrated the unique advantages of EdgeX in IoT and edge computing development. The judges provided feedback and suggestions on creativity, use of computer vision, impact, viability, usefulness, and simplicity, as well as on documentation quality and roadshow performance. Observers and fans from around the world also shared their opinions through the messaging and comment sections, which made the competition exciting.

The live roadshow has ended, but the organizing committee is still collecting the judges' questions and compiling the teams' answers. Based on this information, the competition will carefully score the ideations and demos to select competitive, deployable solutions to customers' pain points. An award ceremony will be held at the Global Technology Transfer Conference on October 29, where the winners will share a total of 90,000 RMB in cash prizes and other rewards for their efforts.

Video playback of the final roadshow will be available on Bilibili; if you are interested in watching it, please follow the EdgeX China Project channel on Bilibili. For more about the EdgeX Foundry China Project, visit the wiki at https://wiki.edgexfoundry.org/display/FA/China+Project.

Akraino White Paper: Cloud Interfacing at the Telco 5G Edge

By Akraino, Akraino Edge Stack, Blog

Written by Aaron Williams, LF Edge Developer Advocate

As cloud computing migrates from large centralized data centers to the Service Provider Edge's Access Edge (i.e., server-based devices at the telco tower) to get closer to the end user and reduce latency, more applications and use cases become possible. Combine this with the widespread rollout of 5G, with its speed, and the last mile instantly becomes insanely fast. Yet if this new edge stack is not constructed correctly, the promise of these technologies will not be realized.

LF Edge’s Akraino project has dug into these issues and created its second white paper, Cloud Interfacing at the Telco 5G Edge. In this just-released white paper, Akraino analyzes the issues, proposes detailed enabler layers between edge applications and 5G core networks, and recommends how telcos can overcome these technical and business challenges as they bring the next generation of applications and services to life.

Click here to view the complete white paper: https://bit.ly/34yCjWW

To learn more about Akraino, blueprints or end user case stories, please visit the wiki: https://wiki.akraino.org.

Introduction to Project EVE, an open-source edge node architecture

By Blog, Project EVE
Written by Brad Corrion, active member of the Project EVE community and Strategic Architect at Intel 

This article originally ran as a LinkedIn article last month. 

At the inaugural August ZEDEDA Transform conference, I participated in a panel discussion entitled “Edge OSS Landscape, Intro to Project EVE and Bridging to Kubernetes”. Project EVE, an LF Edge project, delivers an open-source edge node, on which applications are deployed as either containers or VMs. As some audience members noted, they were pleasantly surprised that we didn’t spend an hour talking explicitly about the Project EVE architecture. Instead, we considered many aspects of a container-first edge, community open-source investments, and whether technologies like Kubernetes can be useful for IoT applications.

The Edge Buzz

There’s a lot of buzz around the edge, and many see it as the next big thing since the adoption of cloud computing. Just as the working definition of the cloud has morphed over time, coming to mean a highly scalable, micro-service-based style of application hosted on computing platforms around the globe, so the edge has come to represent a computing style. To embrace the edge means placing your computing power as close as possible to the data it processes, balancing the cost and latency of moving high volumes of data across the network. Most organizations have also used the transition to edge computing to adopt the same “cloud-native” technologies and processes, managing their compute workloads in a similar fashion regardless of where the compute is deployed, be it cloud, data center, or a remote edge environment.

Turning the Problem on its Head

That last part is what enthuses me about the shift to edge computing: we move away from highly curated enterprise IT devices (whose management tends to prevent change and modification) and toward cloud-like, dynamic, scalable assets (whose management technologies are designed for innovation and for responding to ever-changing circumstances). This is the classic “pets vs. cattle“ example, sprinkled with IoT challenges such as systems distributed at the extreme ends of the world, where costly trip charges are incurred whenever an on-site technician is needed. The solution turns the problem on its head. It requires organizations, from IT to line of business, to adopt practices of agility and innovation so that they can manage and deploy solutions to the edge as nimbly as they experiment and innovate in the cloud.

Next up, we discussed the benefits of using open-source software for projects like Project EVE. Making an edge node easy to deploy, secure to boot, and remotely manageable is not trivial, and it is not worth competing over. The community, including competitors, can create a solid, open-source edge node architecture such as Project EVE, and all parties can benefit from the shared investment. With a well-accepted, common edge architecture, innovators can focus instead on the applications, the orchestration, and the usability of the edge. Even on the same basic edge node architecture, there is ample surface area left to compete on service value, elegance, and solution innovation. Just as the collective investment around Kubernetes is allowing masses of projects and companies to improve nearly every aspect of orchestration without making each project re-invent the basics, we don’t need tens of companies re-inventing secure device deployments, secure boot, and container protection. Get the basics done (the “boring stuff,” as I called it) and focus on the specialization around it.

Can container-first concepts, and projects like Kubernetes, be effective at the edge and at solving IoT problems? Yes. No doubt there are differences between using Kubernetes in the cloud and using it to manage edge nodes; the challenges include limited power infrastructure, communications and connectivity constraints, and cost considerations. However, these technologies are very adaptable. A container will run just as happily on an IoT device, an IoT gateway, or an edge server. How you connect to and orchestrate the containers on those devices will vary. Most edges won’t need a local Kubernetes cluster; a distant Kubernetes infrastructure can remotely orchestrate a local edge node or local edge cluster.

Infrastructure as Code

A common theme in edge orchestration architectures is local orchestration: giving a small orchestrator running at the edge enough autonomy to make decisions while disconnected from a central orchestrator. Open Horizon, which IBM recently open-sourced to LF Edge, is designed to bridge traditional cloud orchestration to edge devices with a novel distributed policy engine that keeps executing and responding to changing conditions even when disconnected. Adopting an “infrastructure as code” mentality gives administrators a high degree of configuration and control over the target environment, even across remote networks. There is high confidence in the resulting infrastructure configurations, with variability only in “when” the changes are received, due to bandwidth considerations. Could this be used on oil and gas infrastructure in the jungles of Ecuador? Yes. The challenge is in deciding which philosophies and projects are best suited to the needs of the situation.

If you find yourself architecting your own edge node, using a simple Linux installation while kicking the security can down the road, or otherwise bogged down by remote-manageability challenges when you’d rather be innovating and solving domain-specific problems, look to the open-source community, and specifically to projects like Project EVE, to give you a leg up on your edge architecture.

Please feel free to connect with me and start a conversation about this article. Here are some additional resources:

  • Explore how Intel is connecting open source communities with the retail market through the Open Retail Initiative
  • Be sure to explore Project EVE, Open Horizon, EdgeX Foundry and more projects at the LF Edge

Where the Edges Meet: Public Cloud Edge Interface

By Akraino, Akraino Edge Stack, Blog

Written by Oleg Berzin, Ph.D., a member of the Akraino Technical Steering Committee and Senior Director Technology Innovation at Equinix

Introduction

Why 5G

5G will provide significantly higher throughput than existing 4G networks. Currently, 4G LTE is limited to around 150 Mbps; LTE Advanced increases the data rate to 300 Mbps and LTE Advanced Pro to 600 Mbps–1 Gbps. 5G downlink speeds can reach up to 20 Gbps. 5G can use multiple spectrum options, including low band (sub-1 GHz), mid band (1–6 GHz), and mmWave (28, 39 GHz). The mmWave spectrum has the largest available contiguous bandwidth capacity (~1000 MHz) and promises dramatic increases in user data rates. 5G also enables advanced air interface formats and transmission scheduling procedures that decrease access latency in the Radio Access Network by a factor of 10 compared to 4G LTE.

The Slicing Must Go On

Among the advanced properties of the 5G architecture, Network Slicing enables the use of the 5G network and services for a wide variety of use cases on the same infrastructure. Network Slicing (NS) refers to the ability to provision a common physical system to provide the resources necessary for delivering service functionality under specific performance (e.g. latency, throughput, capacity, reliability) and functional (e.g. security, applications/services) constraints.

Network Slicing is particularly relevant to the subject matter of the Public Cloud Edge Interface (PCEI) Blueprint. As shown in the figure below, there is a reasonable expectation that applications enabled by 5G performance characteristics will need access to diverse resources. These include conventional traffic flows, such as access from mobile devices to the core clouds (public and/or private) and general access to the Internet; edge traffic flows, such as low-latency, high-speed access to edge compute workloads placed in close physical proximity to the User Plane Functions (UPF); and hybrid traffic flows that combine the above for distributed applications (e.g. online gaming, AI at the edge). One very important point is that the network slices provisioned in the mobile network must extend beyond the N6/SGi interface of the UPF all the way to the workloads running on the edge computing hardware and on the Public/Private Cloud infrastructure. In other words, “The Slicing Must Go On” in order to ensure continuity of intended performance for the applications.


The Mobile Edge

The technological capabilities defined by the standards organizations (e.g. 3GPP, IETF) are necessary conditions for the development of 5G. However, the standards and protocols are not sufficient on their own. Realizing the promises of 5G depends directly on the availability of supporting physical infrastructure, as well as on the ability to instantiate services in the right places within that infrastructure.

Latency is a very good example to illustrate this point. One of the most intriguing possibilities with 5G is the ability to deliver very low end-to-end latency; a common example is the 5 ms round-trip device-to-application latency target. If we look closely at this latency budget, it is not hard to see that achieving this goal requires a new physical aggregation infrastructure, because the 5 ms budget includes all radio/mobile core, transport, and processing delays on the path between the application running on the User Equipment (UE) and the application running on the compute/server side. Given that at least 2 ms will be consumed by the “air interface,” only 3 ms remains for radio/packet core processing, network transport, and compute/application processing. The figure below illustrates an example of the end-to-end latency budget in a 5G network.
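A trivial sketch of the arithmetic, using only the figures quoted above (a 5 ms round-trip target and at least 2 ms for the air interface); the split of the remainder across core, transport, and compute is left open, as in the text:

```python
# Rough 5G end-to-end latency budget in milliseconds,
# using the example figures from the text above.
TARGET_RTT_MS = 5.0       # device-to-application round-trip target
AIR_INTERFACE_MS = 2.0    # minimum consumed by the radio "air interface"

def remaining_budget_ms(target: float = TARGET_RTT_MS,
                        air: float = AIR_INTERFACE_MS) -> float:
    """Time left for packet-core processing, transport, and compute."""
    return target - air

if __name__ == "__main__":
    left = remaining_budget_ms()
    print(f"{left:.1f} ms remain for core, transport, and application")
```

Three milliseconds for everything past the radio is what forces compute to sit in aggregation facilities physically near the UPF.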

The Edge-in and Cloud-out Effect

Public Cloud Service Providers and 3rd-Party Edge Compute (EC) Providers are deploying Edge instances to better serve their end users and applications. A multitude of these applications require close inter-working with the Mobile Edge deployments to provide predictable latency, throughput, reliability, and other guarantees.

Open APIs for interfacing and exchanging information will enable competitive offerings for Consumer, Enterprise, and Vertical Industry end-user segments. These APIs are not limited to basic connectivity services; they will include the ability to deliver predictable data rates, predictable latency, reliability, service insertion, security, AI and RAN analytics, network slicing, and more.

These capabilities are needed to support a multitude of emerging applications such as AR/VR, Industrial IoT, autonomous vehicles, drones, Industry 4.0 initiatives, Smart Cities, and Smart Ports. Other APIs will expose edge orchestration and management, edge monitoring (KPIs), and more. These open APIs will be the foundation for service and instrumentation capabilities when integrating with public cloud development environments.

Public Cloud Edge Interface (PCEI)

Overview

The purpose of the Public Cloud Edge Interface (PCEI) Blueprint family is to specify a set of open APIs enabling Multi-Domain Inter-working across functional domains that provide Edge capabilities/applications and require close cooperation between the Mobile Edge, the Public Cloud Core and Edge, and 3rd-Party Edge functions, as well as the underlying infrastructure such as Data Centers, Networks, and compute hardware (e.g. power-efficient Arm64-based hardware optimized for the Edge).

The high-level relationships between the functional domains are shown in the figure below:

The Data Center Facility (DCF) Domain. The DCF Domain includes Data Center physical facilities that provide the physical location and the power/space infrastructure for other domains and their respective functions.

The Interconnection of Core and Edge (ICE) Domain. The ICE Domain includes the physical and logical interconnection and networking capabilities that provide connectivity between other domains and their respective functions.

The Mobile Network Operator (MNO) Domain. The MNO Domain contains all Access and Core Network Functions necessary for signaling and user plane capabilities to allow for mobile device connectivity.

The Public Cloud Core (PCC) Domain. The PCC Domain includes all IaaS/PaaS functions that are provided by the Public Clouds to their customers.

The Public Cloud Edge (PCE) Domain. The PCE Domain includes the PCC Domain functions that are instantiated in the DCF Domain locations that are positioned closer (in terms of geographical proximity) to the functions of the MNO Domain.

The 3rd party Edge (3PE) Domain. The 3PE domain is in principle similar to the PCE Domain, with a distinction that the 3PE functions may be provided by 3rd parties (with respect to the MNOs and Public Clouds) as instances of Edge Computing resources/applications.

Architecture

The PCEI Reference Architecture and the Interface Reference Points (IRP) are shown in the figure below. For the full description of the PCEI Reference Architecture please refer to the PCEI Architecture Document.

Use Cases

The PCEI working group identified the following use cases and capabilities for Blueprint development:

  1. Traffic Steering/UPF Distribution/Shunting capability — distributing User Plane Functions in the appropriate Data Center Facilities on qualified compute hardware for routing the traffic to desired applications and network/processing functions/applications.
  2. Local Break-Out (LBO) – Examples: video traffic offload, low latency services, roaming optimization.
  3. Location Services — location of a specific UE, or identification of UEs within a geographical area, facilitation of server-side application workload distribution based on UE and infrastructure resource location.
  4. QoS acceleration/extension – provide low latency, high throughput for Edge applications. Example: provide continuity for QoS provisioned for subscribers in the MNO domain, across the interconnection/networking domain for end-to-end QoS functionality.
  5. Network Slicing provisioning and management – providing continuity for network slices instantiated in the MNO domain, across the Public Cloud Core/Edge as well as the 3rd-Party Edge domains, offering dedicated resources specifically tailored to application and functional (e.g. security) needs.
  6. Mobile Hybrid/Multi-Cloud Access – provide multi-MNO, multi-Cloud, multi-MEC access for mobile devices (including IoT) and Edge services/applications
  7. Enterprise Wireless WAN access – provide high-speed Fixed Wireless Access to enterprises with the ability to interconnect to Public Cloud and 3rd-Party Edge Functions, including the Network Functions such as SD-WAN.
  8. Distributed Online/Cloud Gaming.
  9. Authentication – provided as service enablement (e.g., two-factor authentication) used by most OTT service providers 
  10. Security – provided as service enablement (e.g., firewall service insertion)

The initial focus of the PCEI Blueprint development will be on the following use cases:

  • User Plane Function Distribution
  • Local Break-Out of Mobile Traffic
  • Location Services

User Plane Function Distribution and Local Break-Out

The UPF Distribution use case distinguishes between two scenarios:

  • UPF Interconnection. The UPF/SPGW-U is located in the MNO network and needs to be interconnected on the N6/SGi interface to 3PE and/or PCE/PCC.
  • UPF Placement. The MNO wants to instantiate a UPF/SPGW-U in a location that is different from their network (e.g. Customer Premises, a 3rd-Party Data Center).

UPF Interconnection Scenario

UPF Placement Scenario

UPF Placement, Interconnection and Local Break-Out Examples

Location Services (LS)

This use case targets obtaining the geographic location of a specific UE from the 4G/5G network, identifying UEs within a geographical area, and facilitating server-side application workload distribution based on UE and infrastructure resource location.


Acknowledgements

Project Technical Lead: Oleg Berzin

Committers: Suzy Gu, Tina Tsou, Wei Chen, Changming Bai, Alibaba; Jian Li, Kandan Kathirvel, Dan Druta, Gao Chen, Deepak Kataria, David Plunkett, Cindy Xing

Contributors: Arif, Jane Shen, Jeff Brower, Suresh Krishnan, Kaloom, Frank Wang, Ampere

Exploration and Practices of Edge Computing: Cloud Managing Containerized Devices

By Blog, EdgeX Foundry, Industry Article, Trend

Written by Gavin Lu, LF Edge member, EdgeX Foundry China Project Lead and R&D Director in the VMware Office of the CTO

As an industry leader with vast experience and knowledge, Gavin has been writing a series of articles focused on edge computing. These articles are posted on his personal blog and are posted here with his permission. To read more content from Gavin, visit his website.

Introduction

The previous article introduced a solution for cloud management of virtualized devices. This article describes the Nebula project: unified management of containerized devices, edge applications, and data-analysis cloud services.

Nebula Architecture

Project Nebula is designed based on the following key ideas:

  • Agnostic to device CPU architecture, supporting both x86 and ARM;
  • Agnostic to edge application frameworks, supporting EdgeX Foundry and other frameworks that can be packaged and run;
  • Agnostic to data analytics services, supporting on-premise and cloud deployment;
  • Support small to large scale deployment;
  • Support end-to-end multi-tenant operation model from device to cloud.
EdgeX Foundry Architecture

Nebula supports the EdgeX Foundry framework, and we have already published a live test bed at https://18.189.42.126/. Those interested in Nebula can contact yixingj@vmware.com to register for a trial and receive installation and user guides with detailed information.

Nebula Demo

Installation

Nebula is built on a containerized micro-service architecture and is delivered by default in OVA format. As with the Pallas architecture introduced in the previous article, although the Nebula package is encapsulated in an OVA, it does not depend on any specific virtualization infrastructure or cloud platform. Technically, it could be converted to other formats or installed on any cloud platform that supports the OVA format.

The basic resource requirements of Nebula are:

  • CPU: 2 virtual CPU cores
  • Memory: 8GB
  • Storage: 150GB

The installation process is similar to that of any other OVA, and users can log in as the administrator after it completes.

Nebula Service Console

Vendor Portal

After the installation is complete, users can log in to the vendor portal as an administrator according to the prompt address in VM console as above and perform user management.

Nebula Management Portal

In Nebula, edge application services are defined as follows: a Service can contain multiple Versions, and a Version contains multiple Service Components.

Edge Service Hierarchy

For each service created, parameters and resource requirements such as version, CPU platform, memory, storage, and network must be specified, to facilitate verification across the full life cycle.

Vendors can upload a set of EdgeX Foundry applications packaged in container images, and define categories, dependencies between containers, resource parameters, startup order, and parameters of connected data analysis cloud services.
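The Service/Version/Component hierarchy described above might be modeled along these lines. This is an illustrative sketch only; the class and field names are my own assumptions, not Nebula's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceComponent:
    """One container image within a version of an edge service."""
    name: str
    image: str                 # container image reference
    cpu_cores: float
    memory_mb: int
    depends_on: List[str] = field(default_factory=list)  # startup ordering

@dataclass
class Version:
    """A deployable version of a service, pinned to a CPU platform."""
    version: str
    cpu_platform: str          # e.g. "x86_64" or "arm64"
    components: List[ServiceComponent] = field(default_factory=list)

@dataclass
class Service:
    """Top of the hierarchy: a Service contains multiple Versions."""
    name: str
    category: str
    versions: List[Version] = field(default_factory=list)

# Hypothetical EdgeX-style service definition (image names are illustrative).
svc = Service("edgex-demo", "EdgeX Foundry", versions=[
    Version("1.0", "arm64", components=[
        ServiceComponent("core-data", "edgexfoundry/core-data:1.0", 0.5, 512),
        ServiceComponent("device-virtual", "edgexfoundry/device-virtual:1.0",
                         0.25, 256, depends_on=["core-data"]),
    ]),
])
print(svc.name, len(svc.versions[0].components))
```

The `depends_on` list captures the startup-order dependency between containers that vendors define when publishing a service.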

After the release, users can see and deploy these edge services.

Device Registration

Before users actually deploy EdgeX Foundry applications, they must first register the devices they will use with their Nebula accounts.

Users download the Nebula agent program nebulacli.tar and run it on the device to complete registration. This step can be performed manually, or it can be automated in batch operations for OEMs.

./install.sh init -u user-account -p user-account-password -n user-device-name

User Portal

After completing device registration, users can install and manage EdgeX Foundry or other edge applications that vendors have released in advance on the Nebula service. Users can find suitable applications in the catalog.

After selecting one, users can further specify deployment parameter settings in the drag-and-drop wizard, which map to the parameter values the vendor defined earlier.

After all parameters are set, the actual deployment can be carried out, either in batch or repeatedly to multiple devices. After deploying EdgeX Foundry applications, users can monitor device resources and application runtime status in real time.

Nebula provides complete RESTful API documentation, with which users can automate operations to deploy EdgeX Foundry applications at scale.
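As an illustration of what such automation could look like, the sketch below builds one deployment request per registered device. The host, endpoint path, and JSON field names are invented for this example and are not Nebula's documented API; consult the actual API documentation for the real contract:

```python
# Hypothetical batch deployment via a REST API (illustrative only).
import json
from urllib import request

BASE_URL = "https://nebula.example.com/api/v1"  # placeholder host

def build_deploy_request(device_id: str, service: str, version: str,
                         params: dict) -> request.Request:
    """Build (but do not send) a POST request deploying one service version."""
    body = json.dumps({
        "deviceId": device_id,
        "service": service,
        "version": version,
        "parameters": params,
    }).encode()
    return request.Request(
        f"{BASE_URL}/deployments",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Batch: one request per registered device.
devices = ["dev-001", "dev-002", "dev-003"]
reqs = [build_deploy_request(d, "edgex-demo", "1.0", {"port": 48080})
        for d in devices]
print(len(reqs), reqs[0].full_url)
# Actually sending each one would be: request.urlopen(req)
```

Looping over a device inventory like this is how a deployment to hundreds of edge nodes can be scripted instead of clicked through the portal.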

Next

From the second article through this one, I introduced the basic methods of building and managing virtualized and containerized devices from the cloud. But I have not yet answered the question of how to handle single-point device failure. In contrast to the traditionally inflexible and inefficient approaches of full redundancy or an external NAS, the next article will introduce device clusters built on a hyper-converged architecture.

Pushing AI to the Edge (Part One): Key Considerations for AI at the Edge

By Blog, LF Edge, Project EVE, State of the Edge, Trend

Q&A with Jason Shepherd, LF Edge Governing Board member and VP of Ecosystem at ZEDEDA

This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

This two-part blog provides more insights into what’s becoming a hot topic in the AI market — the edge. To discuss more on this budding space, we sat down with our Vice President of ecosystem development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice and some trends for the future.


Chart defining the categories within the edge, as defined by LF Edge

Image courtesy of LF Edge

Finalists for the 2020 Edge Woman of the Year Award!

By Blog, State of the Edge

Written by Candice Digby, Partner and Events Manager at Vapor IO, an LF Edge member and active community member in the State of the Edge Project

Last year’s Edge Woman of the Year winner Farah Papaioannou is ready to pass the torch.

“I was honored to have been chosen as Edge Woman of the Year 2019 and to be recognized alongside many inspiring and innovative women across the industry,” said Farah Papaioannou, Co-Founder and President of Edgeworx, Inc. “I am thrilled to pay that recognition forward and participate in announcing this year’s Edge Woman of the Year 2020 finalist categories; together we have a lot to accomplish.”

(left to right) Matt Trifiro, Farah Papaioannou, Gavin Whitechurch

With even more nominations in this 2nd annual competition, it was difficult for State of the Edge and Edge Computing World to select only ten finalists. The Edge Woman of the Year 2020 nominees represent industry leaders in roles that are shaping their organizations’ strategy, technology, or communications around edge computing, edge software, edge infrastructure, or edge systems.

The Edge Woman of the Year Award represents a long-term industry commitment to highlighting the growing importance of the contributions and accomplishments of women in edge computing. The award is presented at the annual Edge Computing World event, which gathers the whole edge computing ecosystem, from network to cloud and application to infrastructure, end users, and developers, while also sharing edge best practices.

The annual Edge Woman of the Year Award is presented to female and non-binary professionals for outstanding performance in roles elevating edge computing. The 2020 award committee selected the following 10 finalists for their excellent work in the named categories:

  • Leadership in Edge Startups
    • Kathy Do, VP, Finance and Operations at MemVerge
  • Leadership in Edge Open Source Contributions
    • Malini Bhandaru, Open Source Lead for IoT & Edge at VMware
  • Leadership in Edge at a Large Organization
    • Jenn Didoni, Head of Cloud Portfolio at Vodafone Group Business
  • Leadership in Edge Security
    • Ramya Ravichandar, VP of Product Management at FogHorn
  • Leadership in Edge Innovation and Research
    • Kathleen Kallot, Director, AI Ecosystem, arm
  • Leadership in Edge Industry and Technology
    • Fay Arjomandi, Founder and CEO, mimik technology, Inc.
  • Leadership in Edge Best Practices
    • Nurit Sprecher, Head of Management & Virtualization Standards, Nokia
  • Leadership in Edge Infrastructure
    • Meredith Schuler, Financial & Strategic Operations Manager, SBA Edge
  • Overall Edge Industry Leadership
    • Nancy Shemwell, Chief Operating Officer, Trilogy Networks, Inc.
  • Leadership in Executing Edge Strategy
    • Angie McMillin, Vice President and General Manager, IT Systems, Vertiv

The “Top Ten Women in Edge” finalists were selected from nominations submitted by edge experts around the world. The final winner will be chosen by a panel of industry judges and announced during this year’s Edge Computing World, held virtually October 12-15, 2020.

For more information on the Women in Edge Award visit: https://www.lfedge.org/2020/08/25/state-of-the-edge-and-edge-computing-world-announce-finalists-for-the-2020-edge-woman-of-the-year-award/