Monthly Archives

October 2020

LF Edge Member Spotlight: Zenlayer

By Akraino, Akraino Edge Stack, Blog, LF Edge, Member Spotlight

The LF Edge community comprises a diverse set of member companies and people that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sit down with Jim Xu, Principal Engineer at Zenlayer, to discuss the importance of open source, collaborating with industry leaders in edge computing, their contributions to Akraino and the impact of being a part of the LF Edge ecosystem.

Can you tell us a little about your organization?

Zenlayer is an edge cloud service provider and global company headquartered in Los Angeles, Shanghai, Singapore, and Mumbai. Businesses utilize Zenlayer’s platform to deploy applications closer to their users and improve their digital experience. Zenlayer offers edge compute, software-defined networking, and application delivery services in more than 180 locations on six continents.

Why is your organization adopting an open source approach?

Zenlayer has always relied on open source solutions. We strongly believe that open source is the right ecosystem for the edge cloud industry to grow. We connect infrastructure all over the globe. If each data center and platform integrates open source software, it is much easier to connect networks than it is across a patchwork of proprietary systems. Some of the open source projects we benefit from and support are Akraino Blueprints, OpenDaylight (ODL), Kubernetes, OpenNESS, DPDK, Linux, MySQL, and more.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

We are a startup company in the edge cloud industry. LF Edge is one of the best open-source organizations both advocating for and building open edge platforms. The edge cloud space is developing rapidly, with continuous improvements in cloud technology, edge infrastructure, disaggregated compute, and storage options. Both impact and complexity go far beyond just cloud service providers, device vendors, or even a single traditional industry. LF Edge has helped build a community of people and companies from across industries, advocating for an open climate to make the edge accessible to as many users as possible.

What do you see as the top benefits of being part of the LF Edge community?

Our company has been a member of the LF Edge community for over a year now. Despite the difficulties presented by COVID-19, we have been able to enjoy being part of the Edge community. We interacted with people from a broad spectrum of industries and technology areas and learned some exciting use cases from the LF Edge community. This has helped us build a solid foundation for Zenlayer’s edge cloud services. 

What sort of contributions has your team made to the community, ecosystem through LF Edge participation?

We are proud to be part of the edge cloud community. Zenlayer leads the Upstream subcommittee within Akraino and has invited multiple external communities, such as ONF CORD, CNTT, TIP OCN, O-RAN, and TARS, to share our common interest in building the edge. We have also contributed upstream requirements and reviews for Akraino releases.

What do you think sets LF Edge apart from other industry alliances?

LF Edge has a clear focus on edge cloud and a very healthy and strong governing board to ensure unbiased technological drive toward open systems. 

How will LF Edge help your business?

We hope LF Edge will continue to empower rapid customer innovation during the drive to edge cloud for video streaming, gaming, enterprise applications, IoT, and more. As a member of a fast-growing community, we also look forward to more interactions via conferences and social events (digital or in person as is safe) so we can continue to get to know and better understand each other’s needs and how we can help solve them. 

What advice would you give to someone considering joining LF Edge?

LF Edge is a unique community pushing for the best future for edge cloud. The group brings together driven people, a collaborative culture, and fast momentum. What you put in you receive back tenfold. Anyone interested in the future of the edge should consider joining, even if they do not yet know much about open source and its benefits. The community will value their inputs and be happy to teach or collaborate in return.

To find out more about LF Edge members or how to join, click here. To learn more about Akraino, click here. Additionally, if you have questions or comments, visit the LF Edge Slack Channel and share your thoughts in the #community or #akraino-tsc channels.

Xen on Raspberry Pi 4 with Project EVE

By Blog, Project EVE

Written by Roman Shaposhnik, Project EVE lead and Co-Founder & VP of Products and Strategy at ZEDEDA, and Stefano Stabellini, Principal Engineer at Xilinx

Originally posted on Linux.com, this article shares the Xen Project’s news that the Xen Hypervisor now runs on Raspberry Pi. This is an exciting step for both hobbyists and industry. Read on to learn how Xen now runs on RPi and how to get started.

Raspberry Pi (RPi) has been a key enabling device for the Arm community for years, given the low price and widespread adoption. According to the RPi Foundation, over 35 million have been sold, with 44% of these sold into industry. We have always been eager to get the Xen hypervisor running on it, but technical differences between RPi and other Arm platforms made it impractical for the longest time: specifically, a non-standard interrupt controller without virtualization support.

Then the Raspberry Pi 4 came along, together with a regular GIC-400 interrupt controller that Xen supports out of the box. Finally, we could run Xen on an RPi device. Soon Roman Shaposhnik of Project EVE and a few other community members started asking about it on the xen-devel mailing list. “It should be easy,” we answered. “It might even work out of the box,” we wrote in our reply. We were utterly oblivious that we were about to embark on an adventure deep in the belly of the Xen memory allocator and Linux address translation layers.

The first hurdle was the availability of low memory addresses. RPi4 has devices that can only access the first 1GB of RAM. The amount of memory below 1GB in Dom0 was not enough. Julien Grall solved this problem with a simple one-line fix to increase the memory allocation below 1GB for Dom0 on RPi4. The patch is now present in Xen 4.14.

“This lower-than-1GB limitation is uncommon, but now that it is fixed, it is just going to work.” We were wrong again. The Xen subsystem in Linux uses virt_to_phys to convert virtual addresses to physical addresses, which works for most virtual addresses but not all. It turns out that the RPi4 Linux kernel would sometimes pass virtual addresses that cannot be translated to physical addresses using virt_to_phys, and doing so would result in serious errors. The fix was to use a different address translation function when appropriate. The patch is now present in Linux’s master branch.

We felt confident that we had finally reached the end of the line. “Memory allocations: check. Memory translations: check. We are good to go!” No, not yet. It turns out that the most significant issue was yet to be discovered. The Linux kernel has always had the concept of physical addresses and DMA addresses, where DMA addresses are used to program devices and could be different from physical addresses. In practice, none of the x86, ARM, and ARM64 platforms where Xen could run had DMA addresses different from physical addresses. The Xen subsystem in Linux exploits the DMA/physical address duality for its own address translations: it uses it to convert physical addresses, as seen by the guest, to physical addresses, as seen by Xen.

To our surprise and astonishment, the Raspberry Pi 4 was the very first platform to have physical addresses different from DMA addresses, causing the Xen subsystem in Linux to break. It wasn’t easy to narrow down the issue. Once we understood the problem, a dozen patches later, we had full support for handling DMA/physical address conversions in Linux. The Linux patches are in master and will be available in Linux 5.9.

Solving the address translation issue was the end of our fun hacking adventure. With the Xen and Linux patches applied, Xen and Dom0 work flawlessly. Once Linux 5.9 is out, we will have Xen working on RPi4 out of the box.

We will show you how to run Xen on RPi4, the real Xen hacker way, and as part of a downstream distribution for a much easier end-user experience.

HACKING XEN ON RASPBERRY PI 4

If you intend to hack on Xen on ARM and would like to use the RPi4 to do it, here is what you need to do to get Xen up and running using UBoot and TFTP. I like to use TFTP because it makes it extremely fast to update any binary during development.  See this tutorial on how to set up and configure a TFTP server. You also need a UART connection to get early output from Xen and Linux; please refer to this article.

Use the rpi-imager to format an SD card with the regular default Raspberry Pi OS. Mount the first SD card partition and edit config.txt. Make sure to add the following:

 

    kernel=u-boot.bin

    enable_uart=1

    arm_64bit=1

Download a suitable UBoot binary for RPi4 (u-boot.bin) from any distro, for instance OpenSUSE. Download the JeOS image, then open it and save u-boot.bin:

    xz -d openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw.xz

    kpartx -a ./openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw

    mount /dev/mapper/loop0p1 /mnt
    cp /mnt/u-boot.bin /tmp

Place u-boot.bin in the first SD card partition together with config.txt. Next time the system boots, you will get a UBoot prompt that allows you to load Xen, the Linux kernel for Dom0, the Dom0 rootfs, and the device tree from a TFTP server over the network. I automated the loading steps by placing a UBoot boot.scr script on the SD card:

    setenv serverip 192.168.0.1

    setenv ipaddr 192.168.0.2

    tftpb 0xC00000 boot2.scr

    source 0xC00000

Where:

– serverip is the IP of your TFTP server

– ipaddr is the IP of the RPi4

Use mkimage to generate boot.scr and place it next to config.txt and u-boot.bin:

    mkimage -T script -A arm64 -C none -a 0x2400000 -e 0x2400000 -d boot.source boot.scr

Where:

– boot.source is the input

– boot.scr is the output

UBoot will automatically execute the provided boot.scr, which sets up the network and fetches a second script (boot2.scr) from the TFTP server. boot2.scr should come with all the instructions to load Xen and the other required binaries. You can generate boot2.scr using ImageBuilder.
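For orientation, the generated boot2.scr typically boils down to a handful of UBoot commands along these lines. The file names and load addresses below are illustrative placeholders, not actual ImageBuilder output; let ImageBuilder produce the real script for you:

```
# load Xen, the Dom0 kernel, and the device tree from the TFTP server
tftpb 0x00600000 xen
tftpb 0x01000000 Image
tftpb 0x03200000 bcm2711-rpi-4-b.dtb
# ... ImageBuilder also edits /chosen in the device tree so that
# Xen can find the Dom0 kernel and its command line ...
# finally, boot Xen (which carries a Linux-style image header) with
# the device tree
booti 0x00600000 - 0x03200000
```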

Make sure to use Xen 4.14 or later. The Linux kernel should be master (or 5.9 when it is out; 5.9-rc4 works). The Linux ARM64 default config works fine as kernel config. Any 64-bit rootfs should work for Dom0. Use the device tree that comes with upstream Linux for RPi4 (arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb). RPi4 has two UARTs; the default is bcm2835-aux-uart at address 0x7e215040. It is specified as “serial1” in the device tree instead of serial0. You can tell Xen to use serial1 by specifying the following on the Xen command line:

  console=dtuart dtuart=serial1 sync_console

The Xen command line is provided by the boot2.scr script generated by ImageBuilder as “xen,xen-bootargs”. After editing boot2.source, you can regenerate boot2.scr with mkimage:

    mkimage -A arm64 -T script -C none -a 0xC00000 -e 0xC00000 -d boot2.source boot2.scr

XEN ON RASPBERRY PI 4: AN EASY BUTTON

Getting your hands dirty by building and booting Xen on Raspberry Pi 4 from scratch can be not only deeply satisfying but can also give you a lot of insight into how everything fits together on ARM. Sometimes, however, you just want a quick taste of what it would feel like to have Xen on this board. This is typically not a problem for Xen, since pretty much every Linux distribution provides Xen packages and having a fully functional Xen running on your system is a mere “apt” or “zypper” invocation away. However, given that Raspberry Pi 4 support is only a few months old, the integration work hasn’t been done yet. The only operating system with fully integrated and tested support for Xen on Raspberry Pi 4 is LF Edge’s Project EVE.

Project EVE is a secure-by-design operating system that supports running Edge Containers on compute devices deployed in the field. These devices can be IoT gateways, Industrial PCs, or general-purpose ruggedized computers. All applications running on EVE are represented as Edge Containers and are subject to container orchestration policies driven by k3s. Edge containers themselves can encapsulate Virtual Machines, Containers, or Unikernels.

You can find more about EVE on the project’s website at http://projecteve.dev and in its GitHub repo, which also carries the latest instructions for creating bootable media for Raspberry Pi 4: https://github.com/lf-edge/eve/blob/master/docs/README.md

Because EVE publishes fully baked downloadable binaries, using it to give Xen on Raspberry Pi 4 a try is as simple as:

$ docker pull lfedge/eve:5.9.0-rpi-xen-arm64 # you can pick a different 5.x.y release if you like

$ docker run lfedge/eve:5.9.0-rpi-xen-arm64 live > live.raw

This is followed by flashing the resulting live.raw binary onto an SD card using your favorite tool.
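That flashing step can be done with plain dd. The sketch below is a generic example, not an EVE-specific tool, and /dev/sdX is a placeholder device path: double-check the real one with lsblk before writing, because dd will overwrite the target without asking.

```shell
# flash_image IMG DEV: copy an image byte-for-byte to a target
# device (or file), then flush caches so the write actually lands.
flash_image() {
  dd if="$1" of="$2" bs=4M conv=fsync
  sync
}

# Placeholder device path -- replace /dev/sdX with your SD card:
# flash_image live.raw /dev/sdX
```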

Once those steps are done, you can insert the card into your Raspberry Pi 4, connect the keyboard and the monitor and enjoy a minimalistic Linux distribution (based on Alpine Linux and Linuxkit) that is Project EVE running as Dom0 under Xen.

As far as Linux distributions go, EVE presents a somewhat novel design for an operating system, but at the same time, it is heavily inspired by ideas from Qubes OS, ChromeOS, CoreOS, and SmartOS. If you want to take it beyond simple console tasks and explore how to run user domains on it, we recommend heading over to EVE’s sister project Eden: https://github.com/lf-edge/eden#raspberry-pi-4-support and following a short tutorial over there.

If anything goes wrong, you can always find an active community of EVE and Eden users on LF Edge’s Slack channels starting with #eve over at http://lfedge.slack.com/ — we’d love to hear your feedback.

In the meantime – happy hacking!

Pushing AI to the Edge (Part Two): Edge AI in Practice and What’s Next

By Blog, LF Edge, Project EVE, Trend

Q&A with Jason Shepherd, LF Edge Governing Board member, Project EVE leader and VP of Ecosystem at ZEDEDA


This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

In Part One of this two-part Q&A series we highlighted the new LF Edge taxonomy that publishes next week and some key considerations for edge AI deployments. In this installment our questions turn to emerging use cases and key trends for the future.

To discuss more on this budding space, we sat down with our Vice President of ecosystem development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice and some trends for the future.

What do you see as the most promising use cases for edge AI?

As highlighted in Part One, the reasons for deploying AI at the edge include balancing needs across the vectors of scalability, latency, bandwidth, autonomy, security and privacy. In a perfect world, all processing would be centralized; however, this breaks down in practice, and the need for AI (and ML) at the edge will only continue to grow with the explosion of devices and data.

Hands down, computer vision is the killer app for edge AI today due to the bandwidth associated with streaming video. The ability to apply AI to “see” events in the physical world enables immense opportunity for innovation in areas such as object recognition, safety and security, quality control, predictive maintenance and compliance monitoring.

Considering retail — computer vision solutions will usher in a new wave of personalized services in brick and mortar stores that provide associates with real-time insights on current customers in addition to better informing longer term marketing decisions. Due to privacy concerns, the initial focus will be primarily around assessing shopper demographics (e.g., age, gender) and location but increasingly we’ll see personalized shopping experiences based on individual identity with proper opt-in (often triggered through customer loyalty programs). This includes a trend for new “experiential” shopping centers, for which customers expect to give up some privacy when they walk in the door in exchange for a better experience.

While Amazon Go stores have led the trend for autonomous shopping environments, the use of computer-vision enabled self-service kiosks for grab-and-go checkout is growing rapidly overall. Given the recent health concerns with COVID-19, providers are rapidly shifting to making these solutions contactless by leveraging gesture control, instead of requiring interaction with a keypad or touch screen.

Computer vision use cases will often leverage sensor fusion, for example with barcode scans or radio-frequency identification (RFID) technology providing additional context for decision making in retail inventory management and point of sale (POS) systems. A camera can tell the difference between a T-shirt and a TV, but not the difference between large and medium sizes of the same shirt design. Still, perhaps eventually an AI model will be able to tell you if you have bad taste in clothing!

Another key vertical that will benefit from computer vision at the edge is healthcare. At ZEDEDA, we’ve worked with a global provider that leverages AI models in their medical imaging machines, located within hospitals and provided as a managed service. In this instance, the service provider doesn’t own the network on which their machines are deployed so they need a zero-trust security model in addition to the right tools to orchestrate their hardware and software updates.

Another example where bandwidth drives a need for deploying AI at the IoT Edge is vibration analysis as part of a use case like predictive maintenance. Here sampling rates of at least 1KHz are common, and can increase to 8–10KHz and beyond because these higher resolutions improve visibility into impending machine failures. This represents a significant amount of continuously streaming data that is cost-prohibitive to send directly to a centralized data center for analysis. Instead, inferencing models will be commonly deployed on compute hardware proximal to machines to analyze the vibration data in real time and only backhauling events highlighting an impending failure.
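As a rough sanity check on why that stream is cost-prohibitive to backhaul, here is a back-of-envelope calculation. The 8 kHz rate and 16-bit sample size are illustrative assumptions, not figures from a specific deployment:

```shell
# One vibration channel, continuously streamed to the cloud:
rate_hz=8000          # assumed sampling rate (high end of the range above)
bytes_per_sample=2    # assumed 16-bit samples
mib_per_day=$(( rate_hz * bytes_per_sample * 86400 / 1024 / 1024 ))
echo "~${mib_per_day} MiB/day per channel"
```

That is well over a gigabyte per day for a single sensor axis, before multiplying by axes, sensors and machines, which is why inferencing locally and backhauling only events makes economic sense.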

Analysis for predictive maintenance will also commonly leverage sensor fusion by combining this vibration data with measurements of temperature and power (voltage and current). Computer vision is also increasingly being used for this use case: for example, the subtle wobble of a spinning motor shaft can be detected with sufficient camera resolution, and heat can be measured with thermal imaging sensors. Meanwhile, last I checked, voltage and current can’t be measured with a camera!

An example of edge AI served up by the Service Provider Edge is for cellular vehicle-to-everything (C-V2X) use cases. While latency-critical workloads such as controlling steering and braking will always be run inside of a vehicle, service providers will leverage AI models deployed on compute proximal to small cells in a 5G network within public infrastructure to serve up infotainment, Augmented Reality for vehicle heads-up displays and coordinating traffic. For the latter, these AI models can warn two cars that they are approaching a potentially dangerous situation at an intersection and even alert nearby pedestrians via their smartphones. As we continue to collaborate on foundational frameworks that support interoperability it will open up possibilities to leverage more and more sensor fusion that bridges intelligence across different edge nodes to help drive even more informed decisions.

We’re also turning to processing at the edge to minimize data movement and preserve privacy. When AI inferencing models are shrunk for use in constrained connected products or healthcare wearables, we can train local inferencing models to redact PII before data is sent to centralized locations for deeper analysis.

Who are the different stakeholders involved in accelerating adoption of edge AI?

Edge AI requires the efforts of a number of different industry players to come together. We need hardware OEMs and silicon providers for processing; cloud scalers to provide tools and datasets; telcos to manage the connectivity piece; software vendors to help productize frameworks and AI models; domain expert system integrators to develop industry-specific models, and security providers to ensure the process is secure.

In addition to having the right stakeholders, it’s about building an ecosystem based on common, open frameworks for interoperability, with investment focused on the value add on top. Today there is a plethora of choices for platforms and AI tool sets, which is confusing, but this is more a reflection of the state of the market than a necessity. A key point of efforts like LF Edge is to work in the open source community to build more open, interoperable, consistent and trusted infrastructure and application frameworks so developers and end users can focus on surrounding value add. Throughout the history of technology, open interoperability has always won out over proprietary strategies when it comes to scale.

In the long run, the most successful digital transformation efforts will be led by organizations that have the best domain knowledge, algorithms, applications and services, not those that reinvent foundational plumbing. This is why open source software has become such a critical enabler across enterprises of all sizes — facilitating the creation of de-facto standards and minimizing “undifferentiated heavy lifting” through a shared technology investment. It also drives interoperability which is key for realizing maximum business potential in the long term through interconnecting ecosystems… but that’s another blog for the near future!

How do people differentiate with AI in the long term?

Over time, AI software frameworks will become more standardized as part of foundational infrastructure, and the algorithms, domain knowledge and services on top will be where developers continue to meaningfully differentiate. We’ll see AI models for common tasks — for example assessing the demographics of people in a room, detecting license plate numbers, recognizing common objects like people, trees, bicycles and water bottles — become commodities over time. Meanwhile, programming to specific industry contexts (e.g. a specific part geometry for manufacturing quality control) will be where value is continually added. Domain knowledge will always be one of the most important aspects of any provider’s offering.

What are some additional prerequisites for making edge AI viable at scale?

In addition to having the right ecosystem, including domain experts who can pull solutions together, a key factor for edge AI success is having a consistent delivery or orchestration mechanism for both compute and AI tools. The reality is that to date many edge AI solutions have been lab experiments or limited field trials, not yet deployed and tested at scale. PoC, party of one, your table is ready!

Meanwhile, as organizations start to scale their solutions in the field they quickly realize the challenges. From our experience at ZEDEDA, we consistently see that manual deployment of edge computing using brute-force scripting and command-line interface (CLI) interaction becomes cost-prohibitive for customers at around 50 distributed nodes. In order to scale, enterprises need to build on an orchestration solution that takes into account the unique needs of the distributed IoT edge in terms of diversity, resource constraints and security, and helps admins, developers and data scientists alike keep tabs on their deployments in the field. This includes having visibility into any potential issues that could lead to inaccurate analyses or total failure. Further, it’s important that this foundation is based on an open model to maximize potential in the long run.

Where is edge AI headed?

To date, much of the exploration involving AI at the edge has been focused on inferencing models — deployed after these algorithms have been trained with the scalable compute of the cloud. (P.S. for those of you who enjoy a good sports reference, think of training vs. inference as analogous to coaching vs. playing).

Meanwhile, we’re starting to see training and even federated learning selectively moving to the Service Provider and User Edges. Federated learning is an evolving space that seeks to balance the benefits of decentralization for reasons of privacy, autonomy, data sovereignty and bandwidth savings, while centralizing results from distributed data zones to eliminate regional bias.

The industry is also increasingly developing purpose-built silicon that can increase efficiencies amid power and thermal constraints in small devices, and even support training as well as inference. This corresponds with the shift toward pushing more and more AI workloads onto edge devices. Because of this, it’s important to leverage device and application orchestration tools that are completely agnostic to silicon, compared to offerings from silicon makers that have a vested interest in locking you into their ecosystem.

Finally, we’ll see the lower boundary for edge AI increasingly extend into the Constrained Device Edge with the rise of “Tiny ML” — the practice of deploying small inferencing models optimized for highly constrained, microcontroller-based devices. An example of this is the “Hey Alexa” of an Amazon Echo that is recognized locally and subsequently opens the pipe to the cloud-based servers for a session. These Tiny ML algorithms will increasingly be used for localized analysis of simple voice and gesture commands, common sounds such as a gunshot or a baby crying, assessing location and orientation, environmental conditions, vital signs, and so forth.

To manage all of this complexity at scale, we’ll lean heavily on industry standardization, which will help us focus on value on top of common building blocks. Open source AI interoperability projects, such as ONNX, show great promise in helping the industry coalesce around a format so that others can focus on developing and moving models across frameworks and from cloud to edge. The Linux Foundation’s Trust over IP effort and emerging Project Alvarium will also help ease the process of transporting trusted data from devices to applications. This notion of pervasive data trust will lead to what I call the “Holy Grail of Digital” — selling and/or sharing data resources and services to/with people you don’t even know. Now this is scale!

In Closing

As the edge AI space develops, it’s important to avoid being locked into a particular tool set, instead opting to build a future-proofed infrastructure that accommodates a rapidly changing technology landscape and that can scale as you interconnect your business with other ecosystems. Here at ZEDEDA, our mission is to provide enterprises with an optimal solution for deploying workloads at the IoT Edge where traditional data center solutions aren’t applicable, and we’re doing it based on an open, vendor-neutral model that provides freedom of choice for hardware, AI framework, apps and clouds. We’re even integrating with major cloud platforms such as Microsoft Azure to augment their data services.

Reach out if you’re interested in learning more about how ZEDEDA’s orchestration solution can help you deploy AI at the IoT Edge today while keeping your options open for the future. We also welcome you to join us in contributing to Project EVE within LF Edge which is the open source foundation for our commercial cloud offering. The goal of the EVE community is to build the “Android of the IoT Edge” that can serve as a universal abstraction layer for IoT Edge computing — the only foundation you need to securely deploy any workload on distributed compute resources. To this end, a key next step for Project EVE is to extend Kubernetes to the IoT Edge, while taking into account the unique needs of compute resources deployed outside of secure data centers.

The success of AI overall — and especially edge AI — will require our concerted collaboration and alignment to move the industry forward while protecting us from potential misuse along the way. The future of technology is about open collaboration on undifferentiated plumbing so we can focus on value and build increasingly interconnected ecosystems that drive new outcomes and revenue streams. As one political figure famously said — “it takes a village!”

If you have questions or would like to chat with leaders in the project, join us on the LF Edge Slack (#eve or #eve-help) or subscribe to the email list. You can check out the documentation here.

Onboard edge computing devices with Secure Device Onboard and Open Horizon

By Blog, Open Horizon, Secure Device Onboard

Written by Joe Pearson, Chair of the Open Horizon TSC and Technology Strategist for IBM

For many companies, setting up heterogeneous fleets of edge devices across remote sites has traditionally been a time-consuming and sometimes difficult process. At the Linux Foundation’s Open Networking & Edge Summit conference last month, IBM announced that Intel’s Secure Device Onboard (SDO) solution is now fully integrated into Open Horizon and IBM Edge Application Manager and available to developers as a tech preview.

The Intel-developed SDO enables low-touch bootstrapping of required software at device initial power-on. For the Open Horizon project, this enables the agent software to be automatically and autonomously installed and configured. SDO technology is now being incorporated into a new industry onboarding standard being developed by the FIDO Alliance.

Developers can try this out by using the all-in-one version of Open Horizon: simply run a one-line script on a target edge compute device or VM to simulate powering up an SDO-enabled device and onboarding it.

Both Open Horizon and SDO recently joined the LF Edge umbrella, which aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system. Thanks to IBM’s participation in the LF Edge open source community, contributors in the community are helping advance the future of open edge computing solutions.


Simplifying edge device onboarding

Our team uses the term “device onboarding” to describe the initial bootstrapping process of installing and configuring required software on an edge computing device. In the case of Open Horizon, that includes connecting it to the Horizon management hub services. We have simplified the software installation process by providing a one-liner script, so that a person can install and run a development version of Open Horizon and SDO on a laptop or in a small virtual machine.

Before SDO was available, the typical installation process required a person to open a secure connection to the device (sometimes on premises), manually install all of the software pre-requisites, then install the Horizon agent, configure it, and register it with the management hub. With SDO support enabled, an administrator simply loads the voucher into the management hub when the device is purchased and then associates the configuration. When a technician powers on the device and connects it to the network, the device automatically finds the SDO services, presents the voucher, and downloads and installs the software automatically.

Integrating SDO into Open Horizon

The Open Horizon project has created a repository specifically for integrating the SDO project components into the Open Horizon management hub services and CLI. The SDO rendezvous service runs alongside the management hub and provides a simple interface for bulk-loading import vouchers.
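A bulk-load interface of this kind might look like the following sketch, which reads a directory of voucher files into a store. This is a hypothetical illustration, not the real Open Horizon SDO tooling: the JSON file layout (`{"id": ..., "config": ...}`) and the `bulk_import` helper are invented for the example.

```python
# Hypothetical sketch of bulk-importing ownership vouchers.
# The file format and helper name are illustrative, not the
# real Open Horizon SDO CLI.
import json
import pathlib
import tempfile

def bulk_import(voucher_store, voucher_dir):
    """Load every *.json voucher file in a directory into the store.

    Assumed (hypothetical) file format: {"id": ..., "config": {...}}.
    Returns the number of vouchers imported.
    """
    count = 0
    for path in sorted(pathlib.Path(voucher_dir).glob("*.json")):
        voucher = json.loads(path.read_text())
        voucher_store[voucher["id"]] = voucher["config"]
        count += 1
    return count

# Example: write two voucher files to a temp dir and import them.
with tempfile.TemporaryDirectory() as d:
    for i in range(2):
        pathlib.Path(d, f"device{i}.json").write_text(
            json.dumps({"id": f"dev-{i}", "config": {"org": "myorg"}}))
    store = {}
    print(bulk_import(store, d))  # prints 2
```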

LF Edge and open source leadership

LF Edge continues to strive to ensure that edge computing solutions remain open. In May 2020, IBM contributed Open Horizon to LF Edge. Intel's contribution of SDO ensures that another vital component of a complete edge computing framework is open source as well.

We’re excited to collaborate with Intel to expand the deployment of applications from open hybrid cloud environments down to the edge, making them accessible, secure, and scalable for the developer ecosystem and community. For more videos about Open Horizon, please visit the LF Edge YouTube channel or the LF Edge Open Horizon playlist. If you have questions or would like to chat with leaders in the project, join us on the LF Edge Slack (#open-horizon, #open-horizon-docs, #sdo-general, or #sdo-tsc).

EdgeX Foundry Challenge Shanghai 2020 was a success!

By Blog, EdgeX Foundry, Event

Written by the co-chairs of the EdgeX Foundry China Project: Melvin Sun, Senior Marketing Manager for the IoT Group at Intel, and Gavin Lu, R&D Director, OCTO, at VMware

On Thursday, September 24, after more than 2 months of fierce competition, the EdgeX Foundry Challenge Shanghai 2020 culminated in the final roadshow. Companies, colleges, hacker teams and enthusiastic audiences from around the world participated in the live video broadcast.

Yingjie Lu, Senior Director of the IoT Group at Intel Corporation, and Jim White, CTO of IOTech Systems and Chairman of the EdgeX Foundry Technical Steering Committee, delivered the opening and closing speeches for the final roadshow, respectively. Mr. Peng Jianzhen, Secretary General of the China Chain-store & Franchise Association (CCFA), and Mr. Shu Bin, an expert from China State Grid, delivered summary remarks on the teams in the commerce and industrial tracks, respectively.

The EdgeX Challenge Shanghai 2020 is an international hackathon based on EdgeX Foundry. It was kicked off as a joint effort among EdgeX community members Canonical, Dell Technologies, HP, Intel, IOTech, Tencent, Thundersoft, and VMware, together with the Shanghai startup incubator Innospace.

In the age of the Internet of Things, collaboration is the foundation of all technological and business innovation, and that spirit is spread and practiced through the EdgeX community. Today, EdgeX community members collaborate extensively across the globe to drive technological development and business prosperity in the IoT industry.

Forty teams submitted proposals across the commerce and industrial tracks, covering retail, asset tracking, manufacturing, environmental protection and energy conservation, utilities, and building automation. The cutting-edge technologies used in their entries, such as edge computing, artificial intelligence, 5G, sensors, blockchain, satellite communications, the Internet of Things, and big data, demonstrate the strength of this innovation and foreshadow how future life may be transformed.

After careful selection by the judges, 15 teams were named finalists on July 24. Using devkits provided by Intel and the EdgeX challenge committee, these teams built use cases in various verticals, solving industry pain points and producing deployable applications. The expert judges followed each team's development process closely, supporting them with daily Q&As, weekly discussions, and mid-term checkpoint meetings.

The hackathon ran smoothly in an intense yet methodical atmosphere, backed by strong community support. By mid-September, all 15 teams had submitted their final projects, setting the stage for the finals. Click here to watch a snapshot video of the EdgeX Challenge finalists preparing for the competition.

List of teams for roadshow:

Commerce Track

Sequence | Team Code | Team Name | Use Case | Time
1 | c18 | HYD Miracle Innovation | Apparel store edge computing | 9:12
2 | c27 | DreamTech | Mortgage assets monitoring based on blockchain + edge computing | 9:24
3 | c16 | USST, YES! | Retail store edge | 9:36
4 | c19 | Breakthrough Studio | Scenic intelligent navigation | 9:48
5 | c26 | i-Design | Edge-cloud based far-ocean intelligent logistics/monitoring | 10:00
6 | c07 | Starlink team from JD.com | Retail store heatmap & surveillance | 10:12
7 | c03 | AI Cooling | Building automation | 10:24
8 | c08 | Sugarinc Intelligence | Smart building & office management | 10:36
9 | c10 | Five Card Stud fighting force | Automobile 4S (sales, service, spare parts, survey) store | 10:48

Industrial Track

Sequence | Team Code | Team Name | Use Case | Time
1 | i07 | VSE | Driving school surveillance & analytics | 11:00
2 | i09 | Smart eyes recognize energy | CV-based intelligent meter recognition system | 11:12
3 | i01 | Baowu Steel | Defect detection in steel manufacturing | 11:24
4 | i08 | CyberIOT | Guardians of electrical and mechanical equipment | 11:36
5 | i10 | Power Blazers | Autotuning system for orbital wind power systems | 11:48
6 | i06 | Jiangxing Intelligence | Intelligent edge gateway for industrial use | 12:00

 

Note: The order of the roadshow above was determined by lots drawn by representatives of each participating team.

The live roadshow was successfully broadcast on September 24; all of the finalist teams presented their proposals and interacted with the judges. To ensure a comprehensive and professional assessment, the organizers invited expert judges with strong, diverse backgrounds, including:

– Business experts: Gao Ge, General Manager of Bailian Group’s Science and Innovation Center; Violet Lu, IT director of Fast Retailing LTD; and Yang Feng, Head of Wireless and Internet of Things of Tencent’s Network Platform Department

– EdgeX & AIoT experts: Henry Lau, Distinguished Technical Expert on Retail Solutions at HP Inc.; Gavin Lu, Director at the VMware (China) R&D Center; Jim Wang, Senior IoT Expert at Intel Corporation; Chris Wang, Director of Artificial Intelligence at Intel; Jack Xu and Mi Wu, IoT experts at Intel; Xiaojing Xu, Senior Expert at Thundersoft Corporation

– Representatives from the investment community: Riel Capital, Intel Capital, VMware Capital, and others. In addition, the quality of the teams and proposals attracted many investment institutions, including Qiming Venture Capital, Morningside Capital, and ZGC Qihang Investment, whose professional investors made up the observer group for the live roadshow.

During the live-streamed roadshow, the teams' work demonstrated the unique advantages of EdgeX in IoT and edge computing development. The judges provided feedback and suggestions on creativity, use of computer vision, impact, viability, usefulness, simplicity, documentation, and roadshow presentation. Observers and fans from around the world shared their opinions through the messaging and comment sections, adding to the excitement of the competition.

The live roadshow has ended, but the organizing committee is still collecting the judges' questions and compiling the teams' answers. Based on this information, the competition will carefully score the ideas and demos to identify competitive, deployable solutions to customers' pain points. An award ceremony will be held at the Global Technology Transfer Conference on October 29, where the winners will share a total of 90,000 RMB in cash prizes and other rewards for their efforts.

A video replay of the final roadshow will be available on the Bilibili website; if you are interested in watching it, please follow the EdgeX China Project channel on Bilibili. For more about the EdgeX Foundry China Project, visit the wiki at https://wiki.edgexfoundry.org/display/FA/China+Project.

Akraino White Paper: Cloud Interfacing at the Telco 5G Edge

By Akraino, Akraino Edge Stack, Blog

Written by Aaron Williams, LF Edge Developer Advocate

As cloud computing migrates from large centralized data centers to the Service Provider Edge's Access Edge (i.e., server-based devices at the telco tower) to get closer to end users and reduce latency, more applications and use cases become possible. Combined with the widespread rollout of 5G and its speed, the last mile becomes extremely fast. Yet if this new edge stack is not constructed correctly, the promise of these technologies will not be realized.

LF Edge's Akraino project has dug into these issues and produced its second white paper, Cloud Interfacing at the Telco 5G Edge. In this newly released white paper, Akraino analyzes the issues, proposes detailed enabler layers between edge applications and 5G core networks, and recommends how telcos can overcome the technical and business challenges of bringing the next generation of applications and services to life.

Click here to view the complete white paper: https://bit.ly/34yCjWW

To learn more about Akraino, blueprints or end user case stories, please visit the wiki: https://wiki.akraino.org.