Monthly Archives

October 2020

Pushing AI to the Edge (Part Two): Edge AI in Practice and What’s Next

Categories: Blog, LF Edge, Project EVE, Trend

Q&A with Jason Shepherd, LF Edge Governing Board member, Project EVE leader and VP of Ecosystem at ZEDEDA


This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

In Part One of this two-part Q&A series, we highlighted the new LF Edge taxonomy that publishes next week, along with some key considerations for edge AI deployments. In this installment, our questions turn to emerging use cases and key trends for the future.

To dig deeper into this budding space, we sat down with our Vice President of Ecosystem Development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice and some trends for the future.

What do you see as the most promising use cases for edge AI?

As highlighted in Part One, the reasons for deploying AI at the edge include balancing needs across the vectors of scalability, latency, bandwidth, autonomy, security and privacy. In a perfect world, all processing would be centralized; however, this breaks down in practice, and the need for AI (and ML) at the edge will only continue to grow with the explosion of devices and data.

Hands down, computer vision is the killer app for edge AI today due to the bandwidth associated with streaming video. The ability to apply AI to “see” events in the physical world enables immense opportunity for innovation in areas such as object recognition, safety and security, quality control, predictive maintenance and compliance monitoring.

Consider retail: computer vision solutions will usher in a new wave of personalized services in brick-and-mortar stores, providing associates with real-time insights on current customers in addition to better informing longer-term marketing decisions. Due to privacy concerns, the initial focus will be primarily on assessing shopper demographics (e.g., age, gender) and location, but increasingly we’ll see personalized shopping experiences based on individual identity with proper opt-in (often triggered through customer loyalty programs). This includes a trend toward new “experiential” shopping centers, where customers expect to give up some privacy when they walk in the door in exchange for a better experience.

While Amazon Go stores have led the trend for autonomous shopping environments, the use of computer-vision enabled self-service kiosks for grab-and-go checkout is growing rapidly overall. Given the recent health concerns with COVID-19, providers are rapidly shifting to making these solutions contactless by leveraging gesture control, instead of requiring interaction with a keypad or touch screen.

Computer vision use cases will often leverage sensor fusion, for example with barcode scans or radio-frequency identification (RFID) technology providing additional context for decision making in retail inventory management and point of sale (POS) systems. A camera can tell the difference between a T-shirt and a TV, but not the difference between large and medium sizes of the same shirt design. Still, perhaps eventually an AI model will be able to tell you if you have bad taste in clothing!
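To make the sensor-fusion point concrete, here is a minimal sketch in Python; the catalog contents and function names are hypothetical, and a real system would sit behind an actual vision model and RFID reader:

```python
# Hypothetical tag catalog mapping RFID tags to SKU details a camera can't see.
CATALOG = {
    "TAG-001": {"item": "t-shirt", "size": "M"},
    "TAG-002": {"item": "t-shirt", "size": "L"},
}

def fuse(vision_label, rfid_tag):
    """Combine a camera classification with an RFID read.

    The camera identifies *what* the object is; the RFID tag resolves
    details (size, exact SKU) that are invisible to the camera. If the
    two sources disagree, fall back to the vision label and flag it.
    """
    record = CATALOG.get(rfid_tag)
    if record and record["item"] == vision_label:
        return {**record, "confirmed": True}
    return {"item": vision_label, "size": None, "confirmed": False}
```

For instance, a camera label of "t-shirt" plus a read of TAG-002 resolves to the large size, while a mismatch between the two sources is flagged rather than silently trusted.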

Another key vertical that will benefit from computer vision at the edge is healthcare. At ZEDEDA, we’ve worked with a global provider that leverages AI models in their medical imaging machines, located within hospitals and provided as a managed service. In this instance, the service provider doesn’t own the network on which their machines are deployed so they need a zero-trust security model in addition to the right tools to orchestrate their hardware and software updates.

Another example where bandwidth drives a need for deploying AI at the IoT Edge is vibration analysis as part of a use case like predictive maintenance. Here, sampling rates of at least 1 kHz are common and can increase to 8–10 kHz and beyond, because these higher resolutions improve visibility into impending machine failures. This represents a significant amount of continuously streaming data that is cost-prohibitive to send directly to a centralized data center for analysis. Instead, inferencing models will commonly be deployed on compute hardware proximal to machines to analyze the vibration data in real time, backhauling only the events that highlight an impending failure.
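A rough sketch of this local filter-then-backhaul pattern, with an arbitrary RMS threshold and window size chosen purely for illustration (real deployments would use trained models or spectral analysis rather than a fixed threshold):

```python
import math

SAMPLE_RATE_HZ = 1000   # assumed sampling rate (the article cites >= 1 kHz)
RMS_THRESHOLD = 0.8     # arbitrary alarm threshold, for illustration only

def rms(window):
    """Root-mean-square level of one window of samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def events_to_backhaul(samples, window=256):
    """Scan fixed-size windows locally; emit only the over-threshold events."""
    events = []
    for i in range(0, len(samples) - window + 1, window):
        level = rms(samples[i:i + window])
        if level > RMS_THRESHOLD:
            events.append({"offset_s": i / SAMPLE_RATE_HZ, "rms": round(level, 3)})
    return events
```

Feeding it a stream that is mostly low-amplitude 50 Hz vibration with one high-amplitude window yields a single event record, so only that record, not the raw stream, leaves the site.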

Analysis for predictive maintenance will also commonly leverage sensor fusion, combining this vibration data with measurements of temperature and power (voltage and current). Computer vision is also increasingly being used for this use case: for example, the subtle wobble of a spinning motor shaft can be detected with sufficient camera resolution, and heat can be measured with thermal imaging sensors. Meanwhile, last I checked, voltage and current can’t be checked with a camera!

An example of edge AI served up by the Service Provider Edge is for cellular vehicle-to-everything (C-V2X) use cases. While latency-critical workloads such as controlling steering and braking will always run inside a vehicle, service providers will leverage AI models deployed on compute proximal to small cells in a 5G network within public infrastructure to serve up infotainment, augmented reality for vehicle heads-up displays, and traffic coordination. For the latter, these AI models can warn two cars that they are approaching a potentially dangerous situation at an intersection and even alert nearby pedestrians via their smartphones. As we continue to collaborate on foundational frameworks that support interoperability, it will open up possibilities to leverage more and more sensor fusion, bridging intelligence across different edge nodes to drive even more informed decisions.

We’re also turning to processing at the edge to minimize data movement and preserve privacy. When AI inferencing models are shrunk for use in constrained connected products or healthcare wearables, we can train local inferencing models to redact PII before data is sent to centralized locations for deeper analysis.
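As a toy illustration of edge-side redaction, the sketch below uses simplistic regex placeholders; production deployments would rely on trained PII-detection models as described above:

```python
import re

# Simplistic placeholder patterns for illustration only; these are not
# production-grade PII detection.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def redact(text):
    """Replace obvious PII locally, before the text leaves the device."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)
```

Only the redacted string is sent upstream for deeper analysis; the original never leaves the device.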

Who are the different stakeholders involved in accelerating adoption of edge AI?

Edge AI requires the efforts of a number of different industry players to come together. We need hardware OEMs and silicon providers for processing; cloud scalers to provide tools and datasets; telcos to manage the connectivity piece; software vendors to help productize frameworks and AI models; domain-expert system integrators to develop industry-specific models; and security providers to ensure the process is secure.

In addition to having the right stakeholders, it’s about building an ecosystem based on common, open frameworks for interoperability, with investment focused on the value-add on top. Today there is a plethora of choices for platforms and AI tool sets, which is confusing, but this reflects the state of the market more than any necessity. A key point of efforts like LF Edge is to work in the open source community to build more open, interoperable, consistent and trusted infrastructure and application frameworks so that developers and end users can focus on the surrounding value-add. Throughout the history of technology, open interoperability has always won out over proprietary strategies when it comes to scale.

In the long run, the most successful digital transformation efforts will be led by organizations that have the best domain knowledge, algorithms, applications and services, not those that reinvent foundational plumbing. This is why open source software has become such a critical enabler across enterprises of all sizes — facilitating the creation of de-facto standards and minimizing “undifferentiated heavy lifting” through a shared technology investment. It also drives interoperability which is key for realizing maximum business potential in the long term through interconnecting ecosystems… but that’s another blog for the near future!

How do people differentiate with AI in the long term?

Over time, AI software frameworks will become more standardized as part of foundational infrastructure, and the algorithms, domain knowledge and services on top will be where developers continue to meaningfully differentiate. We’ll see AI models for common tasks — for example assessing the demographics of people in a room, detecting license plate numbers, recognizing common objects like people, trees, bicycles and water bottles — become commodities over time. Meanwhile, programming to specific industry contexts (e.g. a specific part geometry for manufacturing quality control) will be where value is continually added. Domain knowledge will always be one of the most important aspects of any provider’s offering.

What are some additional prerequisites for making edge AI viable at scale?

In addition to having the right ecosystem including domain experts that can pull solutions together, a key factor for edge AI success is having a consistent delivery or orchestration mechanism for both compute and AI tools. The reality is that to date many edge AI solutions have been lab experiments or limited field trials, not yet deployed and tested at scale. PoC, party of one, your table is ready!

Meanwhile, as organizations start to scale their solutions in the field they quickly realize the challenges. From our experience at ZEDEDA, we consistently see that manual deployment of edge computing using brute-force scripting and command-line interface (CLI) interaction becomes cost-prohibitive for customers at around 50 distributed nodes. In order to scale, enterprises need to build on an orchestration solution that takes into account the unique needs of the distributed IoT edge in terms of diversity, resource constraints and security, and helps admins, developers and data scientists alike keep tabs on their deployments in the field. This includes having visibility into any potential issues that could lead to inaccurate analyses or total failure. Further, it’s important that this foundation is based on an open model to maximize potential in the long run.

Where is edge AI headed?

To date, much of the exploration involving AI at the edge has been focused on inferencing models — deployed after these algorithms have been trained with the scalable compute of the cloud. (P.S. for those of you who enjoy a good sports reference, think of training vs. inference as analogous to coaching vs. playing).

Meanwhile, we’re starting to see training and even federated learning selectively moving to the Service Provider and User Edges. Federated learning is an evolving space that seeks to balance the benefits of decentralization for reasons of privacy, autonomy, data sovereignty and bandwidth savings, while centralizing results from distributed data zones to eliminate regional bias.
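The aggregation step at the heart of federated learning can be sketched in a few lines. This illustrates the weighted-averaging idea popularized as FedAvg, with model weights flattened into plain lists for simplicity; real systems operate on full tensors and typically add secure aggregation:

```python
def federated_average(client_updates):
    """FedAvg-style aggregation: weighted average of client model weights.

    client_updates: list of (weights, num_samples) pairs, where weights is
    a flat list of floats trained locally on that client's private data.
    Only the weights leave each client; the raw data never does.
    """
    total_samples = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    averaged = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            # Clients with more local samples pull the average harder,
            # which helps counteract regional bias from small data zones.
            averaged[i] += w * (n / total_samples)
    return averaged
```

A client that trained on three times as much data contributes three times the weight to the aggregated model.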

The industry is also increasingly developing purpose-built silicon that can increase efficiencies amid the power and thermal constraints of small devices, and even support training as well as inference. This corresponds with the shift toward pushing more and more AI workloads onto edge devices. Because of this, it’s important to leverage device and application orchestration tools that are completely silicon-agnostic, as opposed to offerings from silicon makers that have a vested interest in locking you into their ecosystem.

Finally, we’ll see the lower boundary for edge AI increasingly extend into the Constrained Device Edge with the rise of “Tiny ML” — the practice of deploying small inferencing models optimized for highly constrained, microcontroller-based devices. An example of this is the “Hey Alexa” of an Amazon Echo that is recognized locally and subsequently opens the pipe to the cloud-based servers for a session. These Tiny ML algorithms will increasingly be used for localized analysis of simple voice and gesture commands, common sounds such as a gunshot or a baby crying, assessing location and orientation, environmental conditions, vital signs, and so forth.

To manage all of this complexity at scale, we’ll lean heavily on industry standardization, which will help us focus on value on top of common building blocks. Open source AI interoperability projects, such as ONNX, show great promise in helping the industry coalesce around a format so that others can focus on developing and moving models across frameworks and from cloud to edge. The Linux Foundation’s Trust over IP effort and emerging Project Alvarium will also help ease the process of transporting trusted data from devices to applications. This notion of pervasive data trust will lead to what I call the “Holy Grail of Digital” — selling and/or sharing data resources and services to/with people you don’t even know. Now this is scale!

In Closing

As the edge AI space develops, it’s important to avoid being locked into a particular tool set, instead opting to build a future-proofed infrastructure that accommodates a rapidly changing technology landscape and that can scale as you interconnect your business with other ecosystems. Here at ZEDEDA, our mission is to provide enterprises with an optimal solution for deploying workloads at the IoT Edge where traditional data center solutions aren’t applicable, and we’re doing it based on an open, vendor-neutral model that provides freedom of choice for hardware, AI framework, apps and clouds. We’re even integrating with major cloud platforms such as Microsoft Azure to augment their data services.

Reach out if you’re interested in learning more about how ZEDEDA’s orchestration solution can help you deploy AI at the IoT Edge today while keeping your options open for the future. We also welcome you to join us in contributing to Project EVE within LF Edge which is the open source foundation for our commercial cloud offering. The goal of the EVE community is to build the “Android of the IoT Edge” that can serve as a universal abstraction layer for IoT Edge computing — the only foundation you need to securely deploy any workload on distributed compute resources. To this end, a key next step for Project EVE is to extend Kubernetes to the IoT Edge, while taking into account the unique needs of compute resources deployed outside of secure data centers.

The success of AI overall — and especially edge AI — will require our concerted collaboration and alignment to move the industry forward while protecting us from potential misuse along the way. The future of technology is about open collaboration on undifferentiated plumbing so we can focus on value and build increasingly interconnected ecosystems that drive new outcomes and revenue streams. As one political figure famously said — “it takes a village!”

If you have questions or would like to chat with leaders in the project, join us on the LF Edge Slack (#eve or #eve-help) or subscribe to the email list. You can check out the documentation here.

Onboard edge computing devices with Secure Device Onboard and Open Horizon

Categories: Blog, Open Horizon, Secure Device Onboard

Written by Joe Pearson, Chair of the Open Horizon TSC and Technology Strategist for IBM

For many companies, setting up heterogeneous fleets of edge devices across remote sites has traditionally been a time-consuming and sometimes difficult process. At the Linux Foundation’s Open Networking & Edge Summit conference last month, IBM announced that Intel’s Secure Device Onboard (SDO) solution is now fully integrated into Open Horizon and IBM Edge Application Manager and available to developers as a tech preview.

The Intel-developed SDO enables low-touch bootstrapping of required software at a device’s initial power-on. For the Open Horizon project, this enables the agent software to be automatically and autonomously installed and configured. SDO technology is now being incorporated into a new industry onboarding standard being developed by the FIDO Alliance.

Developers can try this out by using the all-in-one version of Open Horizon: simply run a one-line script on a target edge compute device or VM to simulate powering up an SDO-enabled device and onboarding it.

Both Open Horizon and SDO recently joined the LF Edge umbrella, which aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system. Thanks to IBM’s participation in the LF Edge open source community, contributors in the community are helping advance the future of open edge computing solutions.

See how SDO works

Simplifying edge device onboarding

Our team uses the term “device onboarding” to describe the initial bootstrapping process of installing and configuring required software on an edge computing device. In the case of Open Horizon, that includes connecting it to the Horizon management hub services. We have simplified the software installation process by providing a one-liner script, so that a person can install and run a development version of Open Horizon and SDO on a laptop or in a small virtual machine.

Before SDO was available, the typical installation process required a person to open a secure connection to the device (sometimes on premises), manually install all of the software prerequisites, then install the Horizon agent, configure it, and register it with the management hub. With SDO support enabled, an administrator simply loads the voucher into the management hub when the device is purchased and then associates the configuration. When a technician powers on the device and connects it to the network, the device automatically finds the SDO services, presents the voucher, and downloads and installs the software.
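The flow described above can be modeled as a toy sketch; the class and method names here are illustrative only, not the actual SDO or Open Horizon APIs:

```python
class ManagementHub:
    """Toy model of voucher-based onboarding (not the real SDO interface)."""

    def __init__(self):
        self.vouchers = {}  # voucher id -> software configuration to install

    def import_voucher(self, voucher_id, config):
        # Admin step: load the ownership voucher received at purchase time
        # and associate the desired agent configuration with it.
        self.vouchers[voucher_id] = config

    def onboard(self, voucher_id):
        # Device step: at first power-on, the device presents its voucher;
        # a matching voucher yields the software/config to install, and an
        # unknown voucher is refused outright.
        if voucher_id not in self.vouchers:
            raise PermissionError("unknown voucher: onboarding refused")
        return self.vouchers[voucher_id]
```

The key property is that the technician in the field never touches credentials or configuration: everything the device needs was bound to its voucher ahead of time.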

Integrating SDO into Open Horizon

The Open Horizon project has created a repository specifically for integrating the SDO project components into the Open Horizon management hub services and CLI. The SDO rendezvous service runs alongside the management hub and provides a simple interface for bulk-loading import vouchers.

LF Edge and open source leadership

LF Edge continues to strive to ensure that edge computing solutions remain open. In May 2020, IBM contributed Open Horizon to LF Edge. With Intel also contributing SDO to LF Edge, this ensures that another vital component of a complete edge computing framework is also open source.

We’re excited to collaborate with Intel to expand the deployment of applications from open hybrid cloud environments down to the edge, making them accessible, secure, and scalable for the developer ecosystem and community. For more videos about Open Horizon, please visit LF Edge’s YouTube channel or the LF Edge Open Horizon playlist. If you have questions or would like to chat with leaders in the project, join us on the LF Edge Slack (#open-horizon, #open-horizon-docs, #sdo-general or #sdo-tsc).

EdgeX Foundry Challenge Shanghai 2020 was a success!

Categories: Blog, EdgeX Foundry, Event

Written by the co-chairs of the EdgeX Foundry China Project: Melvin Sun, Senior Marketing Manager for the IoT Group at Intel, and Gavin Lu, R&D Director, OCTO, at VMware

On Thursday, September 24, after more than two months of fierce competition, the EdgeX Foundry Challenge Shanghai 2020 culminated in the final roadshow. Companies, colleges, hacker teams and enthusiastic audiences from around the world participated in the live video broadcast.

Yingjie Lu, Senior Director of the IoT Group at Intel Corporation, and Jim White, CTO of IOTech Systems and Chair of the EdgeX Foundry Technical Steering Committee, delivered the opening and closing speeches for the final roadshow, respectively. Mr. Peng Jianzhen, Secretary General of the China Chain-store & Franchise Association (CCFA), and Mr. Shu Bin, an expert from China State Grid, made summarizing remarks on the teams in the commerce and industrial tracks, respectively.

The EdgeX Challenge Shanghai 2020 was an international hackathon event based on EdgeX Foundry. It was successfully kicked off as a joint effort among EdgeX community members Canonical, Dell Technologies, HP, Intel, IOTech, Tencent, Thundersoft, VMware and the Shanghai startup incubator Innospace.

In the age of the Internet of Things, collaboration is the foundation of all technological and business innovation. And such a spirit can be spread and practiced through the EdgeX community. Today, the EdgeX community members have collaborated extensively across the globe to drive technological development and business prosperity in the global IoT industry.

40 teams submitted proposals covering both the commerce and industrial tracks, including retail, asset tracking, manufacturing, environmental protection and energy conservation, utilities and building automation. The cutting-edge technologies adopted in their work, such as edge computing, artificial intelligence, 5G, sensors, blockchain, satellite communications, the Internet of Things and big data, fully demonstrate the strength of innovation and foreshadow the development and transformation of future life.

After careful selection by the judges, a total of 15 teams were chosen as finalists on July 24. Using devkits provided by Intel and the EdgeX challenge committee, these teams built use cases in various verticals, solving industry pain points and delivering deployable applications. The expert judges followed each team’s development process closely, supporting them with daily Q&As, weekly discussions and mid-term checkpoint meetings.

The hackathon ran smoothly in a tense yet methodical atmosphere, with strong support from the community. By mid-September, all 15 teams had submitted their final projects, culminating in the finals. Click here to watch a snapshot video of the EdgeX Challenge finalists preparing for the competition.

List of teams for roadshow:

Commerce Track

Seq | Code | Team name | Use case | Time
1 | c18 | HYD Miracle Innovation | Apparel store edge computing | 9:12
2 | c27 | DreamTech | Mortgage asset monitoring based on blockchain + edge computing | 9:24
3 | c16 | USST,YES! | Retail store edge | 9:36
4 | c19 | Breakthrough Studio | Scenic intelligent navigation | 9:48
5 | c26 | i-Design | Edge-cloud based far-ocean intelligent logistics/monitoring | 10:00
6 | c07 | Starlink team from JD.com | Retail store heatmap & surveillance | 10:12
7 | c03 | AI Cooling | Building automation | 10:24
8 | c08 | Sugarinc Intelligence | Smart building & office management | 10:36
9 | c10 | Five Card Stud fighting force | Automobile 4S (sales, service, spare parts, survey) store | 10:48

Industrial Track

Seq | Code | Team name | Use case | Time
1 | i07 | VSE | Driving school surveillance & analytics | 11:00
2 | i09 | Smart eyes recognize energy | CV-based intelligent meter-recognition system | 11:12
3 | i01 | Baowu Steel | Defect detection in steel manufacturing | 11:24
4 | i08 | CyberIOT | Guardians of electrical and mechanical equipment | 11:36
5 | i10 | Power Blazers | Autotuning system for orbital wind power systems | 11:48
6 | i06 | Jiangxing Intelligence | Intelligent edge gateway for industry | 12:00

 

Note: The sequence of the roadshow above was determined by a drawing of lots by representatives of each participating team.

The live roadshow was successfully broadcast on September 24, with all the finalist teams presenting their proposals and interacting with the judges. To ensure a comprehensive and professional assessment, the organizers invited expert judges with strong, diverse backgrounds, including:

– Business experts: Gao Ge, General Manager of Bailian Group’s Science and Innovation Center; Violet Lu, IT Director of Fast Retailing Ltd.; and Yang Feng, Head of Wireless and Internet of Things in Tencent’s Network Platform Department

– EdgeX & AIoT experts: Henry Lau, Distinguished Technical Expert on Retail Solutions at HP Inc.; Gavin Lu, Director at the VMware (China) R&D Center; Jim Wang, Senior IoT Expert at Intel Corporation; Chris Wang, Director of Artificial Intelligence at Intel; Jack Xu and Mi Wu, IoT experts at Intel; and Xiaojing Xu, Senior Expert at Thundersoft Corporation

– Representatives from the investment community: Riel Capital, Intel Capital, VMware Capital … In addition, the quality of the teams and proposals attracted a large number of investment institutions, including Qiming Venture Capital, Morningside Capital and ZGC Qihang Investment; these professional investors made up the observer group for the live roadshow.

During the live-streamed roadshow, all of the teams’ work demonstrated the unique advantages of EdgeX in IoT and edge computing development. The judges provided feedback and suggestions on creativity, use of computer vision, impact, viability, usefulness and simplicity, as well as on documentation and roadshow performance. Observers and fans from all over the world also shared their opinions through the messaging and comment sections, which made the competition exciting.

The live roadshow has ended, but the organizing committee is still collecting the judges’ questions and compiling the teams’ answers. Based on this information, the competition will carefully score the ideas and demos to select competitive, deployable solutions to customers’ pain points. Meanwhile, we will hold an award ceremony at the Global Technology Transfer Conference on October 29, where the winners will receive a total of 90,000 RMB in cash prizes and rich rewards for their efforts.

Video playback of the final roadshow will be available on the Bilibili website; if you are interested in watching it, please follow the EdgeX China Project channel on Bilibili. For more about the EdgeX Foundry China Project, visit the wiki at https://wiki.edgexfoundry.org/display/FA/China+Project.

Akraino White Paper: Cloud Interfacing at the Telco 5G Edge

Categories: Akraino, Akraino Edge Stack, Blog

Written by Aaron Williams, LF Edge Developer Advocate

As cloud computing migrates from large centralized data centers to the Service Provider Edge’s Access Edge (i.e., server-based devices at the telco tower) to be closer to the end user and reduce latency, more applications and use cases become possible. Combine this with the widespread rollout of 5G and its speed, and the last mile instantly becomes extremely fast. Yet if this new edge stack is not constructed correctly, the promise of these technologies will not be realized.

LF Edge’s Akraino project has dug into these issues and created its second white paper, Cloud Interfacing at the Telco 5G Edge. In this just-released white paper, Akraino analyzes the issues, proposes detailed enabler layers between edge applications and 5G core networks, and makes recommendations for how telcos can overcome these technical and business challenges as they bring the next generation of applications and services to life.

Click here to view the complete white paper: https://bit.ly/34yCjWW

To learn more about Akraino, blueprints or end user case stories, please visit the wiki: https://wiki.akraino.org.