Pushing AI to the Edge (Part Two): Edge AI in Practice and What’s Next

Categories: Blog, LF Edge, Project EVE, Trend

Q&A with Jason Shepherd, LF Edge Governing Board member, Project EVE leader and VP of Ecosystem at ZEDEDA


This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

In Part One of this two-part Q&A series, we highlighted the new LF Edge taxonomy that publishes next week, along with some key considerations for edge AI deployments. In this installment, our questions turn to emerging use cases and key trends for the future.

To discuss more on this budding space, we sat down with our Vice President of ecosystem development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice and some trends for the future.

What do you see as the most promising use cases for edge AI?

As highlighted in Part One, the reasons for deploying AI at the edge include balancing needs across the vectors of scalability, latency, bandwidth, autonomy, security and privacy. In a perfect world, all processing would be centralized; however, this breaks down in practice, and the need for AI (and ML) at the edge will only continue to grow with the explosion of devices and data.

Hands down, computer vision is the killer app for edge AI today due to the bandwidth associated with streaming video. The ability to apply AI to “see” events in the physical world enables immense opportunity for innovation in areas such as object recognition, safety and security, quality control, predictive maintenance and compliance monitoring.

Considering retail — computer vision solutions will usher in a new wave of personalized services in brick and mortar stores that provide associates with real-time insights on current customers in addition to better informing longer term marketing decisions. Due to privacy concerns, the initial focus will be primarily around assessing shopper demographics (e.g., age, gender) and location but increasingly we’ll see personalized shopping experiences based on individual identity with proper opt-in (often triggered through customer loyalty programs). This includes a trend for new “experiential” shopping centers, for which customers expect to give up some privacy when they walk in the door in exchange for a better experience.

While Amazon Go stores have led the trend for autonomous shopping environments, the use of computer-vision enabled self-service kiosks for grab-and-go checkout is growing rapidly overall. Given the recent health concerns with COVID-19, providers are rapidly shifting to making these solutions contactless by leveraging gesture control, instead of requiring interaction with a keypad or touch screen.

Computer vision use cases will often leverage sensor fusion, for example with barcode scans or radio-frequency identification (RFID) technology providing additional context for decision making in retail inventory management and point of sale (POS) systems. A camera can tell the difference between a T-shirt and a TV, but not the difference between large and medium sizes of the same shirt design. Still, perhaps eventually an AI model will be able to tell you if you have bad taste in clothing!
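To make the fusion concrete, here is a minimal sketch (with hypothetical EPC tags and labels, not any specific product's API) of how an RFID read can supply the size attribute that a camera alone can't provide:

```python
# The camera classifies the item type; only the RFID read disambiguates size.
camera_detection = "t-shirt"                    # label from a vision model
rfid_catalog = {"3000-0001": ("t-shirt", "M"),  # hypothetical EPC-to-SKU table
                "3000-0002": ("t-shirt", "L")}

def fuse(camera_label, epc):
    """Combine a camera label with an RFID read; fall back to the
    camera alone when the tag is unknown or the sensors disagree."""
    item, size = rfid_catalog.get(epc, (camera_label, "unknown"))
    if item != camera_label:
        return (camera_label, "unknown")
    return (item, size)

print(fuse(camera_detection, "3000-0002"))  # → ('t-shirt', 'L')
```

The same pattern generalizes to any context source (barcode scans, scale data) that fills in attributes the vision model cannot see.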

Another key vertical that will benefit from computer vision at the edge is healthcare. At ZEDEDA, we’ve worked with a global provider that leverages AI models in their medical imaging machines, located within hospitals and provided as a managed service. In this instance, the service provider doesn’t own the network on which their machines are deployed so they need a zero-trust security model in addition to the right tools to orchestrate their hardware and software updates.

Another example where bandwidth drives a need for deploying AI at the IoT Edge is vibration analysis as part of a use case like predictive maintenance. Here, sampling rates of at least 1 kHz are common and can increase to 8–10 kHz and beyond, because these higher resolutions improve visibility into impending machine failures. This represents a significant amount of continuously streaming data that is cost-prohibitive to send directly to a centralized data center for analysis. Instead, inferencing models will commonly be deployed on compute hardware proximal to machines to analyze the vibration data in real time, backhauling only events that highlight an impending failure.
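As an illustrative sketch of this pattern (not any vendor's actual pipeline), the loop below analyzes a vibration stream locally and backhauls only the windows whose RMS level suggests trouble; the sample rate and alarm threshold are assumed values:

```python
import math

SAMPLE_RATE_HZ = 8_000          # within the 8-10 kHz range cited above
WINDOW = SAMPLE_RATE_HZ // 10   # 100 ms analysis windows
RMS_ALARM_THRESHOLD = 0.5       # hypothetical amplitude limit for this machine

def rms(window):
    """Root-mean-square amplitude of one window of samples."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def events_to_backhaul(samples, threshold=RMS_ALARM_THRESHOLD):
    """Analyze the full stream locally; emit only the windows that look
    like an impending failure, instead of shipping the raw stream."""
    events = []
    for start in range(0, len(samples) - WINDOW + 1, WINDOW):
        level = rms(samples[start:start + WINDOW])
        if level > threshold:
            events.append({"offset_s": start / SAMPLE_RATE_HZ,
                           "rms": round(level, 3)})
    return events

# One second of mostly quiet signal with a 100 ms burst of high vibration:
quiet = [0.01] * SAMPLE_RATE_HZ
quiet[4000:4800] = [0.9] * 800
print(events_to_backhaul(quiet))  # a single event, not 8,000 samples
```

Only the event records cross the network; the raw 8 kHz stream never leaves the edge node.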

Analysis for predictive maintenance will also commonly leverage sensor fusion by combining this vibration data with measurements for temperature and power (voltage and current). Computer vision is also increasingly being used for this use case: for example, the subtle wobble of a spinning motor shaft can be detected with sufficient camera resolution, and heat can be measured with thermal imaging sensors. Meanwhile, last I checked, voltage and current can’t be checked with a camera!

An example of edge AI served up by the Service Provider Edge is for cellular vehicle-to-everything (C-V2X) use cases. While latency-critical workloads such as controlling steering and braking will always be run inside of a vehicle, service providers will leverage AI models deployed on compute proximal to small cells in a 5G network within public infrastructure to serve up infotainment, Augmented Reality for vehicle heads-up displays and coordinating traffic. For the latter, these AI models can warn two cars that they are approaching a potentially dangerous situation at an intersection and even alert nearby pedestrians via their smartphones. As we continue to collaborate on foundational frameworks that support interoperability it will open up possibilities to leverage more and more sensor fusion that bridges intelligence across different edge nodes to help drive even more informed decisions.

We’re also turning to processing at the edge to minimize data movement and preserve privacy. When AI inferencing models are shrunk for use in constrained connected products or healthcare wearables, we can train local inferencing models to redact PII before data is sent to centralized locations for deeper analysis.
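A minimal sketch of this redact-before-upload flow, using simple regex patterns as a stand-in for a trained on-device redaction model (the patterns and labels here are illustrative, not a real PII taxonomy):

```python
import re

# Hypothetical stand-ins for a small on-device inferencing model: in
# practice a trained model would flag PII; simple patterns illustrate
# the flow of redacting locally before anything leaves the device.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_locally(record: str) -> str:
    """Run redaction at the edge so PII never leaves the device."""
    for label, pattern in PII_PATTERNS.items():
        record = pattern.sub(f"[{label}]", record)
    return record

def upload(record: str) -> str:
    """Placeholder for sending data to a centralized service;
    only the already-redacted record ever reaches it."""
    return redact_locally(record)

print(upload("Patient jane.doe@example.com, SSN 123-45-6789, HR 72 bpm"))
# The vitals survive for deeper analysis; the identifiers do not.
```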

Who are the different stakeholders involved in accelerating adoption of edge AI?

Edge AI requires the efforts of a number of different industry players to come together. We need hardware OEMs and silicon providers for processing; cloud scalers to provide tools and datasets; telcos to manage the connectivity piece; software vendors to help productize frameworks and AI models; domain-expert system integrators to develop industry-specific models; and security providers to ensure the process is secure.

In addition to having the right stakeholders, it’s about building an ecosystem based on common, open frameworks for interoperability, with investment focused on the value add on top. Today there is a plethora of platforms and AI tool sets to choose from, which is confusing, but this reflects the current state of the market more than necessity. A key point of efforts like LF Edge is to work in the open source community to build more open, interoperable, consistent and trusted infrastructure and application frameworks so developers and end users can focus on surrounding value add. Throughout the history of technology, open interoperability has always won out over proprietary strategies when it comes to scale.

In the long run, the most successful digital transformation efforts will be led by organizations that have the best domain knowledge, algorithms, applications and services, not those that reinvent foundational plumbing. This is why open source software has become such a critical enabler across enterprises of all sizes — facilitating the creation of de-facto standards and minimizing “undifferentiated heavy lifting” through a shared technology investment. It also drives interoperability which is key for realizing maximum business potential in the long term through interconnecting ecosystems… but that’s another blog for the near future!

How do people differentiate with AI in the long term?

Over time, AI software frameworks will become more standardized as part of foundational infrastructure, and the algorithms, domain knowledge and services on top will be where developers continue to meaningfully differentiate. We’ll see AI models for common tasks — for example assessing the demographics of people in a room, detecting license plate numbers, recognizing common objects like people, trees, bicycles and water bottles — become commodities over time. Meanwhile, programming to specific industry contexts (e.g. a specific part geometry for manufacturing quality control) will be where value is continually added. Domain knowledge will always be one of the most important aspects of any provider’s offering.

What are some additional prerequisites for making edge AI viable at scale?

In addition to having the right ecosystem including domain experts that can pull solutions together, a key factor for edge AI success is having a consistent delivery or orchestration mechanism for both compute and AI tools. The reality is that to date many edge AI solutions have been lab experiments or limited field trials, not yet deployed and tested at scale. PoC, party of one, your table is ready!

Meanwhile, as organizations start to scale their solutions in the field they quickly realize the challenges. From our experience at ZEDEDA, we consistently see that manual deployment of edge computing using brute-force scripting and command-line interface (CLI) interaction becomes cost-prohibitive for customers at around 50 distributed nodes. In order to scale, enterprises need to build on an orchestration solution that takes into account the unique needs of the distributed IoT edge in terms of diversity, resource constraints and security, and helps admins, developers and data scientists alike keep tabs on their deployments in the field. This includes having visibility into any potential issues that could lead to inaccurate analyses or total failure. Further, it’s important that this foundation is based on an open model to maximize potential in the long run.
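The difference between brute-force scripting and orchestration is essentially declarative desired state plus a reconciliation loop. A toy sketch of that idea (all names illustrative, not ZEDEDA's API):

```python
# Instead of scripting each node by hand, declare the desired state once
# and let the orchestrator converge every node in the fleet toward it.
desired = {"app": "vibration-inference", "version": "2.1"}

# A simulated fleet of 50 nodes, some already upgraded, most not:
fleet = {f"node-{i:03d}": {"app": "vibration-inference",
                           "version": "2.0" if i % 3 else "2.1"}
         for i in range(50)}

def reconcile(fleet, desired):
    """Return the nodes that still differ from the desired state,
    as an admin dashboard would surface them."""
    return [name for name, state in fleet.items() if state != desired]

stale = reconcile(fleet, desired)
print(f"{len(stale)} of {len(fleet)} nodes need the 2.1 rollout")
```

At 50 nodes the scripted approach is already painful; a declarative loop like this scales the same way at 5,000.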

Where is edge AI headed?

To date, much of the exploration involving AI at the edge has been focused on inferencing models — deployed after these algorithms have been trained with the scalable compute of the cloud. (P.S. for those of you who enjoy a good sports reference, think of training vs. inference as analogous to coaching vs. playing).

Meanwhile, we’re starting to see training and even federated learning selectively moving to the Service Provider and User Edges. Federated learning is an evolving space that seeks to balance the benefits of decentralization for reasons of privacy, autonomy, data sovereignty and bandwidth savings, while centralizing results from distributed data zones to eliminate regional bias.
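A minimal federated-averaging sketch illustrates the idea: each zone trains on data that never leaves it, and only model parameters travel to the aggregator. The 1-D least-squares model and learning rate are assumptions chosen for illustration:

```python
def local_update(weights, local_data, lr=0.01):
    """One gradient-descent step on a 1-D least-squares model (y = w*x),
    using only data that stays inside the zone."""
    grad = sum(2 * (weights * x - y) * x for x, y in local_data) / len(local_data)
    return weights - lr * grad

def federated_round(global_w, zones):
    """Each zone updates locally; only the weights travel back to be
    averaged, which also washes out any single region's bias."""
    updates = [local_update(global_w, data) for data in zones]
    return sum(updates) / len(updates)

# Three data zones whose local data all follow y = 2x, sampled over
# different ranges (a stand-in for regional variation):
zones = [[(x, 2 * x) for x in range(1, 4)],
         [(x, 2 * x) for x in range(4, 7)],
         [(x, 2 * x) for x in range(7, 10)]]

w = 0.0
for _ in range(50):
    w = federated_round(w, zones)
print(round(w, 3))  # converges toward the shared slope of 2
```

Real systems (and real privacy guarantees) are far more involved, but the shape of the exchange, parameters out and parameters back, is the same.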

The industry is also increasingly developing purpose-built silicon that can increase efficiencies amid power and thermal constraints in small devices and even support either training or inference. This corresponds with the shift toward pushing more and more AI workloads onto edge devices. Because of this, it’s important to leverage device and application orchestration tools that are completely agnostic to silicon, as opposed to offerings from silicon makers that have a vested interest in locking you into their ecosystem.

Finally, we’ll see the lower boundary for edge AI increasingly extend into the Constrained Device Edge with the rise of “Tiny ML” — the practice of deploying small inferencing models optimized for highly constrained, microcontroller-based devices. An example of this is the “Hey Alexa” of an Amazon Echo that is recognized locally and subsequently opens the pipe to the cloud-based servers for a session. These Tiny ML algorithms will increasingly be used for localized analysis of simple voice and gesture commands, common sounds such as a gunshot or a baby crying, assessing location and orientation, environmental conditions, vital signs, and so forth.
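The "wake word" pattern can be sketched as a cheap, always-on local gate that opens the cloud pipe only when it fires; here a simple energy threshold stands in for a real microcontroller-sized model, and the frame size and threshold are assumed values:

```python
FRAME = 160          # 10 ms of audio at 16 kHz
WAKE_ENERGY = 0.2    # hypothetical tuned threshold for the local gate

def frame_energy(frame):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in frame) / len(frame)

def sessions_opened(audio):
    """Count how often the always-on local gate would open a
    (comparatively expensive) cloud session."""
    opened = 0
    for start in range(0, len(audio) - FRAME + 1, FRAME):
        if frame_energy(audio[start:start + FRAME]) > WAKE_ENERGY:
            opened += 1
    return opened

silence = [0.01] * FRAME * 9   # low-level background noise
shout = [0.8] * FRAME          # one loud frame, e.g. a spoken command
print(sessions_opened(silence + shout + silence))
```

A real Tiny ML detector runs a small quantized neural network instead of an energy check, but the economics are identical: the constrained device filters continuously so the network and cloud are touched only on genuine events.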

To manage all of this complexity at scale, we’ll lean heavily on industry standardization, which will help us focus on value on top of common building blocks. Open source AI interoperability projects, such as ONNX, show great promise in helping the industry coalesce around a format so that others can focus on developing and moving models across frameworks and from cloud to edge. The Linux Foundation’s Trust over IP effort and emerging Project Alvarium will also help ease the process of transporting trusted data from devices to applications. This notion of pervasive data trust will lead to what I call the “Holy Grail of Digital” — selling and/or sharing data resources and services to/with people you don’t even know. Now this is scale!

In Closing

As the edge AI space develops, it’s important to avoid being locked into a particular tool set, instead opting to build a future-proofed infrastructure that accommodates a rapidly changing technology landscape and that can scale as you interconnect your business with other ecosystems. Here at ZEDEDA, our mission is to provide enterprises with an optimal solution for deploying workloads at the IoT Edge where traditional data center solutions aren’t applicable, and we’re doing it based on an open, vendor-neutral model that provides freedom of choice for hardware, AI framework, apps and clouds. We’re even integrating with major cloud platforms such as Microsoft Azure to augment their data services.

Reach out if you’re interested in learning more about how ZEDEDA’s orchestration solution can help you deploy AI at the IoT Edge today while keeping your options open for the future. We also welcome you to join us in contributing to Project EVE within LF Edge which is the open source foundation for our commercial cloud offering. The goal of the EVE community is to build the “Android of the IoT Edge” that can serve as a universal abstraction layer for IoT Edge computing — the only foundation you need to securely deploy any workload on distributed compute resources. To this end, a key next step for Project EVE is to extend Kubernetes to the IoT Edge, while taking into account the unique needs of compute resources deployed outside of secure data centers.

The success of AI overall — and especially edge AI — will require our concerted collaboration and alignment to move the industry forward while protecting us from potential misuse along the way. The future of technology is about open collaboration on undifferentiated plumbing so we can focus on value and build increasingly interconnected ecosystems that drive new outcomes and revenue streams. As one political figure famously said — “it takes a village!”

If you have questions or would like to chat with leaders in the project, join us on the LF Edge Slack (#eve or #eve-help) or subscribe to the email list. You can check out the documentation here.

On the “Edge” of Something Great

Categories: Akraino, Announcement, Baetyl, Blog, EdgeX Foundry, Fledge, Home Edge, LF Edge, Open Horizon, Project EVE, Secure Device Onboard, State of the Edge

As we kick off Open Networking and Edge Summit today, we are celebrating the edge by sharing the results of our first-ever LF Edge Member Survey and insight into our focus areas for next year.

LF Edge, which will celebrate its 2nd birthday in January 2021, sent the survey to our more than 75 member companies and liaisons. The survey featured about 15 questions that collected details about open source and edge computing, how members of the LF Edge community are using edge computing and what project resources are most valuable. 

Why did you choose to participate in LF Edge?

The Results Are In

The Top 3 reasons to participate in LF Edge are market creation and adoption acceleration, collaboration with peers and industry influence. 

  • More than 71% joined LF Edge for market creation and adoption acceleration
  • More than 57% indicated they joined LF Edge for business development
  • More than 62% have either deployed products or services based on LF Edge Projects or plan to do so later this year, next year or within the next 3-5 years

Have you deployed products or services based on LF Edge Projects?

This feedback corresponds with what we’re seeing in some of the LF Edge projects. For example, our Stage 3 Projects Akraino and EdgeX Foundry are already being deployed. Earlier this summer, Akraino launched its Release 3 (R3) that delivers a fully functional open source edge stack that enables a diversity of edge platforms across the globe. With R3, Akraino brings deployments and PoCs from a swath of global organizations including Aarna Networks, China Mobile, Equinix, Futurewei, Huawei, Intel, Juniper, Nokia, NVIDIA, Tencent, WeBank, WiPro, and more. 

Additionally, EdgeX Foundry surpassed 7 million container downloads last month, and its global ecosystem of complementary products and services continues to grow. As a result, EdgeX Foundry is seeing more end-user case studies from big companies like Accenture, ThunderSoft and Jiangxing Intelligence.

Have you gained insight into end user requirements through open collaboration?


Collaboration with peers

The edge today is a solution-specific story. Equipment and architectures are purpose-built for specific use cases, such as 5G and network function virtualization, next-generation CDNs and cloud, and streaming games. This is why collaboration is key, and more than 70% of respondents said they joined LF Edge to collaborate with peers. Here are a few activities at ONES that showcase cross-project and member collaboration.

Additionally, LF Edge created an LF Edge Vertical Solutions Group that is working to enable easily customized deployments based on market/vertical requirements. In fact, we are hosting an LF Edge End User Community Event on October 1 that provides a platform for discussing the utilization of LF Edge Projects in real-world applications. The goal of these sessions is to educate the LF Edge community (both new and existing) to make sure we appropriately tailor the output of our project collaborations to meet end user needs. Learn more.

Industry Influence

More than 85% of members indicated they have gained insights into end user requirements through open collaboration. A common definition of the edge is gaining momentum. Community efforts such as LF Edge and State of the Edge’s assets, the Open Glossary of Edge Computing, and the Edge Computing Landscape are providing cohesion and unifying the industry. In fact, LF Edge members across all nine projects collaborated to create an industry roadmap that is being supported by global tech giants and start-ups alike.


Where do we go from here? 

When asked, LF Edge members didn’t hold back. They want more. They want to see more of everything – cross-project collaboration, end user events and communication, use cases, open source collaboration with other liaisons. As we head into 2021, LF Edge will continue to lay the groundwork for markets like cloud native, 5G, and edge for  more open deployments and collaboration.  


LF Edge Member Spotlight: NetFoundry

Categories: Blog, EdgeX Foundry, LF Edge, Member Spotlight

The LF Edge community comprises a diverse set of member companies and people that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sit down with Jim Clardy, Co-Founder of NetFoundry, who leads Global Cloud Partners and Alliances, to discuss the importance of open source, collaborating with industry leaders in edge computing and the impact of being a part of the LF Edge ecosystem.

Please tell us a little about your organization.

NetFoundry provides the leading zero trust networking platform, offered as Network-as-a-Service (NaaS), to connect distributed applications, users, devices and locations through an optimized global fabric. This enables solutions and applications, ranging from edge to cloud, to easily embed zero trust networking. Developers can embed secure, programmable, private, application-specific networking into their apps using the open source Ziti software (Ziti.dev), which NetFoundry built and remains the leading contributor to.


Why is your organization adopting an open source approach?

NetFoundry is built on open source Ziti. The next paradigm in networking is “networking as code” plus zero trust. With the open source Ziti SDKs, developers can embed private networking into apps with a few lines of code. Ziti enables a new networking paradigm that greatly reduces the cost and complexity of networking while implementing zero-trust, application-embedded connectivity, making it the leading open source platform for creating zero trust network connectivity over the Internet.

Why did you join LF Edge and what sort of impact do you think it has on the industry?

We believe open source communities have the power to shape technologies and markets. In addition to LF Edge, we are members of the Linux Foundation, EdgeX Foundry, and CNCF communities.

What do you see as the top benefits of being part of the LF Edge community?

Accelerating the next paradigm in networking where networking as code and zero trust become ubiquitous. We believe networking will be transformed with cloud-orchestrated interoperability fueled by open source communities like LF Edge.

What contributions has your team made (or plans to make) to the community/ecosystem through LF Edge participation?

NetFoundry built and is the leading contributor to open source Ziti software, and we are excited to build the open Ziti community. NetFoundry is contributing code to open Ziti regularly.

What do you think sets LF Edge apart from other industry alliances?

You are able to draw on the Linux Foundation and related ecosystem of communities and contributors – there is a massive and unstoppable network effect created by LF Edge.

How might LF Edge help your business?

Accelerate the development of the Ziti project and community.


What advice would you give to someone considering joining the LF Edge community?

Don’t wait – do it today.

Learn more about NetFoundry here.

Learn more about open Ziti here.

Get started with Ziti on GitHub.

To find out more about our members or how to join LF Edge, click here. Additionally, if you have questions or comments, visit the  LF Edge Slack to share your thoughts and engage with community members.


LF Edge Demos at Open Networking & Edge Summit

Categories: Blog, EdgeX Foundry, Event, Fledge, LF Edge, Open Horizon, Project EVE, Secure Device Onboard

Open Networking & Edge Summit, which takes place virtually on September 28-30, is co-sponsored by LF Edge, the Linux Foundation and LF Networking. With thousands expected to attend, ONES will be the epicenter of edge, networking, cloud and IoT. If you aren’t registered yet – it takes two minutes to register for US$50 – click here.

Several LF Edge members will be at the conference leading discussions about trends, presenting use cases and sharing best practices. For a list of LF Edge-focused sessions, click here and add them to your schedule. LF Edge will also host a pavilion – in partnership with our sister organization LF Networking – that will showcase demos, including the debut of two new ones featuring collaborations between Project EVE and Fledge, and between Open Horizon and Secure Device Onboard. Check out the sneak peek of the demos below:

Managing Industrial IoT Data Using LF Edge (Fledge, EVE)

Presented by FLIR, Dianomic, OSIsoft and ZEDEDA, and making its debut at ONES, this demo showcases the strength of Project EVE and Fledge. The demo will show how the two open source projects work together to securely manage, connect, aggregate, process, buffer and forward data from any sensor, machine or PLC to existing OT systems and any cloud. Specifically, it will show FLIR IR camera video and data feeds being managed as described.


Real-Time Sensor Fusion for Loss Detection (EdgeX Foundry):

Presented by LF Edge members HP, Intel and IOTech, this demo showcases the strength of the Open Retail Initiative and EdgeX Foundry. Learn how different sensor devices can use LF Edge’s EdgeX Foundry open middleware framework to optimize retail operations and detect loss at checkout. The sensor fusion is implemented using a modular approach, combining point-of-sale, computer vision, RFID and scale data into a PoC for loss prevention.

This demo was featured at the National Retail Federation Show in January. More details about the demo can be found in HP’s blog and Intel’s blog.


Low-touch automated onboarding and application delivery with Open Horizon and Secure Device Onboard

Presented by IBM and Intel, this demo features two of the newest projects accepted into the LF Edge ecosystem – Secure Device Onboard was announced in July while Open Horizon was announced in April.

An OEM or ODM can generate a voucher with SDO utilities that is tied to a specific device. Upon purchase, they can send the voucher to the purchaser. With LF Edge’s Open Horizon and Secure Device Onboard integration, an administrator can load the voucher into Open Horizon and pre-register the device. Once the device is powered on and connected to the network, it will automatically authenticate, download and install the Open Horizon agent, and begin negotiation to receive and run relevant workloads.
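That sequence can be sketched as follows, with hypothetical helper functions standing in for the real SDO and Open Horizon tooling (this illustrates the flow only, not either project's actual API):

```python
def oem_generate_voucher(device_id, owner_pubkey):
    """OEM ties an ownership voucher to a specific device at manufacture."""
    return {"device": device_id, "owner": owner_pubkey}

def admin_preregister(registry, voucher):
    """Administrator loads the voucher into the orchestrator before
    the device is ever powered on."""
    registry[voucher["device"]] = voucher

def device_first_boot(registry, device_id, presented_key):
    """On first boot the device authenticates against its voucher; only
    then does it pull the agent and begin workload negotiation."""
    voucher = registry.get(device_id)
    if voucher is None or voucher["owner"] != presented_key:
        return "rejected"
    return "agent installed, workloads negotiated"

registry = {}
admin_preregister(registry, oem_generate_voucher("edge-cam-42", "owner-key"))
print(device_first_boot(registry, "edge-cam-42", "owner-key"))
```

The point of the design is that no human touches the device after power-on: authentication, agent install and workload delivery all follow automatically from the pre-registered voucher.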

For more information about ONES, visit the main website: https://events.linuxfoundation.org/open-networking-edge-summit-north-america/. 

Pushing AI to the Edge (Part One): Key Considerations for AI at the Edge

Categories: Blog, LF Edge, Project EVE, State of the Edge, Trend

Q&A with Jason Shepherd, LF Edge Governing Board member and VP of Ecosystem at ZEDEDA

This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

This two-part blog provides more insights into what’s becoming a hot topic in the AI market — the edge. To discuss more on this budding space, we sat down with our Vice President of ecosystem development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice and some trends for the future.


Chart defining the categories within the edge, as defined by LF Edge

Image courtesy of LF Edge

LF Edge’s Akraino Project Release 3 Now Available, Unifying Open Source Blueprints Across MEC, AI, Cloud and Telecom Edge

Categories: Akraino, Announcement, LF Edge

    • 6 new R3 blueprints (for a total of 20) covering use cases across Telco, Enterprise, IoT, Cloud and more
    • Akraino blueprints cover areas including MEC, AI/ML, Cloud, Connected Vehicle, AR/VR, Android Cloud Native, SmartNICs, and Telco Core and Open-RAN, with ongoing support for R1-R2 blueprints and more
    • Community delivers open edge API specifications, via a new white paper, to standardize across devices, applications (cloud native), orchestration and multi-cloud

SAN FRANCISCO – August 12, 2020 – LF Edge, an umbrella organization within the Linux Foundation that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced the availability of Akraino Release 3 (“Akraino R3”). Akraino’s third and most mature release to date delivers fully functional edge solutions, implemented across global organizations, to enable a diversity of edge deployments across the globe. New blueprints include a focus on MEC, AI/ML, and Cloud edge. In addition, the community authored the first iteration of a new white paper to bring common open edge API standards to align the industry.

Launched in 2018, and now a Stage 3 (or “Impact” stage) project under the LF Edge umbrella, Akraino Edge Stack delivers an open source software stack that supports a high-availability cloud stack optimized for edge computing systems and applications. Designed to improve the state of carrier edge networks, edge cloud infrastructure for enterprise edge, and over-the-top (OTT) edge, it enables flexibility to scale edge cloud services quickly, maximize applications and functions supported at the edge, and to improve the reliability of systems that must be up at all times. 

“Akraino has evolved to unify edge blueprints across use cases,” said Arpit Joshipura, general manager, Networking, Automation, Edge and IoT, the Linux Foundation. “With a growing set of blueprints that enable more and more use cases, we are seeing the power of open source impact every aspect of the edge and how the world accesses and consumes information.”  

About Akraino R3

Akraino Release 3 (R3) delivers a fully functional open source edge stack that enables a diversity of edge platforms across the globe. With R3, Akraino brings deployments and PoCs from a swath of global organizations including Aarna Networks, China Mobile, Equinix, Futurewei, Huawei, Intel, Juniper, Nokia, NVIDIA, Tencent, WeBank, WiPro, and more.

Akraino enables innovative support for new levels of flexibility that scale 5G, industrial IoT, telco, and enterprise edge cloud services quickly, by delivering community-vetted and tested edge cloud blueprints to deploy edge services.  New use cases and new and existing blueprints provide an edge stack for Connected Vehicle, AR/VR, AI at the Edge, Android Cloud Native, SmartNICs, Telco Core and Open-RAN, NFV, IOT, SD-WAN, SDN, MEC, and more. 

Akraino R3 includes six new blueprints for a total of 20, all tested and validated in real hardware labs supported by users and community members; the Akraino community has established full-stack, automated testing with strict community standards to ensure high-quality blueprints.

The 20 “ready and proven” blueprints include both updates and long-term support to existing R1 & R2 blueprints, and the introduction of six new blueprints:

      • The AI Edge: school/education video security monitoring
      • 5G MEC/Slice System: supports cloud gaming, HD video and live broadcasting
      • Enterprise Applications on Lightweight 5G Telco Edge (EALTEdge)
      • Micro-MEC (Multi-access Edge Computing) for smart-city use cases
      • IEC Type 3: Android cloud-native applications on Arm®-based servers at the edge
      • IEC Type 5: SmartNIC edge hardware acceleration

More information on Akraino R3, including links to documentation, code, installation docs for all Akraino Blueprints from R1-R3, can be found here. For details on how to get involved with LF Edge and its projects, visit https://www.lfedge.org/

API White Paper

The Akraino community published the first iteration of a new white paper to bring common open edge API standards to the industry. The new white paper makes available, for the first time, generic edge APIs for developers to standardize across devices, applications (cloud native), orchestration and multi-cloud. The paper serves as a stepping stone for broad industry alignment on edge definitions, use cases and APIs. Download the paper here: https://www.lfedge.org/wp-content/uploads/2020/06/Akraino_Whitepaper.pdf

Looking Ahead

The community is already planning R4, which will include more implementation of open edge API guidelines, more automation of testing, increased alliance with upstream and downstream communities, and development of public cloud standard edge interfaces. Additionally, the community is expecting new blueprints as well as additional enhancements to existing blueprints. 

Don’t miss the Open Networking and Edge Summit (ONES) virtual event happening September 28-29, where Akraino and other LF Edge communities will collaborate on the latest open source edge developments. Registration is now open!

Ecosystem Support for Akraino R3

Arm
“The demands on compute, networking, and storage infrastructure are changing significantly as we connect billions of intelligent devices, many of which live at the edge of the 5G network,” said Kevin Ryan, senior director of software ecosystem development, Infrastructure Line of Business, Arm. “By working closely with the Akraino community on the release of Akraino R3, and through our efforts with Project Cassini for seamless cloud-native deployments, Arm remains committed to providing our partners with full edge solutions primed to take on the 5G era.”

AT&T 
Mazin Gilbert, VP of Technology and Innovation, AT&T, said: “As a founding member of the Akraino platform, AT&T has seen first-hand the remarkable progress as a result of openness and industry collaboration. AI and edge computing are essential when it comes to creating an intelligent, autonomous 5G network, and we’re proud to work together with the community to deliver the best possible solutions for our customers.”

Baidu
“In the 5G era, AI + Edge Computing is not only an important guarantee for updating the consumer and industrial Internet experience (such as video consumption re-upgrading, scene-based AI capabilities, etc.), but also a necessary infrastructure for the development of the Internet industry,” said Ning Liu, Director of AI Cloud Group, Baidu. “Providing users with AI-capable edge computing platforms, products and services is one of Baidu’s core strategies. Looking towards the future, Baidu will continue to adhere to the core strategy of open source and cooperate with partners to build a more open and improved ecosystem.”

China Unicom
“Commercial 5G is going live around the world. Edge computing will play an important role for large bandwidth and low delay services in the 5G era. The key to the success of edge computing is to provide integrated ICT PaaS capabilities, which is beneficial for the collaboration between networks and services, maximizing the value of 5G,” said Xiongyan Tang, Chief Scientist and CTO of the Network Technology Research Institute of China Unicom. “The PCEI Blueprint will define a set of open and common APIs, to promote the deep cooperation between operators and OTTs, and help to build a unified network edge ecosystem.”  

Huawei 
“High bandwidth, low latency, and massive connections are typical 5G features. Based on MEC’s edge computing and open capabilities, a 5G network can provide the connection, computing, and capabilities required by vertical industries and enable many applications. In the future, 5G MEC will be an open system that provides an application platform with rich atomic capabilities,” said Bill Ren, Huawei Chief Open Source Liaison Officer. “Managing a large number of applications and devices on the MEC brings great challenges and increases learning costs for developers. We hope to make 5G available through open source, so that more industry partners and developers can easily develop and invoke 5G capabilities, and to build a common foundation for carriers’ MEC through open source that ensures the consistency of open interfaces and models. Only in this way can 5G MEC bring tangible benefits to developers and users.”

Juniper Networks
“Juniper Networks is proud to have been an early member of the Akraino community and supportive of this important work. We congratulate this community for introducing new blueprints to expand the use cases for managed edge cloud with this successful third release,” said Raj Yavatkar, Chief Technology Officer at Juniper Networks. “Juniper is actively involved in the integration of multiple blueprints and we look forward to applying these solutions to evolve edge cloud and 5G private networks to spur new service innovations – from content streaming to autonomous vehicles.”

Tencent
“The new generation network is coming, and IoT and Edge Computing are developing rapidly. At the same time, this brings great challenges to technological innovation: a high-performance, low-latency, highly scalable, large-scale architecture is a must for all applications. TARS has released its latest version to meet the demands of 5G and Edge Computing. Massive numbers of devices can easily use the TARS Microservice Architecture to realize innovation in edge applications. The Connected Vehicle Blueprint and AR/VR Blueprint in Akraino both use the TARS Architecture,” said Mark Shan, Chairman of Tencent Open Source Alliance, Chairman of the TARS Foundation, and Akraino TSC Member. “The blueprints on the TARS Architecture solve the problem of high throughput and low latency. TARS is a neutral project in the Linux Foundation, which anyone in the open-source community can easily use and contribute to.”

Zenlayer
“We are proud to be part of the Edge Cloud community. Zenlayer is actively exploring edge solutions and integrating them into our bare metal product. We hope the edge products will empower rapid customer innovation in video streaming, gaming, enterprise applications and more,” said Jim Xu, chief engineering architect of Zenlayer.

About the Linux Foundation
Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

LF Edge Member Spotlight: Mocana

By Blog, EdgeX Foundry, LF Edge, Member Spotlight

The LF Edge community comprises a diverse set of member companies and people that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sat down with Dave Smith, President of Mocana, to discuss the importance of open source, collaborating with industry leaders in edge computing, security, how the company leverages the EdgeX Foundry framework, and the impact of being part of the LF Edge ecosystem.

Can you tell us a little about your organization?

Mocana revolutionizes OT and IoT with cyber protection as a service for trustworthy systems. The company helps device operators bridge the adoption challenge between vendors and service providers, and delivers key cybersecurity benefits to the emerging 5G network, edge computing applications, and SD-WAN enterprise networks. Mocana protects the content delivery supply chain and device lifecycle for tamper-resistance from manufacture to end of life, with root-of-trust and chain-of-trust anchors. Mocana measures devices for sustained integrity and the trustworthiness of operations and data to power artificial intelligence/machine learning analytics. The Mocana team of security professionals works with semiconductor vendors and certificate authorities to integrate with emerging technologies to comply with data privacy and protection standards. The goal of cyber protection as a service is to eliminate the initial cost of modernization for device vendors and empower service providers to offer subscription-based services for the effective and efficient expansion of corporate and industrial digital transformation strategies.

Mocana’s core technology protects more than 100 million devices today, and is trusted by more than 200 of the largest energy, government, healthcare, manufacturing, IoT, telecommunications & networking, and transportation companies globally.

Why is your organization adopting an open-source approach?

Mocana is eager to support the global body of customers adopting the EdgeX Foundry open source solution. OpenSSL is by far the most broadly integrated and implemented open source security stack. It comes freely available and is distributed as part of the LF Edge distributions. However, in recent years OpenSSL has come under scrutiny because of critical security vulnerabilities and the resulting issuance of CVEs. The Heartbleed vulnerability from 2014 was a notable exploit, and there are several other recent CVEs that have generated concern in the information security community. The strategy of taking a defensive position through ongoing patching of vulnerabilities continues to challenge efforts to protect mission-critical OT environments.

Since the founding of the LF Edge projects, the goal has been to pull together a body of code to standardize microservices delivery and orchestration for edge computing systems and devices. The projects continue to advance commercial third-party solutions that address key functional areas, especially for mission-critical and vertical industry applications. Mocana’s solution is based upon a commercially supported, NIST FIPS 140-2 certified, cryptographic module. Many of the company’s Fortune 500 customers have realized significant benefits from the ability to quickly migrate from default products integrated with OpenSSL to Mocana’s offering, leveraging its OpenSSL connector.

Why did you join LF Edge, and what sort of impact do you think LF Edge has on edge computing, networking, and IoT industries?

Developing, deploying, operating, and managing IoT and edge computing requires a community of key, forward-looking technology innovators. The IoT-edge ecosystem spans a wide supply chain from first silicon to the cloud, and includes system integrators, end-user operators and asset owners. Mocana was one of the first 50 founding members of EdgeX Foundry in 2017. Early on, the company took an industry leadership position by driving adoption through off-the-shelf solutions developed through stakeholder collaboration. This approach addressed a variety of common use cases delivered by new edge computing technologies and applications, and required much more than a reference architecture. Mocana recognized the need for the user community and developing ecosystem to leverage community-developed code (e.g., on GitHub) to reduce feature and code duplication and enable the broadest possible market adoption. This reduces the implementation risk of such new technologies for customers and accelerates community stakeholders’ time to market.

What do you see as the top benefits of being part of the LF Edge community?

Mocana values LF Edge’s ecosystem breadth and depth of community members and stakeholders, which includes chip companies, device ODMs, OEMs, carrier service providers, and asset owner/operators. Each contributes key use case challenges that have been invaluable for ensuring that LF Edge can support key technology developments and marketplace challenges.

What sort of contributions has your team made to the community, ecosystem through LF Edge participation?

As a key contributor to the community, Mocana worked with the EdgeX Foundry Security Working Group and offered insights and guidance on vital security use cases. The company ensured there was always a path to address developing cybersecurity mandates and best practices from the NIST Cybersecurity Framework and ISA/IEC 62443. As a result, the community has delivered a number of key security functions: it added a reverse proxy, provided a method to secure the key store along with the ability to manage it, and integrated session-based security into the microservices.

Perhaps most important, Mocana has enabled the community to incorporate a scalable, robust, and commercially supported cybersecurity offering for EdgeX Foundry production development and deployments.

Mocana developed its OpenSSL connector to ease migration from default project configurations with OpenSSL to Mocana’s TrustCenter and TrustPoint offerings. This solution aligns well with the project’s objectives to accelerate adoption and deployments of standardized implementations addressing key edge computing use cases with microservices.

What do you think sets LF Edge apart from other industry alliances?

Delivering actual code that organizations can download, compile, run, and then operate is a tremendous benefit compared to most other industry alliances. It is a major differential in comparison to groups that only suggest frameworks and prescriptions of possible features, implementations, and suggested “best practices.”

How will LF Edge help your business?

Demand is growing for edge computing solutions. Hitting 5 million downloads of the EdgeX Foundry SDK in May is proof of that. Mocana is also beginning to see initial commercial success and adoption in the innovation and R&D centers of key community members. The company’s ability to enable its fully integrated TrustCenter and TrustPoint solutions leveraging an OpenSSL connector provides a clear and rapid path to EdgeX device security lifecycle management and supply chain provenance. Plus, it will increase adoption of Mocana’s latest edge device offerings in the community.

What advice would you give to someone considering joining LF Edge?

Find your niche in one of LF Edge’s nine collaborative projects and contribute where your offering can deliver the most value. There has never been a better time to participate in this open source community, which is looking for complementary solutions and ways to deepen the ecosystem.

To learn more about EdgeX Foundry, click here. To find out more about our members or how to join LF Edge, click here.

Additionally, if you have questions or comments, visit the LF Edge Slack or the EdgeX Foundry Slack to share your thoughts and engage with community members.

LF Edge Update: Taxonomy, SIGs and Project EVE

By Blog, LF Edge, Project EVE

Written by Jason Shepherd, LF Edge Governing Board Member and VP Ecosystem ZEDEDA

I hope you and yours are doing well in these crazy times. Things are going great within the LF Edge community as we increase alignment across projects, fine-tune our processes and recently welcomed two new projects: Open Horizon and Secure Device Onboard. Net-net, more and more people are joining by the day to grow an inclusive, structured community focused on developing an open framework for edge computing.

As a board member who is also involved in a number of the LF Edge working groups, I wanted to take this opportunity to provide an update on several fronts: the recently released LF Edge taxonomy white paper, the emerging Special Interest Groups (SIGs), and Project EVE.

LF Edge Taxonomy White Paper

Released last week, the community-created white paper outlining the LF Edge taxonomy and framework is an important piece that we believe will help clear up a lot of market confusion by providing a balanced view of the edge landscape based on inherent technical and logistical trade-offs spanning the edge-to-cloud continuum. This is in contrast to many existing edge taxonomies that break down the continuum using ambiguous terms that can be interpreted in different ways.

The goal was to provide a universal framework for various industries to apply their preferred terminology on top of. We’ve received universally positive feedback in previews with key analysts and encourage you to check out both the paper and this related webinar that presents key insights.

LF Edge Vertical Solutions Focus Groups

With nine projects and the new taxonomy serving as a solid foundation, we’re now increasing focus on spinning up Vertical Solutions Focus Groups within the LF Edge community. These focus groups follow a precedent set by CNCF and other Linux Foundation projects. The purpose is to document unique requirements for specific markets and feed desired features back into the LF Edge project working groups for consideration as part of their roadmaps. Doing this across use cases spanning the Industrial, Enterprise, and Consumer markets will ensure that each project maximizes impact while also recognizing inherent trade-offs.

Both member and non-member companies will be welcome to volunteer to lead a vertical, or join one already in flight. It’s a low time commitment and a great way to demonstrate thought leadership as an end user while making sure LF Edge projects are developing the right features and specific extensions for the verticals and use cases that matter the most to your company. Stay tuned for more details, including a virtual launch event in the late summer!

Project EVE Update

LF Edge projects are growing across the board and Project EVE is no exception. The EVE community is now approaching 50 unique contributors from organizations including ZEDEDA, Xilinx, Intel, Global Logic, Atomic, GE Research, and Timesys, and contributions are growing at a steady pace.

In terms of market focus, EVE is optimized for supporting IoT edge computing workloads at the “Smart Device Edge,” as defined in the LF Edge taxonomy. Edge compute nodes in this subcategory are characterized by two attributes: 1) they are deployed outside of a physically secure data center, but 2) they still have enough memory (approximately 256MB) to support application abstraction in the form of virtualization and/or containerization.

 

Devices at the IoT Edge are constrained, heterogeneous, and physically accessible, dictating special attention to optimizing footprint, simplifying support for diverse hardware, enabling zero-touch provisioning, and establishing a zero-trust security model that eliminates the guesswork in securing distributed edge computing nodes at scale. EVE builds on the principles of traditional virtualization tools designed for the data center, but is optimized for the unique needs of the IoT Edge.

The chart below explains how EVE takes a balance of architectural approaches to serve as a holistic, open engine for supporting any IoT edge computing workload. The net is that the bare metal foundation enables deep security and networking capabilities, supports both containers and virtual machines to provide options for both modern and legacy workloads, and mitigates lock-in with its open API. 

Since launching as a founding LF Edge project in early 2019, the EVE community has been working hard to modularize the EVE foundation so developers can choose preferred components, ultimately wrapped up into a de-facto standard interface in the form of the open EVE API. 

As part of the community’s effort to increase modularity, support for the ACRN and KVM hypervisors has been added as alternatives to the original Xen baseline. The project has also adopted containerd, given that it is the most common container runtime, and a general tenet is to integrate leading OSS projects and standards wherever possible rather than reinventing them.

The community’s goals for the balance of 2020 include further increasing modularity and reducing footprint, and adding support for Kubernetes via K3s. Regarding the latter, this will be done by integrating the right features from the Kubernetes paradigm rather than simply trying to cut and paste the same functionality from centralized data centers to the necessarily different IoT edge. Join in if you’d like to help the community shape this important bridge from the IoT Edge to the data center paradigm!

In terms of adoption, EVE is being leveraged as the edge computing foundation for deployments in several market verticals including oil and gas, renewable energy, manufacturing and healthcare. Check out the EVE in Market page for a growing list of community-supported images for hardware models from Advantech, Dell, HPE, IEI, Intel, Kontron, Lanner, Nexcomm, Raspberry Pi, Siemens, Supermicro and more! This page also has links to the OSS Adam and commercial controller offers that leverage the open EVE API.

Finally, thanks to the great work of Stefano Stabellini at Xilinx, among others, an image is now available for the Raspberry Pi 4 to make it even easier and more cost-effective to get started with EVE! This image includes GPU support and can be used with the OSS Adam controller today. Stay tuned to the “EVE in the Market” page for other controller options, or contribute your own!

The mission of the EVE community is to do for the IoT Edge what Android did for the smartphone. Learn more about EVE through the project page on the LF Edge site and in this EVE webinar in which I also highlight the importance of an open edge for realizing the true business potential of digital transformation. We welcome you to join the community to make EVE the one foundational stack needed to scale IoT edge computing deployments with choice of hardware, applications and cloud!

In closing, we have a lot of great things going on within the LF Edge community and we’re just scratching the surface of the opportunity ahead. Our future is bright and we encourage you to get involved, whether it be providing key market input through a SIG or diving straight in and contributing code. After all, LF Edge is a technical meritocracy and the best way to vote on the direction of a project is with fingers on your keyboard!

ZEDEDA is an LF Edge member and a leader in Project EVE. For more details about LF Edge members, visit here. For more details about Project EVE, visit the project page.


The LF Edge Interactive Landscape

By Blog, Landscape, LF Edge, State of the Edge

New tool aims to help users understand and navigate the expansive edge computing ecosystem, requesting collaboration from the edge community.

Written by Molly Wojcik, Chair of the State of the Edge Landscape Working Group

A few years ago, the Cloud Native Computing Foundation (CNCF) introduced their CNCF Cloud Native Interactive Landscape, which quickly became a go-to resource for the cloud-native ecosystem. Using this as a guide and framework, the State of the Edge project has been building the LF Edge Interactive Landscape.

The LF Edge Interactive Landscape is dynamically generated from data maintained in a community-supported GitHub repository. Based on user inputs and overseen by the State of the Edge Landscape Working Group, the map categorizes LF Edge projects alongside edge-related organizations and technologies to provide a comprehensive overview of the edge ecosystem.

The State of the Edge Landscape Working Group needs help from the larger edge community to continue to build out and improve this resource. Pull requests and issue submissions are welcome and encouraged, whether for new additions or for edits to existing listings.

How to Add a New Listing to the LF Edge Interactive Landscape

To add a new listing to the LF Edge Interactive Landscape, follow the steps using one of the options below:

Option 1: Submit a PR

  1. Visit the community GitHub repository at https://github.com/State-of-the-Edge/lfedge-landscape
  2. Open a pull request to add your listing to landscape.yml. Follow formatting of peer listings, making sure to include all required information and logo file:
    1. Name of organization or technology
    2. Homepage url
    3. .svg logo (Important: Only .svg formatted logos are accepted.) – see https://github.com/State-of-the-Edge/lfedge-landscape#logos for help converting/creating proper SVGs
    4. Twitter url (if applicable)
    5. Crunchbase url
    6. Assigned category (Descriptions for categories can be found in the README.md)

Full instructions available at https://github.com/State-of-the-Edge/lfedge-landscape#new-entries
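For reference, a landscape entry in landscape.yml generally looks like the sketch below. The nesting and field names follow the CNCF landscape tooling convention that this project reuses; the organization name, category, and URLs here are hypothetical placeholders, so mirror an existing peer listing in the repository for the authoritative format.

```yaml
# Hypothetical landscape.yml entry (all values are placeholders).
landscape:
  - category:
    name: Edge Infrastructure          # assumed category; see README.md for real ones
    subcategories:
      - subcategory:
        name: Hardware
        items:
          - item:
            name: Example Edge Co                # organization or technology name
            homepage_url: https://example.com
            logo: example-edge-co.svg            # .svg format only
            twitter: https://twitter.com/example # omit if not applicable
            crunchbase: https://www.crunchbase.com/organization/example
```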

Option 2: Open an issue

  1. Visit the community GitHub repository at https://github.com/State-of-the-Edge/lfedge-landscape
  2. Open an issue that includes all required information and logo file (reference Option 1).

Option 3: Email

  1. Email glossary-wg-landscape@lists.lfedge.org with all of the required information and logo (reference Option 1).

How to Modify a Listing in the LF Edge Interactive Landscape

To modify or make suggestions on an existing listing in the LF Edge Interactive Landscape, open an issue in the GitHub repository and be sure to include the following information:

  • Name of organization or technology, as listed in the landscape.
  • Detailed description of the modifications that you are requesting.

For more detailed information and instructions, refer to the README.md in the GitHub repository.

About State of the Edge

Founded in 2017, State of the Edge (recently acquired by LF Edge) provides a vendor-neutral, community-driven platform for open research on edge computing while also seeking to align the market on what edge computing truly is and what’s needed to implement it. State of the Edge publishes free research on Edge Computing, maintains the Open Glossary of Edge Computing and oversees the LF Edge Interactive Landscape. Follow State of the Edge on Twitter via @StateoftheEdge.

Molly Wojcik was recently appointed Chair of the State of the Edge Landscape Working Group. She is the Director of Education & Awareness at Section, an edge compute platform provider and LF Edge member organization. Molly has been an active contributor and facilitator within the Landscape working group since its beginnings with LF Edge in early 2019. If you have questions or would like to be involved in the LF Edge Landscape, feel free to email Molly at molly@section.io.

EdgeX Foundry Hits Major Milestone with 5 Million+ Container Downloads and a New Release that Simplifies Deployment for AI, Data Analytics and Digital Transformation

By Announcement, EdgeX Foundry, LF Edge
  • EdgeX’s sixth release (Geneva) offers more scalable and secure solutions to move more data faster from multiple edge devices to cloud, enterprise and on-premises applications.
  • As one of LF Edge’s Stage 3 Projects, EdgeX Foundry is seeing increased community growth, adoption, and deployments.
  • New LF Edge project Open Horizon is building an integration project that will demonstrate automated delivery and lifecycle management of EdgeX Foundry as a containerized application.

SAN FRANCISCO, May 21, 2020 – EdgeX Foundry, a project under the LF Edge umbrella organization within the Linux Foundation that aims to establish an open, interoperable framework for IoT edge computing independent of connectivity protocol, hardware, operating system, applications or cloud, today announced a major milestone: 5 million container downloads and the availability of its “Geneva” release. This release offers more robust security, optimized analytics, and secure connectivity for multiple devices.

“EdgeX Foundry is committed to developing an open IoT platform for edge-related applications and shows no signs of slowing down the momentum,” said Arpit Joshipura, general manager, Networking, Edge and IoT, the Linux Foundation. “As one of the Stage 3 projects under LF Edge, EdgeX Foundry is a clear example of how member collaboration and diversity are the keys to creating an interoperable open source framework across IoT, Enterprise, Cloud and Telco Edge.”

Launched in April 2017, and now part of the LF Edge umbrella, EdgeX Foundry is an open source, loosely-coupled microservices framework that provides the choice to plug and play from a growing ecosystem of available third-party offerings or to augment proprietary innovations. With a focus on the IoT Edge, EdgeX simplifies the process to design, develop and deploy solutions across industrial, enterprise, and consumer applications.

Currently, there are more than 170 unique contributors to the project and EdgeX Foundry averages one million container downloads a month, with a total of 5 million reached last month, and rising.

“The massive volume of devices coming online represents a huge opportunity for innovation and is making edge computing a necessity,” said Keith Steele, EdgeX Foundry Chair of the Technical Steering Committee. “With at least 50% of data being stored, processed and analyzed at the edge, we need an open, cloud-native edge ecosystem enabled by EdgeX to minimize reinvention and facilitate building and deploying distributed, interoperable applications from the edge to the cloud. In 3 short years, EdgeX has achieved incredible global momentum and is now being designed into IoT systems and product roadmaps.”

The Geneva Release

As the sixth release in the EdgeX Foundry roadmap, Geneva offers simplified deployment, optimized analytics, secure connectivity for multiple devices and more robust security. Key features include:

  • Automated onboarding: simplifies, scales, and speeds up device connection by allowing automatic provisioning of devices
  • Improved performance: a new rules engine written in Go delivers faster performance, a smaller footprint, and lower memory usage
  • Connectivity: improved bandwidth utilization and efficiency through the new batch-and-send capabilities provided in the App Functions SDK
  • Secure authentication: store and use/authenticate secrets to connect with cloud providers
  • Testing: new integration and backward-compatibility testing along with enhanced security and blackbox testing
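The batch-and-send capability called out above is easiest to understand as a pattern: buffer readings as they arrive, then export them in a single payload once a count or time threshold is hit, cutting the number of network round trips. The sketch below is a generic Python illustration of that pattern, not the EdgeX App Functions SDK API (which is written in Go); the class and parameter names are hypothetical.

```python
# Generic sketch of a batch-and-send export stage (illustrative only;
# names are hypothetical and do not match the EdgeX App Functions SDK).
import time

class BatchAndSend:
    def __init__(self, send, batch_size=10, max_wait_s=5.0):
        self.send = send              # downstream export function
        self.batch_size = batch_size  # flush after this many readings
        self.max_wait_s = max_wait_s  # ...or after this much time
        self.buffer = []
        self.first_at = None

    def add(self, reading):
        # Record when the current batch started, then buffer the reading.
        if not self.buffer:
            self.first_at = time.monotonic()
        self.buffer.append(reading)
        # Flush on either threshold: batch size or elapsed time.
        if (len(self.buffer) >= self.batch_size or
                time.monotonic() - self.first_at >= self.max_wait_s):
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(list(self.buffer))  # one call for the whole batch
            self.buffer.clear()

sent = []
pipeline = BatchAndSend(sent.append, batch_size=3)
for value in range(7):
    pipeline.add({"device": "sensor-1", "value": value})
pipeline.flush()  # push any remainder at shutdown
# sent now holds three payloads: two full batches of 3 and a final batch of 1
```

The same trade-off applies regardless of implementation: larger batches use bandwidth more efficiently, while the time threshold bounds how stale a buffered reading can get.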

EdgeX Foundry works closely with several of the other LF Edge projects such as Akraino Edge Stack and new project Open Horizon. During this release cycle, EdgeX was made to work under the Akraino Edge Lightweight IOT (ELIOT) Blueprint and tested under the Akraino Community Lab.

Launched last month, Open Horizon is a platform for managing the service software lifecycle of containerized workloads and related machine learning assets. Open Horizon is building an integration project that will demonstrate delivery and management of EdgeX Foundry as a containerized solution in stages, beginning with a single deployable unit and then progressing to a more modular set of services and alternate delivery targets.

Support from Contributing Members and Users of EdgeX Foundry:

“To further enhance use in production environments, EdgeX Foundry’s Geneva release brings simplified deployments and improved security,” said Tony Espy, Technical Architect at Canonical. “With EdgeX available as a snap, this aligns with snaps’ core principles, which allow developers to benefit from confinement and transactional updates to ensure deployments are secure with minimal need for manual intervention. As the EdgeX ecosystem continues to see strong traction, we look forward to continuing our contribution to building an open, interoperable framework for edge computing.”

“EdgeX Foundry’s middleware solution is an important component of an open, vendor-neutral pipeline connecting IoT devices and their data to analytics and data management at the on-premise edge,” said Joe Pearson, Engineering Strategy & Innovation Leader, Edge Computing, IBM. “This latest release underscores the importance of working within LF Edge to encourage interoperability as we build a comprehensive open edge computing framework, beginning with Open Horizon.”

“With the evolution of IoT and edge computing, there is a growing realization to deploy and run compute engines near the data source in a truly globally distributed manner. This architecture requires running intelligent AI-based functionality at the edge while processing a significant amount of data at high-throughput and low latency on small form-factor devices,” said Yiftach Shoolman, CTO and co-founder at Redis Labs. “EdgeX Foundry with Redis as the primary data store provides an open-source data platform to meet these expectations by combining in-memory data processing with modern data-models, and can be extended with a serverless engine and AI-serving platform.”

Additional resources:

For more information about LF Edge and its projects, visit https://www.lfedge.org/


###