EdgeX Foundry Virtual F2F Recap: Ireland Planning

By Blog, EdgeX Foundry

Written by Jim White, Chair of the EdgeX Foundry Technical Steering Committee and CTO of IOTech Systems

The holiday season is upon us in the US.  On behalf of the EdgeX Foundry community, I’d like to wish you and yours a very warm, blessed and peaceful holiday season.

This time of year is special to me because it usually means some peace after a long, hard release cycle.  The EdgeX community is working on EdgeX 1.3 – the Hanoi Release. It is a minor (dot) release that is backward compatible with Edinburgh (1.0), Fuji (1.1) and Geneva (1.2), along with any patch release of these.  For more details on the Hanoi release, stay tuned for my release blog in a few weeks.

Virtual Face-to-Face

In addition to the release, we also completed our semi-annual release planning sessions this past week to get ready for our next release, due in the spring of 2021.  The next EdgeX Foundry release is called Ireland.

Up until this year, the planning meetings were held in person at a venue hosted by one of our sponsoring companies.  We called the events our “face-to-face” meetings because they were the only time that the contributors and members of our global development community had a chance to meet in person.  This year, due to the pandemic, our planning sessions had to be held “virtually.”  Somewhat paradoxically, this has led members of the community to refer to these online meetings as “virtual face-to-face” meetings.  Leave it to a group of bright, energetic engineers to shake off the negative and embrace the new normal. Here we are, all online together.

In five, half-day meetings, we assembled our technical steering committee, development teams, and EdgeX adopters/users to scope the features, technical debt, and architectural direction of the next release and general roadmap of EdgeX.

Ireland Planning

We follow an alphabetical naming sequence for our releases and select members of our community who have contributed significantly to the project to help with the naming process.  This release was named by Intel’s Lenny Goodell and Mike Johanson, who have contributed immensely to the project, both in leadership and code contributions, over the past few years.  Each release is named after a geographical place on the earth (city, state, country, mountain, etc.).

EdgeX 2.0 Major Release

During our planning meetings, the first things we decide are the general themes, objectives and overall direction of the next release.  Ireland will be EdgeX 2.0 – our project’s second major release.

As a major release, Ireland will include non-backward-compatible features and APIs.  This is largely because we began work in the spring of 2020 to implement a new and improved set of EdgeX micro service APIs.  We call this new collection of APIs for each of the EdgeX micro services the V2 APIs (the V1 APIs are currently in place).

The existing EdgeX APIs have been in place since the project’s very first release in 2017. The V2 APIs will remove a lot of early EdgeX technical debt and provide a better information exchange. While we began the implementation this past spring, it will take the community until the spring release to complete the V2 APIs.  The new APIs will also enable many new features in future releases. For one, the request and response object models in the new APIs are richer and better organized, and they will better support communications via alternate protocols in the future.  The V1 APIs will be removed from the EdgeX micro services as part of this release.
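
To make this concrete, here is a minimal sketch of what exercising a V2 endpoint might look like against a local EdgeX instance.  The host, port and endpoint paths below are illustrative assumptions, not the finalized V2 specification:

    # Every V2 response carries an explicit apiVersion field
    curl -s http://localhost:48080/api/v2/ping

    # Fetch recent events through the (assumed) V2 core-data API
    curl -s 'http://localhost:48080/api/v2/event/all?limit=10'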

Because this is a non-backward compatible release, we are taking the opportunity to remove as much technical debt and include as many desired non-backward compatible features as possible.  This includes:

  • Removal of archived/deprecated services like the Supporting Rules Engine and Logging services
  • Removal of support for MongoDB (we have used Redis by default since our Fuji release)
  • Support for device services to send data directly to application services
  • Updates to configuration values and structures so they are more consistent with one another
  • More appropriate names for properties, objects, services and artifacts

New Features

In addition to the new V2 APIs, what else is going to be in this major release?  The list is long, and I encourage those who need all the details to have a look at our documentation on our Wiki site, but here are some of the major new features:

  • Device services (our “thing” connectors) will send data via message bus (versus REST) directly to the application services that prepare the data for export (to cloud or enterprise systems) and for local analytics packages (rules engines, predictive analytics packages, etc.). Optionally, the data can also be persisted locally via our core services.  This will help improve latency, allow for better quality of service, and reduce storage of data at the edge when it is not needed (see the sketch after this list).
  • We are improving the security services to allow you to bring your own certificates (in Kong, for example), to provide an abstraction around our secret provider (and make sure that abstraction is used by all services in the same way), to secure admin ports, and more.
  • Application services that prepare sensor/device data for cloud and enterprise applications (north side systems) will allow conditional transformation, enrichment, or filtering functions to be performed on exported data.
  • A number of device services have been recently contributed to EdgeX. We have new connectors for Constrained Application Protocol (CoAP), General Purpose Input/Output (GPIO), Universal Asynchronous Receiver-Transmitter (UART), and Low-Level Reader Protocol (LLRP) that are under review and will be made available in this release cycle.
  • This release will include an example of how to include AI/ML analytics into the platform and data flow of EdgeX.
  • Our EdgeX user interface will include new data visualization capability and a “wizard” tool to help provision and establish new devices in the EdgeX instance.
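
As a rough sketch of the message-bus flow described in the first bullet above, here is how you might watch readings move across the default Redis Pub/Sub message bus on a local box.  The topic pattern is an assumption on my part and may change as the feature lands:

    # Observe EdgeX events flowing over the Redis-based message bus
    redis-cli PSUBSCRIBE 'edgex/events/*'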

Additional Improvements

In addition to scoping and planning for new features to the platform for the Ireland release, the community also decided to address additional needs of our user community in this release.

  • Because the Ireland release will be non-backward compatible with our current Hanoi release and any 1.x version of EdgeX, we are also going to provide tools and documentation to help adopters migrate their existing release databases, configurations, etc. into the new 2.0 environment.
  • We plan to increase our interoperability testing, especially around our use of 3rd party services such as Kuiper, and provide some scalability/performance guidance as it relates to the number of connected things and how much sensor data can be passed through EdgeX from those things.
  • Our DevOps team is going to explore GitHub repository badges to provide adopters/users with better confidence in the platform.

Jakarta Release and Beyond

During these semi-annual planning meetings, the focus is squarely on the next release.  However, we also take the time to take stock of the project as a whole and map out where the project is heading a year or more into the future.

At this time, the community is forecasting that the Jakarta release – scheduled for around the fall of 2021 – will be a “stability release.”  That is, Jakarta will probably not include any large enhancements.  Its purpose will be to address any issues discovered in the EdgeX 2.0 (Ireland) release. We also hope that Jakarta will be our first-ever Long Term Support (LTS) release.  And with an LTS release, we hope to begin implementing an EdgeX certification program.

The EdgeX LTS policy has already been established and we have indicated to the world that once we have an LTS release, we plan to support that release (with bug fixes, security patches, documentation and artifact updates) for 2.5 years.  That is a significant commitment on the part of our open source community and the stability release will help us achieve that goal.

The certification program is one we have envisioned for a number of years.  The idea is that a 3rd party could create and provide a replacement EdgeX service, and the community would help test and validate that the service adheres to the APIs and criteria for that service, making it a suitable replacement in an EdgeX deployment.  In order to deliver the certification program, the community feels we need the stability that an LTS release provides.

Wrap Up

It’s been a heck of a year.  Despite the significant global pandemic and economic challenges, the EdgeX community did not miss a beat and completed its goals for the year (two more successful releases).  And with our fruitful planning meeting, despite it being held online, the community has plotted a path for an even more successful 2021 that will start with the delivery of EdgeX 2.0 in the spring.

As always, I want to thank the members of the community for their outstanding efforts and talents, their patience, and their commitment and professionalism.  You could not find a group of people more fun to work with.  Here’s hoping that in 2021 we can resume actual “face-to-face” meetings.  Happy holidays and a happy new year to everyone.

To learn more about the EdgeX Foundry releases and roadmap, visit https://www.edgexfoundry.org/software/releases/.

 

XGVela: Bring More Power to 5G with Cloud Native Telco PaaS

By Akraino, Blog

Written by Sagar Nangare, an Akraino community member and technology blogger who writes about data center technologies that are driving digital transformation. He is also an Assistant Manager of Marketing at Calsoft.

A Platform-as-a-Service (PaaS) model of cloud computing brings lots of power to the end-users. It provides a platform to the end customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an application. In the roadmap to build a 5G network, telecom networks need such breakthrough models that help them focus more on the design and development of network services and applications.

XGVela is a 5G cloud-native, open-source framework that introduces Platform-as-a-Service (PaaS) capabilities for telecom operators. With this PaaS platform, telecom operators can quickly design, develop and innovate on telco network functions and services. This way, telecom companies can focus more on seizing business opportunities rather than diving into the complexities of telecom infrastructure.

China Mobile initiated this project internally; it was later hosted and announced by the Linux Foundation as part of the initiative to accelerate telco cloud adoption. Apart from China Mobile, other supporters of XGVela are China Unicom, China Telecom, ZTE, Red Hat, Ericsson, Nokia, H3C, CICT, and Intel.

To build the initial version of XGVela, PaaS functionality was drawn from existing open-source projects like Grafana, Envoy and ZooKeeper, and then enhanced to meet telco requirements.

Why does the 5G network need XGVela’s PaaS capabilities?

  • XGVela helps telecom operators keep pace with fast-changing requirements for building a 5G network driven by cloud-native backhaul. Operators need faster upgrades to network functions and services, along with agility in deploying new services.
  • Telecom operators need to focus more on microservices-driven, container-based network functions (CNFs) rather than VM-based network functions (VNFs), while continuing to use both concurrently. XGVela supports both CNFs and VNFs in the underlying platform.
  • Telecom operators want to reduce network construction costs by adopting an open and healthy technology ecosystem (ONAP, Kubernetes, OpenDaylight, Akraino, OPNFV, etc.). XGVela embraces this ecosystem to reduce the barriers to building end-to-end orchestrated 5G telecom networks.
  • XGVela simplifies the design and innovation of 5G network functions by allowing developers to focus on application development and service logic rather than dealing with complex underlying infrastructure. XGVela provides standard APIs to tie together many internal projects.

XGVela integrates CNCF projects, based on telco requirements, to form a General PaaS. It is architected with the microservices design method for network function development in mind and is further integrated into a telco PaaS.

XGVela integrates with OPNFV and Akraino for integration and testing. In a recent presentation, Srinivasa Adepalli and Kuralamudhan Ramakrishnan showed that the feature set of the Akraino ICN blueprints is similar to XGVela’s. Many users want to deploy CNFs and applications on the same cluster. In those environments, there is a need to figure out what additional work must be done on top of Kubernetes to enable CNF deployment (especially telco CNFs, normal CNFs and application deployments together) and to evaluate how they work with one another. This is where XGVela and the Akraino ICN blueprint family align with each other.

You can subscribe to the XGVela mailing list here to track the project’s progress. For more information about Akraino Blueprints, click here: https://wiki.akraino.org/. Or join the conversation on the LF Edge Slack channels #akraino-blueprints, #akraino-help and #akraino-tsc.

Scaling Ecosystems Through an Open Edge (Part One)

By Blog, Industry Article, Trend

Written By Jason Shepherd, LF Edge Governing Board Member and VP of Ecosystem at ZEDEDA

This blog originally ran on the ZEDEDA Medium blog. Click here for more content like this.


In my piece earlier this year, I highlighted an increase in collaboration on interoperability between different IT and OT stakeholders, and in building trust in data. In this three-part blog, I’ll expand on some related thoughts around building ecosystems, including various technical and business considerations and the importance of an open underlying foundation at the network edge.

For more on this topic, make sure to register and tune in tomorrow, July 28, for Stacey Higginbotham’s virtual event! Stacey always puts on a great event, and this will be a great series of panel discussions on this important topic. I’m on a roundtable about building partnerships for data sharing and connected services.

The new product mindset

I’ve written about the “new product mindset” to highlight the importance of developing products and services with the cumulative lifetime value they will generate in mind, compared to just the immediate value delivered on the day of launch. In the rapidly evolving technology landscape, competitive advantage is based on the ability to innovate rapidly and continuously improve an offering through software- and, increasingly, hardware-defined experiences, not to mention better services.

Related to this is the importance of ecosystems. Whether you build one or join one, it’s more important than ever in this world of digital transformation to be part of an ecosystem to maintain a competitive edge. This includes consideration on two vectors — both horizontal for a solid and scalable foundation and vertical for domain-specific knowledge that drives targeted outcomes. Ultimately, it’s about creating a network effect for maximizing value, both for customers and the bottom line.

Approaches to building ecosystems

An ecosystem in the business world can take various forms. In simplest terms it could be a collection of products and services that are curated to be complementary to a central provider’s offering. Alternatively, it could consist of partners working towards a common goal, with each benefitting from the sum of the parts. Another ecosystem could take the form of an operations-centric supply chain or a group of partners targeting common customers with a joint go-to-market strategy.

Approaches to ecosystem development span a spectrum of closed to open philosophies. Closed ecosystems are based on closely-governed relationships, proprietary designs, and, in the case of software, proprietary APIs. Sometimes referred to as “walled gardens,” the tight control of closed ecosystems can provide a great customer experience. However, this tends to come with a premium cost and less choice. Apple is a widely cited example of a company that has taken a more closed ecosystem approach.

Then there are “closed-open” approaches which have APIs and tools that you can openly program to — as long as you pay access fees or are otherwise advancing the sponsoring company’s base offer. Amazon’s “Works with Alexa” program is an example of this model, and a big driver in how they emerged as a smart home leader — they spent many years establishing relationships and trust with a broad consumer base. This foundation creates a massive network effect, and in turn, Amazon makes the bulk of their revenue from the Alexa ecosystem by selling more consumer goods and furthering customer relationships — much more so than on the Alexa-enabled devices themselves.

Scaling through open source

To fully grasp the business trade-offs of closed vs. open ecosystems, consider Apple’s iOS and Android. While Apple provides a curated experience, the open source Android OS provides more choice and, as a result, a much larger market share. Android device makers may not be able to collect as much of a premium as Apple because of the open competition and the difficulty of differentiating with both hardware and a better user experience by skinning the stock Android UI. However, a key upside element is the sheer volume factor.

Plus, that’s not to say there isn’t still an opportunity to differentiate with Android as the base of your ecosystem. As an example, Samsung has built a strong ecosystem approach through their Galaxy brand and has recently branched out even further to offer a better-together story with their products. We’ve seen this play before — if you’re old enough to have rented video tapes at a brick-and-mortar store (be kind, rewind!), you’ll recall that VHS was an inferior technology to Beta, but it won out after a short format battle because the backers of VHS got the most movie studios onboard.

It’s ultimately about taking a holistic approach, and these days open source software like Android is an essential driver of ecosystem enablement and scale. On top of the benefit of helping developers reduce “undifferentiated heavy lifting,” open source collaboration provides an excellent foundation for creating a network effect that multiplies value creation.

After all, Google didn’t purchase the startup Android in 2005 (for just $50M!) and seed it into the market just to be altruistic — the result was a driver for a massive footprint that creates revenue through outlets including Google’s search engine and the Play store. Google doesn’t reveal how much Android in particular contributes to their bottom line, but in a 2015 court battle with Oracle over claims of a Java-related patent infringement, Android was reported to have generated some $31 billion in revenue to that point. Further, reaching this user base to drive more ad revenue is now free for Google, compared to them reportedly paying Apple $1B in 2014 for making their search engine the default for the iPhone.

Striking a balance between risk and reward

There’s always some degree of a closed approach when it comes to resulting commercial offers, but an open source foundation undoubtedly provides the most flexibility, transparency and scalability over time. However, it’s not just about the technology — also key to ecosystems is striking a balance between risk and reward.

Ecosystems in the natural world are composed of organisms and their environments — this highly complex network finds an equilibrium based on ongoing interactions. The quest for ecosystem equilibrium carries over to the business world, and especially holds true for IoT solutions at the edge due to the inherent relationship of actions taken based on data from tangible events occurring in the physical world.

Several years back, I was working with a partner that piloted an IoT solution for a small farmer, “Fred,” who grows microgreens in greenhouses outside of the Boston area. In the winter, Fred used propane-based heaters to maintain the temperature in his greenhouses and if these heaters went out during a freeze he would lose thousands of dollars in crops overnight. Meanwhile, “Phil” was his propane supplier. Phil was motivated to let his customers’ propane tanks run almost bone dry before driving his truck out to fill them so he could maximize his profit on the trip.

As part of the overall installation, we instrumented Fred’s propane tank with a wireless sensor and gave both Fred and Phil visibility via mobile apps. With this new level of visibility, Fred started freaking out any time the tank wasn’t close to full and Phil would happily watch it drop down to near empty. This created some churn up front but after a while it turned out that both were satisfied if the propane tank averaged 40% full. Side note: Fred was initially hesitant about the whole endeavor but when we returned to check in a year later, you couldn’t peel him away from his phone monitoring all of his farming operations.

Compounding value through the network effect

Where it starts getting interesting is compounding value-add on top of foundational infrastructure. Ride sharing is an example of an IoT solution (with cars representing the “things”) that has created an ecosystem network effect in the form of supplemental delivery services by providers (such as Uber Eats) partnering with various businesses. There’s also an indirect drag effect for ecosystems; for example, Airbnb has created an opportunity for homeowners to rent out their properties, and this has produced a network effect for drop-in IoT solutions that simplify tasks for these new landlords, such as remotely ensuring security and monitoring for issues such as water leaks.

The suppliers and workers in a construction operation also represent an ecosystem, and here too it’s important to strike a balance. Years back, I worked with a provider of an IoT solution that offered cheap passive sensors that were dropped into the concrete forms during the pouring of each floor of a building. Combined with a local gateway and software, these sensors told the crew exactly when the concrete was sufficiently cured so they could pull the forms faster.

While the company’s claims of being able to pull forms 30% faster sounded great on the surface, it didn’t always translate to a 30% reduction in overall construction time because other logistics issues often then became the bottleneck. This goes beyond the supply chain — one of the biggest variables construction companies face is their workforce, and people represent yet another complex layer in an overall ecosystem. And beyond construction workers, there are many other stakeholders during the building process spanning financial institutions, insurance carriers, real estate brokers, and so forth. Once construction is complete then come tenants, landlords, building owners, service providers and more. All of these factors represent a complex ecosystem that continues to find a natural equilibrium over time as many variables inevitably change.

In a final ecosystem example, while IoT solutions can drive entirely new services and experiences in a smart city, it’s especially important to balance privacy with value creation when dealing with crossing public and private domains. A city’s internally-focused solution for improving the efficiency of their waste services isn’t necessarily controversial, meanwhile if a city was to start tracking citizens’ individual recycling behaviors with sensors this is very likely to be perceived as an encroachment on civil liberties. I’m a big recycler (and composter), and like many people I’ll give up a little privacy if I receive significant value, but there’s always a fine line.

In closing

The need to strike an equilibrium across business ecosystems will be commonplace in our increasingly connected, data-driven world. These examples of the interesting dynamics that can take place when developing ecosystems highlight how it’s critical to have the right foundation and holistic approach to enable more complex interactions and compounding value, both in the technical sense and to balance risk and privacy with value created for all parties involved.

Stay tuned for part two of this series in which I’ll walk through the importance of having an open foundation at the network edge to provide maximum flexibility and scale for ecosystem development.

EdgeX Foundry New Contributors Q3, Tutorials & More!

By Blog, EdgeX Foundry

Written by Aaron Williams, LF Edge Developer Advocate

As we head into the holiday season, we’d like to reflect a bit on Q3 and how busy it was for the EdgeX Foundry community.  The fourth quarter will be more of the same with our next release, Hanoi, and a virtual face-to-face (F2F) planning meeting for the release after that, Ireland.

Quick stats: 

We had 38 unique contributors this quarter (77 year to date) making more than 500 commits (2,000 YTD).  We surpassed 7 million Docker downloads and have over 500k deployments.  These are amazing milestones, but we definitely could not have reached them without our community contributors. In Q3, we had four new contributors that we would like to welcome and recognize.

Q3 New Contributors’ GitHub Usernames:  

Alexmcminn

AlexCuse

jinfahua

siggiskulason

We really appreciate your contributions and look forward to your next ones.  We wouldn’t be a community without you!  And to our wider community, please go to GitHub and find our new contributors to see what other projects they are working on.

New “How to” Video and Updated Tutorial Released:

Lenny Goodell (EdgeX Foundry TSC member from Intel) recorded a great presentation on how EdgeX services work.  The session is meant to assist those looking to understand existing services, or to create a brand-new service, using EdgeX bootstrapping, configuration, dependency injection (of clients), etc.

Do you want to get involved with EdgeX Foundry – the world’s first plug-and-play, ecosystem-enabled open platform for the IoT edge – or just learn more about the project and how to get started?  Either way, visit our Getting Started page and you will find everything you need to get going.  We don’t just need developers; we welcome tech writers, translators and many other disciplines to help us create, extend and expand the EdgeX platform.

EdgeX Foundry is a Stage 3 (Impact) project under the LF Edge umbrella.  Visit the EdgeX Foundry website for more information or join our Slack to ask questions and engage with community members. If you are not already a member of our community, it is really easy to join.  Simply visit our wiki page and/or check out our GitHub and help us get to the next 6 million downloads and more! You can also follow EdgeX on Twitter, LinkedIn and YouTube.

Edge Excitement! Innovation & Collaboration (Q4 2020)

By Blog, LF Edge, Open Horizon

Written by Ryan Anderson, Member and Contributor of LF Edge’s Open Horizon Project and IBM Architect in Residence, CTO Group for IBM Edge

This article originally ran on Ryan’s LinkedIn page

Rapid innovation in edge computing

It is an exciting time for the edge computing community! Since my first post in April 2019, we have seen a rapid acceleration of innovation driven by the convergence of multiple factors:

  • a convergence towards shared “mental models” in the edge solution space;
  • the increasing power of edge devices – pure CPU, as well as CPU plus GPU/VPU;
  • enormous investments in 5G infrastructure by network and communications stakeholders;
  • new collaborations across several IT/OT/IoT/edge ecosystems;
  • increasing participation in, and support for, open source foundations such as LF Edge by major players; and
  • widespread adoption of Kubernetes and Docker containers as a core layer of the edge.

With this convergence, innovation and accelerating adoption, Gartner’s prediction that 75% of enterprise-generated data will be created and processed at the edge appears prescient.

Edge nodes – from datacenters to devices 

Much like “AI” and “IT” – edge computing is a broad and nebulous term that means different things to different stakeholders. In the diagram below, we consider four points of view for edge:

  1. Industrial Edge
  2. Enterprise Network Cloud Edge
  3. 5G / Telco Edge
  4. Consumer and Retail Edge

 

This model illustrates a few key ideas:

  • Some edge use cases fall squarely within one quadrant, whereas others span two or sometimes three.
  • Solution mapping will help shape architecture discussions and may inform which stakeholders should be involved in conversations.
  • Edge can mean very different things to different people; consequently, value propositions (and ROI/KPIs) will also vary dramatically.

Technology tools for next generation edge computing must be flexible enough to work across different edge quadrants and work across different types of edge nodes.

Terminology. And what is edge computing?  

At IBM our edge computing definition is “act on insights closer to where data is created.”

We define edge node generically as any edge device, edge cluster, or edge server/gateway on which computing (workload, applications, analytics) can be performed, outside of public or private cloud infrastructure, or the central IT data center.

An edge device is a special-purpose piece of equipment that also has compute capacity integrated into it, on which interesting work can be performed. An assembly machine on the factory floor, an ATM, an intelligent camera or a next-generation automobile are all examples of edge devices. It is common to find edge devices with ARM or x86 class CPUs with one or two cores, 128MB of memory, and perhaps 1 GB of local persistent storage.

Sometimes edge devices include GPUs (graphics processing unit) and VPUs (vision processing units) – optimized chips that are very good for running AI models and inferencing on edge devices.

Fixed-function IoT equipment that lacks general-purpose compute is not typically considered an edge node, but rather an IoT sensor. IoT sensors often interoperate with edge devices but are not the same thing, as we see on the left side of this diagram.

 


An edge cluster is a general-purpose IT computer located on remote premises, such as a factory, retail store, hotel, distribution center or bank, and is typically used to run enterprise application workloads and shared services.

Edge nodes can also live within network facilities such as central offices, regional data-centers and hub locations operated by a network provider, or a metro facility operated by colocation provider.

An edge cluster is typically an industrial PC, racked server or IT appliance.

Often, edge clusters include GPU/VPU hardware.

Tools for devices to data centers

IBM Edge Application Manager (IEAM) and Red Hat have created reference architectures and tools to manage workloads across CPU and GPU/VPU compute resources.

Customers want simplicity. IEAM can provide simplicity with a single pane of glass to manage and orchestrate workloads from core to edge, across multiple clouds.

For edge clusters running Red Hat OpenShift Container Platform (OCP), a Kubernetes-based GPU/VPU Operator solves the problem of needing unique operating system (OS) images for GPU and CPU nodes. Instead, the GPU Operator bundles everything you need to support the GPU — the driver, container runtime, device plug-in, and monitoring — with deployment by a Helm chart. Now, a single gold master image covers both CPU and GPU nodes.
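
As a rough sketch of how little is involved, installing the NVIDIA GPU Operator on an OCP or Kubernetes cluster typically comes down to a Helm invocation along these lines (the repository URL and chart name are NVIDIA’s public ones at the time of writing; verify against the current docs):

    helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
    helm repo update
    # Deploys the driver, container runtime hooks, device plug-in and monitoring
    helm install --wait --generate-name nvidia/gpu-operator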

Caution: Avoid fragmentation and friction with open source

This is indeed an exciting time for the edge computing community, as seen by the acceleration of innovation and emerging use cases and architectures.

However, there is an area of concern as it relates to fragmentation and friction in this emerging space.

Because the emerging edge market is enormous, there is a risk that some incumbents or niche players may be tempted to “go it alone,” trying to secure and defend a small corner (fragment) of a large space with a proprietary solution. If too many stakeholders do this, edge computing may fail to reach its potential.

This approach can be dangerous for companies for three reasons:

(1)  While an isolated, walled-garden (defensive) approach may work in the short term, over time isolated technology stacks may get left behind.

(2)  Customers are increasingly wary of vendor lock-in and will source more flexible solutions.

(3)  Innovation is a team sport (e.g., Linux, Python).

Historically, emergent technologies have also encountered friction when key industry participants or standards organizations were not working closely enough together (GSM/CDMA; VHS/Beta or HD-DVD/Blu-ray; industrial IoT; digital twins).

So, what can we do to encourage collaboration?

The answer is open source.

Open source to reduce friction and increase collaboration

The IBM Edge team believes working with and through the open source community is the right approach to help edge computing evolve and reach its potential in the coming years.

IBM has a long history and strong commitment to open source. IBM was one of the earliest champions of communities like Linux, Apache, and Eclipse, pushing for open licenses, open governance, and open standards.

IBM engineers began contributing to Linux and helped to establish the Linux Foundation in 2000. In 1999, we helped to create the Apache Software Foundation (ASF) and supported the creation of the Eclipse Foundation in 2004 – providing open source developers a neutral place to collaborate and innovate in the open.

Continuing our tradition of support for open source collaboration, IBM and Red Hat are active members of the Linux Foundation’s LF Edge:

  • LF Edge is an umbrella organization for several projects that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system.
  • By bringing together industry leaders, LF Edge will create a common framework for hardware and software standards and best practices critical to sustaining current and future generations of IoT and edge devices.
  • Fostering collaboration and innovation across multiple industries, including industrial manufacturing, cities and government, energy, transportation, retail, home and building automation, automotive, logistics and health care.

IBM is an active contributor to Open Horizon – one of the LF Edge projects – and the core of IBM Edge Application Manager. LF Edge’s Open Horizon is an open source platform for managing the service software lifecycle of containerized workloads and related machine learning assets. It enables autonomous management of applications deployed to distributed, web-scale fleets of edge computing nodes – clusters and devices based on Kubernetes and Docker – all from a central management hub.
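
As a rough sketch of what this looks like in practice, the Open Horizon agent ships with an hzn CLI. The node policy file name below is hypothetical, and exact flags may differ across releases:

    # Register this edge node with the management hub using a node policy
    hzn register --policy node-policy.json

    # Verify which deployment agreements the node has formed
    hzn agreement list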

Open Horizon is already working with several other LF Edge projects, including EdgeX Foundry, Fledge and SDO (Secure Device Onboard).

SDO makes it easy and secure to configure edge devices and associate them with an edge management hub. Devices built with SDO can be added as edge nodes by simply importing their associated ownership vouchers and then powering on the devices.
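
A minimal sketch of that onboarding flow with the hzn CLI, assuming a voucher file obtained from the device manufacturer (the file name is illustrative, and flags may vary by release):

    # Import the device's SDO ownership voucher into the management hub;
    # the device is onboarded automatically the next time it powers on
    hzn voucher import voucher.json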

Additional Resources for Open Horizon

Open-Horizon documentation: https://open-horizon.github.io

Open-Horizon GitHub (source code): https://github.com/open-horizon

Example programs for Open-Horizon: https://github.com/open-horizon/examples

Open-Horizon Playlist on YouTube: https://bit.ly/34Xf0Ge

Xen on Raspberry Pi 4 with Project EVE

By Blog, Project EVE

Written by Roman Shaposhnik, Project EVE lead and Co-Founder & VP of Products and Strategy at ZEDEDA, and Stefano Stabellini, Principal Engineer for Xilinx

Originally posted on Linux.com, this article shares the Xen Project’s news that the Xen hypervisor now runs on Raspberry Pi. This is an exciting step for both hobbyists and industries. Read on to learn how Xen now runs on RPi and how to get started.

Raspberry Pi (RPi) has been a key enabling device for the Arm community for years, given the low price and widespread adoption. According to the RPi Foundation, over 35 million have been sold, with 44% of these sold into industry. We have always been eager to get the Xen hypervisor running on it, but technical differences between RPi and other Arm platforms made it impractical for the longest time: specifically, a non-standard interrupt controller without virtualization support.

Then the Raspberry Pi 4 came along, together with a regular GIC-400 interrupt controller that Xen supports out of the box. Finally, we could run Xen on an RPi device. Soon Roman Shaposhnik of Project EVE and a few other community members started asking about it on the xen-devel mailing list. “It should be easy,” we answered. “It might even work out of the box,” we wrote in our reply. We were utterly oblivious that we were about to embark on an adventure deep in the belly of the Xen memory allocator and Linux address translation layers.

The first hurdle was the availability of low memory addresses. RPi4 has devices that can only access the first 1GB of RAM. The amount of memory below 1GB in Dom0 was not enough. Julien Grall solved this problem with a simple one-line fix to increase the memory allocation below 1GB for Dom0 on RPi4. The patch is now present in Xen 4.14.

“This lower-than-1GB limitation is uncommon, but now that it is fixed, it is just going to work.” We were wrong again. The Xen subsystem in Linux uses virt_to_phys to convert virtual addresses to physical addresses, which works for most virtual addresses but not all. It turns out that the RPi4 Linux kernel would sometimes pass virtual addresses that cannot be translated to physical addresses using virt_to_phys, and doing so would result in serious errors. The fix was to use a different address translation function when appropriate. The patch is now present in Linux’s master branch.

We felt confident that we finally reached the end of the line. “Memory allocations – check. Memory translations — check. We are good to go!” No, not yet. It turns out that the most significant issue was yet to be discovered. The Linux kernel has always had the concept of physical addresses and DMA addresses, where DMA addresses are used to program devices and could be different from physical addresses. In practice, none of the x86, ARM, and ARM64 platforms where Xen could run had DMA addresses different from physical addresses. The Xen subsystem in Linux is exploiting the DMA/physical address duality for its own address translations. It uses it to convert physical addresses, as seen by the guest, to physical addresses, as seen by Xen.

To our surprise and astonishment, the Raspberry Pi 4 was the very first platform to have physical addresses different from DMA addresses, causing the Xen subsystem in Linux to break. It wasn’t easy to narrow down the issue. Once we understood the problem, a dozen patches later, we had full support for handling DMA/physical address conversions in Linux. The Linux patches are in master and will be available in Linux 5.9.

Solving the address translation issue was the end of our fun hacking adventure. With the Xen and Linux patches applied, Xen and Dom0 work flawlessly. Once Linux 5.9 is out, we will have Xen working on RPi4 out of the box.

We will show you how to run Xen on RPi4, the real Xen hacker way, and as part of a downstream distribution for a much easier end-user experience.

HACKING XEN ON RASPBERRY PI 4

If you intend to hack on Xen on ARM and would like to use the RPi4 to do it, here is what you need to do to get Xen up and running using UBoot and TFTP. I like to use TFTP because it makes it extremely fast to update any binary during development.  See this tutorial on how to set up and configure a TFTP server. You also need a UART connection to get early output from Xen and Linux; please refer to this article.
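
If you don't already have a TFTP server handy, one quick way to stand one up on a Linux development box is dnsmasq. This is just one illustrative option (adjust the root directory to taste):

    # Serve files from /srv/tftp over TFTP only (DNS is disabled by --port=0)
    dnsmasq --port=0 --enable-tftp --tftp-root=/srv/tftp --no-daemon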

Use the rpi-imager to format an SD card with the regular default Raspberry Pi OS. Mount the first SD card partition and edit config.txt. Make sure to add the following:

 

    kernel=u-boot.bin
    enable_uart=1
    arm_64bit=1

Download a suitable UBoot binary for RPi4 (u-boot.bin) from any distro, for instance OpenSUSE. Download the JeOS image, then open it and save u-boot.bin:

    xz -d openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw.xz
    kpartx -a ./openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw
    mount /dev/mapper/loop0p1 /mnt
    cp /mnt/u-boot.bin /tmp

Place u-boot.bin in the first SD card partition together with config.txt. Next time the system boots, you will get a UBoot prompt that allows you to load Xen, the Linux kernel for Dom0, the Dom0 rootfs, and the device tree from a TFTP server over the network. I automated the loading steps by placing a UBoot boot.scr script on the SD card:

    setenv serverip 192.168.0.1
    setenv ipaddr 192.168.0.2
    tftpb 0xC00000 boot2.scr
    source 0xC00000

Where:
– serverip is the IP of your TFTP server
– ipaddr is the IP of the RPi4

Use mkimage to generate boot.scr and place it next to config.txt and u-boot.bin:

   mkimage -T script -A arm64 -C none -a 0x2400000 -e 0x2400000 -d boot.source boot.scr

Where:
– boot.source is the input
– boot.scr is the output

UBoot will automatically execute the provided boot.scr, which sets up the network and fetches a second script (boot2.scr) from the TFTP server. boot2.scr should come with all the instructions to load Xen and the other required binaries. You can generate boot2.scr using ImageBuilder.
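
To give a feel for what ImageBuilder consumes, here is a hypothetical configuration file. The variable names follow ImageBuilder's published examples, but treat this as a sketch and check it against the project's documentation:

    # Illustrative ImageBuilder config for an RPi4 TFTP boot (all values are examples)
    MEMORY_START="0x0"
    MEMORY_END="0x100000000"
    DEVICE_TREE="bcm2711-rpi-4-b.dtb"
    XEN="xen"
    XEN_CMD="console=dtuart dtuart=serial1 sync_console"
    DOM0_KERNEL="Image"
    DOM0_RAMDISK="dom0-rootfs.cpio.gz"
    NUM_DOMUS=0
    UBOOT_SOURCE="boot2.source"
    UBOOT_SCRIPT="boot2.scr"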

Make sure to use Xen 4.14 or later. The Linux kernel should be master (or 5.9 when it is out; 5.9-rc4 works). The Linux ARM64 default config works fine as the kernel config. Any 64-bit rootfs should work for Dom0. Use the device tree that comes with upstream Linux for RPi4 (arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb). RPi4 has two UARTs; the default is bcm2835-aux-uart at address 0x7e215040. It is specified as “serial1” in the device tree instead of serial0. You can tell Xen to use serial1 by specifying the following on the Xen command line:

  console=dtuart dtuart=serial1 sync_console

The Xen command line is provided by the boot2.scr script generated by ImageBuilder as “xen,xen-bootargs“. After editing boot2.source, you can regenerate boot2.scr with mkimage:

 mkimage -A arm64 -T script -C none -a 0xC00000 -e 0xC00000 -d boot2.source boot2.scr

XEN ON RASPBERRY PI 4: AN EASY BUTTON

Getting your hands dirty by building and booting Xen on Raspberry Pi 4 from scratch can be not only deeply satisfying but can also give you a lot of insight into how everything fits together on ARM. Sometimes, however, you just want to get a quick taste for what it would feel to have Xen on this board. This is typically not a problem for Xen, since pretty much every Linux distribution provides Xen packages and having a fully functional Xen running on your system is a mere “apt” or “zypper” invocation away. However, given that Raspberry Pi 4 support is only a few months old, the integration work hasn’t been done yet. The only operating system with fully integrated and tested support for Xen on Raspberry Pi 4 is LF Edge’s Project EVE.

Project EVE is a secure-by-design operating system that supports running Edge Containers on compute devices deployed in the field. These devices can be IoT gateways, Industrial PCs, or general-purpose ruggedized computers. All applications running on EVE are represented as Edge Containers and are subject to container orchestration policies driven by k3s. Edge containers themselves can encapsulate Virtual Machines, Containers, or Unikernels.

You can find more about EVE on the project’s website at http://projecteve.dev and in its GitHub repo at https://github.com/lf-edge/eve. The latest instructions for creating a bootable media for Raspberry Pi 4 are also available at:

https://github.com/lf-edge/eve/blob/master/docs/README.md

Because EVE publishes fully baked downloadable binaries, using it to give Xen on Raspberry Pi 4 a try is as simple as:

$ docker pull lfedge/eve:5.9.0-rpi-xen-arm64 # you can pick a different 5.x.y release if you like

$ docker run lfedge/eve:5.9.0-rpi-xen-arm64 live > live.raw

This is followed by flashing the resulting live.raw binary onto an SD card using your favorite tool.
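
For example, on a Linux machine (assuming the SD card shows up as /dev/sdX; double-check the device name, because dd will overwrite it):

    # Write the EVE live image to the SD card (destructive!)
    sudo dd if=live.raw of=/dev/sdX bs=4M conv=fsync status=progress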

Once those steps are done, you can insert the card into your Raspberry Pi 4, connect the keyboard and the monitor and enjoy a minimalistic Linux distribution (based on Alpine Linux and Linuxkit) that is Project EVE running as Dom0 under Xen.

As far as Linux distributions go, EVE presents a somewhat novel design for an operating system, but at the same time, it is heavily inspired by ideas from Qubes OS, ChromeOS, Core OS, and Smart OS. If you want to take it beyond simple console tasks and explore how to run user domains on it, we recommend heading over to EVE’s sister project Eden: https://github.com/lf-edge/eden#raspberry-pi-4-support and following a short tutorial over there.

If anything goes wrong, you can always find an active community of EVE and Eden users on LF Edge’s Slack channels starting with #eve over at http://lfedge.slack.com/ — we’d love to hear your feedback.

In the meantime – happy hacking!

Pushing AI to the Edge (Part Two): Edge AI in Practice and What’s Next

By Blog, LF Edge, Project EVE, Trend

Q&A with Jason Shepherd, LF Edge Governing Board member, Project EVE leader and VP of Ecosystem at ZEDEDA


This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

In Part One of this two-part Q&A series we highlighted the new LF Edge taxonomy that publishes next week and some key considerations for edge AI deployments. In this installment our questions turn to emerging use cases and key trends for the future.

To discuss more on this budding space, we sat down with our Vice President of ecosystem development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice and some trends for the future.

What do you see as the most promising use cases for edge AI?

As highlighted in Part One, the reasons for deploying AI at the edge include balancing needs across the vectors of scalability, latency, bandwidth, autonomy, security and privacy. In a perfect world all processing would be centralized, however this breaks down in practice and the need for AI (and ML) at the edge will only continue to grow with the explosion of devices and data.

Hands down, computer vision is the killer app for edge AI today due to the bandwidth associated with streaming video. The ability to apply AI to “see” events in the physical world enables immense opportunity for innovation in areas such as object recognition, safety and security, quality control, predictive maintenance and compliance monitoring.

Considering retail — computer vision solutions will usher in a new wave of personalized services in brick and mortar stores that provide associates with real-time insights on current customers in addition to better informing longer term marketing decisions. Due to privacy concerns, the initial focus will be primarily around assessing shopper demographics (e.g., age, gender) and location but increasingly we’ll see personalized shopping experiences based on individual identity with proper opt-in (often triggered through customer loyalty programs). This includes a trend for new “experiential” shopping centers, for which customers expect to give up some privacy when they walk in the door in exchange for a better experience.

While Amazon Go stores have led the trend for autonomous shopping environments, the use of computer-vision enabled self-service kiosks for grab-and-go checkout is growing rapidly overall. Given the recent health concerns with COVID-19, providers are rapidly shifting to making these solutions contactless by leveraging gesture control, instead of requiring interaction with a keypad or touch screen.

Computer vision use cases will often leverage sensor fusion, for example with barcode scans or radio-frequency identification (RFID) technology providing additional context for decision making in retail inventory management and point of sale (POS) systems. A camera can tell the difference between a T-shirt and a TV, but not the difference between large and medium sizes of the same shirt design. Still, perhaps eventually an AI model will be able to tell you if you have bad taste in clothing!

Another key vertical that will benefit from computer vision at the edge is healthcare. At ZEDEDA, we’ve worked with a global provider that leverages AI models in their medical imaging machines, located within hospitals and provided as a managed service. In this instance, the service provider doesn’t own the network on which their machines are deployed so they need a zero-trust security model in addition to the right tools to orchestrate their hardware and software updates.

Another example where bandwidth drives a need for deploying AI at the IoT Edge is vibration analysis as part of a use case like predictive maintenance. Here, sampling rates of at least 1 kHz are common and can increase to 8–10 kHz and beyond, because these higher resolutions improve visibility into impending machine failures. This represents a significant amount of continuously streaming data that is cost-prohibitive to send directly to a centralized data center for analysis. Instead, inferencing models will commonly be deployed on compute hardware proximal to machines to analyze the vibration data in real time, backhauling only the events that highlight an impending failure.
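
To put rough numbers on that claim: a single three-axis sensor sampled at 8 kHz with 16-bit resolution produces about 48 KB of raw data per second (8,000 samples × 2 bytes × 3 axes), or roughly 4 GB per day per machine before any protocol overhead. Assuming those illustrative figures, it is easy to see why continuously backhauling every reading is cost-prohibitive.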

Analysis for predictive maintenance will also commonly leverage sensor fusion by combining this vibration data with measurements for temperature and power (voltage and current). Computer vision is also increasingly being used for this use case, for example the subtle wobble of a spinning motor shaft can be detected with a sufficient camera resolution, plus heat can be measured with thermal imaging sensors. Meanwhile, last I checked voltage and current can’t be checked with a camera!

An example of edge AI served up by the Service Provider Edge is for cellular vehicle-to-everything (C-V2X) use cases. While latency-critical workloads such as controlling steering and braking will always be run inside of a vehicle, service providers will leverage AI models deployed on compute proximal to small cells in a 5G network within public infrastructure to serve up infotainment, Augmented Reality for vehicle heads-up displays and coordinating traffic. For the latter, these AI models can warn two cars that they are approaching a potentially dangerous situation at an intersection and even alert nearby pedestrians via their smartphones. As we continue to collaborate on foundational frameworks that support interoperability it will open up possibilities to leverage more and more sensor fusion that bridges intelligence across different edge nodes to help drive even more informed decisions.

We’re also turning to processing at the edge to minimize data movement and preserve privacy. When AI inferencing models are shrunk for use in constrained connected products or healthcare wearables, we can train local inferencing models to redact PII before data is sent to centralized locations for deeper analysis.

Who are the different stakeholders involved in accelerating adoption of edge AI?

Edge AI requires the efforts of a number of different industry players to come together. We need hardware OEMs and silicon providers for processing; cloud scalers to provide tools and datasets; telcos to manage the connectivity piece; software vendors to help productize frameworks and AI models; domain expert system integrators to develop industry-specific models, and security providers to ensure the process is secure.

In addition to having the right stakeholders, it’s about building an ecosystem based on common, open frameworks for interoperability, with investment focused on the value added on top. Today there is a plethora of choices for platforms and AI tool sets, which is confusing, but that is more a reflection of the current state of the market than a necessity. A key point of efforts like LF Edge is to work in the open source community to build more open, interoperable, consistent and trusted infrastructure and application frameworks so developers and end users can focus on the surrounding value add. Throughout the history of technology, open interoperability has always won out over proprietary strategies when it comes to scale.

In the long run, the most successful digital transformation efforts will be led by organizations that have the best domain knowledge, algorithms, applications and services, not those that reinvent foundational plumbing. This is why open source software has become such a critical enabler across enterprises of all sizes — facilitating the creation of de-facto standards and minimizing “undifferentiated heavy lifting” through a shared technology investment. It also drives interoperability which is key for realizing maximum business potential in the long term through interconnecting ecosystems… but that’s another blog for the near future!

How do people differentiate with AI in the long term?

Over time, AI software frameworks will become more standardized as part of foundational infrastructure, and the algorithms, domain knowledge and services on top will be where developers continue to meaningfully differentiate. We’ll see AI models for common tasks — for example assessing the demographics of people in a room, detecting license plate numbers, recognizing common objects like people, trees, bicycles and water bottles — become commodities over time. Meanwhile, programming to specific industry contexts (e.g. a specific part geometry for manufacturing quality control) will be where value is continually added. Domain knowledge will always be one of the most important aspects of any provider’s offering.

What are some additional prerequisites for making edge AI viable at scale?

In addition to having the right ecosystem including domain experts that can pull solutions together, a key factor for edge AI success is having a consistent delivery or orchestration mechanism for both compute and AI tools. The reality is that to date many edge AI solutions have been lab experiments or limited field trials, not yet deployed and tested at scale. PoC, party of one, your table is ready!

Meanwhile, as organizations start to scale their solutions in the field they quickly realize the challenges. From our experience at ZEDEDA, we consistently see that manual deployment of edge computing using brute-force scripting and command-line interface (CLI) interaction becomes cost-prohibitive for customers at around 50 distributed nodes. In order to scale, enterprises need to build on an orchestration solution that takes into account the unique needs of the distributed IoT edge in terms of diversity, resource constraints and security, and helps admins, developers and data scientists alike keep tabs on their deployments in the field. This includes having visibility into any potential issues that could lead to inaccurate analyses or total failure. Further, it’s important that this foundation is based on an open model to maximize potential in the long run.

Where is edge AI headed?

To date, much of the exploration involving AI at the edge has been focused on inferencing models — deployed after these algorithms have been trained with the scalable compute of the cloud. (P.S. for those of you who enjoy a good sports reference, think of training vs. inference as analogous to coaching vs. playing).

Meanwhile, we’re starting to see training and even federated learning selectively moving to the Service Provider and User Edges. Federated learning is an evolving space that seeks to balance the benefits of decentralization for reasons of privacy, autonomy, data sovereignty and bandwidth savings, while centralizing results from distributed data zones to eliminate regional bias.

The industry is also increasingly developing purpose-built silicon that can increase efficiencies amid power and thermal constraints in small devices and even support either training or inference; this corresponds with the shift towards pushing more and more AI workloads onto edge devices. Because of this, it’s important to leverage device and application orchestration tools that are completely agnostic to silicon, compared to offers from silicon makers that have a vested interest in locking you into their ecosystem.

Finally, we’ll see the lower boundary for edge AI increasingly extend into the Constrained Device Edge with the rise of “Tiny ML” — the practice of deploying small inferencing models optimized for highly constrained, microcontroller-based devices. One example is the wake word on an Amazon Echo, which is recognized locally and only then opens the pipe to the cloud-based servers for a session. These Tiny ML algorithms will increasingly be used for localized analysis of simple voice and gesture commands, common sounds such as a gunshot or a baby crying, location and orientation, environmental conditions, vital signs, and so forth.
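
To give a feel for this pattern, here is a minimal sketch of running a small quantized wake-word model with the TensorFlow Lite interpreter. The model file is hypothetical; on an actual microcontroller the equivalent loop would run on a C++ runtime such as TensorFlow Lite for Microcontrollers, but the load-and-invoke pattern is the same.

```python
# Sketch: the load-and-invoke pattern Tiny ML runtimes use on-device.
# "wake_word_int8.tflite" is a hypothetical model file.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="wake_word_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One window of audio features (shape comes from the model's input spec)
features = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], features)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])
print("wake-word score:", scores)
```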

To manage all of this complexity at scale, we’ll lean heavily on industry standardization, which will help us focus on value on top of common building blocks. Open source AI interoperability projects, such as ONNX, show great promise in helping the industry coalesce around a format so that others can focus on developing and moving models across frameworks and from cloud to edge. The Linux Foundation’s Trust over IP effort and emerging Project Alvarium will also help ease the process of transporting trusted data from devices to applications. This notion of pervasive data trust will lead to what I call the “Holy Grail of Digital” — selling and/or sharing data resources and services to/with people you don’t even know. Now this is scale!
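
For a concrete sense of that portability, here is a minimal sketch, assuming PyTorch and ONNX Runtime are installed, that exports a toy model to ONNX and runs it with a separate runtime — the way a model trained in the cloud might be handed off to an edge device.

```python
# Sketch: train in one framework, run anywhere ONNX is supported.
import torch
import torch.nn as nn
import onnxruntime as ort

# A toy model standing in for something trained in the cloud
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
dummy = torch.randn(1, 8)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["x"], output_names=["y"])

# The edge side needs only the ONNX runtime, not the training framework
session = ort.InferenceSession("model.onnx",
                               providers=["CPUExecutionProvider"])
outputs = session.run(["y"], {"x": dummy.numpy()})
print(outputs[0].shape)  # (1, 2)
```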

In Closing

As the edge AI space develops, it’s important to avoid being locked into a particular tool set, instead opting to build a future-proofed infrastructure that accommodates a rapidly changing technology landscape and that can scale as you interconnect your business with other ecosystems. Here at ZEDEDA, our mission is to provide enterprises with an optimal solution for deploying workloads at the IoT Edge where traditional data center solutions aren’t applicable, and we’re doing it based on an open, vendor-neutral model that provides freedom of choice for hardware, AI framework, apps and clouds. We’re even integrating with major cloud platforms such as Microsoft Azure to augment their data services.

Reach out if you’re interested in learning more about how ZEDEDA’s orchestration solution can help you deploy AI at the IoT Edge today while keeping your options open for the future. We also welcome you to join us in contributing to Project EVE within LF Edge which is the open source foundation for our commercial cloud offering. The goal of the EVE community is to build the “Android of the IoT Edge” that can serve as a universal abstraction layer for IoT Edge computing — the only foundation you need to securely deploy any workload on distributed compute resources. To this end, a key next step for Project EVE is to extend Kubernetes to the IoT Edge, while taking into account the unique needs of compute resources deployed outside of secure data centers.

The success of AI overall — and especially edge AI — will require our concerted collaboration and alignment to move the industry forward while protecting us from potential misuse along the way. The future of technology is about open collaboration on undifferentiated plumbing so we can focus on value and build increasingly interconnected ecosystems that drive new outcomes and revenue streams. As one political figure famously said — “it takes a village!”

If you have questions or would like to chat with leaders in the project, join us on the LF Edge Slack (#eve or #eve-help) or subscribe to the email list. You can check out the documentation here.

Onboard edge computing devices with Secure Device Onboard and Open Horizon

By Blog, Open Horizon, Secure Device Onboard

Written by Joe Pearson, Chair of the Open Horizon TSC and Technology Strategist for IBM

For many companies, setting up heterogeneous fleets of edge devices across remote sites has traditionally been a time-consuming and sometimes difficult process. At the Linux Foundation’s Open Networking & Edge Summit conference last month, IBM announced that Intel’s Secure Device Onboard (SDO) solution is now fully integrated into Open Horizon and IBM Edge Application Manager and available to developers as a tech preview.

The Intel-developed SDO enables low-touch bootstrapping of required software at a device’s initial power-on. For the Open Horizon project, this means the agent software can be installed and configured automatically and autonomously. SDO technology is now being incorporated into a new industry onboarding standard under development by the FIDO Alliance.

Developers can try this out by using the all-in-one version of Open Horizon: simply run a one-line script on a target edge compute device or VM to simulate powering up an SDO-enabled device and onboarding it.

Both Open Horizon and SDO recently joined the LF Edge umbrella, which aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system. Thanks to IBM’s participation, contributors across the LF Edge open source community are helping advance the future of open edge computing solutions.

See how SDO works

Simplifying edge device onboarding

Our team uses the term “device onboarding” to describe the initial bootstrapping process of installing and configuring the required software on an edge computing device. In the case of Open Horizon, that includes connecting the device to the Horizon management hub services. We have simplified the installation process by providing a one-line script, so that a person can install and run a development version of Open Horizon and SDO on a laptop or in a small virtual machine.

Before SDO was available, the typical installation process required a person to open a secure connection to the device (sometimes on premises), manually install all of the software prerequisites, then install the Horizon agent, configure it, and register it with the management hub. With SDO support enabled, an administrator simply loads the voucher into the management hub when the device is purchased and associates the desired configuration with it. When a technician powers on the device and connects it to the network, the device automatically finds the SDO services, presents its voucher, and downloads and installs the software.
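
That flow can be summarized in a short illustrative sketch. All of the names below are hypothetical stand-ins, not the real SDO or Open Horizon APIs; the sketch only shows the sequence of steps described above.

```python
# Illustrative sketch of the voucher-based onboarding flow.
# "OwnerHub" and "Device" are hypothetical stand-ins, not real SDO APIs.

class OwnerHub:
    """Management hub: holds vouchers and the config each maps to."""
    def __init__(self):
        self.vouchers = {}

    def import_voucher(self, voucher_id, node_config):
        # Step 1: an admin loads the voucher at purchase time and
        # associates the desired agent configuration with it.
        self.vouchers[voucher_id] = node_config

class Device:
    """SDO-enabled device: presents its voucher at first power-on."""
    def __init__(self, voucher_id):
        self.voucher_id = voucher_id

    def power_on(self, hub):
        # Step 2: the device finds the onboarding service on the
        # network and presents its voucher.
        config = hub.vouchers.get(self.voucher_id)
        if config is None:
            raise RuntimeError("voucher not registered")
        # Step 3: the agent software is pulled and configured, no
        # manual SSH session or scripted install required.
        print(f"installing agent with config: {config}")

hub = OwnerHub()
hub.import_voucher("voucher-123", {"org": "example", "pattern": "sensor"})
Device("voucher-123").power_on(hub)
```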

Integrating SDO into Open Horizon

The Open Horizon project has created a repository specifically for integrating the SDO project components into the Open Horizon management hub services and CLI. The SDO rendezvous service runs alongside the management hub and provides a simple interface for bulk-loading import vouchers.

LF Edge and open source leadership

LF Edge continues to strive to ensure that edge computing solutions remain open. In May 2020, IBM contributed Open Horizon to LF Edge. With Intel now contributing SDO as well, another vital component of a complete edge computing framework is open source.

We’re excited to collaborate with Intel to expand the deployment of applications from open hybrid cloud environments down to the edge, making them accessible, secure, and scalable for the developer ecosystem and community. For more videos about Open Horizon, please visit LF Edge’s YouTube channel and the LF Edge Open Horizon playlist. If you have questions or would like to chat with leaders in the project, join us on the LF Edge Slack (#open-horizon, #open-horizon-docs, #sdo-general or #sdo-tsc).

EdgeX Foundry Challenge Shanghai 2020 was a success!

By Blog, EdgeX Foundry, Event

Written by the co-chairs of the EdgeX Foundry China Project: Melvin Sun, Senior Marketing Manager for the IoT Group at Intel, and Gavin Lu, R&D Director, OCTO, at VMware

On Thursday, September 24, after more than 2 months of fierce competition, the EdgeX Foundry Challenge Shanghai 2020 culminated in the final roadshow. Companies, colleges, hacker teams and enthusiastic audiences from around the world participated in the live video broadcast.

Yingjie Lu, Senior Director of the IoT Group at Intel Corporation, and Jim White, CTO of IOTech Systems and Chairman of the EdgeX Foundry Technical Steering Committee, delivered the opening and closing speeches for the final roadshow, respectively. Mr. Peng Jianzhen, Secretary General of the China Chain-store & Franchise Association (CCFA), and Mr. Shu Bin, an expert from China State Grid, delivered summary remarks on the teams in the commerce and industrial tracks, respectively.

The EdgeX Challenge Shanghai 2020 is an international hackathon event based on EdgeX Foundry. It was successfully kicked off as a joint effort among EdgeX community members Canonical, Dell Technologies, HP, Intel, IOTech, Tencent, Thundersoft and VMware, together with the Shanghai startup incubator Innospace.

In the age of the Internet of Things, collaboration is the foundation of all technological and business innovation, and this spirit is spread and practiced through the EdgeX community. Today, EdgeX community members collaborate extensively across the globe to drive technological development and business prosperity in the global IoT industry.

40 teams submitted proposals covering both the commerce and industrial tracks, including retail, asset tracking, manufacturing, environmental protection and energy conservation, utilities and building automation. The cutting-edge technologies adopted in their work, such as edge computing, artificial intelligence, 5G, sensors, blockchain, satellite communications, the Internet of Things and big data, fully demonstrate the strength of this innovation and foreshadow how these technologies will transform future life.

After careful selection by the judges, a total of 15 teams were chosen as finalists on July 24. Using devkits provided by Intel and the EdgeX Challenge committee, these teams built use cases in various verticals, solving industry pain points and developing deployable applications, committing themselves to a feast of technological innovation and change. The expert judges followed the teams’ development closely, working with them through daily Q&As, weekly discussions and mid-term checkpoint meetings.

The hackathon ran smoothly in an intense yet methodical atmosphere, with strong support from the community. By mid-September, all 15 teams had submitted their final projects, setting the stage for the finals. Click here to watch a snapshot video of the EdgeX Challenge finalists preparing for the competition.

List of teams for roadshow:

Commerce Track

Sequence | Team code | Team name | Use case | Time
1 | c18 | HYD Miracle Innovation | Apparel store edge computing | 9:12
2 | c27 | DreamTech | Mortgage asset monitoring based on blockchain + edge computing | 9:24
3 | c16 | USST,YES! | Retail store edge | 9:36
4 | c19 | Breakthrough Studio | Scenic intelligent navigation | 9:48
5 | c26 | i-Design | Edge-cloud based far-ocean intelligent logistics/monitoring | 10:00
6 | c07 | Starlink team from JD.com | Retail store heatmap & surveillance | 10:12
7 | c03 | AI Cooling | Building automation | 10:24
8 | c08 | Sugarinc Intelligence | Smart building & office management | 10:36
9 | c10 | Five Card Stud fighting force | Automobile 4S (sales, service, spare parts, survey) store | 10:48

Industrial Track

Sequence | Team code | Team name | Use case | Time
1 | i07 | VSE | Driving school surveillance & analytics | 11:00
2 | i09 | Smart eyes recognize energy | CV-based intelligent meter-recognition system | 11:12
3 | i01 | Baowu Steel | Defect detection in steel manufacturing | 11:24
4 | i08 | CyberIOT | Guardians of electrical and mechanical equipment | 11:36
5 | i10 | Power Blazers | Autotuning system for orbital wind power systems | 11:48
6 | i06 | Jiangxing Intelligence | Intelligent edge gateway for industry | 12:00

Note: The sequence of the roadshow above was determined by the drawing of lots by representatives of each participating team.

The live roadshow was successfully broadcast on September 24; all the finalist teams presented their proposals and interacted with the judges. To ensure a comprehensive and professional assessment, the organizer invited expert judges with strong and varied backgrounds, including:

– Business experts: Gao Ge, General Manager of Bailian Group’s Science and Innovation Center; Violet Lu, IT Director of Fast Retailing Ltd.; and Yang Feng, Head of Wireless and Internet of Things in Tencent’s Network Platform Department

– EdgeX & AIoT experts: Henry Lau, Distinguished Technical Expert on Retail Solutions at HP Inc.; Gavin Lu, Director at the VMware (China) R&D Center; Jim Wang, Senior IoT Expert at Intel Corporation; Chris Wang, Director of Artificial Intelligence at Intel; Jack Xu and Mi Wu, IoT experts at Intel; and Xiaojing Xu, Senior Expert at Thundersoft Corporation

– Representatives from the investment community: Riel Capital, Intel Capital, VMware Capital … In addition, the quality of the teams and their proposals attracted a large number of investment institutions; professional investors from Qiming Venture Capital, Morningside Capital, ZGC Qihang Investment and others made up the observer group for the live roadshow.

During the live-streamed roadshow, the teams’ work demonstrated the unique advantages of EdgeX for IoT and edge computing development. The judges provided feedback and suggestions on creativity, use of computer vision, impact, viability, usefulness and simplicity, as well as on documentation and roadshow performance. Observers and fans from all over the world also shared their opinions through the messaging and comment sections, which made the competition exciting.

The live roadshow has ended, but the organizing committee is still collecting the judges’ questions and compiling the teams’ answers. Based on this information, the judges will carefully score the ideas and demos to identify competitive, deployable solutions to customers’ pain points. Meanwhile, an award ceremony will be held at the Global Technology Transfer Conference on October 29, where the winners will receive a total of 90,000 RMB in cash prizes and other rewards for their efforts.

Video playback of the final roadshow will be available on the Bilibili website; if you are interested in watching it, please follow the EdgeX China Project channel on Bilibili. For more about the EdgeX Foundry China Project, visit the wiki at https://wiki.edgexfoundry.org/display/FA/China+Project.

Akraino White Paper: Cloud Interfacing at the Telco 5G Edge

By Akraino, Akraino Edge Stack, Blog

Written by Aaron Williams, LF Edge Developer Advocate

As cloud computing migrates from large centralized data centers to the Service Provider Edge’s Access Edge (i.e., server-based devices at the telco tower) to get closer to the end user and reduce latency, more applications and use cases become possible. Combine this with the widespread rollout of 5G and its speed, and the last mile instantly becomes insanely fast. Yet if this new edge stack is not constructed correctly, the promise of these technologies will not be realized.

LF Edge’s Akraino project has dug into these issues and created its second white paper, Cloud Interfacing at the Telco 5G Edge. In this just-released white paper, Akraino analyzes the issues, proposes detailed enabler layers between edge applications and 5G core networks, and makes recommendations for how telcos can overcome these technical and business challenges as they bring the next generation of applications and services to life.

Click here to view the complete white paper: https://bit.ly/34yCjWW

To learn more about Akraino, blueprints or end user case stories, please visit the wiki: https://wiki.akraino.org.