Monthly Archives

November 2020

Home Edge Launches Coconut Release

By Announcement, Blog, Home Edge

Written by Moonki Hong, LF Edge Governing Board Member, lead for Home Edge and Staff Engineer at the Samsung Research Open Source Group

Home Edge is a robust, reliable and intelligent home edge computing open source framework and ecosystem running on a variety of devices at home. To accelerate the successful deployment of an edge computing services ecosystem, LF Edge’s Home Edge Project provides users with an interoperable, flexible, and scalable edge computing services platform with a set of APIs, along with supporting libraries and runtimes.

Home Edge is made up of multiple modules to allow for a flexible deployment. The Edge Orchestration Module handles Edge (device) Discovery, Service Offloading (load balancing between devices), Edge Setup, and Service Management and Monitoring. The Data Storage Module provides persistent storage (Core Data) and Metadata to identify the node; it also includes the I/O Agent, which provides API access to the stored data. The Home Device Control Module provides device discovery and setup. The Home Device Client provides the connection between the Cloud Interface and the controller for the home devices. There are also modules for Machine Learning, Security, and a Deep Neural Network Framework.
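To illustrate the Service Offloading idea, here is a minimal sketch of how an orchestrator might pick the best peer to run a task. The device model, scoring rule, and names here are hypothetical and do not reflect Home Edge's actual types or APIs:

```python
from dataclasses import dataclass

@dataclass
class EdgeDevice:
    """A discovered home-edge node (hypothetical model for illustration)."""
    name: str
    cpu_load: float      # 0.0 (idle) .. 1.0 (saturated)
    has_service: bool    # does this node offer the requested service?

def pick_offload_target(devices, requester):
    """Service offloading: choose the least-loaded peer that offers the
    service; fall back to running locally if no peer is better."""
    candidates = [d for d in devices
                  if d.has_service and d.name != requester.name]
    best = min(candidates, key=lambda d: d.cpu_load, default=None)
    if best is None or best.cpu_load >= requester.cpu_load:
        return requester          # execute locally
    return best                   # offload to the less-loaded peer

tv = EdgeDevice("smart-tv", cpu_load=0.9, has_service=True)
hub = EdgeDevice("home-hub", cpu_load=0.2, has_service=True)
cam = EdgeDevice("camera", cpu_load=0.1, has_service=False)

target = pick_offload_target([tv, hub, cam], requester=tv)
print(target.name)  # home-hub
```

The camera is skipped because it does not offer the service, and the busy TV offloads to the lightly loaded hub; in the real framework this decision is driven by the orchestration module's own scoring over discovered devices.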

Today, Home Edge is happy to announce the launch of its Coconut release. The third release for the project, Coconut includes new features such as Multi-NAT communications (which enables discovery of devices behind different NATs) and Data Storage.

Through a collaboration with EdgeX Foundry, a centralized device can be designated as a primary device to store the data from different devices. The Home Edge project appreciates those who have consistently supported and helped us with this release.

We would especially like to thank the EdgeX Foundry team, specifically TSC Chair Jim White and Cloud Tsai from IoTech, and Taewan Kim, Ayush, Sunchit, Nitu and others at Samsung who helped develop and debug the core features of the Coconut release. In addition, I would like to express my gratitude to Suresh LC, who has brought great passion to his advocate role, promoting Home Edge’s technical and business perspectives in the LF Edge TAC and other committees.

The Coconut Release builds on the features of the Baobab release, which was launched last year. Home Edge expects its next release to be available in 2021 and to include real-time data analytics features.

If you would like to learn more about the use cases for Home Edge or more technical details, check out the video of our October 2020 webinar. As part of the “On the Edge with LF Edge” webinar series, we shared the general overview for the project, how it fits into LF Edge, key features of the Coconut release, the roadmap, how to get involved and the landscape of the IoT Home Edge.

If you would like to contribute to Home Edge or share feedback, find the project on GitHub, the LF Edge Slack channel (#homeedge) or subscribe to our email list (homeedge-tsc@lists.lfedge.org). We welcome new contributors to help make this project better and expand the LF Edge community.

Additional Home Edge Resources:

1. Coconut release code: https://github.com/lf-edge/edge-home-orchestration-go/releases/tag/coconut

2. Release notes

3. Home Edge Wiki: https://wiki.lfedge.org/display/HOME/Home+Edge+Project

 

EdgeX Foundry Virtual F2F Recap: Ireland Planning

By Blog, EdgeX Foundry

Written by Jim White, Chair of the EdgeX Foundry Technical Steering Committee and CTO of IOTech Systems

The holiday season is upon us in the US.  On behalf of the EdgeX Foundry community, I’d like to wish you and yours a very warm, blessed and peaceful holiday season.

This time of year is special to me because it usually means some peace after a long, hard release cycle.  The EdgeX community is working on EdgeX 1.3 – the Hanoi Release. It is a minor (dot) release and backward compatible with Edinburgh (1.0), Fuji (1.1) and Geneva (1.2), along with any patch release of these.  For more details on the Hanoi release, stay tuned for my release blog in a few weeks.

Virtual Face-to-Face

In addition to the release, we also completed our semi-annual release planning sessions this past week to get ready for our next release in the spring of 2021.  The next EdgeX Foundry release is called Ireland.

Up until this year, the planning meetings were held in person at a venue hosted by one of our sponsoring companies.  We called the events our “face-to-face” meetings because it was the only time that the contributors and members of our global development community had a chance to meet in person.  This year, due to the pandemic, our planning sessions have had to be held “virtually.”  Somewhat paradoxically, this has led members of the community to refer to these on-line meetings as “virtual face-to-face” meetings.  Leave it to a group of bright, energetic engineers to shake off the negative and embrace the new normal. Here we are, all online together.

In five, half-day meetings, we assembled our technical steering committee, development teams, and EdgeX adopters/users to scope the features, technical debt, and architectural direction of the next release and general roadmap of EdgeX.

Ireland Planning

We follow an alphabetical naming sequence in our releases and select members of our community that have contributed significantly to the project to help with the naming process.  This release was named by Intel’s Lenny Goodell and Mike Johanson, who have contributed immensely to the project, both in leadership and code contributions, over the past few years.  Each release is named after some geographical place on the earth (city, state, country, mountain, etc.).

EdgeX 2.0 Major Release

During our planning meetings, the general themes, objectives and overall direction of the next release are the first things we decide.  Ireland will be EdgeX 2.0 – our project’s second major release.

As a major release, the Ireland release will include non-backward compatible features and APIs.  This is largely because we began work in the spring of 2020 to implement a new and improved set of EdgeX micro service APIs.  We call this new collection of APIs for each of the EdgeX micro services the V2 APIs (the V1 APIs are currently in place).

The existing EdgeX APIs have been in place since the project’s very first release in 2017. The V2 APIs will remove a lot of early EdgeX technical debt and provide a better informational exchange. While we began the implementation this past spring, it will take the community until the spring release to complete the V2 APIs.  The new APIs will also allow for many new, future release features. For one, the request and response object models in the new APIs are richer and better organized.  The models will better support communications via alternate protocols in the future.  The V1 APIs will also be removed from the EdgeX micro services.
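To make the "richer and better organized" point concrete, here is a hypothetical sketch of what a versioned response envelope can look like; the field names are illustrative only and are not the actual EdgeX V2 schema:

```python
import json
import uuid

API_VERSION = "v2"

def make_response(request_id, status_code, payload):
    """Wrap a result in a versioned envelope so every service response
    carries its API version, correlates to a request, and separates
    status from payload.  (Illustrative field names, not EdgeX's.)"""
    return {
        "apiVersion": API_VERSION,
        "requestId": request_id,
        "statusCode": status_code,
        "payload": payload,
    }

req_id = str(uuid.uuid4())
resp = make_response(req_id, 200, {"deviceName": "thermostat-01", "value": 21.5})
print(json.dumps(resp, indent=2))
```

Because every message shares the same envelope, clients can version-check and correlate requests uniformly regardless of which micro service answered, which is the kind of consistency a V2-style redesign buys.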

Because this is a non-backward compatible release, we are taking the opportunity to remove as much technical debt and include as many desired non-backward compatible features as possible.  This includes:

  • Removal of archived/deprecated services like the Supporting Rules Engine and Logging services
  • Removal of support for MongoDB (we have used Redis by default since our Fuji release)
  • Support for device services to send data directly to application services
  • Updated configuration values and structures so they are more consistent with one another
  • More appropriate naming of properties, objects, services and artifacts

New Features

In addition to the new V2 APIs, what is going to be in this major release?  This list is long and I encourage those with a need for all the details to have a look at our documentation on our Wiki site, but here are some of the major new features:

  • Device services (our “thing” connectors) will send data directly, via message bus (versus REST), to our application services, which prepare the data for export (to cloud or enterprise systems) and for local analytics packages (rules engines, predictive analytics packages, etc.). Optionally, the data can also be persisted locally via our core services.  This will help improve latency, allow for better quality of service, and reduce storing data at the edge when it is not needed.
  • We are improving the security services to allow you to bring your own certificates (in Kong, for example), provide abstraction around our secret provider (and make sure that abstraction is used by all services in the same way), secure admin ports and more.
  • Application services that prepare sensor/device data for cloud and enterprise applications (north side systems) will allow conditional transformation, enrichment, or filtering functions to be performed on exported data.
  • A number of device services have been recently contributed to EdgeX. We have new connectors for Constrained Application Protocol (CoAP), General Purpose Input/Output (GPIO), Universal Asynchronous Receiver-Transmitter (UART), and Low-Level Reader Protocol (LLRP) that are under review and will be made available in this release cycle.
  • This release will include an example of how to include AI/ML analytics into the platform and data flow of EdgeX.
  • Our EdgeX user interface will include new data visualization capability and a “wizard” tool to help provision and establish new devices in the EdgeX instance.
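The message-bus flow in the first two bullets can be sketched as a toy example, with an in-memory bus standing in for the real transport (e.g. Redis Pub/Sub or MQTT); the topic name, event fields, and filter threshold are all made up for illustration:

```python
from collections import defaultdict

class MessageBus:
    """A tiny in-memory pub/sub stand-in for a real message bus."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

exported = []

def app_service(event):
    """Application service: conditionally filter, enrich, then 'export'."""
    if event["value"] < 20.0:          # conditional filtering
        return
    event = {**event, "unit": "C"}     # enrichment before export
    exported.append(event)

bus = MessageBus()
bus.subscribe("edgex/events", app_service)

# The device service publishes readings straight onto the bus (no REST hop).
bus.publish("edgex/events", {"device": "thermostat-01", "value": 21.5})
bus.publish("edgex/events", {"device": "thermostat-01", "value": 18.9})

print(exported)  # only the 21.5 reading survives the filter
```

Skipping the REST round-trip is what buys the latency and quality-of-service improvements the bullet describes, and the filter shows how unneeded data can be dropped before it is ever stored at the edge.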

Additional Improvements

In addition to scoping and planning for new features to the platform for the Ireland release, the community also decided to address additional needs of our user community in this release.

  • Because this Ireland release will be non-backward compatible with our current Hanoi release and any 1.x version of EdgeX, we are also going to provide some tools and documentation to help adopters migrate existing release databases, configurations, etc. into the new 2.0 environment.
  • We plan to increase our interoperability testing, especially around our use of 3rd party services such as Kuiper, and provide some scalability/performance guidance as it relates to the number of connected things and how much sensor data can be passed through EdgeX from those things.
  • Our DevOps team is going to explore GitHub repository badges to provide adopters/users with better confidence in the platform.

Jakarta Release and Beyond

During these semi-annual planning meetings, the focus is squarely on the next release.  However, we also take the time to take stock of the project as a whole and to look at where the project is heading a year or more into the future.

At this time, the community is forecasting that the Jakarta release – scheduled for around the fall of 2021 – will be a “stability release.”  Meaning, Jakarta will probably not include any large enhancements.  Its purpose will be to provide a release that addresses any issues discovered in the EdgeX 2.0 release of Ireland. We also hope that Jakarta will be our first ever Long-Term-Support (LTS) release.  And with an LTS release, we hope to begin the implementation of an EdgeX certification program.

The EdgeX LTS policy has already been established and we have indicated to the world that once we have an LTS release, we plan to support that release (with bug fixes, security patches, documentation and artifact updates) for 2.5 years.  That is a significant commitment on the part of our open source community and the stability release will help us achieve that goal.

The certification program is one we have envisioned for a number of years.  The idea is that we eventually want to get to a point where a 3rd party could create and provide a replacement EdgeX service, and the community would help test and validate that the service adheres to the APIs and criteria for that service, making it a suitable replacement in an EdgeX deployment.  In order to deliver the certification program, the community feels we need the stability that an LTS release provides.

Wrap Up

It’s been a heck of a year.  Despite the significant global pandemic and economic challenges, the EdgeX community did not miss a beat and managed to complete its goals for the year (2 more successful releases).  And with our fruitful planning meeting, despite it being held on-line, the community has plotted a path for an even more successful 2021 that will start with the delivery of EdgeX 2.0 in the spring.

As always, I want to thank the members of the community for their outstanding efforts, talents, patience, commitment and professionalism.  You could not find a group of people that are more fun to work with.  Here is wishing that in 2021, we can resume actual “face to face” meetings.  Happy holidays and a happy new year to everyone.

To learn more about the EdgeX Foundry releases and roadmap, visit https://www.edgexfoundry.org/software/releases/.

 

LF Edge Member Spotlight: OSIsoft

By Blog, Fledge, LF Edge, Member Spotlight

The LF Edge community comprises a diverse set of member companies and people that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sit down with Daniel Lazaro, Senior Technical Program Manager at OSIsoft, to discuss the importance of open source, collaborating with industry leaders in edge computing, their contributions to Fledge and the impact of being a part of the LF Edge ecosystem.

Can you tell us a little about your organization?

Since 1980, OSIsoft has focused on a single goal: getting operations data into the hands of industrial professionals so that they have the information they need to reduce costs, increase productivity and transform business. Decades ago, before the modern internet, big data, or AI arrived on the scene, the company’s flagship PI System software became known for breaking ground as a historian: a database used by engineers in an operational environment that captures streaming, time-series data that reflects the state of the physical equipment (assets).

Over time, the PI System has expanded to meet modern industrial needs, allowing not only operations staff but also executives, software developers, data scientists and others to understand, share, and make decisions based on highly curated operations data. With its addition of edge and cloud-based capabilities, the PI System now makes this essential data accessible, usable and valuable to people, systems and communities of customers, suppliers and partners across and beyond the enterprise.

Why is your organization adopting an open source approach?

Open source enables collaboration and integration of heterogeneous technologies across organizational boundaries. Moreover, it provides a platform for innovation to create solutions designed to address technical and business challenges such as those at the edge. Our CEO and founder Pat Kennedy saw the opportunity for an open source approach to address such challenges at the edge and started Dianomic. We believe that open source is the fast track to innovation. Industrial systems are unique in the number of protocols, data mappings and overall diversity. Open source can uniquely address these edge computing challenges by collaborating on code that all participants can access, modify and expand upon.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

ZEDEDA introduced Dianomic to LF Edge before its inception. As a result, Dianomic and OSIsoft joined as founding members. The original idea was, and remains, to build a thriving open source industrial community. This is a challenge for the Linux Foundation in that industrial companies have not been traditional open source users. The operations (OT) side of the industrial market tends not to be software/compute experts; they are machine, manufacturing and process experts.

LF Edge curates several open source projects and a community around them that addresses the challenges of edge computing in a wide range of vertical markets at the edge of the network. This framework provides a collaboration platform for organizations to build non-differentiating infrastructure for solutions at the edge driven by inherent tradeoffs between the benefits of centralization and decentralization.

LF Edge plays a critical role in helping accelerate deployments of Industrial IoT, enabling and expanding visibility into previously untapped aspects of operations.

What do you see as the top benefits of being part of the LF Edge community?

In the LF Edge community, we see a group of like-minded organizations willing to work together. By collaborating through open source, we join forces to build the framework and ecosystem for the future of edge computing. Each project targets different pieces of the puzzle or building blocks to assemble in order to address the complexities encountered at the edge. Divide and conquer, focus, specialize and thrive. The community ecosystem provides learning and growing opportunities and a better together experience. At the same time, it enables exciting new revenue opportunities for new types of services and customers.

What sort of contributions has your team made to the community, ecosystem through LF Edge participation?

As a Premier member of LF Edge, OSIsoft actively influences the strategic and technical direction of LF Edge as voting members of the Governing Board and Technical Advisory Council. OSIsoft brings its industrial perspective and expertise to LF Edge and contributes its vision working with different committees and through public speaking at various LF Edge related events. Our contributions play an important role in nurturing and growing a community of end users across various industry verticals within LF Edge. The end user vertical solution group kicked off in October 2020 with presentations by industrial companies showcasing valuable use cases implementing solutions using LF Edge projects.

We believe that the industrial edge has a different set of requirements that are better addressed with a specialized approach tailored to its specific needs, namely Fledge. The thriving and growing Fledge community of industrials has contributed back code to the project already deployed in production environments today. This adds to the previous contributions by service providers, system integrators, OEMs such as Dianomic, OSIsoft and Google to name a few. Fledge started when Dianomic contributed the entire FogLAMP stack in winter of 2019. At that time, the code was in its 8th release and had been commercially deployed in energy, manufacturing and mining operations.

Fledge works closely with other LF Edge projects such as EVE and Akraino. EVE provides system and orchestration services and a container runtime for Fledge applications and services. Together, industrial operators can build, manage, secure and support both Supervisory Control and Data Acquisition (SCADA) and non-SCADA connected machines, IIoT and sensors as they scale. Fledge is also integrated with Akraino, as both projects support the rollout of 5G and private LTE networks.

What do you think sets LF Edge apart from other industry alliances?

Traditionally, alliances have focused on delivering recommendations, guidelines or standards. Instead, LF Edge focuses on delivering reference implementations in the form of quality open source software ready for adopters to integrate into their solutions. The strong communities of end users and developers around the software, who customize, integrate, implement and contribute back to the projects, set LF Edge apart.

How will LF Edge help your business?

The LF Edge framework lowers the barrier to adoption of edge computing solutions, translating into increased industrial implementations that enable new use cases that were not technically possible or cost effective before. This allows OSIsoft customers to rapidly expand their real-time data infrastructure to new systems and devices in industry and operations for greater visibility into operations and business, faster decisions and higher value.

Moreover, LF Edge enables the expansion to new market opportunities through technical solutions as well as its communities of end users and vertical solutions. LF Edge governance provides customers with confidence that the projects within the framework are developed with broad industry support and openness without vendor lock-in.

Finally, access to a large developer community and marketing efforts are opportunities to share resources and drive down costs.

What advice would you give to someone considering joining LF Edge?

Get familiar with the framework and ecosystem of projects. You can start by checking the website and reading the various resources available, white papers and documentation provided by the community. Identify the projects, groups and communities that align with your organization’s goals. Join the relevant groups and communities, mailing lists and calls; listen in and learn, and when you are ready, participate and contribute. If you identify gaps or have solutions that can enrich the current ecosystem, bring them on. Contributions come in many shapes, not just code, and they are the means to drive direction and influence within LF Edge.

To find out more about LF Edge members or how to join, click here. To learn more about Fledge, click here. To see use cases for Fledge, check out these videos. Additionally, if you have questions or comments, visit the LF Edge Slack Channel and share your thoughts in the #fledge, #fledge-help or #fledge-tsc channels.

XGVela: Bring More Power to 5G with Cloud Native Telco PaaS

By Akraino, Blog

Written by Sagar Nangare, an Akraino community member and technology blogger who writes about data center technologies that are driving digital transformation. He is also an Assistant Manager of Marketing at Calsoft.

A Platform-as-a-Service (PaaS) model of cloud computing brings lots of power to the end-users. It provides a platform to the end customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an application. In the roadmap to build a 5G network, telecom networks need such breakthrough models that help them focus more on the design and development of network services and applications.

XGVela is a 5G cloud-native open-source framework that introduces Platform-as-a-Service (PaaS) capabilities for telecom operators. With this PaaS platform, telecom operators can quickly design, develop, and innovate telco network functions and services. This way, telecom companies can focus more on seizing business opportunities rather than diving into the complexities of telecom infrastructure.

China Mobile initiated this project internally; it was later hosted and announced by the Linux Foundation as part of the initiative to accelerate telco cloud adoption. Apart from China Mobile, other supporters of XGVela are China Unicom, China Telecom, ZTE, Red Hat, Ericsson, Nokia, H3C, CICT, and Intel.

To build the initial version of XGVela, PaaS functionality was drawn from existing open-source PaaS projects like Grafana, Envoy, and ZooKeeper, and further enhanced to meet telco requirements.

Why does the 5G network need XGVela’s PaaS capabilities?

  • XGVela helps telecom operators keep pace with the fast-changing requirements of building a 5G network driven by cloud-native backhaul. They need faster upgrades to network functions and services, along with agility in new service deployments.
  • Telecom operators need to focus more on microservices-driven, container-based network functions (CNFs) rather than VM-based network functions (VNFs), but must continue to use both concurrently. XGVela can support both CNFs and VNFs in the underlying platform.
  • Telecom operators want to reduce network construction costs by adopting an open and healthy technology ecosystem such as ONAP, Kubernetes, OpenDaylight, Akraino, OPNFV, etc. XGVela follows this approach to reduce the barriers to end-to-end orchestrated 5G telecom networks.
  • XGVela simplifies the design and innovation of 5G network functions by letting developers focus on application development and service logic rather than dealing with complex underlying infrastructure. XGVela provides standard APIs to tie many internal projects together.

XGVela integrates CNCF projects based on telco requirements to form a General-PaaS. It is architected with the microservices design method for network function development in mind, and is further integrated into the telco PaaS.

XGVela is integrated with OPNFV and Akraino for integration and testing. A recent presentation by Srinivasa Addepalli and Kuralamudhan Ramakrishnan showed that the feature set of the Akraino ICN blueprints is similar to XGVela’s. Many users want to deploy CNFs and applications on the same cluster. In those environments, there is a need to figure out what additional work must be done on top of Kubernetes to enable CNF deployment (especially telco CNF, normal CNF and application deployments), and to evaluate how it all works together. This is where XGVela and the Akraino ICN blueprint family align with each other.

You can subscribe to the XGVela mailing list here to track the project’s progress. For more information about Akraino Blueprints, click here: https://wiki.akraino.org/. Or, join the conversation in the LF Edge Slack channels #akraino-blueprints, #akraino-help and #akraino-tsc.

Scaling Ecosystems Through an Open Edge (Part One)

By Blog, Industry Article, Trend

Written By Jason Shepherd, LF Edge Governing Board Member and VP of Ecosystem at ZEDEDA

This blog originally ran on the ZEDEDA Medium blog. Click here for more content like this.


In my piece earlier this year, I highlighted an increase in collaboration on interoperability between different IT and OT stakeholders and in building trust in data. In this three-part blog, I’ll expand on some related thoughts around building ecosystems, including various technical and business considerations and the importance of an open underlying foundation at the network edge.

For more on this topic, make sure to register and tune in tomorrow, July 28, for Stacey Higginbotham’s virtual event! Stacey always puts on a great event and this will be a great series of panel discussions on this important topic. I’m on a roundtable about building partnerships for data sharing and connected services.

The new product mindset

I’ve written about the “new product mindset” to highlight the importance of developing products and services with the cumulative lifetime value they will generate in mind, compared to just the immediate value delivered on the day of launch. In the rapidly-evolving technology landscape, competitive advantage is based on the ability to innovate rapidly and continuously improve an offering through software- and, increasingly, hardware-defined experiences, not to mention better services.

Related to this is the importance of ecosystems. Whether you build one or join one, it’s more important than ever in this world of digital transformation to be part of an ecosystem to maintain a competitive edge. This includes consideration on two vectors — both horizontal for a solid and scalable foundation and vertical for domain-specific knowledge that drives targeted outcomes. Ultimately, it’s about creating a network effect for maximizing value, both for customers and the bottom line.

Approaches to building ecosystems

An ecosystem in the business world can take various forms. In simplest terms it could be a collection of products and services that are curated to be complementary to a central provider’s offering. Alternatively, it could consist of partners working towards a common goal, with each benefitting from the sum of the parts. Another ecosystem could take the form of an operations-centric supply chain or a group of partners targeting common customers with a joint go-to-market strategy.

Approaches to ecosystem development span a spectrum of closed to open philosophies. Closed ecosystems are based on closely-governed relationships, proprietary designs, and, in the case of software, proprietary APIs. Sometimes referred to as “walled gardens,” the tight control of closed ecosystems can provide a great customer experience. However, this tends to come with a premium cost and less choice. Apple is a widely cited example of a company that has taken a more closed ecosystem approach.

Then there are “closed-open” approaches which have APIs and tools that you can openly program to — as long as you pay access fees or are otherwise advancing the sponsoring company’s base offer. Amazon’s “Works with Alexa” program is an example of this model, and a big driver in how they emerged as a smart home leader — they spent many years establishing relationships and trust with a broad consumer base. This foundation creates a massive network effect, and in turn, Amazon makes the bulk of their revenue from the Alexa ecosystem by selling more consumer goods and furthering customer relationships — much more so than on the Alexa-enabled devices themselves.

Scaling through open source

To fully grasp the business trade-offs of closed vs. open ecosystems, consider Apple’s iOS and Android. While Apple provides a curated experience, the open source Android OS provides more choice and, as a result, far broader adoption. Android device makers may not be able to collect as much of a premium as Apple because of the open competition and the difficulty of differentiating with both hardware and a better user experience by skinning the stock Android UI. However, a key upside element is the sheer volume factor.

Plus, that’s not to say there still isn’t an opportunity to differentiate with Android as the base of your ecosystem. As an example, Samsung has built a strong ecosystem approach through their Galaxy brand and has recently branched out even further to offer a better-together story with their products. We’ve seen this play before — if you’re old enough to have rented video tapes at a brick-and-mortar store (be kind, rewind!), you’ll recall that VHS was an inferior technology to Beta, but it won out after a short format battle because the backers of VHS got the most movie studios onboard.

It’s ultimately about taking a holistic approach, and these days open source software like Android is an essential driver of ecosystem enablement and scale. On top of the benefit of helping developers reduce “undifferentiated heavy lifting,” open source collaboration provides an excellent foundation for creating a network effect that multiplies value creation.

After all, Google didn’t purchase the startup Android in 2005 (for just $50M!) and seed it into the market just to be altruistic — the result was a driver for a massive footprint that creates revenue through outlets including Google’s search engine and the Play store. Google doesn’t reveal how much Android in particular contributes to their bottom line, but some figures came to light in a 2015 court battle with Oracle over claims of Java-related patent infringement. Further, reaching this user base to drive more ad revenue is now free for Google, compared to them reportedly paying Apple $1B in 2014 for making their search engine the default for the iPhone.

Striking a balance between risk and reward

There’s always some degree of a closed approach when it comes to resulting commercial offers, but an open source foundation undoubtedly provides the most flexibility, transparency and scalability over time. However, it’s not just about the technology — also key to ecosystems is striking a balance between risk and reward.

Ecosystems in the natural world are composed of organisms and their environments — this highly complex network finds an equilibrium based on ongoing interactions. The quest for ecosystem equilibrium carries over to the business world, and especially holds true for IoT solutions at the edge due to the inherent relationship of actions taken based on data from tangible events occurring in the physical world.

Several years back, I was working with a partner that piloted an IoT solution for a small farmer, “Fred,” who grows microgreens in greenhouses outside of the Boston area. In the winter, Fred used propane-based heaters to maintain the temperature in his greenhouses and if these heaters went out during a freeze he would lose thousands of dollars in crops overnight. Meanwhile, “Phil” was his propane supplier. Phil was motivated to let his customers’ propane tanks run almost bone dry before driving his truck out to fill them so he could maximize his profit on the trip.

As part of the overall installation, we instrumented Fred’s propane tank with a wireless sensor and gave both Fred and Phil visibility via mobile apps. With this new level of visibility, Fred started freaking out any time the tank wasn’t close to full and Phil would happily watch it drop down to near empty. This created some churn up front but after a while it turned out that both were satisfied if the propane tank averaged 40% full. Side note: Fred was initially hesitant about the whole endeavor but when we returned to check in a year later, you couldn’t peel him away from his phone monitoring all of his farming operations.

Compounding value through the network effect

Where it starts getting interesting is compounding value-add on top of foundational infrastructure. Ride sharing is an example of an IoT solution (with cars representing the “things”) that has created an ecosystem network effect in the form of supplemental delivery services by providers (such as Uber Eats) partnering with various businesses. There’s also an indirect drag effect for ecosystems, for example Airbnb has created an opportunity for homeowners to rent out their properties and this has produced a network effect for drop-in IoT solutions that simplify tasks for these new landlords such as remotely ensuring security and monitoring for issues such as water leaks.

The suppliers and workers in a construction operation also represent an ecosystem, and here too it’s important to strike a balance. Years back, I worked with a provider of an IoT solution that offered cheap passive sensors that were dropped into the concrete forms during the pouring of each floor of a building. Combined with a local gateway and software, these sensors told the crew exactly when the concrete was sufficiently cured so they could pull the forms faster.

While the company’s claims of being able to pull forms 30% faster sounded great on the surface, it didn’t always translate to a 30% reduction in overall construction time because other logistics issues often then became the bottleneck. This goes beyond the supply chain — one of the biggest variables construction companies face is their workforce, and people represent yet another complex layer in an overall ecosystem. And beyond construction workers, there are many other stakeholders during the building process spanning financial institutions, insurance carriers, real estate brokers, and so forth. Once construction is complete then come tenants, landlords, building owners, service providers and more. All of these factors represent a complex ecosystem that continues to find a natural equilibrium over time as many variables inevitably change.

In a final ecosystem example, while IoT solutions can drive entirely new services and experiences in a smart city, it’s especially important to balance privacy with value creation when dealing with crossing public and private domains. A city’s internally-focused solution for improving the efficiency of their waste services isn’t necessarily controversial, meanwhile if a city was to start tracking citizens’ individual recycling behaviors with sensors this is very likely to be perceived as an encroachment on civil liberties. I’m a big recycler (and composter), and like many people I’ll give up a little privacy if I receive significant value, but there’s always a fine line.

In closing

The need to strike an equilibrium across business ecosystems will be commonplace in our increasingly connected, data-driven world. These examples of the interesting dynamics that can take place when developing ecosystems highlight how it’s critical to have the right foundation and holistic approach to enable more complex interactions and compounding value, both in the technical sense and to balance risk and privacy with value created for all parties involved.

Stay tuned for part two of this series in which I’ll walk through the importance of having an open foundation at the network edge to provide maximum flexibility and scale for ecosystem development.

EdgeX Foundry New Contributors Q3, Tutorials & More!

By Blog, EdgeX Foundry

Written by Aaron Williams, LF Edge Developer Advocate

As we head into the holiday season, we’d like to reflect a bit on Q3 and how busy it was for the EdgeX Foundry community. The fourth quarter will be more of the same with our next release, Hanoi, and a Face to Face (F2F) planning meeting for the following release, Ireland.

Quick stats: 

We had 38 (77 YTD) unique contributors this quarter making more than 500 (2,000 YTD) commits. We surpassed 7 million Docker downloads and have over 500k deployments. These are amazing milestones, but we definitely could not have reached them without our community contributors. In Q3, we had four new contributors whom we would like to welcome and recognize.

Q3 New Contributors’ GitHub Usernames:  

Alexmcminn

AlexCuse

jinfahua

siggiskulason

We really appreciate your contributions and look forward to your next ones. We wouldn’t be a community without you! And to our wider community, please go to GitHub and find our new contributors to see what other projects they are working on.

New “How to” Video and Updated Tutorial Released:

Lenny Goodell (EdgeX Foundry TSC member from Intel) recorded a great presentation on how EdgeX services work. Here is a short description: the session is meant to help those looking to understand an existing service, or to create a brand-new one, using the EdgeX bootstrapping, configuration, dependency injection (of clients), and related mechanisms.

Do you want to get involved with EdgeX Foundry, the world’s first plug-and-play, ecosystem-enabled open platform for the IoT edge, or just learn more about the project and how to get started? Either way, visit our Getting Started page and you will find everything that you need to get going. We don’t just need developers; we welcome tech writers, translators, and many other disciplines to help us create, extend and expand the EdgeX platform.

EdgeX Foundry is a Stage 3 (Impact) project under the LF Edge umbrella. Visit the EdgeX Foundry website for more information or join our Slack to ask questions and engage with community members. If you are not already a member of our community, it is really easy to join. Simply visit our wiki page and/or check out our GitHub and help us reach our next download milestone! You can also follow EdgeX on Twitter, LinkedIn, and YouTube.

Open Networking & Edge Summit: Distributed but always Connected

By Blog, LF Edge

This year, hundreds of professionals spanning the telecom, IoT and edge industries came together at Open Networking and Edge Summit, which took place virtually September 28-30.

Hosted by the Linux Foundation, LF Edge and LF Networking, there were 1,778 registrations (1,322 for the live event and an additional 456 to date for post-event platform access to content and technical showcase). Those attendees that joined live hailed from 523 organizations in 71 countries around the globe with 54% from the United States, Canada, and Mexico. 

51% of attendees spent 10 or more hours on the virtual event platform, and over 68% spent 4 or more hours on the platform. 69% were first-time Open Networking & Edge Summit attendees. To learn more about the event, check out the 2020 post-event report here.

If you missed it, you can find the entire Open Networking & Edge Summit playlist here, which includes keynote presentations, lightning talks, in-depth tutorials, panel discussions and project and use case sessions.  

Arpit Joshipura, General Manager of Networking, IoT and Edge at the Linux Foundation, kicked off the event with a keynote that highlighted the five hard questions answered by LF Edge and LF Networking. 

  1. Why Open Source?
  2. Standards or Open Source?
  3. Why Contribute?
  4. POC to Production?
  5. Money?

Keynotes Day 1

LF Edge projects were featured in sessions:  

Your Path to Edge Computing with Akraino: https://youtu.be/_UCaQzachuM

Akraino TSC Co-Chairs Kandan Kathirvel (AT&T) and Tina Tsou (Arm) shared details about the Akraino R2 blueprints and R3 community goals, how to engage and contribute and demos of certain blueprints. 

How Akraino is Used (Panel Discussion): https://youtu.be/4kecTzrUdsI

Akraino TSC Co-Chair Tina Tsou (Arm), Sha Rahman (Facebook), Changming Bai (Alibaba), Mark Shan (Tencent), and Yongsu Zhang and Hechun Zhang (Bytedance) shared end user stories and opinions on how Akraino is used in 5G, the AI Edge, Connected Vehicle, mixed reality AR/VR, and Private LTE/5G.

Serverless in Akraino: https://youtu.be/7Bosql0T5K8

Tapio Tallgren (Nokia) described using the Akraino uMEC project at Junction, the biggest hackathon in Europe. The concept was a scaled-down version of a smart city, with sensors and servers running in light poles. All servers were in the same k3s cluster and connected to the Internet. He wanted to make it possible for developers to create applications that could run on the cluster, and do it in 48 hours or less! This presentation details the team’s journey to leveraging OpenFaaS Cloud for user management and what they learned.

Self Checkout Theft Detection Showcase Using EdgeX Foundry: https://youtu.be/EQyQRFRsz0o

EdgeX Foundry Vertical Solutions Chair Henry Lau (HP) gave a presentation about how HP teamed up with other LF Edge members to demonstrate solving complex retail problems with an EdgeX Foundry-powered HP Retail IoT Gateway. Using a self-checkout theft detection use case, it showcased the ability to bring together multiple sensor data streams with EdgeX’s industry-leading open framework, which is cloud-agnostic and sensor-agnostic.

Building the Android for the IoT Edge with Project EVE: https://youtu.be/0lchg7slk1k

ZEDEDA’s Roman Shaposhnik and LF Edge Governing Board member Jason Shepherd highlighted the key challenges of edge computing, the unique requirements of the IoT edge and why EVE is critical for IoT scale by serving as an open, standardized edge computing engine. They also hosted a brief demo and talked about what’s next, including integrating EVE with Kubernetes to extend the benefits from the data center to the IoT edge.

Living on the Edge to Meet Today’s Demands (Panel Discussion): https://www.youtube.com/watch?v=rWWjZ4nEMfs&list=PLbzoR-pLrL6psbdaoF_E1pE-2dRhroc_T&index=72

In this panel, LF Edge Outreach Chair Balaji Ethirajulu (Ericsson),  EdgeX Foundry Chair of the Security Committee Malini Bhandaru (VMware), Roman Shaposhnik (ZEDEDA) and Akraino TSC Co-Chair Tina Tsou (Arm) discussed:

  • Edge use cases being addressed to satisfy the industry need
  • Collaboration between the LF Edge Projects and scope of each project
  • How to engage and contribute to each project

How LF Edge Powers the IoT Vertical Across the Stack (Panel Discussion): https://youtu.be/nZSNYDwK3Xc 

In this panel, LF Edge Board member Thomas Nadeau (Red Hat), EdgeX Foundry Chair of the Security Committee Malini Bhandaru (VMware), LF Edge Governing Board member Jason Shepherd (ZEDEDA), LF Edge Governing Board member Daniel Lazaro (OSIsoft) and LF Edge Governing Board member Vikram Siwach (MobiledgeX) discussed contributions and cross-project synergies across Akraino, Baetyl, EdgeX Foundry, Fledge, the Open Glossary, and Home Edge.

LF Edge/LF Networking Pavilion

As co-hosts, LF Edge and sister umbrella project LF Networking teamed up to present a pavilion at Open Networking and Edge Summit that showcased different technologies and use cases. Between 100 and 150 attendees visited the booths to download materials and watch demo videos. You can find the LF Edge demo videos on our YouTube channel.

Feedback from attendees was positive, with an overall average satisfaction rating of over 95%. 97% said they plan to attend a future Open Networking & Edge Summit, and 94% said they would recommend it to a friend or colleague. We’re in the process of planning next year’s event…stay tuned for more details!

Edge Excitement! Innovation & Collaboration (Q4 2020)

By Blog, LF Edge, Open Horizon

Written by Ryan Anderson, Member and Contributor of LF Edge’s Open Horizon Project and IBM Architect in Residence, CTO Group for IBM Edge

This article originally ran on Ryan’s LinkedIn page.

Rapid innovation in edge computing

It is an exciting time for the edge computing community! Since my first post in April 2019, we have seen a rapid acceleration of innovation driven by the convergence of multiple factors:

·     a convergence towards shared “mental models” in the edge solution space;

·     the increasing power of edge devices – pure CPU, as well as CPU plus GPU/VPU;

·     enormous investments in 5G infrastructures by network and communications stakeholders;

·     new collaborations across several IT/OT/IOT/Edge ecosystems;

·     increasing participation in, and support for, open source foundations such as LF Edge, by major players; and

·     widespread adoption of Kubernetes and Docker containers as a core layer of the edge.

With this convergence, innovation and accelerating adoption, Gartner’s prediction that 75% of enterprise-generated data will be created and processed at the edge appears prescient.

Edge nodes – from datacenters to devices 

Much like “AI” and “IT” – edge computing is a broad and nebulous term that means different things to different stakeholders. In the diagram below, we consider four points of view for edge:

  1. Industrial Edge
  2. Enterprise Network Cloud Edge
  3. 5G / Telco Edge
  4. Consumer and Retail Edge

This model illustrates a few key ideas:

·     Some edge use cases fall squarely within one quadrant – whereas others span two, or sometimes three.

·     Solution mapping will help shape architecture discussions and may inform which stakeholders should be involved in conversations.

·     Edge can mean very different things to different people; and consequently, value propositions (and ROI/KPI) will also vary dramatically.

Technology tools for next generation edge computing must be flexible enough to work across different edge quadrants and work across different types of edge nodes.

Terminology. And what is edge computing?  

At IBM our edge computing definition is “act on insights closer to where data is created.”

We define edge node generically as any edge device, edge cluster, or edge server/gateway on which computing (workload, applications, analytics) can be performed, outside of public or private cloud infrastructure, or the central IT data center.

An edge device is a special-purpose piece of equipment that also has compute capacity integrated into that device on which interesting work can be performed. An assembly machine on the factory floor, an ATM, an intelligent camera or a next-generation automobile are all examples of edge devices. It is common to find edge devices with ARM or x86 class CPUs with one or two cores, 128MB of memory, and perhaps 1 GB of local persistent storage.

Sometimes edge devices include GPUs (graphics processing unit) and VPUs (vision processing units) – optimized chips that are very good for running AI models and inferencing on edge devices.

Fixed-function IoT equipment that lacks general-purpose compute is not typically considered an edge node, but rather an IoT sensor. IoT sensors often interoperate with edge devices, but they are not the same thing, as we see on the left side of this diagram.

 


An edge cluster is a general-purpose IT computer located in remote premises, such as a factory, retail store, hotel, distribution center or bank, for example – and typically used to run enterprise application workloads and shared services.

Edge nodes can also live within network facilities such as central offices, regional data centers and hub locations operated by a network provider, or a metro facility operated by a colocation provider.

An edge cluster is typically an industrial PC, or racked server, or an IT appliance.

Often, edge clusters include GPU/VPU hardware.

Tools for devices to data centers

IBM Edge Application Manager (IEAM) and Red Hat have created reference architectures and tools to manage workloads across CPU and GPU/VPU compute resources.

Customers want simplicity, and IEAM provides it with a single pane of glass to manage and orchestrate workloads from core to edge, across multiple clouds.

For edge clusters running Red Hat OpenShift Container Platform (OCP), Kubernetes-based GPU/VPU Operators solve the problem of needing unique operating system (OS) images for GPU and CPU nodes. Instead, the GPU Operator bundles everything you need to support the GPU — the driver, container runtime, device plug-in, and monitoring — with deployment by a Helm chart. Now, a single gold master image covers both CPU and GPU nodes.
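As a concrete sketch, the GPU Operator can be installed from NVIDIA’s public Helm repository. The commands below are illustrative only — the repository and chart names follow NVIDIA’s documentation, the release name is arbitrary, and they assume access to a running OpenShift/Kubernetes cluster with `helm` configured:

```shell
# Add NVIDIA's Helm repository and refresh the local chart index
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Install the GPU Operator into its own namespace; the chart bundles the
# driver, container runtime toolkit, device plug-in and monitoring, which
# is what lets CPU and GPU nodes share a single gold master OS image
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace --wait
```

Once the operator pods are running, GPU nodes are detected automatically and containerized workloads can request accelerators through the standard `nvidia.com/gpu` resource.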

Caution: Avoid fragmentation and friction with open source

This is indeed an exciting time for the edge computing community, as seen by the acceleration of innovation and emerging use cases and architectures.

However, there is an area of concern as it relates to fragmentation and friction in this emerging space.

Because the emerging edge market is enormous, there is a risk that some incumbents or niche players may be tempted to “go it alone,” trying to secure and defend a small corner (fragment) of a large space with a proprietary solution. If too many stakeholders do this – edge computing may fail to reach its potential.

This approach can be dangerous for companies for three reasons:

(1)  While an isolated, walled-garden (defensive) approach may work in the short term, over time isolated technology stacks may get left behind.

(2)  Customers are increasingly wary of vendor lock-in and will source more flexible solutions.

(3)  Innovation is a team sport (e.g. Linux, Python).

Historically, emergent technologies can also encounter friction when key industry participants or standards organizations are not working closely enough together (GSM/CDMA; VHS/Beta or HD-DVD/Blu-ray; Industrial IoT; Digital Twins).

So, what can we do to encourage collaboration?

The answer is open source.

Open source to reduce friction and increase collaboration

The IBM Edge team believes working with and through the open source community is the right approach to help edge computing evolve and reach its potential in the coming years.

IBM has a long history and strong commitment to open source. IBM was one of the earliest champions of communities like Linux, Apache, and Eclipse, pushing for open licenses, open governance, and open standards.

IBM engineers began contributing to Linux and helped to establish the Linux Foundation in 2000. In 1999, we helped to create the Apache Software Foundation (ASF) and supported the creation of the Eclipse Foundation in 2004 – providing open source developers a neutral place to collaborate and innovate in the open.

Continuing our tradition of support for open source collaboration, IBM and Red Hat are active members of the Linux Foundation’s LF Edge:

  • LF Edge is an umbrella organization for several projects that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system.
  • By bringing together industry leaders, LF Edge will create a common framework for hardware and software standards and best practices critical to sustaining current and future generations of IoT and edge devices.
  • It fosters collaboration and innovation across multiple industries including industrial manufacturing, cities and government, energy, transportation, retail, home and building automation, automotive, logistics and health care.

IBM is an active contributor to Open Horizon, one of the LF Edge projects and the core of IBM Edge Application Manager. Open Horizon is an open source platform for managing the service software lifecycle of containerized workloads and related machine learning assets. It enables autonomous management of applications deployed to distributed, web-scale fleets of edge computing nodes (clusters and devices based on Kubernetes and Docker), all from a central management hub.
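To make "autonomous management" concrete, Open Horizon deployments are typically driven by declarative policy. The fragment below is a hypothetical sketch of a deployment policy — the service name, org and constraint are invented for illustration, and the field names follow the general shape of Open Horizon’s deployment policy schema, which may differ by release:

```json
{
  "label": "Example deployment policy for a containerized sensor service",
  "service": {
    "name": "temperature-monitor",
    "org": "myorg",
    "arch": "*",
    "serviceVersions": [
      { "version": "1.0.0" }
    ]
  },
  "constraints": [
    "location == factory-floor"
  ]
}
```

The management hub evaluates policies like this against each node’s properties and autonomously places (or removes) the workload as nodes join, leave, or change — no per-node manual deployment required.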

Open Horizon is already working with several other LF Edge projects including EdgeX Foundry, Fledge and SDO (Secure Device Onboard).

SDO makes it easy and secure to configure edge devices and associate them with an edge management hub. Devices built with SDO can be added as edge nodes by simply importing their associated ownership vouchers and then powering on the devices.
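As an illustrative sketch of that flow using the Open Horizon `hzn` CLI — the file names are invented, and the exact subcommands and flags are an assumption that can vary across Open Horizon releases:

```shell
# Import the ownership voucher that shipped with the SDO-enabled device;
# the management hub takes ownership and queues the device for onboarding
hzn voucher import voucher.json

# When the device is later powered on at the edge site, it contacts the
# hub, authenticates using the voucher, and is registered as an edge node;
# a node policy can also be supplied to govern which workloads it may run
hzn register --policy node-policy.json
```

These commands require a configured management hub and device, so treat them as a shape of the workflow rather than a copy-paste recipe.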

Additional Resources for Open Horizon

Open-Horizon documentation: https://open-horizon.github.io

Open-Horizon GitHub (source code): https://github.com/open-horizon

Example programs for Open-Horizon: https://github.com/open-horizon/examples

Open-Horizon Playlist on YouTube: https://bit.ly/34Xf0Ge