Written by Aaron Williams, LF Edge Developer Advocate
It’s our favorite time of the year, when we take a step back to reflect on what we have accomplished and honor the members of our community who have gone above and beyond to support EdgeX Foundry. We are pleased to announce the launch of nominations for the 4th Annual EdgeX Community Awards.
Back in 2018 when we started the awards, we had about 1,000 Docker downloads and about 20 contributors. Today, we are well over 7 million downloads and have 4 times the number of contributors.
If you are already a member of the EdgeX Foundry community, please take a moment and visit our EdgeX Awards page and nominate the community member that you feel best exemplified innovation and/or contributed leadership and helped EdgeX Foundry advance and grow over the last year. Nominations are being accepted through April 7, 2021.
Innovation Award: The Innovation Award recognizes individuals who have contributed the most innovative solution to the project over the past year.
2018- Drasko Draskovic (Mainflux) and Tony Espy (Canonical)
2019- Trevor Conn (Dell) and Cloud Tsai (IOTech)
2020- Michael Estrin (Dell Technologies) and James Gregg (Intel)
Contribution Award: The Contribution Award recognizes individuals who have contributed leadership and helped EdgeX Foundry advance and continue momentum in the past year. This includes individuals who have led work groups, special projects, and/or contributed large amounts of code, bug fixes, marketing material, documentation, or otherwise significantly advanced the efforts of EdgeX with their drive and leadership.
2018- Andy Foster (IOTech) and Drasko Draskovic (Mainflux)
2019- Michael Johanson (Intel) and Lenny Goodell (Intel)
2020- Bryon Nevis (Intel) and Lisa Ranjbar (Intel)
From top left to right: Keith Steele, Maemalynn Meanor, Bryon Nevis, Michael Estrin, Lisa Rashidi-Ranjbar, James Gregg, Aaron Williams, Jim White
To learn more about EdgeX Foundry, visit our website or join our Slack to ask questions and engage with community members. If you are not already a member of our community, it is easy to join: simply visit our wiki page and/or check out our GitHub. You can also follow us on Twitter and LinkedIn.
Similar to our previous experience last Fall, we had a wealth of potential applicants to choose from. Thanks in large part to being featured on the LFX Mentoring home page, our applicant pool grew from 30 last term to 43 this term. This time, the range of educational experience from candidates was even wider – from sophomores to doctoral candidates. The wealth of experience was much greater this term, and the drive and determination was incredible. Our methods of choosing the top candidates last time narrowed the pool from 30 to 12, but this time only took us from 43 to 30. After conducting interviews and matching applicant experience to our mentorship opportunities, we selected the final four candidates.
Two mentors returned for the Spring term, and we added two new mentors. Returning were:
David Booz (Dave) – Chief Architect and one of the project founders, Dave is also Chair of the Agent Working Group and has been a member for the last six years. Dave understands the complete breadth and depth of the project code.
Ben Courliss – Chair of the DevOps Working Group, Ben is a multi-decade veteran of open-source projects and has a wealth of knowledge and experience. Ben is also active with the LF Edge’s Shared Community Lab.
Dave and Bruce decided to collaborate and provide a challenging mentorship opportunity to the mentees by providing a linked project to work on. One mentee would collaborate with the Agent Working Group and one with the Management Hub Working Group. Together they would add the ability to support shared secrets. On the Agent side, Dave chose:
Debabrata Mandal (Deb) – Deb is a final year student at IIT Bombay in Mumbai. Deb described his current work, saying: “I am currently working on an issue … related to fetching the logging information from the specified service using the service url. While solving this issue I am getting a more detailed idea of the working of the agent component.”
Bruce wanted to work with:
Megha Varshney – Megha is a final year undergraduate at Indira Gandhi Delhi Technical University for Women. She is: “… currently working on [an] issue wherein I will be adding an API which would provide org specific status info.”
Los cooked up a challenge this year by adding support for the RISC-V microarchitecture to the Open Horizon project. For that effort, he selected:
Quang Hiệp Mai (Mike) – Mike is in the second year of his master’s degree program at Soongsil University in South Korea. He said: “Since RISC-V is blooming in China, supporting RISC-V arch for the agent device would attract more users in China market. So first I will port some of the examples to RISC-V arch then I will help to port Anax to RISC-V.”
And Ben wanted to work with a mentee to evaluate and choose an appropriate technology and subsequently to migrate pipelines to that choice. He wanted to work with:
Mustafa Al-tekreeti – Mustafa is a Ph.D. educator transitioning to a career in software engineering. He explained: “I am working on developing a CI/CD pipeline for Anax, one of the Open-Horizon projects, using one of the state-of-art technologies. First, we evaluate three of the available options (Circle-CI, GitHub Actions, and Jenkins Job Builder hosted by LF Edge) and summarize their pros and cons. Then, we implement the pipeline using the chosen technology.”
We’re now three weeks into the Spring 2021 term and beginning to make plans for a potential Summer 2021 term, which would open for applicants beginning April 15th. In the meantime, we recently recorded a webinar with the graduates from our Fall 2020 term. It was a great opportunity to get everyone together on a call and discuss what we accomplished and learned and to provide helpful advice to future applicants.
Written by Taras Drozdovsky and Taewan Kim, LF Edge Home Edge Committer & Staff Engineer at Samsung Research as well as Peter Moonki Hong, LF Edge Governing Board member & Principal Engineer at Samsung Research
Every year the number of Free/Libre and Open Source Software (FLOSS) projects increases and the community has a goal to maintain high quality code, documentation, test coverage and, of course, a high level of security.
The Linux Foundation, in collaboration with major IT companies in its Core Infrastructure Initiative (CII), has collected best practices for FLOSS projects and provided these criteria as the CII Best Practices.
Thank you to the committers and key contributors of Home Edge, Taras Drozdovskyi, Taewan Kim, Somang Park, MyeongGi Jeong, Suresh L C, Sunchit Sharma, Dwarkaprasad Dayama, Peter Moonki Hong. We would also like to thank Jim White with EdgeX Foundry for helping to guide us and the entire LF Edge/Linux Foundation team for their support.
Benefits achieved on the way to getting the CII Best Practices passing badge:
Security and Testing policy
How to Contribute guide
Descriptions of external APIs
Improved the build and testing system
CI infrastructure: GitHub Actions – 20 checks
Integration of external software tools for code analysis
We have reached a high level of code quality, but continue to improve our code!
Improving the Edge Orchestration project infrastructure (using many tools for analyzing code and searching for vulnerabilities) allowed us to increase not only the level of security, but also the reliability of our product. We hope that improving the documentation will reduce the time to enter the project, and therefore increase the number of external developers participating in the project. Their advice and input are very important for us.
It should also be noted that there were many areas of self-development for the members of the project team: developers became testers, technical writers, security officers. This is a wonderful experience.
Next step of Edge Orchestration team
Further improving the Edge-Home-Orchestration project and achieving “silver” and “gold” badges. Implementation of OpenSSF protected code development practices into Edge-Home-Orchestration.
If you would like to learn more about the use cases for Home Edge or more technical details, check out the video of our October 2020 webinar. As part of the “On the Edge with LF Edge” webinar series, we shared the general overview for the project, how it fits into LF Edge, key features of the Coconut release, the roadmap, how to get involved and the landscape of the IoT Home Edge.
If you would like to contribute to Home Edge or share feedback, find the project on GitHub, the LF Edge Slack channel (#homeedge) or subscribe to our email list (email@example.com). We welcome new contributors to help make this project better and expand the LF Edge community.
Written by Jacob Smith, State of the Edge Co-Chair and VP Bare Metal Strategy at Equinix Metal
On Wednesday, LF Edge published their 2021 edition of the State of the Edge Report. As a co-chair and researcher for the report, I want to share insight into how this year’s report was created. In development, we focused on three key questions to ensure the report covered essential subject matter:
What’s new in edge computing since last year?
What are the challenges and opportunities facing the ecosystem?
How can we help?
The report is designed to be vendor neutral and free of bias: multiple authors contributed content, and an independent editor balanced the report to better serve the edge computing community. Several experts weighed in on what they were paying attention to in the edge space and identified four major stand-out points:
Critical infrastructure (data centers)
Hardware and silicon
Software as a whole
Networks and networking
It’s abundantly clear there is growth in edge computing. Over the last year, COVID-19 accelerated nearly every facet of digital transformation, and this factored into the growth of the edge computing space. But it might also simply be a long-overdue increase in digital innovation and transformation.
We will continue to discover more about the edge and how to use it to optimize the computing ecosystem. For now, our attention is focused on how diverse the edge is and all of the solutions that are waiting to be discovered.
Summarized from Equinix Metal’s The State of Your Edge. Read the full story here. Download the report here.
Getting to “Advanced Class”, Including Focusing on Compounding Value with the Right Partners
By Jason Shepherd, LF Edge Governing Board Chair and VP of Ecosystem at ZEDEDA
This blog originally ran on the ZEDEDA Medium blog. Click here for more content like this.
Thanks for continuing to read this series on ecosystem development and the importance of an open edge for scaling to the true potential of digital transformation — interconnected ecosystems that drive new business models and customer outcomes. In parts one and two, I talked about various ecosystem approaches spanning open and closed philosophies, the new product mindset related to thinking about the cumulative lifetime value of an offer, and the importance of the network effect and various considerations for building ecosystems. This includes the dynamics across both technology choices and between people.
In this final installment I’ll dive into how we get to “advanced class,” including the importance of domain knowledge and thinking about cumulative value add vs. solutions looking for problems that can be solved in other, albeit brute force, ways.
Getting to advanced class
I talked about the importance of compounding value in part one of this series and will touch on this concept a little more here, in addition to the criticality of domain knowledge. Speaking of the latter, hundreds of IoT platform providers claim they can do a use case like Predictive Maintenance (PdM), but few actually have the domain knowledge to do it. A PdM solution requires not only analytics tools and data science but also an intimate knowledge of failure patterns based on various attributes (vibration, oil particulates, temperature, motor current, etc.). Often an operator on a factory floor who has “been there, done that” sits alongside a data scientist to help program the analytics algorithms based on his/her tribal knowledge. A former colleague once worked on a PdM solution with a line operator named Brad, who was the expert that helped the data scientist understand which machine and process behaviors were non-issues, despite looking like bad news, and vice versa. They jokingly called the end result “Bradalytics”.
Further, there’s a certain naivety in the industry today in terms of pushing IoT solutions when there’s already established “good enough” precedent. In the case of PdM, manual data acquisition with a handheld vibration meter once a month or so is a common practice to accurately predict impending machine failures because these failures don’t typically happen overnight. An industrial operator can justify this manual data collection as OpEx and the only thing permanently installed on their machines are brass pads that indicate where to apply the contact-based handheld sensor, from which data is manually loaded into an analytical tool.
Similar manual procedures are all too common in other industries, such as USB data loggers used to monitor temperature in cold chain logistics operations. In another example, structural engineers have historically used the practice of attaching an expensive sensor package to monitor the structural integrity of a bridge or building for a few weeks, only to then move this equipment on to the next piece of infrastructure.
The promise of IoT is that this instrumentation is becoming so inexpensive that it can be deployed permanently everywhere for acquiring real-time data from the physical world; however, the value of deploying this infrastructure still must be greater than the cost of deploying and maintaining it through its full lifecycle in addition to the associated risk in terms of maintaining security and privacy.
Don’t get me wrong — PdM enabled by IoT is a valuable use case. Although machines typically fail over a long period of time, it’s expensive to roll a truck to do repairs, especially if an industrial process experiences a loss of production. For example, downtime in a major jetliner factory can be upwards of $20k a minute! It’s just important to think about the big picture and whether you’re trying to solve a problem that has already been solved.
Looking at the bigger picture, a supply chain linked to a manufacturing process is a great example of an ecosystem that easily justifies an IoT solution for real-time monitoring and analytics. In a multi-stage manufacturing operation, the cost of discovering a flaw within a given part increases steadily with each process stage. The cost is even higher if that part gets into the supply chain and it’s higher yet if a defective product gets to a customer, not to mention impacting the manufacturer’s brand. Here the cumulative value of instrumenting throughout the product lifecycle is very high and definitely warrants a solution that can report issues the moment they occur.
Speaking of the holistic value that I touched on in part one and above, the real potential is not just the remote monitoring of a single machine, rather a whole fleet of interconnected machines. Imagine a world where you can send a tech out with work instructions to preventatively repair a fleet of machines in order to get the most out of a truck roll to a given location. This is similar to the story of “Fred and Phil” in part one, in which Phil wanted the propane tanks to be bone dry before rolling a truck. And on top of that — imagine that the algorithm could tell you that it will cost you less money in the long run to replace a specific machine altogether, rather than trying to repair it yet again.
It goes beyond IoT and AI. Last I checked, systems ranging from machines to factory floors and vehicles are composed of subsystems from various suppliers. As such, open interoperability is also critical when it comes to Digital Twins. I think this is a great topic for another blog!
Sharpening the edge
In my recent Edge AI blog I highlighted the new LF Edge taxonomy white paper and how we think this taxonomy will help put an end to the current market confusion caused by various industries (cloud, telco, IT, OT/industrial, consumer) defining the edge with a strong bias towards their interests/PoV, not to mention taxonomies that use ambiguous terms such as “thin and thick” and “near and far” that mean different things to different people. The team at ZEDEDA had a heavy hand in shaping this with the LF Edge community. In fact, a lot of the core thinking stems from my “Getting a Deeper Edge on the Edge” blog from last year.
As highlighted in the paper, the IoT component of the Smart Device Edge (e.g., compared to client devices like smartphones, PCs, etc. that also live at this sub-category) spans a single node with 256MB of memory up to a small server cluster, deployed outside of a physically-secure data center or Modular Data Center (MDC). The upper end of this spectrum is increasingly blurring into the Kubernetes paradigm thanks to efforts like K3S. However, due to resource constraints and other factors these nodes will not have the exact same functionality as higher edge tiers leveraging full-blown Kubernetes.
Below the Smart Device Edge is the “Constrained Device Edge”. This sub-tier consists of a highly fragmented landscape of microcontroller-based devices that typically have their own custom OTA tools. Efforts like Microsoft’s Sphere OS are trying to address this space and it’s important to band together on efforts like this due to the immense fragmentation at this tier.
Ultimately it’s important to think of the edge to cloud as a continuum: there isn’t an “industrial edge” vs. an “enterprise edge” and a “consumer edge,” as some would contend. Rather, it’s about building an ecosystem of technology and services providers that create necessarily unique value-add on top of more consistent infrastructure while taking into account the necessary tradeoffs across the continuum.
We build the guts so you can bring the glory
You can think of ZEDEDA’s SaaS-based orchestration solution as being like VMware Tanzu in principle (in the sense that we support both VMs and containers) but optimized for IoT-centric edge compute hardware and workloads at the “Smart Device Edge” as defined by the LF Edge taxonomy. We’re not trying to compete with great companies like VMware and Nutanix who shine at the On-prem Data Center Edge up through the Service Provider Edge and into centralized data centers in the cloud. In fact, we can help industry leaders like these, telcos and cloud service providers extend their offerings including managed services down into the lighter compute edges.
Our solution is based on the open source Project EVE within LF Edge which provides developers with maximum flexibility for evolving their ecosystem strategy with their choice of technologies and partners, regardless of whether they ultimately choose to take an open or more closed approach. EVE aims to do for IoT what Android did for the mobile component of the Smart Device Edge by simplifying the orchestration of IoT edge computing at scale, regardless of applications, hardware or cloud used.
The open EVE orchestration APIs also provide an insurance policy in terms of developers and end users not being locked into only our commercial cloud-based orchestrator. Meanwhile, it’s not easy to build this kind of controller for scale, and ours places special emphasis on ease of use in addition to security. It can be leveraged as-is by end users, or white-labeled by solution OEMs to orchestrate their own branded offers and related ecosystems. In all cases, the ZEDEDA cloud is not in the data path so all permutations of edge data sources flowing to any on-premises or cloud-based system are supported across a mix of providers.
As I like to say, we build the guts so our partners and customers can bring the glory!
I’ll close with a highlight of the classic Clayton Christensen book The Innovator’s Dilemma. In my role I talk with a lot of technology providers and end users, and I often see people stuck in this pattern, thinking “I’ll turn things around if I can just do what I’ve always done better.” This goes not just for large incumbents but also for fairly young startups!
One interaction in particular that has stuck with me over the years was an IoT strategy discussion with a large credit card payments provider. They were clearly trapped in the Innovator’s Dilemma, wondering what to do about mobile payment players like Square that had really disrupted their market. As they were talking about how they didn’t want another Square situation to happen again, I asked them, “Have you thought about when machines start making payments?” The fact that this blew their minds is exactly why another Square is going to happen to them if they don’t get outside of their own headspace.
When approaching digital transformation and ecosystem development it’s often best to start small, however equally important is to think holistically for the long term. This includes building on an open foundation that you can grow with and that enables you to focus your investments on business relationships and innovation rather than reinvention. Sure, you may be able to build your own orchestration solution like we have at ZEDEDA, or your own full turn-key stack, but if you do either in a proprietary silo then you’ve just locked yourself out of the biggest potential — creating a network effect across various markets through increasingly interconnected ecosystems! Again, imagine if the internet was owned by one company.
We invite you to get involved with Project EVE, and more broadly-speaking LF Edge as we jointly build out this foundation. You’re of course free to build your own controller with EVE’s open APIs, or reach out to us here at ZEDEDA and we can help you orchestrate your commercial IoT edge ecosystem. We’re here to help you start small and scale big and have a growing entourage of technology and service partners to drive outcomes for your business.
Written by Joe Pearson, Open Horizon Chair of the Technical Steering Committee and Edge Computing and Technology Strategist at IBM
On Thursday, March 4, 2021 the Open Horizon open source project was officially promoted to stage two maturity. This was the culmination of a process that began on January 27, when I presented the proposal to the LF Edge Technical Advisory Committee (TAC).
The Open Horizon software project was seeded by a contribution from IBM to the Linux Foundation in April 2020. The project’s mission: to develop open-source software that manages the service software lifecycle of containerized applications on edge compute nodes. Since that initial code contribution, the project and community have grown as they have attracted new project partners and contributors.
About Stage Two
Reaching stage two is a significant milestone for open-source software projects because it is a strong indicator of both the organization’s healthy growth, and potential stakeholder interest in using the software as a solution for specific use cases. In the LF Edge umbrella organization, projects have to meet the following criteria to achieve stage two:
Past community participation met previous growth plans
An ongoing flow of merged code commits and other contributions
Documented proofs of concept using the software
Collaboration with other LF Edge projects
Growth plan, including projected feature sets and community targets
The project has also been actively involved in reaching out to students and universities. Thanks to funding from the Linux Foundation and Intel, the LFX Mentorship program was able to provide stipends for mentees who complete a term with Linux Foundation projects. Open Horizon joined the LFX program and was able to mentor four students and early career persons from October to December 2020. This mentorship program continues, and Open Horizon has just begun the Spring 2021 term with four new mentees.
Using the Platform
Last year, IBM created a commercially supported distribution of Open Horizon that was named IBM Edge Application Manager (IEAM). We’ve seen Open Horizon, or Open Horizon-based distributions, being used in an autonomous ship, vertical farming solutions, regional climate protection management, shipping ports, and even the International Space Station to deliver applications and related machine learning assets. And last week HP Inc, Intel, and IBM presented a webinar to invite retailers and vendors to participate in creating an Open Retail Reference Architecture based on the EdgeX Foundry, Open Horizon, and SDO projects.
Creating an Open Edge Computing Framework
The LF Edge organization provides a structure to support open edge computing projects. This should allow those projects to collectively form a framework for edge computing if they support common standards and interoperability. The Open Horizon project is furthering that goal both by working with other LF Edge projects to create combined end-to-end edge computing solutions and by supporting existing open standards and creating new standards where none exist today. An example of supporting existing standards is in the area of zero-touch device provisioning, where Open Horizon incorporates SDO services, a reference implementation of the FIDO IoT v1 specification.
Now that Open Horizon has demonstrated its value as a platform and a community, it is preparing to expand the community by adding new SIGs, Partners, and contributors to the project. To work with the project community to shape application delivery and lifecycle management within edge computing, consider attending one of the Working Group meetings, contributing code, working on standards, or even installing the software.
Written by Glen Darling, contributor to Open Horizon and Advisory Software Engineer at IBM
If your Linux hosts are Debian/Ubuntu-style distros with Docker installed, or Red Hat-style distros with docker or podman installed, Open Horizon can make managing them easy for you. Open Horizon is an LF Edge open source project, originally contributed by IBM, that is designed to manage application containers on potentially very large numbers of edge machines. IBM’s commercial distribution of Open Horizon, for example, supports 30,000 Linux hosts from a single Management Hub!
How can this scale be achieved? On each Linux host (called a “node” in Open Horizon), a small autonomous Open Horizon Agent must be installed. Since these Agents act autonomously, the need for connections to the central Open Horizon Management Hub is minimized, as is the required network traffic. In fact, the local Agent can continue to perform many of its monitoring and management functions for your applications even when completely disconnected from the Management Hub! Also, the Management Hub never initiates communications with any Agents, so no firewall ports need to be opened on your edge nodes. Each Agent is responsible for its own node and reaches out to contact the Management Hub to receive new information as appropriate based on its configuration.
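This pull-based pattern can be sketched in a few lines. The snippet below is a minimal illustration of the idea only; the hub API, configuration fields, and service name are invented for the example and are not Open Horizon's actual interfaces:

```python
# Stub standing in for the Management Hub's API: the hub only answers
# outbound requests from the node; it never initiates a connection.
HUB_DESIRED_STATE = {"version": 1, "services": ["mask-detector"]}

def fetch_desired_state():
    """In a real agent this would be an outbound HTTPS GET to the hub."""
    return dict(HUB_DESIRED_STATE)

def agent_poll_once(local_state):
    """One agent cycle: pull the desired state and reconcile locally."""
    try:
        desired = fetch_desired_state()
    except ConnectionError:
        # Hub unreachable: keep managing the node with the last known state.
        return local_state
    if desired["version"] > local_state.get("version", 0):
        local_state = desired  # apply the newer configuration
    return local_state

state = agent_poll_once({})
print(state["services"])  # ['mask-detector']
```

Because the node only makes outbound calls, no inbound firewall rule is needed, and a missed poll simply leaves the agent running on its last known configuration.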
Open Horizon’s design also simplifies operations at scale. No longer will you need to maintain hard-coded lists of nodes and the software that’s appropriate for each of them. When a fleet of devices is diverse, maintaining complex, overlapping software requirements can quickly become unmanageable even at relatively small scale. Instead, Open Horizon lets you specify your intent in “policies,” and then the Agents, in collaboration with the Agreement Robots (AgBots for short) in the Management Hub, work to achieve your intent across your whole fleet. You can specify policies for individual nodes, and/or policies for your services (collections of one or more containers deployed as a unit), and/or policies to govern deployment of your applications. Policies also enable your large data files, like neural network models (e.g., large edge weight files), to have lifecycles independent of your service containers.
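As a rough sketch of this intent-based matching idea: a node advertises properties, a deployment policy states constraints, and the AgBots place services on every node whose properties satisfy the constraints. The property names and the flat equality-only constraint check below are invented for illustration; real Open Horizon policies are JSON documents with a richer constraint language:

```python
# Hypothetical node policy: properties this edge node advertises.
node_policy = {"properties": {"gpu": True, "location": "store-42"}}

# Hypothetical deployment policy: deploy the service only to nodes
# whose properties satisfy every constraint.
deployment_policy = {
    "constraints": {"gpu": True},
    "service": "mask-detector",
}

def node_matches(node, deployment):
    """True if the node's properties satisfy all deployment constraints."""
    props = node["properties"]
    return all(props.get(k) == v for k, v in deployment["constraints"].items())

eligible = [n for n in [node_policy] if node_matches(n, deployment_policy)]
print(len(eligible))  # 1
```

Adding a node to the fleet then requires no edits to any central list: if its properties match, the service is deployed to it automatically.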
I recently presented a hands-on Open Horizon deployment example at Cisco Systems’ “DevNet 2020” developer conference. In this example I showed how to take an existing container, publish it to DockerHub, define the Open Horizon artifacts that describe it as a service, and create a simple deployment pattern to deploy it. The example application detects the presence or absence of a facial mask when provided an image over an HTTP REST API. Cisco developers, and most developers working with Linux machines, can use Open Horizon as I did in this example to deploy their software across small or large fleets of machines.
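For a flavor of how a client might call such a service, here is a sketch of packaging an image for an HTTP REST inference endpoint. The URL, path, and field names are hypothetical stand-ins, not the actual API from the presentation:

```python
import base64

def build_inference_request(image_bytes):
    """Package an image for a hypothetical mask-detection REST endpoint.

    The endpoint path and JSON field names are illustrative; a real client
    would follow the deployed service's documented contract.
    """
    return {
        "url": "http://localhost:8080/v1/detect-mask",  # hypothetical endpoint
        "json": {"image": base64.b64encode(image_bytes).decode("ascii")},
    }

req = build_inference_request(b"\x89PNG...")
# The service would base64-decode the field back to the original bytes:
assert base64.b64decode(req["json"]["image"]) == b"\x89PNG..."
```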
You can view my Cisco DevNet 2020 presentation at the link below:
Written by Nima Afraz, Connect Centre for Future Networks and Communications, Trinity College Dublin
The Hyperledger Telecom Special Interest Group, in collaboration with the Linux Foundation’s LF Edge initiative, has published a solution brief addressing the issues concerning centralized ID and Access Management (IAM) in IoT networks and introducing a distributed alternative using Hyperledger Fabric.
The ever-growing number of IoT devices means data vulnerability is an ongoing risk. Existing centralized IoT ecosystems have led to concerns about security, privacy, and data use. This solution brief shows that a decentralized ID and access management (DIAM) system for IoT devices provides the best solution for those concerns, and that Hyperledger offers the best technology for such a system.
The IoT is growing quickly. IDC predicts that by 2025 there will be 55.7 billion connected devices in the world. Scaling and securing a network of billions of IoT devices starts with robust device identity. Data security also requires a strong access management framework that can integrate and interoperate with existing legacy systems. Each IoT device should carry a unique global identifier and have a profile that governs access to the device.
In this solution brief, we propose a decentralized approach to validate and verify the identity of IoT devices, data, and applications. In particular, we propose using two frameworks from the Linux Foundation: Hyperledger Fabric for the distributed ledger (DLT) and Hyperledger Indy for the decentralized device IDs. These two blockchain frameworks provide the core components to address end-to-end IoT device ID and access management (IAM).
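To make the decentralized-ID idea concrete, here is a toy sketch: a device's ID is derived from its public key, so any party holding the key material can re-derive and verify the ID against a shared ledger without a central authority. This illustrates the concept only; it does not use the Hyperledger Fabric or Indy APIs, and the `did:iot` method name is invented:

```python
import hashlib

def device_id(public_key: bytes) -> str:
    """Derive a decentralized ID from the device's public key material."""
    digest = hashlib.sha256(public_key).hexdigest()[:32]
    return f"did:iot:{digest}"  # hypothetical DID method name

# A distributed ledger (modeled here as a plain dict) maps device IDs to
# profiles that govern access to each device.
ledger = {}
pk = b"device-0001-public-key"
ledger[device_id(pk)] = {"roles": ["sensor"], "owner": "org-a"}

def verify(public_key: bytes) -> bool:
    """Re-derive the ID from the presented key and check the ledger."""
    return device_id(public_key) in ledger

print(verify(pk), verify(b"forged-key"))  # True False
```

In the actual solution, the dict is replaced by a Hyperledger Fabric channel and the ID scheme by Hyperledger Indy's verifiable identifiers, but the trust model is the same: identity is anchored in keys and a shared ledger rather than a single registry.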
The Problem: IoT Data Security
Ambitious IoT use cases, including smart transport, imply a massive volume of vehicle-to-vehicle (V2V) and vehicle-to-road communications that must be safeguarded to prevent malicious activity and malfunctions due to single points of failure.
The Solution: Decentralized Identity
IoT devices collect, handle, and act on data as proxies for a wide range of users, such as a human, a government agency, or a multinational enterprise. With tens of billions of IoT devices to be connected over the next few years, numerous IoT devices may represent a single person or institution in multiple roles. And IoT devices may play roles that no one has yet envisioned.
A decentralized ID management system removes the need for any central governing authority and makes way for new models of trust among organizations. All this provides more transparency, improves communications, and saves costs.
The solution is to use Hyperledger technology to create a trusted platform for a telecom ecosystem that can support IoT devices throughout their entire lifecycle and guarantee a flawless customer experience. The solution brief includes Reference Architecture and a high-level architecture view of the proof of concept (PoC) that IBM is working on with some enterprise clients. This PoC uses Hyperledger Fabric as described above.
Successful Implementations of Hyperledger-based IoT Networks
IBM and its partners have successfully developed several global supply-chain ecosystems using IoT devices, IoT network services, and Hyperledger blockchain software. Two examples of these implementations are Food Trust and TradeLens.
To learn more about the Hyperledger Telecom Special Interest Group, check out the group’s wiki and mailing list. If you have questions, comments or suggestions, feel free to post messages to the list. And you’re welcome to join any of the group’s upcoming calls to meet group members and learn how to get involved.
The Hyperledger Telecom Special Interest Group would like to thank the following people who contributed to this solution brief: Nima Afraz, David Boswell, Bret Michael Carpenter, Vinay Chaudhary, Dharmen Dhulla, Charlene Fu, Gordon Graham, Saurabh Malviya, Lam Duc Nguyen, Ahmad Sghaier Omar, Vipin Rathi, Bilal Saleh, Amandeep Singh, and Mathews Thomas.
Written by Scott Smith, Industry Principal for Facilities and Data Centers for OSIsoft, an LF Edge member company
This article originally ran on the OSIsoft website – click here for more content like this.
The job of facilities managers used to be pretty straightforward: keep the lights on, keep people comfortable. It was all about availability, uptime, and comfort. However, the world is changing, and now facility management must respond to failures before people even notice them. They must predict maintenance based on conditions rather than time, and all this needs to be done with fewer people and less energy. With the critical role facilities play in the health and safety of occupants, there's no room for missteps.
It seems like there are not enough hours in a day to accomplish all that we face. So, what does an ideal scenario look like for facilities managers?
First, we need to step back and realize we will not be adding all the staff we require, so we need to find a substitute. The place to start is the data we already collect: leveraging it as our "eyes on" the facility gives us real-time situational awareness. If we can enhance our data with logic-based analytics and turn it into actionable information, we can shift our resources to critical priorities and ensure the availability of our facilities.
Second, this operational information can notify the facilities team of problems and allow us to be proactive in solutions and communications instead of waiting for complaints to roll in. We can even go one step further and predict our facility performance, using that information to identify underlying issues we might never have found until it was too late.
This is not about implementing thousands of rules and getting overwhelmed with "alarm fatigue," but about enhancing your own data, based on your own facilities and your own needs. It allows you to break down silos and use basic information from meters, building automation, and plant systems to provide integrated awareness.
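A logic-based rule of the kind described above can be as simple as comparing measured values against expected ones and flagging only meaningful deviations. The sketch below is illustrative only: the sensor names, expected values, and 10% tolerance are hypothetical, not drawn from any particular building system or the PI System itself.

```python
# Hypothetical readings: (sensor_name, measured_value, expected_value)
readings = [
    ("AHU-1 supply temp (F)", 58.0, 55.0),
    ("AHU-2 supply temp (F)", 67.5, 55.0),
    ("Chiller-1 power (kW)", 310.0, 300.0),
]

def flag_deviations(readings, tolerance=0.10):
    """Return the names of readings whose measured value deviates
    from the expected value by more than `tolerance` (fractional),
    so staff can act before complaints roll in."""
    flagged = []
    for name, measured, expected in readings:
        if abs(measured - expected) / expected > tolerance:
            flagged.append(name)
    return flagged

print(flag_deviations(readings))  # → ['AHU-2 supply temp (F)']
```

Because only the out-of-tolerance reading is surfaced, a handful of rules like this can watch thousands of points without producing alarm fatigue.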
University of California, Davis is a great example of this. In 2013, they set the goal of net zero greenhouse gas emissions from the campus’s building and vehicle fleet by 2025. With more than 1,000 buildings comprising 11.3 million square feet, this was no small feat. Beyond the obvious projects, UC Davis turned to the PI System to drive more efficient use of resources. By looking at the right data, UC Davis has been able to better inform their strategy for creating a carbon neutral campus.
UC Davis is a perfect example of simply starting and starting simple. The data will provide the guidance on where there are opportunities to meet your objectives.
OSIsoft is an LF Edge member company. For more information about OSIsoft, visit their website.
Written by Dan Brown, Senior Manager, Content & Social Media, LF Training
Is your organization considering setting up an open source program office (OSPO) but you aren’t sure where to start? Or do you already have an OSPO but want to learn best practices to make it more effective? Linux Foundation Training & Certification has released a new series of seven training courses designed to help.
The Open Source Management & Strategy program is designed to help executives, managers, software developers and engineers understand and articulate the basic concepts for building effective open source practices within their organization. Developed by Guy Martin, Executive Director of OASIS Open, an internationally recognized standards development and open source projects consortium, the series of online courses provides the foundational knowledge and skills to build out an effective OSPO.
The courses in the program are designed to be modular, so participants only need to take those of relevance to them. This enables organizations to let each member of the team train in the skills they need to be successful in their given roles. The courses included in the program are:
LFC202 – Open Source Introduction – covers the basic components of open source and open standards
LFC203 – Open Source Business Strategy – discusses the various open source business models and how to develop practical strategies and policies for each
LFC204 – Effective Open Source Program Management – explains how to build an effective OSPO and the different types of roles and responsibilities needed to run it successfully
LFC205 – Open Source Development Practices – talks about the role of continuous integration and testing in a healthy open source project
LFC206 – Open Source Compliance Programs – covers the importance of effective open source license compliance and how to build programs and processes to ensure safe and effective consumption of open source
LFC207 – Collaborating Effectively with Open Source Projects – discusses how to work effectively with upstream open source projects and how to get the maximum benefit from working with project communities
LFC208 – Creating Open Source Projects – explains the rationale and value for creating new open source projects as well as the required legal, business and development processes needed to launch new projects
The Open Source Management & Strategy program is available to begin immediately. The $499 enrollment fee provides unlimited access to all seven courses for one year, as well as a certificate upon completion. Interested individuals may enroll here. The program is also included in all corporate training subscriptions.