Scaling Ecosystems Through an Open Edge (Part Three)


Getting to “Advanced Class”, Including Focusing on Compounding Value with the Right Partners

By Jason Shepherd, LF Edge Governing Board Chair and VP of Ecosystem at ZEDEDA

 


This blog originally ran on the ZEDEDA Medium blog. Click here for more content like this.

Thanks for continuing to read this series on ecosystem development and the importance of an open edge for scaling to the true potential of digital transformation — interconnected ecosystems that drive new business models and customer outcomes. In parts one and two, I talked about various ecosystem approaches spanning open and closed philosophies, the new product mindset related to thinking about the cumulative lifetime value of an offer, and the importance of the network effect and various considerations for building ecosystems. This includes the dynamics across both technology choices and between people.

In this final installment I’ll dive into how we get to “advanced class,” including the importance of domain knowledge and thinking about cumulative value add vs. solutions looking for problems that can be solved in other, albeit brute force, ways.

Getting to advanced class

I talked about the importance of compounding value in part one of this series and will touch on this concept a little more here, in addition to the criticality of domain knowledge. Speaking of the latter, hundreds of IoT platform providers claim they can do a use case like Predictive Maintenance (PdM) but few actually have the domain knowledge to do it. A PdM solution not only requires analytics tools and data science but also an intimate knowledge of failure patterns based on various attributes (vibration, oil particulates, temperature, motor current, etc.). Often an operator on a factory floor who has "been there, done that" sits alongside a data scientist to help program the analytics algorithms based on his/her tribal knowledge. A former colleague once worked on a PdM solution with a line operator named Brad, who was the expert who helped the data scientist understand which machine and process behaviors were non-issues, despite looking like bad news, and vice versa. They jokingly called the end result "Bradalytics".

Further, there’s a certain naivety in the industry today in terms of pushing IoT solutions when there’s already established “good enough” precedent. In the case of PdM, manual data acquisition with a handheld vibration meter once a month or so is a common practice to accurately predict impending machine failures because these failures don’t typically happen overnight. An industrial operator can justify this manual data collection as OpEx and the only thing permanently installed on their machines are brass pads that indicate where to apply the contact-based handheld sensor, from which data is manually loaded into an analytical tool.

Similar manual procedures are all too common in other industries, such as USB data loggers used to monitor temperature in cold chain logistics operations. In another example, structural engineers have historically used the practice of attaching an expensive sensor package to monitor the structural integrity of a bridge or building for a few weeks, only to then move this equipment on to the next piece of infrastructure.

The promise of IoT is that this instrumentation is becoming so inexpensive that it can be deployed permanently everywhere for acquiring real-time data from the physical world; however, the value of deploying this infrastructure still must be greater than the cost of deploying and maintaining it through its full lifecycle in addition to the associated risk in terms of maintaining security and privacy.

Don’t get me wrong — PdM enabled by IoT is a valuable use case — despite machines typically failing over a long period of time it’s expensive to roll a truck to do repairs, especially if an industrial process experiences a loss of production. For example, downtime in a major jetliner factory can be upwards of $20k a minute! It’s just important to think about the big picture and whether you’re trying to solve a problem that has already been solved.

Looking at the bigger picture, a supply chain linked to a manufacturing process is a great example of an ecosystem that easily justifies an IoT solution for real-time monitoring and analytics. In a multi-stage manufacturing operation, the cost of discovering a flaw within a given part increases steadily with each process stage. The cost is even higher if that part gets into the supply chain and it’s higher yet if a defective product gets to a customer, not to mention impacting the manufacturer’s brand. Here the cumulative value of instrumenting throughout the product lifecycle is very high and definitely warrants a solution that can report issues the moment they occur.

Speaking of the holistic value that I touched on in part one and above, the real potential is not just the remote monitoring of a single machine, rather a whole fleet of interconnected machines. Imagine a world where you can send a tech out with work instructions to preventatively repair a fleet of machines in order to get the most out of a truck roll to a given location. This is similar to the story of “Fred and Phil” in part one, in which Phil wanted the propane tanks to be bone dry before rolling a truck. And on top of that — imagine that the algorithm could tell you that it will cost you less money in the long run to replace a specific machine altogether, rather than trying to repair it yet again.

It goes beyond IoT and AI. Last I checked, systems ranging from machines to factory floors and vehicles are composed of subsystems from various suppliers. As such, open interoperability is also critical when it comes to Digital Twins. I think this is a great topic for another blog!

Sharpening the edge

In my recent Edge AI blog I highlighted the new LF Edge taxonomy white paper and how we think this taxonomy will help put an end to the current market confusion caused by various industries (cloud, telco, IT, OT/industrial, consumer) defining the edge with a strong bias towards their interests/PoV, not to mention taxonomies that use ambiguous terms such as “thin and thick” and “near and far” that mean different things to different people. The team at ZEDEDA had a heavy hand in shaping this with the LF Edge community. In fact, a lot of the core thinking stems from my “Getting a Deeper Edge on the Edge” blog from last year.

As highlighted in the paper, the IoT component of the Smart Device Edge (e.g., compared to client devices like smartphones, PCs, etc. that also live at this sub-category) spans a single node with 256MB of memory up to a small server cluster, deployed outside of a physically-secure data center or Modular Data Center (MDC). The upper end of this spectrum is increasingly blurring into the Kubernetes paradigm thanks to efforts like K3S. However, due to resource constraints and other factors these nodes will not have the exact same functionality as higher edge tiers leveraging full-blown Kubernetes.

Below the Smart Device Edge is the “Constrained Device Edge”. This sub-tier consists of a highly fragmented landscape of microcontroller-based devices that typically have their own custom OTA tools. Efforts like Microsoft’s Sphere OS are trying to address this space and it’s important to band together on efforts like this due to the immense fragmentation at this tier.

Ultimately it's important to think of edge to cloud as a continuum, and that there isn't an "industrial edge" vs. an "enterprise edge" and a "consumer edge" as some would contend. Rather, it's about building an ecosystem of technology and services providers that create necessarily unique value-add on top of more consistent infrastructure while taking into account the necessary tradeoffs across the continuum.

We build the guts so you can bring the glory

You can think of ZEDEDA’s SaaS-based orchestration solution as being like VMware Tanzu in principle (in the sense that we support both VMs and containers) but optimized for IoT-centric edge compute hardware and workloads at the “Smart Device Edge” as defined by the LF Edge taxonomy. We’re not trying to compete with great companies like VMware and Nutanix who shine at the On-prem Data Center Edge up through the Service Provider Edge and into centralized data centers in the cloud. In fact, we can help industry leaders like these, telcos and cloud service providers extend their offerings including managed services down into the lighter compute edges.

Our solution is based on the open source Project EVE within LF Edge which provides developers with maximum flexibility for evolving their ecosystem strategy with their choice of technologies and partners, regardless of whether they ultimately choose to take an open or more closed approach. EVE aims to do for IoT what Android did for the mobile component of the Smart Device Edge by simplifying the orchestration of IoT edge computing at scale, regardless of applications, hardware or cloud used.

The open EVE orchestration APIs also provide an insurance policy in terms of developers and end users not being locked into only our commercial cloud-based orchestrator. Meanwhile, it’s not easy to build this kind of controller for scale, and ours places special emphasis on ease of use in addition to security. It can be leveraged as-is by end users, or white-labeled by solution OEMs to orchestrate their own branded offers and related ecosystems. In all cases, the ZEDEDA cloud is not in the data path so all permutations of edge data sources flowing to any on-premises or cloud-based system are supported across a mix of providers.

As I like to say, we build the guts so our partners and customers can bring the glory!

In closing

I'll close with a highlight of the classic Clayton Christensen book The Innovator's Dilemma. In my role I talk with a lot of technology providers and end users, and I often see people stuck in this pattern, thinking "I'll turn things around if I can just do what I've always done better". This goes not just for large incumbents but also for fairly young startups!

One interaction in particular that has stuck with me over the years was an IoT strategy discussion with a large credit card payments provider. They were clearly trapped in the Innovator's Dilemma, wondering what to do about the mobile payment players like Square that had really disrupted their market. As they were talking about how they didn't want another Square situation to happen again, I asked them, "Have you thought about when machines start making payments?" The fact that this blew their minds is exactly why another Square is going to happen to them, if they don't get outside of their own headspace.

When approaching digital transformation and ecosystem development it’s often best to start small, however equally important is to think holistically for the long term. This includes building on an open foundation that you can grow with and that enables you to focus your investments on business relationships and innovation rather than reinvention. Sure, you may be able to build your own orchestration solution like we have at ZEDEDA, or your own full turn-key stack, but if you do either in a proprietary silo then you’ve just locked yourself out of the biggest potential — creating a network effect across various markets through increasingly interconnected ecosystems! Again, imagine if the internet was owned by one company.

We invite you to get involved with Project EVE, and more broadly-speaking LF Edge as we jointly build out this foundation. You’re of course free to build your own controller with EVE’s open APIs, or reach out to us here at ZEDEDA and we can help you orchestrate your commercial IoT edge ecosystem. We’re here to help you start small and scale big and have a growing entourage of technology and service partners to drive outcomes for your business.

Managing in the dark: facilities in COVID


Written by Scott Smith, Industry Principal for Facilities and Data Centers at OSIsoft, an LF Edge member company

This article originally ran on the OSIsoft website – click here for more content like this. 

The job of facilities managers used to be pretty straightforward: keep the lights on, keep people comfortable. It was all about availability, uptime, and comfort. However, the world is changing, and now facility management must respond to failures before people even notice them. They must predict maintenance based on conditions rather than time, and all this needs to be done with fewer people and less energy. With the critical role facilities play in the health and safety of occupants, there's no room for missteps.

It seems like there are not enough hours in a day to accomplish all that we face.  So, what does an ideal scenario look like for facilities managers?

First, we need to step back and realize we will not be adding all the staff we require, so we need to find a substitute. The first place to look is the data we already collect: how to use it as our "eyes on" and provide real-time situational awareness. If we can enhance our data with logic-based analytics and turn it into actionable information, we can shift our resources to critical priorities and ensure the availability of our facilities.

Second, this operational information can notify the facilities team of problems and allow us to be proactive in solutions and communications instead of waiting for complaints to roll in. We can even go one step further and predict our facility performance, using that information to identify underlying issues we might never have found until it was too late.

This is not about implementing thousands of rules and getting overwhelmed with "alarm fatigue," but about enhancing your own data, based on your own facilities and your needs. It allows you to break down silos and use basic information from meters, building automation and plant systems to provide integrated awareness.

University of California, Davis is a great example of this. In 2013, they set the goal of net zero greenhouse gas emissions from the campus's buildings and vehicle fleet by 2025. With more than 1,000 buildings comprising 11.3 million square feet, this was no small feat. Beyond the obvious projects, UC Davis turned to the PI System to drive more efficient use of resources. By looking at the right data, UC Davis has been able to better inform their strategy for creating a carbon neutral campus.

UC Davis is a perfect example of simply starting and starting simple. The data will provide the guidance on where there are opportunities to meet your objectives.

OSIsoft is an LF Edge member company. For more information about OSIsoft, visit their website.

Kubernetes Is Paving the Path for Edge Computing Adoption


Written by Molly Wojcik, Chair of the State of the Edge Landscape Working Group and Marketing Director at Section

For more content like this, please visit the Section website

Since Kubernetes was released five years ago by Google, it has become the standard for container orchestration in the cloud and data center. Its popularity with developers stems from its flexibility, reliability, and scalability to schedule and run containers on clusters of physical or virtual machines (VMs) for a diverse range of workloads.

When it comes to the Infrastructure (or Service Provider) Edge, Kubernetes is increasingly being adopted as a key component of edge computing. As in the cloud, Kubernetes allows organizations to efficiently run containers at the edge in a way that enables DevOps teams to move with greater dexterity and speed by maximizing resources (and spending less time integrating with heterogeneous operating environments), which is particularly important as organizations consume and analyze ever-increasing amounts of data.

A Shared Operational Paradigm

Edge nodes represent an additional layer of IT infrastructure available to enterprises and service providers alongside their cloud and on-premise data center architecture. It is important for admins to be able to manage workloads at the edge layer in the same dynamic and automated way as has become standard in the cloud environment.

As defined by the State of the Edge’s Open Glossary, an “edge-native application” is one which is impractical or undesirable to operate in a centralized data center. In a perfect world, developers would be able to deploy containerized workloads anywhere along the cloud-to-edge continuum to balance the attributes of distributed and centralized computing in areas such as cost efficiencies, latency, security, and scalability.

Ultimately, cloud and edge will work alongside one another, with workloads and applications at the edge being those that have low latency, high bandwidth, and strict privacy requirements. Other distributed workloads that benefit from edge acceleration include Augmented Reality (AR), Virtual Reality (VR), Massively Multiplayer Gaming (MMPG), etc.

There is a need for a shared operational paradigm to automate processing and execution of instructions as operations and data flows back and forth between cloud and edge devices. Kubernetes offers this shared paradigm for all network deployments, allowing policies and rulesets to be applied to the entire infrastructure. Policies can also be made more specific for certain channels or edge nodes that require bespoke configuration.
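To make that concrete, here is a minimal, hedged sketch using the official Kubernetes Python client: it applies a shared policy (a role label plus a taint) to a set of designated edge nodes so that cluster-wide rules can target them while keeping general workloads off constrained hardware. The node names and the label/taint keys are assumptions for illustration only.

```python
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

EDGE_NODES = ["edge-gateway-01", "edge-gateway-02"]  # hypothetical node names

for name in EDGE_NODES:
    patch = {
        # A role label lets Deployments target edge nodes with a nodeSelector.
        "metadata": {"labels": {"node-role.kubernetes.io/edge": "true"}},
        # A taint keeps non-edge workloads away unless they explicitly tolerate it.
        "spec": {"taints": [{"key": "edge", "value": "true", "effect": "NoSchedule"}]},
    }
    v1.patch_node(name, patch)
    print(f"labeled and tainted {name}")
```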

Kubernetes-Based Edge Architecture

According to a presentation from the Kubernetes IoT Edge Working Group at KubeCon Europe 2019, there are three approaches to using Kubernetes in edge-based architecture to manage workloads and resource deployments.

A Kubernetes cluster involves a master and nodes. The master exposes the API to developers and schedules workloads onto the cluster's nodes. Nodes contain the container runtime environment (such as Docker), a kubelet (which communicates with the master), and pods, which are a collection of one or multiple containers. A node can be a virtual machine in the cloud.

The three approaches for edge-based scenarios can be summarized as follows:

  1. The whole Kubernetes cluster is deployed within edge nodes. This is useful for instances in which the edge node has limited resources or is a single-server machine. K3s is the reference architecture for this solution.
  2. The next approach comes from KubeEdge, and involves the control plane residing in the cloud and managing the edge nodes containing containers and resources. This architecture enables optimization in edge resource utilization because it allows support for different hardware resources at the edge.
  3. The third approach is hierarchical cloud plus edge, using a virtual kubelet as reference architecture. Virtual kubelets live in the cloud and contain the abstraction of the nodes and pods deployed at the edge. This approach allows for flexibility in resource consumption for edge-based architecture.

Section’s Migration to Kubernetes

Section migrated to Kubernetes from a legacy homegrown scheduler last year. Instead of building our own fixed hardware network, Section distributes Kubernetes clusters across a vendor-neutral worldwide network of leading infrastructure providers, including AWS, Google Cloud, Microsoft Azure, Packet, DigitalOcean, CenturyLink, and RackCorp. Kubernetes allows us to be infrastructure-agnostic and seamlessly manage a diverse set of workloads.

In our first-hand experience, the benefits of Kubernetes at the edge include:

  • Flexible tooling, allowing our developers to interact with the edge as they need to;
  • Our users can run edge workloads anywhere along the edge continuum;
  • Scaling up and out as needed through our vendor-neutral worldwide network;
  • High availability of services;
  • Fewer service interruptions during upgrades;
  • Greater resource efficiency – in particular, we use the Kubernetes Horizontal Pod Autoscaler, which automatically scales the number of pods up or down according to defined latency or volume thresholds (see the sketch after this list);
  • Full visibility into production workloads through the built-in monitoring system;
  • Improved performance.
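Expanding on the autoscaling point above, the following is a minimal sketch that attaches a Horizontal Pod Autoscaler to a hypothetical edge Deployment using the Kubernetes Python client. It scales on CPU utilization via the autoscaling/v1 API; scaling on latency or volume thresholds, as described above, would instead require the autoscaling/v2 API with custom or external metrics. The Deployment name, namespace and thresholds are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="edge-proxy-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="edge-proxy"),  # hypothetical Deployment
        min_replicas=2,
        max_replicas=20,
        target_cpu_utilization_percentage=70,  # scale out when average CPU exceeds 70%
    ),
)

autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```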

Summary

As more organizations and operators continue to adopt and support Kubernetes-based cloud-edge patterns, the ecosystem will continue to mature. However, not every organization will have the resources and/or expertise to build these systems themselves. This is where edge platforms (like Section) bridge those gaps, offering DevOps teams familiar tooling to take advantage of the benefits that Kubernetes has to offer without the complexities that come along with it.

Scaling Ecosystems Through an Open Edge (Part Two)


Why an Open, Trusted Edge is Key to Realizing the True Potential of Digital Transformation

By Jason Shepherd, LF Edge Governing Board member and VP of Ecosystem at ZEDEDA

This content originally ran on the ZEDEDA Medium Blog – click here for more content like this.

In part one of this series, I walked through various approaches to ecosystems and highlighted how business value tends to find a natural equilibrium across stakeholders. In this second installment I'll walk through the importance of an open edge for scaling ecosystems and realizing the true potential of digital transformation, as well as provide some tips on building ecosystems that I've picked up over the past years.

Enabling increasingly connected Intranets of Things

Imagine if the overall internet was built as a closed ecosystem, controlled by a small set of organizations, much less one. Of course, there are browsing restrictions placed at a company level and in some countries, but the internet simply wouldn’t have made the same massive impact on society without fundamental openness and interoperability.

All data is created at the edge, whether it be from user-centric or IoT devices. A few years ago, the number of devices on the internet surpassed the global population and the growth of IoT devices is expected to continue at an exponential rate. This will not only unlock new paths to value, but also introduce new complexity and risk.

However, as it turns out, the term “Internet of Things” is actually a bit of a misnomer. It’s really about a series of increasingly connected Intranets of Things. Starting with simple examples like the story of Fred and Phil from part one of this series, ecosystems will get increasingly larger and more interconnected as the value to do so exceeds the complexity and risk. So how do we carry this out at scale?

The importance of an open edge

Previously, I've outlined the challenges that arise when deploying applications closer to devices in the physical world, outside the confines of a traditional data center. Edge and IoT solutions inherently require a diverse collection of ingredients and expertise to deploy, and over the past five years the emerging market has attempted to address this fragmentation with a dizzying landscape of proprietary platforms — each with wildly different methods for data collection, security and management.

That said, having hundreds of closed, siloed platforms and associated ecosystems is definitely not a path to scale. As with the internet itself, it is important to have an open, consistent infrastructure foundation for IoT and edge computing, and from there companies can decide how open or closed they want to build their business ecosystems on top.

The diversity of the IoT edge makes it impractical to develop one catch-all standard to bind everything together. Open source software frameworks are an excellent way to bridge together various ingredients, unifying rather than reinventing standards, accommodating legacy installations, and enabling developers to focus on value creation. LF Edge is an example of a collaborative effort to build an open edge computing foundation in partnership with other open source and standards-oriented initiatives. The key to this being the base for a truly open edge ecosystem is the vendor-neutral governance offered by the Linux Foundation.

In a recent blog I highlighted that the winners in the end will be the ones creating differentiated value through their domain knowledge, building necessarily unique software and hardware and offering great services — not those that are reinventing the middle over and over again. AI models for common tasks such as identifying a person's demographic, detecting a license plate number or determining if an object is a water bottle or weapon will be commonplace; meanwhile, there will always be room to differentiate with AI when specific industry context is in play.

Trust is essential

I've written in the past about the "Holy Grail of Digital" being selling or sharing data, resources and services across total strangers, all while maintaining privacy on your terms. This is the ultimate scale factor, but we need to turn to technology for help because it simply isn't feasible to take people out to dinner fast enough one by one to build the necessary trust relationships.

Consumers are often comfortable pledging allegiance to specific brands and giving up a little privacy as long as they trust the provider and get value. However, in order to build more complex ecosystem relationships that span private and public boundaries at scale, we not only need open interoperability but must also ensure that no single entity owns the trust.

As such, we also need to collaborate on a technology foundation that automates the establishment of trust as data flows across heterogeneous systems. We're seeing the industry increasingly step up here, from distributed ledger efforts like IOTA and Hyperledger that provide smart contracts, to the Trust over IP (ToIP) Foundation and the emerging Project Alvarium, which aims to build out the concept of data confidence fabrics by layering trust insertion technologies with a system-based approach. Both the ToIP Foundation and Alvarium are focused on facilitating trust in both machine-generated data and across human relationships.

While these efforts don’t replace the need to build business relationships (and take people out to dinner!), they will provide necessary acceleration. Moreover, beyond helping us scale complex ecosystems these tools can also aid in combating the increasing issue with deepfakes and ensuring ethical AI solutions, in addition to accelerating workload consolidation at the edge and dealing with regulatory requirements like GDPR… but these are blog topics for another time!

Tips on building an ecosystem (hint, hint: domain knowledge rules)

When I started building the IoT ecosystem with the team at Dell back in 2015 it was clear that the market was going to go vertical before going horizontal. This is a typical pattern in any new market and proprietary offers often get the initial traction while solutions based on an open foundation always win in the end when it comes to sheer scale (recall the Apple — Android discussion from part one…).

So, our ecosystem strategy starting in 2015 was to go super broad before going deep, partnering widely and enabling the “cream to rise to the top”. I joked with the team back then that we were going to do “Tinder” and then “D-Harmony”. Sure enough, the partners that had the most initial traction were those that were laser-focused on one use case, meanwhile the horizontal “peanut-butter platforms” were stalling. Case in point, if you do a little bit of everything you rarely do one thing really well.

But it was really these vertically-focused platforms' domain knowledge that resonated with a specific customer need and desired outcome. They built their technology platforms to get to data, but their real value was that domain knowledge. Over time, as we get to a more consistent, open foundation for Edge and IoT solutions, these providers will either need to pivot to being more like system integrators, or build differentiated software and/or hardware. So, in 2017 we purposely switched from going broad to being more deliberate in partnerships, focusing more on partners that have realized the importance of separating domain knowledge from the underlying technology foundation and helping with matchmaking across the partner landscape. Enter "D-Harmony"!

In closing

Deploying IoT and edge computing solutions takes a village and it’s important to establish an entourage of partners that have a “go to dance move” rather than working with those that are trying to do too much and as a result not doing anything particularly well. Five years have passed and a lot of providers have felt the pain of trying to own everything and have since realized the importance of having focus and establishing meaningful partnerships. So, now I joke with the team at ZEDEDA as we build up our ecosystem of market-leading pure-play solutions and domain experts that we’re going straight to “Z-Harmony”!

Thanks for reading. In the third and final part of this series, I’ll provide more insights on how we get to “advanced class” and more on what we’re doing at ZEDEDA to help build the necessarily open foundation to facilitate ecosystem scale. In the meantime, feel free to drop me a line with any comments or questions.

Predictions 2021: Open Edge & Networking


Written by Arpit Joshipura, General Manager, Networking, Edge & IoT at the Linux Foundation

 

As we wrap up 2020, I wanted to take a moment to look at where the industry is headed and what we’ve learned this year. 

Telecom & Cloud ‘Plumbing’ based on 5G Open Source will drive accelerated investments from top markets (Government, Manufacturing, and Enterprises) 

This broad acceptance of open networking stacks shows the true power of what is possible when fat, fast, and functional features are at your fingertips. See information on ONAP’s Guilin release, EdgeX Foundry’s Hanoi release, and this recent post from FierceTelecom.

The Last piece of the “open” puzzle will fall in place: Radio Access Network (RAN)

The final closed architecture in the 148-year-old Telecom industry — the RAN — is finally open! 2021 will bring the first build-outs of open RAN technology in close collaboration with Edge and Core. Visit the O-RAN Software Community for more information.

"Remote Work" will continue to be the greatest positive distraction, especially within the open source community

LFN and LFE saw about 25-40% growth in developers and contributions during 2020, and we expect the pace to pick up to almost 50% as more vertical industries embrace open source technologies. See Software Defined Vertical Industries: Transformation Through Open Source, a Linux Foundation white paper.

“Futures” (aka bells and whistle features & future-looking capabilities) will give way to “functioning blueprints”  

Open source interoperability, compliance & verification for rapid deployment becomes the highest priority in 2021 beyond software. See the latest Blueprints from LF Edge’s Akraino project, as well as information on OPNFV + CNTT’s latest integrations.

AI/ML technologies become mainstream 

Closed loop control in an Intelligent Network paves the way for Intent-based Networking, and Predictive Maintenance emerges as a top use case in Edge using AI/ML.  What do you expect 2021 will bring to the open networking and edge table?

What did I miss? I would love to have your comments on LinkedIn.

Demand for DevOps Talent


A few years ago, the demand for open source DevOps talent was relatively low. But this year, there has been a massive shift in the hiring pattern. Companies need developers with these skills who can not only maintain the performance of legacy workloads but also deliver agile and responsive operations for digital transformation.

According to The Linux Foundation’s 2020 Open Source Jobs Report, which examines demand for open source talent and trends amongst open source professionals, DevOps is currently in high demand and there are no signs of slowing down.

DevOps is the top role hiring managers are looking to fill (65% are looking to hire DevOps talent), moving demand for developers to second (59%) for the first time in this report’s history. 74% of employers are now offering to pay for employee certifications, up from 55% in 2018, 47% in 2017, and only 34% in 2016.


Companies and organizations continue to increase their recruitment of open source technology talent while offering increased educational opportunities for existing staff to fill skills gaps. 93% of hiring managers report difficulty finding open source talent, and 63% say their organizations have begun to support open source projects with code or other resources for the explicit reason of recruiting individuals with those software skills, a significant jump from the 48% who stated this in 2018.

Other key findings from the 2020 Open Source Jobs Report include:

  • Hiring is down, but not out, due to COVID-19: Despite the pandemic and economic slowdown, 37% of hiring managers say they will be hiring more skilled IT professionals in the next six months.
  • Online training gains popularity during the COVID-19 era: A full 80% of employers now report that they provide online training courses for employees to learn open-source software, up from 66% two years ago.
  • Certifications grow in importance: 52% of hiring managers are more likely to hire someone with a certification, up from 47% two years ago.
  • Cloud technology is hot: In terms of knowledge domains, hiring managers report knowledge of open cloud technologies has the most significant impact, with 70% being more likely to hire a pro with these skills, up from 66% in 2018.

 The full 2020 Open Source Jobs Report is available to download for free here.

 

Scaling Ecosystems Through an Open Edge (Part One)


Written By Jason Shepherd, LF Edge Governing Board Member and VP of Ecosystem at ZEDEDA

This blog originally ran on the ZEDEDA Medium blog. Click here for more content like this.


In a piece earlier this year, I highlighted an increase in collaboration on interoperability between different IT and OT stakeholders, as well as in building trust in data. In this three-part blog, I'll expand on some related thoughts around building ecosystems, including various technical and business considerations and the importance of an open underlying foundation at the network edge.

For more on this topic, make sure to register and tune in tomorrow, July 28, for Stacey Higginbotham's virtual event! Stacey always puts on a great event and this will be a great series of panel discussions on this important topic. I'm on a roundtable about building partnerships for data sharing and connected services.

The new product mindset

I've written before about the "new product mindset" to highlight the importance of developing products and services with the cumulative lifetime value they will generate in mind, compared to just the immediate value delivered on the day of launch. In the rapidly-evolving technology landscape, competitive advantage is based on the ability to innovate rapidly and continuously improve an offering through software- and, increasingly, hardware-defined experiences, not to mention better services.

Related to this is the importance of ecosystems. Whether you build one or join one, it’s more important than ever in this world of digital transformation to be part of an ecosystem to maintain a competitive edge. This includes consideration on two vectors — both horizontal for a solid and scalable foundation and vertical for domain-specific knowledge that drives targeted outcomes. Ultimately, it’s about creating a network effect for maximizing value, both for customers and the bottom line.

Approaches to building ecosystems

An ecosystem in the business world can take various forms. In simplest terms it could be a collection of products and services that are curated to be complementary to a central provider’s offering. Alternatively, it could consist of partners working towards a common goal, with each benefitting from the sum of the parts. Another ecosystem could take the form of an operations-centric supply chain or a group of partners targeting common customers with a joint go-to-market strategy.

Approaches to ecosystem development span a spectrum of closed to open philosophies. Closed ecosystems are based on closely-governed relationships, proprietary designs, and, in the case of software, proprietary APIs. Sometimes referred to as “walled gardens,” the tight control of closed ecosystems can provide a great customer experience. However, this tends to come with a premium cost and less choice. Apple is a widely cited example of a company that has taken a more closed ecosystem approach.

Then there are “closed-open” approaches which have APIs and tools that you can openly program to — as long as you pay access fees or are otherwise advancing the sponsoring company’s base offer. Amazon’s “Works with Alexa” program is an example of this model, and a big driver in how they emerged as a smart home leader — they spent many years establishing relationships and trust with a broad consumer base. This foundation creates a massive network effect, and in turn, Amazon makes the bulk of their revenue from the Alexa ecosystem by selling more consumer goods and furthering customer relationships — much more so than on the Alexa-enabled devices themselves.

Scaling through open source

To fully grasp the business trade-offs of closed vs. open ecosystems, consider Apple's iOS and Android. While Apple provides a curated experience, the open source Android OS provides more choice and, as a result, far greater market share. Android device makers may not be able to collect as much of a premium as Apple because of the open competition and difficulty in differentiating with both hardware and a better user experience by skinning the stock Android UI. However, a key upside element is the sheer volume factor.

Plus, that's not to say there still isn't an opportunity to differentiate with Android as the base of your ecosystem. As an example, Samsung has built a strong ecosystem approach through their Galaxy brand and has recently branched out even further to offer a better-together story with their products. We've seen this play before — if you're old enough to have rented video tapes at a brick-and-mortar store (be kind, rewind!), you'll recall that VHS was an inferior technology to Beta but it won out after a short format battle because the backers of VHS got the most movie studios onboard.

It’s ultimately about taking a holistic approach, and these days open source software like Android is an essential driver of ecosystem enablement and scale. On top of the benefit of helping developers reduce “undifferentiated heavy lifting,” open source collaboration provides an excellent foundation for creating a network effect that multiplies value creation.

After all, Google didn't purchase the startup Android in 2005 (for just $50M!) and seed it into the market just to be altruistic — the result was a driver for a massive footprint that creates revenue through outlets including Google's search engine and the Play store. Google doesn't reveal how much Android in particular contributes to their bottom line, but in a 2015 court battle with Oracle over claims of a Java-related patent infringement it was revealed that Android had generated roughly $31 billion in revenue and $22 billion in profit for Google. Further, reaching this user base to drive more ad revenue is now free for Google, compared to them reportedly paying Apple $1B in 2014 for making their search engine the default for the iPhone.

Striking a balance between risk and reward

There’s always some degree of a closed approach when it comes to resulting commercial offers, but an open source foundation undoubtedly provides the most flexibility, transparency and scalability over time. However, it’s not just about the technology — also key to ecosystems is striking a balance between risk and reward.

Ecosystems in the natural world are composed of organisms and their environments — this highly complex network finds an equilibrium based on ongoing interactions. The quest for ecosystem equilibrium carries over to the business world, and especially holds true for IoT solutions at the edge due to the inherent relationship of actions taken based on data from tangible events occurring in the physical world.

Several years back, I was working with a partner that piloted an IoT solution for a small farmer, “Fred,” who grows microgreens in greenhouses outside of the Boston area. In the winter, Fred used propane-based heaters to maintain the temperature in his greenhouses and if these heaters went out during a freeze he would lose thousands of dollars in crops overnight. Meanwhile, “Phil” was his propane supplier. Phil was motivated to let his customers’ propane tanks run almost bone dry before driving his truck out to fill them so he could maximize his profit on the trip.

As part of the overall installation, we instrumented Fred’s propane tank with a wireless sensor and gave both Fred and Phil visibility via mobile apps. With this new level of visibility, Fred started freaking out any time the tank wasn’t close to full and Phil would happily watch it drop down to near empty. This created some churn up front but after a while it turned out that both were satisfied if the propane tank averaged 40% full. Side note: Fred was initially hesitant about the whole endeavor but when we returned to check in a year later, you couldn’t peel him away from his phone monitoring all of his farming operations.

Compounding value through the network effect

Where it starts getting interesting is compounding value-add on top of foundational infrastructure. Ride sharing is an example of an IoT solution (with cars representing the “things”) that has created an ecosystem network effect in the form of supplemental delivery services by providers (such as Uber Eats) partnering with various businesses. There’s also an indirect drag effect for ecosystems, for example Airbnb has created an opportunity for homeowners to rent out their properties and this has produced a network effect for drop-in IoT solutions that simplify tasks for these new landlords such as remotely ensuring security and monitoring for issues such as water leaks.

The suppliers and workers in a construction operation also represent an ecosystem, and here too it’s important to strike a balance. Years back, I worked with a provider of an IoT solution that offered cheap passive sensors that were dropped into the concrete forms during the pouring of each floor of a building. Combined with a local gateway and software, these sensors told the crew exactly when the concrete was sufficiently cured so they could pull the forms faster.

While the company’s claims of being able to pull forms 30% faster sounded great on the surface, it didn’t always translate to a 30% reduction in overall construction time because other logistics issues often then became the bottleneck. This goes beyond the supply chain — one of the biggest variables construction companies face is their workforce, and people represent yet another complex layer in an overall ecosystem. And beyond construction workers, there are many other stakeholders during the building process spanning financial institutions, insurance carriers, real estate brokers, and so forth. Once construction is complete then come tenants, landlords, building owners, service providers and more. All of these factors represent a complex ecosystem that continues to find a natural equilibrium over time as many variables inevitably change.

In a final ecosystem example, while IoT solutions can drive entirely new services and experiences in a smart city, it’s especially important to balance privacy with value creation when dealing with crossing public and private domains. A city’s internally-focused solution for improving the efficiency of their waste services isn’t necessarily controversial, meanwhile if a city was to start tracking citizens’ individual recycling behaviors with sensors this is very likely to be perceived as an encroachment on civil liberties. I’m a big recycler (and composter), and like many people I’ll give up a little privacy if I receive significant value, but there’s always a fine line.

In closing

The need to strike an equilibrium across business ecosystems will be commonplace in our increasingly connected, data-driven world. These examples of the interesting dynamics that can take place when developing ecosystems highlight how it’s critical to have the right foundation and holistic approach to enable more complex interactions and compounding value, both in the technical sense and to balance risk and privacy with value created for all parties involved.

Stay tuned for part two of this series in which I’ll walk through the importance of having an open foundation at the network edge to provide maximum flexibility and scale for ecosystem development.

Pushing AI to the Edge (Part Two): Edge AI in Practice and What’s Next


Q&A with Jason Shepherd, LF Edge Governing Board member, Project EVE leader and VP of Ecosystem at ZEDEDA


This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

In Part One of this two-part Q&A series we highlighted the new LF Edge taxonomy that publishes next week and some key considerations for edge AI deployments. In this installment our questions turn to emerging use cases and key trends for the future.

To discuss more on this budding space, we sat down with our Vice President of ecosystem development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice and some trends for the future.

What do you see as the most promising use cases for edge AI?

As highlighted in Part One, the reasons for deploying AI at the edge include balancing needs across the vectors of scalability, latency, bandwidth, autonomy, security and privacy. In a perfect world all processing would be centralized, however this breaks down in practice and the need for AI (and ML) at the edge will only continue to grow with the explosion of devices and data.

Hands down, computer vision is the killer app for edge AI today due to the bandwidth associated with streaming video. The ability to apply AI to “see” events in the physical world enables immense opportunity for innovation in areas such as object recognition, safety and security, quality control, predictive maintenance and compliance monitoring.

Considering retail — computer vision solutions will usher in a new wave of personalized services in brick and mortar stores that provide associates with real-time insights on current customers in addition to better informing longer term marketing decisions. Due to privacy concerns, the initial focus will be primarily around assessing shopper demographics (e.g., age, gender) and location but increasingly we’ll see personalized shopping experiences based on individual identity with proper opt-in (often triggered through customer loyalty programs). This includes a trend for new “experiential” shopping centers, for which customers expect to give up some privacy when they walk in the door in exchange for a better experience.

While Amazon Go stores have led the trend for autonomous shopping environments, the use of computer-vision enabled self-service kiosks for grab-and-go checkout is growing rapidly overall. Given the recent health concerns with COVID-19, providers are rapidly shifting to making these solutions contactless by leveraging gesture control, instead of requiring interaction with a keypad or touch screen.

Computer vision use cases will often leverage sensor fusion, for example with barcode scans or radio-frequency identification (RFID) technology providing additional context for decision making in retail inventory management and point of sale (POS) systems. A camera can tell the difference between a T-shirt and a TV, but not the difference between large and medium sizes of the same shirt design. Still, perhaps eventually an AI model will be able to tell you if you have bad taste in clothing!
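As a rough sketch of that kind of sensor fusion, the snippet below combines a placeholder vision classification with a barcode lookup that supplies the attribute the camera cannot resolve (the size). The classify_frame function and the SKU table are purely hypothetical stand-ins, not any real retail API.

```python
# Hypothetical sensor fusion: the camera identifies the product category,
# while a barcode scan supplies attributes that vision alone cannot resolve.
SKU_TABLE = {
    "0012345678905": {"product": "t-shirt", "size": "M"},
    "0012345678912": {"product": "t-shirt", "size": "L"},
}

def classify_frame(frame):
    # Placeholder for an on-device vision model; returns a coarse label.
    return "t-shirt"

def fuse(frame, barcode):
    label = classify_frame(frame)
    sku = SKU_TABLE.get(barcode, {})
    if sku.get("product") == label:
        return {"product": label, "size": sku.get("size"), "confidence": "high"}
    return {"product": label, "size": None, "confidence": "low"}

print(fuse(frame=None, barcode="0012345678912"))
```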

Another key vertical that will benefit from computer vision at the edge is healthcare. At ZEDEDA, we’ve worked with a global provider that leverages AI models in their medical imaging machines, located within hospitals and provided as a managed service. In this instance, the service provider doesn’t own the network on which their machines are deployed so they need a zero-trust security model in addition to the right tools to orchestrate their hardware and software updates.

Another example where bandwidth drives a need for deploying AI at the IoT Edge is vibration analysis as part of a use case like predictive maintenance. Here sampling rates of at least 1KHz are common, and can increase to 8–10KHz and beyond because these higher resolutions improve visibility into impending machine failures. This represents a significant amount of continuously streaming data that is cost-prohibitive to send directly to a centralized data center for analysis. Instead, inferencing models will be commonly deployed on compute hardware proximal to machines to analyze the vibration data in real time and only backhauling events highlighting an impending failure.
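A minimal sketch of that pattern is shown below, assuming a hypothetical read_vibration_window() sensor driver, a greatly simplified spectral anomaly score and an arbitrary threshold: the node analyzes the high-rate vibration stream locally, and only an event crossing the threshold is sent upstream.

```python
import numpy as np

SAMPLE_RATE_HZ = 8000      # e.g. an 8 kHz accelerometer stream
WINDOW_SECONDS = 1
ALERT_THRESHOLD = 5.0      # assumed anomaly threshold, arbitrary units

def read_vibration_window():
    # Placeholder for the sensor driver; returns one window of samples.
    return np.random.randn(SAMPLE_RATE_HZ * WINDOW_SECONDS)

def anomaly_score(samples, baseline=1.0):
    # Compare dominant spectral energy against a learned baseline (simplified).
    spectrum = np.abs(np.fft.rfft(samples))
    return spectrum.max() / (baseline * len(samples))

def send_event(payload):
    print("backhauling event:", payload)   # stand-in for an MQTT/HTTPS uplink

while True:
    window = read_vibration_window()
    score = anomaly_score(window)
    if score > ALERT_THRESHOLD:
        # Only events indicating an impending failure leave the edge node;
        # the raw high-rate stream never traverses the network.
        send_event({"score": float(score)})
```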

Analysis for predictive maintenance will also commonly leverage sensor fusion by combining this vibration data with measurements for temperature and power (voltage and current). Computer vision is also increasingly being used for this use case, for example the subtle wobble of a spinning motor shaft can be detected with a sufficient camera resolution, plus heat can be measured with thermal imaging sensors. Meanwhile, last I checked voltage and current can’t be checked with a camera!

An example of edge AI served up by the Service Provider Edge is for cellular vehicle-to-everything (C-V2X) use cases. While latency-critical workloads such as controlling steering and braking will always be run inside of a vehicle, service providers will leverage AI models deployed on compute proximal to small cells in a 5G network within public infrastructure to serve up infotainment, Augmented Reality for vehicle heads-up displays and coordinating traffic. For the latter, these AI models can warn two cars that they are approaching a potentially dangerous situation at an intersection and even alert nearby pedestrians via their smartphones. As we continue to collaborate on foundational frameworks that support interoperability it will open up possibilities to leverage more and more sensor fusion that bridges intelligence across different edge nodes to help drive even more informed decisions.

We’re also turning to processing at the edge to minimize data movement and preserve privacy. When AI inferencing models are shrunk for use in constrained connected products or healthcare wearables, we can train local inferencing models to redact PII before data is sent to centralized locations for deeper analysis.
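As a hedged sketch of that idea, the snippet below strips and masks assumed PII fields from a wearable's telemetry record at the edge before anything is uploaded; the field names and the sample record are illustrative only.

```python
import copy
import hashlib

PII_FIELDS = {"name", "date_of_birth", "address"}      # assumed sensitive fields

def redact(record: dict) -> dict:
    clean = copy.deepcopy(record)
    for field in PII_FIELDS:
        clean.pop(field, None)                         # drop direct identifiers
    # Replace the patient identifier with a one-way hash so records can still
    # be correlated downstream without exposing the raw ID.
    if "patient_id" in clean:
        clean["patient_id"] = hashlib.sha256(clean["patient_id"].encode()).hexdigest()
    return clean

sample = {"patient_id": "12345", "name": "Jane Doe", "heart_rate": 72, "spo2": 98}
print(redact(sample))    # only the redacted record leaves the device
```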

Who are the different stakeholders involved in accelerating adoption of edge AI?

Edge AI requires the efforts of a number of different industry players to come together. We need hardware OEMs and silicon providers for processing; cloud scalers to provide tools and datasets; telcos to manage the connectivity piece; software vendors to help productize frameworks and AI models; domain expert system integrators to develop industry-specific models, and security providers to ensure the process is secure.

In addition to having the right stakeholders, it's about building an ecosystem based on common, open frameworks for interoperability, with investment focused on the value add on top. Today there is a plethora of choices for platforms and AI tool sets, which is confusing, but this is more a reflection of the state of the market than a necessity. A key point of efforts like LF Edge is to work in the open source community to build more open, interoperable, consistent and trusted infrastructure and application frameworks so developers and end users can focus on surrounding value add. Throughout the history of technology, open interoperability has always won out over proprietary strategies when it comes to scale.

In the long run, the most successful digital transformation efforts will be led by organizations that have the best domain knowledge, algorithms, applications and services, not those that reinvent foundational plumbing. This is why open source software has become such a critical enabler across enterprises of all sizes — facilitating the creation of de-facto standards and minimizing “undifferentiated heavy lifting” through a shared technology investment. It also drives interoperability which is key for realizing maximum business potential in the long term through interconnecting ecosystems… but that’s another blog for the near future!

How do people differentiate with AI in the long term?

Over time, AI software frameworks will become more standardized as part of foundational infrastructure, and the algorithms, domain knowledge and services on top will be where developers continue to meaningfully differentiate. We’ll see AI models for common tasks — for example assessing the demographics of people in a room, detecting license plate numbers, recognizing common objects like people, trees, bicycles and water bottles — become commodities over time. Meanwhile, programming to specific industry contexts (e.g. a specific part geometry for manufacturing quality control) will be where value is continually added. Domain knowledge will always be one of the most important aspects of any provider’s offering.

What are some additional prerequisites for making edge AI viable at scale?

In addition to having the right ecosystem including domain experts that can pull solutions together, a key factor for edge AI success is having a consistent delivery or orchestration mechanism for both compute and AI tools. The reality is that to date many edge AI solutions have been lab experiments or limited field trials, not yet deployed and tested at scale. PoC, party of one, your table is ready!

Meanwhile, as organizations start to scale their solutions in the field they quickly realize the challenges. From our experience at ZEDEDA, we consistently see that manual deployment of edge computing using brute-force scripting and command-line interface (CLI) interaction becomes cost-prohibitive for customers at around 50 distributed nodes. In order to scale, enterprises need to build on an orchestration solution that takes into account the unique needs of the distributed IoT edge in terms of diversity, resource constraints and security, and helps admins, developers and data scientists alike keep tabs on their deployments in the field. This includes having visibility into any potential issues that could lead to inaccurate analyses or total failure. Further, it’s important that this foundation is based on an open model to maximize potential in the long run.

Where is edge AI headed?

To date, much of the exploration involving AI at the edge has been focused on inferencing models — deployed after these algorithms have been trained with the scalable compute of the cloud. (P.S. for those of you who enjoy a good sports reference, think of training vs. inference as analogous to coaching vs. playing).

Meanwhile, we’re starting to see training and even federated learning selectively moving to the Service Provider and User Edges. Federated learning is an evolving space that seeks to balance the benefits of decentralization for reasons of privacy, autonomy, data sovereignty and bandwidth savings, while centralizing results from distributed data zones to eliminate regional bias.
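For intuition, here is a toy federated-averaging sketch: each data zone trains a local copy of the model on data that never leaves the zone, and only the model weights are centralized and averaged each round. The linear model, synthetic data and simple averaging are assumptions purely for illustration.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One zone's local training step on data that never leaves the zone.
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
zones = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
global_w = np.zeros(3)

for _ in range(10):
    # Each zone refines the shared model locally...
    local_models = [local_update(global_w, X, y) for X, y in zones]
    # ...and only the weights are aggregated centrally (a simple average here).
    global_w = np.mean(local_models, axis=0)

print("aggregated model weights:", global_w)
```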

The industry is also increasingly developing purpose-built silicon that can increase efficiencies amid power and thermal constraints in small devices and even support either training or inference and this corresponds with the shift towards pushing more and more AI workloads onto edge devices. Because of this, it’s important to leverage device and application orchestration tools that are completely agnostic to silicon, compared to offers from silicon makers that have a vested interest in locking you into their ecosystem.

Finally, we’ll see the lower boundary for edge AI increasingly extend into the Constrained Device Edge with the rise of “Tiny ML” — the practice of deploying small inferencing models optimized for highly constrained, microcontroller-based devices. An example of this is the “Hey Alexa” of an Amazon Echo that is recognized locally and subsequently opens the pipe to the cloud-based servers for a session. These Tiny ML algorithms will increasingly be used for localized analysis of simple voice and gesture commands, common sounds such as a gunshot or a baby crying, assessing location and orientation, environmental conditions, vital signs, and so forth.
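The snippet below is a rough sketch of that wake-word gating pattern, assuming a hypothetical on-device keyword-spotting model and cloud session object; the point is simply that audio stays local until the tiny model fires.

```python
import random

def keyword_probability(audio_frame) -> float:
    # Placeholder for a Tiny ML keyword-spotting model on a microcontroller;
    # a real deployment would run a small quantized neural network here.
    return random.random()

def handle_audio(frames, open_cloud_session, threshold=0.9):
    for frame in frames:
        if keyword_probability(frame) > threshold:
            # Only after the local model detects the keyword is a pipe to the
            # cloud opened for full speech recognition; otherwise the audio
            # is discarded on-device and never sent upstream.
            session = open_cloud_session()
            session.send(frame)
            session.close()

class FakeSession:
    def send(self, frame): print("streaming audio frame to cloud")
    def close(self): pass

handle_audio(frames=range(20), open_cloud_session=FakeSession)
```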

To manage all of this complexity at scale, we’ll lean heavily on industry standardization, which will help us focus on value on top of common building blocks. Open source AI interoperability projects such as ONNX show great promise in helping the industry coalesce around a common format, so developers can focus on building models and moving them across frameworks and from cloud to edge. The Linux Foundation’s Trust over IP effort and the emerging Project Alvarium will also help ease the process of transporting trusted data from devices to applications. This notion of pervasive data trust will lead to what I call the “Holy Grail of Digital” — selling and/or sharing data resources and services to/with people you don’t even know. Now this is scale!
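
To make the ONNX point concrete, the sketch below shows the typical hand-off: a model built in one framework (PyTorch here, using a stock MobileNetV2 with untrained weights purely as a stand-in) is exported to the ONNX format and then served with ONNX Runtime, with no dependency on the framework it came from.

# Build in one framework, run at the edge with another -- via the ONNX interchange format.
import numpy as np
import torch
import torchvision
import onnxruntime as ort

# Stand-in for a cloud-trained model (untrained weights here, purely for illustration).
model = torchvision.models.mobilenet_v2(weights=None).eval()

dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["logits"])

# At the edge: ONNX Runtime loads the exported graph with no PyTorch dependency.
session = ort.InferenceSession("model.onnx")
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder camera frame
logits = session.run(None, {"input": frame})[0]
print("top class index:", int(logits.argmax()))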

In Closing

As the edge AI space develops, it’s important to avoid being locked into a particular tool set, instead opting to build a future-proofed infrastructure that accommodates a rapidly changing technology landscape and that can scale as you interconnect your business with other ecosystems. Here at ZEDEDA, our mission is to provide enterprises with an optimal solution for deploying workloads at the IoT Edge where traditional data center solutions aren’t applicable, and we’re doing it based on an open, vendor-neutral model that provides freedom of choice for hardware, AI framework, apps and clouds. We’re even integrating with major cloud platforms such as Microsoft Azure to augment their data services.

Reach out if you’re interested in learning more about how ZEDEDA’s orchestration solution can help you deploy AI at the IoT Edge today while keeping your options open for the future. We also welcome you to join us in contributing to Project EVE within LF Edge, which is the open source foundation for our commercial cloud offering. The goal of the EVE community is to build the “Android of the IoT Edge” that can serve as a universal abstraction layer for IoT Edge computing — the only foundation you need to securely deploy any workload on distributed compute resources. To this end, a key next step for Project EVE is to extend Kubernetes to the IoT Edge, while taking into account the unique needs of compute resources deployed outside of secure data centers.

The success of AI overall — and especially edge AI — will require our concerted collaboration and alignment to move the industry forward while protecting us from potential misuse along the way. The future of technology is about open collaboration on undifferentiated plumbing so we can focus on value and build increasingly interconnected ecosystems that drive new outcomes and revenue streams. As one political figure famously said — “it takes a village!”

If you have questions or would like to chat with leaders in the project, join us on the LF Edge Slack (#eve or #eve-help) or subscribe to the email list. You can check out the documentation here.

Exploration and Practices of Edge Computing: Cloud Managing Containerized Devices

By Blog, EdgeX Foundry, Industry Article, Trend

Written by Gavin Lu, LF Edge member, EdgeX Foundry China Project Lead and R&D Director in the VMware Office of the CTO

As an industry leader with vast experience and knowledge, Gavin has been writing a series of articles focused on edge computing. These articles are posted on his personal blog and are posted here with his permission. To read more content from Gavin, visit his website.

Introduction

The previous article introduced the cloud-managed virtualized device solution. This article describes Project Nebula, which provides unified management of containerized devices, edge applications and data analysis cloud services.

Nebula Architecture

Project Nebula is designed based on the following key ideas:

  • Agnostic to device CPU architecture, supporting both x86 and ARM;
  • Agnostic to edge application frameworks, supporting EdgeX Foundry and other frameworks that can be packaged and run;
  • Agnostic to data analytics services, supporting on-premise and cloud deployment;
  • Support small- to large-scale deployments;
  • Support an end-to-end, multi-tenant operation model from device to cloud.

EdgeX Foundry Architecture

Nebula supports the EdgeX Foundry framework, and we have already published a live test bed at https://18.189.42.126/. Those who are interested in Nebula can contact yixingj@vmware.com to register for a trial and receive installation and user guides with detailed information.

Nebula Demo

Installation

Nebula is designed as a containerized micro-service architecture and is delivered by default in OVA format. Similar to the Pallas architecture introduced in the previous article, although the Nebula package is encapsulated as an OVA, it does not depend on any specific virtualization infrastructure or cloud platform for installation. Technically, it could be converted to other formats or installed on any cloud platform that supports the OVA format.

The basic resource requirements of Nebula are:

  • CPU: 2 virtual CPU cores
  • Memory: 8GB
  • Storage: 150GB

The installation process is similar to that of any other OVA, and users can log in as the administrator after it completes.

Nebula Service Console

Vendor Portal

After the installation is complete, users can log in to the vendor portal as an administrator at the address prompted in the VM console, as shown above, and perform user management.

Nebula Management Portal

In Nebula, edge application services are defined as follows: a Service can contain multiple Versions, and a Version contains multiple Service Components.

Edge Service Hierarchy

For each service created, it is necessary to define parameters and resource requirements such as version, CPU platform, memory, storage and network, to facilitate verification throughout full life cycle management.

Vendors can upload a set of EdgeX Foundry applications packaged as container images, and define categories, dependencies between containers, resource parameters, startup order, and the parameters of connected data analysis cloud services.
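
As an illustration of that Service / Version / Service Component hierarchy, the sketch below models a hypothetical service definition in plain Python. The field names and image references are invented for clarity and are not Nebula’s actual schema.

# Hypothetical illustration of the Service -> Version -> Service Component hierarchy.
# Field names and image references are invented; they are not Nebula's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceComponent:
    name: str
    image: str                 # container image uploaded by the vendor
    cpu_arch: str              # e.g. "x86_64" or "arm64"
    memory_mb: int
    depends_on: List[str] = field(default_factory=list)  # dependencies / startup order

@dataclass
class Version:
    tag: str
    components: List[ServiceComponent]

@dataclass
class Service:
    name: str
    versions: List[Version]

edgex_service = Service(
    name="edgex-foundry",
    versions=[Version(
        tag="1.0",
        components=[
            ServiceComponent("core-data", "example/core-data:1.0", "arm64", 256),
            ServiceComponent("device-virtual", "example/device-virtual:1.0", "arm64",
                             128, depends_on=["core-data"]),
        ],
    )],
)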

After the release, users can see and deploy these edge services.

Device Registration

Before users actually deploy EdgeX Foundry applications, they must first register the devices they will use to their Nebula accounts.

Users need to download the Nebula agent program, nebulacli.tar, and run it on the device to complete the registration. This registration step can be performed manually, or it can be automated in batch for OEM operations.

./install.sh init -u user-account -p user-account-password -n user-device-name

User Portal

After completing device registration, users can install and manage EdgeX Foundry or other edge applications that vendors have published in advance to the Nebula service. Users can find the appropriate applications in the catalog.

After selecting an application, users can further specify deployment parameter settings in the drag-and-drop wizard; these map to the parameter values defined earlier by the vendor.

After all parameters are set, the actual deployment can be carried out, either in a single batch or in multiple rounds to multiple devices. After deploying EdgeX Foundry applications, users can monitor device resources and application runtime status in real time.

Nebula provides complete RESTful API documentation, with which users can automate operations to deploy EdgeX Foundry applications at large scale.
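
As a rough sketch of what such automation could look like, a batch deployment might be scripted as follows. The endpoint paths, payload fields and token are hypothetical and stand in for whatever the documented API actually exposes.

# Hypothetical example of scripting a batch deployment against a REST API.
# Endpoint paths, payload fields and the token are illustrative, not Nebula's actual API.
import requests

BASE_URL = "https://nebula.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <user-token>"}

device_ids = ["device-001", "device-002", "device-003"]

for device_id in device_ids:
    payload = {
        "service": "edgex-foundry",
        "version": "1.0",
        "parameters": {"mqtt_broker": "10.0.1.5:1883"},
    }
    # One deployment request per registered device.
    resp = requests.post(f"{BASE_URL}/devices/{device_id}/deployments",
                         json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    print(device_id, resp.json().get("status"))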

Next

From the second article through this one, I have introduced the basic methods of building and managing virtualized and containerized devices from the cloud. But I have not yet answered the question of how to deal with single-point device failures. The next article will introduce device clusters on a hyper-converged architecture, in contrast to traditionally inflexible and inefficient full-redundancy or external NAS solutions.

Pushing AI to the Edge (Part One): Key Considerations for AI at the Edge

By Blog, LF Edge, Project EVE, State of the Edge, Trend

Q&A with Jason Shepherd, LF Edge Governing Board member and VP of Ecosystem at ZEDEDA

This content originally ran on the ZEDEDA Medium Blog – visit their website for more content like this.

This two-part blog provides more insights into what’s becoming a hot topic in the AI market — the edge. To discuss this budding space further, we sat down with our Vice President of ecosystem development, Jason Shepherd, to get his thoughts on the potential for AI at the edge, key considerations for broad adoption, examples of edge AI in practice, and some trends for the future.


Chart defining the categories within the edge, as defined by LF Edge

Image courtesy of LF Edge