Monthly Archives

January 2021

You Can Now Run Windows 10 on a Raspberry Pi using Project EVE!

By Blog, Project EVE

Written by Aaron Williams, LF Edge Developer Advocate

Ever since Project EVE came under the Linux Foundation’s LF Edge umbrella, we have been asked about porting EVE to the Raspberry Pi (and we wanted to port it) so that developers and hobbyists could try out EVE’s virtualization of hardware.  Both groups were looking for an easy way to evaluate EVE by creating simple PoC projects, without having to buy a commercial-grade IoT gateway or other device.  They wanted to get started with something they already had on their desk.  And we are excited to announce that we have completed the first part of the work needed to run Windows on a Raspberry Pi 4!  We have posted the tutorial on our community wiki, and it takes less than an hour to get up and running.

The RPi

The Raspberry Pi was first released in 2012 with the goal of providing a cheap and easy way to teach high school students how to code.  It had USB ports to attach a keyboard and mouse, HDMI to hook up to your TV, GPIO (General Purpose Input/Output) pins for IoT, and an Ethernet port for internet access.  Thus, for $35 you had a great, cheap computer that ran Linux.  The Raspberry Pi Foundation sold a lot of these devices to schools, but the RPi really took off as developers and home hobbyists discovered it, thinking, “Wow, a $35 Linux computer. I wonder if I could do that home IoT project I have been planning?”  Plus, in many companies, the RPi became a great way to create “real” demos and PoCs cheaply.

Fast forward seven years, and we knew we wanted to port EVE to the RPi because it was such a large part of the IoT world, especially in demos explaining IoT concepts.  (It is much easier to go to your manager or spouse and ask for a $50 RPi vs. $500+ for some other hardware.)  But, according to Erik Nordmark, TSC Chair of Project EVE, “the GIC (Generic Interrupt Controller) and the proprietary RPi boot code on the RPi3 (and earlier models) prevented it from booting into a Type 1 hypervisor like Xen without hacking up strange emulation code.”  Thus, while it was possible, it would take a lot of work and might not work well.

This changed with the release of the RPi 4.  Roman Shaposhnik, also of Project EVE, and Stefano Stabellini, of the Xen Project, saw that the RPi 4 had a regular GIC-400 interrupt controller that Xen supports right out of the box.  Thus, getting Xen to work on the RPi should be pretty easy, right?  As they documented in their article on Linux.com, it wasn’t: “We were utterly oblivious that we were about to embark on an adventure deep in the belly of the Xen memory allocator and Linux address translation layers.”  But soon their hard work paid off; they got Xen working and submitted a number of patches that will be part of the Linux 5.9 release.  (To learn about Roman and Stefano’s adventure, see their article “Xen on Raspberry Pi 4 Adventures” at Linux.com.)  With that done, the EVE team turned their attention to what work would be needed to complete the virtualization of the RPi, which went pretty smoothly.

Why Windows?

We have been saying since the start of Project EVE that EVE is to IoT as Android is to phones.  Android allows you to write your code and push it to a device without having to worry about the hardware underneath.  EVE works much the same way: it virtualizes the hardware, allowing you to push your code across devices.  And since EVE is open source, everyone benefits from the “plumbing” being handled by the community.  The plumbing in this case is support for a new device or family of devices, so when a device is added, the whole community benefits.  IoT devices do have an extra security concern: they are not usually housed in a locked building, but instead are out in the open.  Therefore, the physical security of the device cannot be guaranteed, and it must be assumed that there is no physical security.  Because of this, EVE’s default is to turn off all external ports, such as USB ports.

This brings us to the question: why Windows?  Simply, why not?  Windows doesn’t belong on a Raspberry Pi, so we figured it would be fun to see if it would work.  And it worked right out of the box: we found a containerized version of Windows and simply deployed it (it really is that easy).  And it is a lot of fun using Windows knowing that it is running on a Raspberry Pi!

What work is left

We haven’t done a lot of testing, and for us this is a PoC, so we don’t have a full list of limitations.  But here are a couple of things we have found.  We will update our tutorial page as the community finds and fixes more.

Asking Petr Fedchenkov and Vladimir Suvorov, lead developers on Project EVE, about issues they have run into, Petr mentioned, “The biggest issue is that there are no drivers for GPU and Windows 10 ARM64 doesn’t have virtio-gpu support.  So, we are using ramfb (RAM Framebuffer), which is much more limited.”  In layman’s terms, this means that today, if you plug a keyboard and monitor into the RPi, you will interact with EVE, not the Windows desktop.  The easy workaround is to run RDP (Windows Remote Desktop) or VNC; in our testing, RDP works much better.

The native WiFi does work, but you do need to turn it on via EVE.  Remember, EVE is designed to give you control of your devices remotely yet securely, so the WiFi is turned off by default.  As you build EVE for your RPi, you have the option of passing in an SSID, which turns on the WiFi.

Bluetooth is currently not working.  USB ports should work, but some configuration is needed.  We are also not sure about the GPIO pins; we haven’t tested them yet.  (See below for how to help get these features working and tested.)

Call for Help

It is pretty amazing what we have accomplished, but we need a lot of help.  If you have any interest in any part of this, please let us know.  We need help getting the USB ports fully working, plus the items mentioned above.  Is there a device that you would like to see EVE work on?  Please help us port it to that device.  We could also use tech writers, bloggers, or anyone who can help us improve our documentation and/or help us get the word out about EVE.  If you are interested, please visit our GitHub, wiki, or Slack channel (#eve).

Kaloom’s Startup Journey (Part 1)

By Akraino, Blog

Written by Laurent Marchand, Founder and CEO of Kaloom, an LF Edge member company

This blog originally ran on the Kaloom website. To see more content like this, visit their website.

When we founded Kaloom, we embarked upon our journey with the vision of creating a truly P4-programmable, cloud- and open standards-based software defined networking fabric. Our name, Kaloom comes from the Latin word “caelum” which means “cloud.”

Armed solely with our intrepid crew of talented and experienced engineers, we left our good-paying, secure jobs at one of the world’s largest incumbent networking vendors and set off on our adventure. This two-part blog is the story of why we founded Kaloom and where we stand today.

In our previous blog, “SDN’s Struggle to Summit the Peak of Inflated Expectations,” Sonny outlined the challenges posed by the initial vision of SDN and the attempt to solve them with VM-based overlays. Upcoming 5G deployments will only amplify those challenges, disruptively creating an extreme need for lightweight container-based solutions that satisfy 5G’s cost, space and power requirements.

5G Amplifying SDN’s Challenges

With theoretical peak data rates up to 100 times faster than current 4G networks, 5G enables incredible new applications and use cases such as autonomous vehicles, AR/VR, IoT/IIoT and the ability to share real-time medical imaging – just to name a few. Telecom, cloud and data center providers alike look forward to 5G forming a robust edge “backbone” for the industrial internet to drive innovative new consumer apps, enterprise use cases and increased revenue streams.

These new 5G-enabled apps require extreme low latency which demands a distributed edge architecture that puts applications close to their data source and end users. When we founded Kaloom, incumbents were still selling the same technology platforms and architectural approach as they had with 4G, however, the economics for 5G are dramatically different.

The Pain Points are the Main Points – Costs, Revenues and Latency

For example, service providers’ revenues from smartphone users running 3G/4G were about $50 per month. However, connected cars will only generate about $1 or $2 per month, and installing 5G requires an extremely large upfront investment (CAPEX), not only in antennas but also in the backend servers, storage and networking switches required to support these apps. So, it was clear to us that the cost point for deploying 5G had to be much lower than for 4G. Even if 5G costs were the same as 4G rollouts, service providers still could not economically justify their revenue models throttling down to $1 or $2 per month.
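To put that revenue shift in perspective, here is a quick back-of-the-envelope calculation using the figures above (taking $1.50 as the midpoint of the connected-car range):

```python
# Back-of-the-envelope: monthly revenue per connection, 4G smartphone vs. connected car.
# Figures are the ones quoted in the paragraph above; $1.50 is an assumed midpoint.
smartphone_arpu = 50.0      # $/month for a 3G/4G smartphone user
connected_car_arpu = 1.5    # $/month, midpoint of the $1-$2 connected-car range

drop = smartphone_arpu / connected_car_arpu
print(f"Revenue per connection falls roughly {drop:.0f}x")  # ~33x
```

A ~33x drop in per-connection revenue is why 4G-era cost structures simply cannot be carried over to 5G deployments.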

The low latency requirement is another pain point, mainly because 5G-enabled high speed apps won’t work properly with low throughput and high latency. Many of these new applications require an end-to-end latency below 10 milliseconds; unfortunately, typical public clouds are unable to fulfill such requirements.

To take advantage of lower costs via centralization, most hyperscale cloud providers have massive regional deployments, with only about six to 10 major facilities serving all of North America. If your cloud hub is 1,000 miles away, even the speed of light (fiber optics) does not transfer data fast enough to meet the low latency requirements of 5G apps such as connected cars. For example, the latency from New York to Amazon Web Services or Microsoft Azure in Northern Virginia is greater than 20 milliseconds.
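The physics behind that number can be sketched in a few lines of arithmetic; the fiber propagation factor below is a common rule-of-thumb assumption (light in fiber travels at roughly two-thirds of its vacuum speed):

```python
# Why a cloud hub 1,000 miles away can't meet sub-10 ms latency requirements.
C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3         # rule of thumb: light in fiber is ~2/3 of vacuum speed

distance_km = 1_000 * 1.609  # 1,000 miles converted to km
one_way_ms = distance_km / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000
round_trip_ms = 2 * one_way_ms

print(f"one-way:    {one_way_ms:.1f} ms")     # ~8 ms from propagation alone
print(f"round trip: {round_trip_ms:.1f} ms")  # ~16 ms before any routing or queuing
```

Add switching, routing, and queuing delays on top of the ~16 ms round-trip propagation floor and the observed 20+ ms from New York to Northern Virginia follows directly; a sub-10 ms budget is physically out of reach at that distance.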

Today, the VM-based network overlay has dumb, or unmanaged, switches; all the upper-layer functions such as firewalling, load balancing and the 5G UPF run on VMs in servers. Because these functions previously couldn’t run at Tbps speeds on switches, they were put on x86 servers. High-end servers cost about $25,000 and consume 1,000 watts of power each. Given x86 servers’ costs and power consumption, we could see that the numbers were far from what was needed to run a business-viable environment for 5G providers. Rather than have switching and routing run via VMs on x86, Kaloom’s vision was to run all networking functions at Tbps speeds, meeting the power- and cost-efficiency points needed for 5G deployments to generate positive revenues. This is why Kaloom decided to use containers, specifically open source Kubernetes, and the programmability of P4-enabled chips as the basis for our solutions.

Solving the latency problem required a paradigm shift to the distributed edge, which places compute, storage and networking resources closer to the end user. Because of these constraints, we were convinced the edge would become far more important. As seen in the figure below, there are many edges.

Many Edges

Kaloom’s Green Logo and Telco’s Central Office Constraints

While large regional cloud facilities were not built for the new distributed edge paradigm, service providers’ legacy Central Office (CO) architectures are even more ill-suited for the shift. Containing an odd mishmash of old and new equipment from each decade going back at least 50 years, these facilities are also typically near the limit of their space, power and cooling requirements.

There are about 25,000 legacy COs in the US and they need to be converted, or updated in some way, to sustain low latency networks that support 5G’s shiny new apps. Today, perhaps 10 percent of data is processed outside the CO and data center infrastructure. In a decade, this will grow to 25 percent which means telcos have their work cut out for them.

This major buildup of required new 5G-supporting edge infrastructure will have an extremely negative impact on the environment in terms of energy consumption. According to the Linux Foundation’s State of the Edge 2020 report, by 2028 edge IT infrastructure and data center facilities will consume 102,000 megawatts of power, and over $700 billion in cumulative CAPEX will be spent on them within the next decade. The power consumption of a single 5G base station alone is three times that of its 4G LTE predecessor; 5G needs three times the base stations for the same coverage as LTE due to higher frequencies; and 5G base stations cost four times the price of LTE, according to mobile executives and an analyst quoted in this article at Light Reading. Upgrading a CO’s power supply involves permits, closing and digging up streets, and running new high-capacity wiring to the building, all of which is very costly. Unless all of society’s power sources suddenly turned sustainable, our founding team could see we were going to have to create a technology that could dramatically and rapidly reduce the power needed to provide these new 5G services and apps to consumers and enterprises.
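Multiplying out the figures quoted above shows how quickly these factors compound; this is rough arithmetic on the article’s numbers, not a formal forecast:

```python
# Rough compounding of the quoted 5G figures, per unit of coverage area.
power_per_station_vs_lte = 3   # a 5G base station draws ~3x the power of 4G LTE
stations_needed_vs_lte = 3     # ~3x the stations for the same coverage (higher frequencies)
cost_per_station_vs_lte = 4    # ~4x the price per station

power_multiplier = power_per_station_vs_lte * stations_needed_vs_lte
capex_multiplier = cost_per_station_vs_lte * stations_needed_vs_lte

print(f"power per coverage area: ~{power_multiplier}x LTE")  # ~9x
print(f"CAPEX per coverage area: ~{capex_multiplier}x LTE")  # ~12x
```

Roughly an order of magnitude more power per coverage area is the kind of number that made dramatically lower power consumption a founding design goal, not an afterthought.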

Have you noticed that Kaloom’s logo is green?

This is the reason why. We have set out to solve all the issues facing telcos as they work to bring their infrastructure into the 21st century, including power.

P4 and Containers vs. ASICs and VM-Overlays

Where we previously worked, we were using a lot of switches that were not programmable. We were changing hardware every three to four years to keep pace with changes in the mobile, cloud and networking industries that constantly require more network, compute and storage resources. So, we were periodically throwing away and replacing very expensive hardware. We thought: why not bring the same flexibility by building a 100 percent programmable networking fabric? This, again, was driven by costs.

Before we left our jobs, we were part of a team running 4G Evolved Packet Gateway (EPG) using virtual machines (VMs) on Red Hat’s software platforms. Fortunately, we were asked to investigate the difference in the performance and other characteristics of containers versus VMs. That’s when we realized that, by using containers, the same physical servers could sustain several times more applications as compared to those running VM-based overlay networks.

This meant that the cost per user and cost per Gbps could be lowered dramatically. Knowing the economics of 5G would require this vastly reduced cost point, containers were the clear choice. The fact that they were more lightweight in terms of code meant that we could use less storage to house the networking apps and less compute to run them. They’re also more easily portable and quicker to instantiate. When a VM crashes it can take four or five minutes to resume, but you can restore a container in a few seconds.

Additionally, because VM-based overlays sit on top of the fabric, they have no direct visibility into or control of the network underlay below. If there is congestion or a bottleneck in the underlying hardware fabric, the overlay may not know when, where, or how bad the problem is. This matters because delivering Quality of Service (QoS) for service providers and their end users is a key consideration.

Kaloom’s Foundations

As a member of LF Edge, Kaloom has demonstrated innovative containerized Edge solutions for 5G along with IBM at the Open Networking and Edge Summit (ONES) 2020. Kaloom also presented further details of this optimized edge solution to the Akraino TSC as an invited presentation. Kaloom is participating in the Public Cloud Edge Interface Blueprint work as it is interesting and well aligned with our technology directions and customer requirements.

We remain committed to addressing the various pain points, including networking costs, the need to reduce 5G’s growing energy footprint so it has less negative impact on the environment, and the delivery of much-needed software control and programmability. Looking to solve all the networking challenges facing service providers’ 5G deployments, we definitely had our work cut out for us.

EdgeX Foundry China Day 2020

By Blog, EdgeX Foundry

Written by Ulia Sun, Project Specialist, Ecosystem & Communications of  VMware China R&D

EdgeX Foundry, a Stage 3 project under LF Edge, is a leading edge computing framework with an open, vendor-neutral architecture. Led by VMware and Intel, the EdgeX Foundry China Project launched in late 2019. The group quickly gained attention and began hosting meetups and online meetings. Within the year, its efforts helped EdgeX become the largest edge computing community in China.

With the mission of increasing collaboration in the edge computing community and expanding the impact of edge computing technologies, the EdgeX Foundry China Project hosted EdgeX Foundry China Day 2020 last year on December 22-23.

The event was a huge success and included: 

  • 9 Ecosystem Partners: Intel (英特尔), IBM, ThunderSoft (中科创达), JiangXing Intelligence (江行智能), IOTech, Linux Foundation, EMQ (杭州映云科技), Agree Technology (赞同科技), Celestone (北京天石易通信息技术有限公司)
  • 4 Live broadcast channels: VMware Official Live Streaming Platform (VMware大会官方直播间), Bilibili – EdgeX Foundry China (EdgeX中国社区B站官方直播间), CSDN Website Home Page Recommendation (开发者专区官网首页推荐直播间), OpenVINO Community WeChat Live (OpenVINO社区直播)
  • A broad reach: 10,000+ touch points across China
  • New collaborators: 249 new group members of EdgeX Foundry China Community
  • 4 interactive workshops led by LF Edge member companies
    • “AI/Open VINO”  by Intel
    • “ EdgeX Coding Camp”  by Thundersoft & Jiangxing Intelligence
    • “ How to use Rule Engine in EdgeX”  by EMQ
    • “Deep Dive into Open Horizon”  by IBM
  • 2 Sessions Related to VMware:
    • Alan Ren, General Manager of VMware China R&D, gave the opening speech, recognizing VMware’s contributions to the EdgeX Foundry China community
    • Gavin Lu, Technical Director of VMware China R&D, gave the keynote speech, “EdgeX Foundry China 2020 Yearly Review and 2021 Plan”

Partners & Keynote speakers

Live Streaming Platforms

Highlights

Watch the video here: https://live.csdn.net/room/edgexfoundry/Wp2vsTYG 

For more information, visit the EdgeX Foundry China Project Wiki: https://wiki.edgexfoundry.org/display/FA/China+Project.

LF‌ ‌Edge‌ ‌Ecosystem‌ ‌Grows,‌ ‌Adds‌ ‌New‌ ‌Members‌ ‌and‌ ‌New‌ ‌ Cross-Community‌ ‌Integrations

By Announcement

  • LF Edge welcomes Foxconn Industrial Internet (FII), HCL Technologies, OpenNebula, Robin.io, and  Shanghai Open Source Information Technology Association to robust roster of community members working to advance open source at the edge
  • FII sponsors Smart Agriculture SIG within the Open Horizon Project 
  • Home Edge’s Coconut release leverages EdgeX Foundry to enable more data storage options across different devices

SAN FRANCISCO – January 14, 2021 – LF Edge, an umbrella organization within the Linux Foundation that aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced continued ecosystem momentum with the addition of four new general members (FII, HCL, OpenNebula, and Robin.io) and one new Associate member (Shanghai Open Source Information Technology Association). Additionally, Home Edge has released its third platform update with new Data Storage and Multi-NAT Edge Device Communications (MNDEC) features.

“We are pleased to see continued growth among the LF Edge ecosystem,” said Arpit Joshipura, general manager, Networking, Edge and IoT, the Linux Foundation. “Open edge technology crosses verticals and is poised to become larger than the cloud native ecosystem. We are honored to be at the forefront of this budding industry and welcome all of our new members.”

Approaching its two-year mark as an umbrella project, LF Edge is currently comprised of nine projects – including Akraino, Baetyl, Fledge, EdgeX Foundry, Home Edge, Open Horizon, Project EVE, Secure Device Onboard (SDO) and State of the Edge – that support emerging edge applications across areas such as non-traditional video and connected things that require lower latency, and faster processing and mobility. LF Edge helps unify a fragmented edge market around a common, open vision for the future of the industry.

One of the project’s newest general members, FII, plans to sponsor a Smart Agriculture Special Interest Group (SIG) within the Open Horizon project and soon, an LF Edge Smart Agriculture End User Solution Group (EUSG).  Open Horizon SIGs are designed to identify use cases within an industry or vertical, develop end-to-end reference solutions, and promote the solutions, use cases, and the group itself.  LF Edge EUSGs gather user requirements through cross-project discussion with end users and generate demos, white papers, and messaging in order to prioritize project features and inform project roadmaps.

Home Edge Issues Coconut Release

Home Edge’s  third release, Coconut,  is now available. Coconut includes new features such as Multi-NAT communications (which enables discovery of devices in different NATs) and Data Storage.  In collaboration with EdgeX Foundry, a centralized device can be designated as a primary device to store the data from different devices. Home Edge expects its next release will be available in 2021 and will include real time data analytics features. More information on Home Edge Coconut is available here: https://wiki.lfedge.org/display/HOME/Release+Notes+for+Coconut .

Open Networking and Edge Summit Sessions, Project Webinars Available On-Demand

LF Edge sessions from the 2020 Open Networking and Edge Summit (ONES) virtual experience are available on-demand: https://bit.ly/2JPAyOd

LF Edge’s “On the Edge with LF Edge” webinar series, launched this year, has produced seven episodes to date, with overview discussions of each of the projects. Episodes are available for viewing on-demand here: https://www.lfedge.org/news-events/webinars/ 

More details on LF Edge, including how to join as a member, details on specific projects and other resources, are available here: www.lfedge.org.

Support From New LF Edge Member

“HCL Technologies is excited to join the Linux Foundation Edge community. We look forward to collaborating with industry leaders to help create and promote Edge Computing frameworks for wider adoption, leveraging our expertise and investments in IoT, 5G and Cloud technologies”, said GH Rao, President, Engineering R&D Services, HCL Technologies.

“We are delighted to join the LF Edge community as part of our ONEedge initiative”, said Constantino Vazquez, Chief Operations Officer, OpenNebula. “OpenNebula, which has traditionally been used for Private Clouds, has now become a True Hybrid Cloud platform with a number of powerful features for making deployments at the edge much easier for organizations that want to rely on open source technologies and retain the freedom of being able to use on-demand the providers, locations and resources that they really need.”

“Robin.io is excited to join the Linux Foundation Edge,” said Partha Seetala, CEO and founder of Robin.io. “Open edge technologies is a large and exciting opportunity  for Robin technology and cloud-native ecosystem; we are honored to be at the forefront of the enabling and promoting of Kubernetes-based Edge Computing frameworks for wider adoption.”

“The Shanghai Open Source Information Technology Association has joined LF Edge to implement open source innovation in China, promote the establishment of an open source ideological and cultural system that is compatible with the innovative application of the Digital Economy, and promote the design of laws, regulations and institutional frameworks that are conducive to the ecological development of open source.”

About The Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Open Horizon Mentorships with LFX

By Blog, Linux Foundation News, Open Horizon

Joe Pearson, Open Horizon Chair of the Technical Steering Committee and Edge Computing and Technology Strategist at IBM

Late this summer, a representative from the Linux Foundation’s LFX Mentorship program attended the LF Edge Technical Advisory Committee (TAC) bi-weekly meeting and gave a presentation about their mentorship program.  The program potentially gives open-source projects a turn-key mentorship program with minimal work needed to get it started, which sounded great to me!  At the end of the presentation, they offered to speak to each project’s leadership teams if they would like to learn more.

The Open Horizon project eagerly accepted the offer, since we had plenty of opportunities for interesting work and few volunteers. We were told that once we began accepting applications, we would be flooded with more than enough well-qualified mentee candidates.

Easy onboarding experience

The project sign-up process was ridiculously easy. We filled out the forms and specified that we wanted to see each candidate’s CV/resume, a link to their GitHub repo, and a cover letter explaining in a single paragraph why they wanted to be a mentee of our project.

Incredible applicant pool

The applications began to come in: at first two (pretty easy to handle), then two more (great!), then four more (OK, this is quite a few). Within two weeks, we had over 15! By the end of our open application period, we had over 30, which was more than enough to handle. In fact, we ended up identifying both a primary and a secondary applicant for each open slot in our initial fall term.

Choosing our mentors

To select our project’s mentors, we looked for maintainers in our project with a natural teaching ability, approachability, maturity, and availability. We also assembled a pool of potential items for our mentees to work on. The four mentors selected were:

  • David Booz (Dave) – Chief Architect and founder of the project, Chair of the Agent Working Group, and a member of the team from the start almost six years ago. Dave understands the project code from both the macro level as well as the specific details about how the client software — the Agent — functions.
  • Liang Wang (Los) – Technology Advocate and Chair of the China Manufacturing Special Interest Group (SIG). Los is building up the SIG, overseeing translation of the project materials, and creating a community around manufacturing and industry 4.0 use cases.
  • Troy Fine – Software Engineer and Chair of the Developer Examples Working Group. Troy oversees creation and maintenance of all code and services that demonstrate all project solution functionality from the simple to the complex.
  • Joseph Pearson (Joe) – Project Chair and Chair of the Documentation Working Group. Joe ensures that people looking to use Open Horizon, develop it, extend or port it, and to build solutions with it have the information they need.

Choosing our mentees

We created a set of filtering criteria to shorten the list down to candidates who were not only qualified, but also exhibited a natural curiosity about our project, and were eager to get started. We also approximately matched the geographic location of the candidates as well as their linguistic abilities.

The mentors then interviewed those candidates on the short list over Slack, email, and Zoom or WebEx. It was not an easy task, and we wish that we could have accepted double our limit of four mentees.

For the work the mentees would be completing, the mentors identified tasks that fit the candidate’s natural abilities and strengths and yet would stretch them a bit. We also aimed for tasks that could be completed within the timeframe of the fall term.

Anukriti Jain

Dave selected Anukriti Jain as the mentee for his Working Group. Anukriti is a Computer Science Engineering student in the last year of her undergraduate program. She is based in India.

Han Gao

Los chose Han Gao as mentee. Han Gao is pursuing his doctorate in the Netherlands.  His goals are:

*  Become an effective contributor to the Open Horizon project with a deep understanding at the code level around its architecture and key policy-based management flow.

*  Contribute by translating Open Horizon technical documentation into Chinese, as well as creating new hands-on guidance for getting started with an end-to-end example.

*  Broaden his view and build insight across other LF Edge projects.

Clement Ng

Troy went with Clement Ng for the Examples Working Group. Clement, like Troy, is based on the US west coast.

Edidiong Etuk

Joe tapped Edidiong Etuk to assist with the Documentation Working Group. Eddie is from Nigeria and is pursuing a degree in Computer Science.  He has natural leadership qualities and is an eager self-starter with a gift for intuition.

Lessons learned from the mentorship program

Now that we’re about 2/3 of the way through the fall term, we have gained some perspective through the benefit of hindsight.  It turns out that four mentees is just about the right number for our small project to handle.

The tasks have been adjusted to ensure that each mentee can complete the work in the allotted time without too much stress.  And several of them have already gained such a sense of belonging and ownership that they plan to continue working with the project after their mentorship term is complete.  We highly recommend that other projects take advantage of this opportunity.

For more details about Open Horizon, click here. Stay tuned here for updates about the next mentorship.

EdgeX Foundry and Forbes

By Blog, EdgeX Foundry

Written by Andrew Foster, EdgeX Foundry Contributor and Product Director and IOTech Co-Founder

I read a very interesting article recently in Forbes on the future of industrial IoT gateways. The thrust of the piece is that IoT gateways play a key role as the main integration point between the OT and IT worlds in an industrial IoT system. They provide important functions such as protocol translation for the myriad industrial protocols that users often have to deal with, enable edge processing for latency-sensitive applications, and provide a secure firewall between the open internet and possibly vulnerable devices/sensors.

However, unlike more general IT environments, the problem with IoT gateways is that they typically require a lot of customization and bespoke software configuration to support each specific use case, and this is slowing down IoT adoption.

The article identifies the solution as service-oriented gateway frameworks in the form of plug-and-play microservices that are not tied to any specific hardware or operating system. The key benefit is that these gateways can be delivered with very little customization: the platform can be configured using IT-native techniques (e.g. Docker, Kubernetes), and new microservices (e.g. to support a new device protocol) can be written in cloud-friendly languages instead of relying heavily on specialized embedded development skills. It also facilitates an ecosystem of reusable components, many of which are open source, while encouraging a thriving commercial marketplace for new microservices.
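To make the plug-and-play microservices idea concrete, a gateway built this way is typically assembled as a composition of containers, where each function (data collection, protocol translation, edge analytics) is an independently swappable service. The docker-compose sketch below is purely illustrative; the image names, service names, and ports are hypothetical, not the actual EdgeX Foundry configuration:

```yaml
# Purely illustrative sketch of a service-oriented gateway as plug-and-play
# microservices; image names, service names, and ports are hypothetical.
version: "3.7"
services:
  core-data:                   # collects and stores sensor readings
    image: example/gateway-core-data:latest
    ports:
      - "48080:48080"
  device-modbus:               # protocol translation: Modbus -> internal events
    image: example/gateway-device-modbus:latest
    depends_on:
      - core-data
  rules-engine:                # edge processing for latency-sensitive logic
    image: example/gateway-rules-engine:latest
    depends_on:
      - core-data
```

Supporting a new device protocol then amounts to adding or replacing one service entry, rather than rebuilding bespoke gateway firmware.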

In particular, it was great to see that alongside ARM’s Pelion open source initiative, the Linux Foundation’s EdgeX Foundry is highlighted. EdgeX is an open-source, vendor-neutral project hosted by the Linux Foundation under the LF Edge umbrella, focused on the development of a service-oriented edge software framework that is being adopted by a number of the major gateway vendors. The project was established in 2017, and six releases and more than seven million downloads later, adoption is still growing fast.

In fact, EdgeX is already incorporated into a number of IoT gateway products from Accenture, HP, Jiangxing, ThunderSoft, TIBCO and Home Edge (another LF Edge project). Dell also provides IoT solutions with EdgeX for Dell gateway hardware. My company IOTech develops a commercial implementation of EdgeX called Edge Xpert that is used on some of the gateways mentioned, including the Dell 3000 and 5000 series ruggedized Industrial IoT gateways.

Check out the article here.  For additional information on EdgeX, visit the new EdgeX Foundry website.

Scaling Ecosystems Through an Open Edge (Part Two)

By Blog, Industry Article, Project EVE, Trend

Why an Open, Trusted Edge is Key to Realizing the True Potential of Digital Transformation

By Jason Shepherd, LF Edge Governing Board member and VP of Ecosystem at ZEDEDA

This content originally ran on the ZEDEDA Medium Blog – click here for more content like this.

In part one of this series, I walked through various approaches to ecosystems and highlighted how business value tends to find a natural equilibrium across stakeholders. In this second installment I’ll walk through the importance of an open edge for scaling ecosystems and realizing the true potential of digital transformation, as well as provide some tips on building ecosystems that I’ve picked up over the past years.

Enabling increasingly connected Intranets of Things

Imagine if the overall internet was built as a closed ecosystem, controlled by a small set of organizations, much less one. Of course, there are browsing restrictions placed at a company level and in some countries, but the internet simply wouldn’t have made the same massive impact on society without fundamental openness and interoperability.

All data is created at the edge, whether it be from user-centric or IoT devices. A few years ago, the number of devices on the internet surpassed the global population, and the growth of IoT devices is expected to continue at an exponential rate. This will unlock entirely new paths to value.

However, as it turns out, the term “Internet of Things” is actually a bit of a misnomer. It’s really about a series of increasingly connected Intranets of Things. Starting with simple examples like the story of Fred and Phil from part one of this series, ecosystems will get increasingly larger and more interconnected as the value to do so exceeds the complexity and risk. So how do we carry this out at scale?

The importance of an open edge

Previously, I’ve outlined the challenges that arise when deploying applications closer to devices in the physical world, outside the confines of a traditional data center. Edge and IoT solutions inherently require a diverse collection of ingredients and expertise to deploy, and over the past five years the emerging market has attempted to address this fragmentation with a dizzying landscape of proprietary platforms — each with wildly different methods for data collection, security and management.

That said, having hundreds of closed, siloed platforms and associated ecosystems is definitely not a path to scale. As with the internet itself, it is important to have an open, consistent infrastructure foundation for IoT and edge computing, and from there companies can decide how open or closed they want to build their business ecosystems on top.

The diversity of the IoT edge makes it impractical to develop one catch-all standard to bind everything together. Open source software frameworks are an excellent way to bridge together various ingredients, unifying rather than reinventing standards, accommodating legacy installations, and enabling developers to focus on value creation. LF Edge is an example of a collaborative effort to build an open edge computing foundation in partnership with other open source and standards-oriented initiatives. The key to this being the base for a truly open edge ecosystem is the vendor-neutral governance offered by the Linux Foundation.

In a recent blog post I highlighted that the winners in the end will be the ones creating differentiated value through their domain knowledge, building necessarily unique software and hardware and offering great services — not those that are reinventing the middle over and over again. AI models for common tasks such as identifying a person’s demographic, detecting a license plate number or determining whether an object is a water bottle or a weapon will be commonplace; meanwhile, there will always be room to differentiate with AI when specific industry context is in play.

Trust is essential

I’ve written in the past about the ultimate goal being selling or sharing data, resources and services across total strangers, all while maintaining privacy on your terms. This is the ultimate scale factor, but we need to turn to technology for help, because it simply isn’t feasible to take people out to dinner fast enough, one by one, to build the necessary trust relationships.

Consumers are often comfortable pledging allegiance to specific brands and giving up a little privacy as long as they trust the provider and get value. However, in order to build more complex ecosystem relationships that span private and public boundaries at scale, we not only need open interoperability but also need to ensure that no single entity owns the trust.

As such, we also need to collaborate on a technology foundation that automates the establishment of trust as data flows across heterogeneous systems. We’re seeing the industry increasingly step up here, from distributed ledger efforts like IOTA and Hyperledger that provide smart contracts, to the Trust over IP (ToIP) Foundation, to the emerging Project Alvarium, which aims to build out the concept of data confidence fabrics by layering trust insertion technologies with a system-based approach. Both the ToIP Foundation and Alvarium are focused on facilitating trust in machine-generated data and across human relationships.

While these efforts don’t replace the need to build business relationships (and take people out to dinner!), they will provide necessary acceleration. Moreover, beyond helping us scale complex ecosystems these tools can also aid in combating the increasing issue with deepfakes and ensuring ethical AI solutions, in addition to accelerating workload consolidation at the edge and dealing with regulatory requirements like GDPR… but these are blog topics for another time!

Tips on building an ecosystem (hint, hint: domain knowledge rules)

When I started building the IoT ecosystem with the team at Dell back in 2015, it was clear that the market was going to go vertical before going horizontal. This is a typical pattern in any new market: proprietary offers often get the initial traction, while solutions based on an open foundation always win in the end when it comes to sheer scale (recall the Apple vs. Android discussion from part one…).

So, our ecosystem strategy starting in 2015 was to go super broad before going deep, partnering widely and enabling the “cream to rise to the top”. I joked with the team back then that we were going to do “Tinder” and then “D-Harmony”. Sure enough, the partners that had the most initial traction were those that were laser-focused on one use case, meanwhile the horizontal “peanut-butter platforms” were stalling. Case in point, if you do a little bit of everything you rarely do one thing really well.

But it was really these vertically focused platforms’ domain knowledge that resonated with a specific customer need and desired outcome. They built their technology platforms to get to data, but their real value was that domain knowledge. Over time, as we get to a more consistent, open foundation for edge and IoT solutions, these providers will either need to pivot to being more like system integrators, or build differentiated software and/or hardware. So, in 2017 we purposely switched from going broad to being more deliberate in partnerships, focusing more on partners that had realized the importance of separating domain knowledge from the underlying technology foundation, and helping with matchmaking across the partner landscape. Enter “D-Harmony”!

In closing

Deploying IoT and edge computing solutions takes a village, and it’s important to establish an entourage of partners that have a “go-to dance move” rather than working with those that are trying to do too much and as a result not doing anything particularly well. Five years have passed, and a lot of providers have felt the pain of trying to own everything and have since realized the importance of having focus and establishing meaningful partnerships. So, now I joke with the team at ZEDEDA, as we build up our ecosystem of market-leading pure-play solutions and domain experts, that we’re going straight to “Z-Harmony”!

Thanks for reading. In the third and final part of this series, I’ll provide more insights on how we get to “advanced class” and more on what we’re doing at ZEDEDA to help build the necessarily open foundation to facilitate ecosystem scale. In the meantime, feel free to drop me a line with any comments or questions.