Working towards moving the industry forward together


By Alex Reznik, Chair of MEC ISG, HPE Distinguished Technologist and LF Edge member

This content originally ran on the ETSI blog.

Quite some time has passed since my last blog entry, and while I thought about a new blog a number of times, a good topic – i.e. one which is appropriate for discussion in a short, informal and public format – just did not seem to present itself. That’s not for lack of interest or activity in MEC. 2019 is shaping up to be a critical year in which many operators are now public about their plans for edge computing, initial deployments are appearing and, as expected, holes in what the industry has been working on are beginning to be found (witness the much-publicized and excellent Telefonica presentation at last month’s Edge Compute Congress). It’s just that it’s hard to blog about on-going work, even when it is very active, much less about the internal efforts of various players in MEC. After all, what would that look like? “This is hard and we are working hard on it…”

Nevertheless, the time has come. Those of you who follow my random MEC thoughts on a semi-regular basis might recall the subject of that last post ages ago (I mean in February): the need for both a vibrant Open Source community and Standards development in a space like MEC; and how the two are complementary in that the focus is, by definition, on complementary problems. And, if you don’t follow me religiously, here is the link: https://www.etsi.org/newsroom/blogs/entry/do-we-still-need-standards-in-the-age-of-open-source, grab a coffee, tea, a…. whatever … and read up!

In a significant way, that post was in part a response to comments and questions like: “There is a lot of confusion in the market, what’s up with ETSI and Akraino and EdgeXFoundry. You guys all seem to compete.” To that, the post provides a rather direct answer of “no we do not – we address different needs.” However, these related questions often followed: “OK, but how well DO YOU actually work together?” To date, one failing of the various players in the MEC space has been the lack of a more convincing answer than “well, we all talk and we are all aware of each other.”

That’s now changed in a very significant way. On the heels of a formal cooperation agreement between ETSI and LF Edge (see, e.g. https://www.linuxfoundation.org/press-release/2019/04/etsi-and-the-linux-foundation-sign-memorandum-of-understanding-enabling-industry-standards-and-open-source-collaboration/), the MEC ISG within ETSI and LF Edge’s Akraino project have been working towards moving the industry forward together. The first fruit of this labor is about to ripen – an Akraino Mini-Hackathon, endorsed by ETSI, to be held in San Diego the day before KubeCon.

This event was designed to highlight the work Akraino is doing in putting forward solutions which take advantage of ETSI’s standards, and to give developers hands-on experience in developing for MEC. The most notable thing about this Hackathon is the model of cooperation – Akraino (an open source community) provides implementations to the industry, while the use of ETSI MEC (a standards organization) standards ensures interoperability across other standards-compliant implementations.

So… that’s it then. We have arrived at a working, standardized solution for MEC, right??? Well, no. If you are looking for that, the mini-hackathon will sorely disappoint you. However, it is a step – an important first step in a growing cooperation between two organizations which should, over time, deliver those operational, standardized components for MEC. The work ahead of ETSI MEC and Akraino is significant, and much of it will lack opportunities for fanfare – as work in our industry often does. Still, there will be more coming from our two organizations, so stay tuned…

EdgeX Foundry F2F: Sun, Community and Code


By Jim White, Co-Chair of the EdgeX Foundry Technical Steering Committee and CTO of IOTech Systems

Earlier this month, the EdgeX Foundry Technical Steering Committee (TSC) met in Phoenix, Arizona to finalize our Fuji release (due out at the end of this week – November 15th) and scope the effort of our next release – code named Geneva – that we expect to release in the spring of 2020.  While officially this was a TSC meeting, we are an open project and always invite our developer community to participate in the discussions and planning effort whenever possible.

Hosted by Intel in downtown Phoenix, this season’s meeting was attended by more than 60 people in person and another dozen online – our largest meeting yet.  Intel did a fabulous job of providing a great meeting space and allowing the community to enjoy the surrounds of the Phoenix area.  If they were responsible for the picture-perfect weather, they couldn’t have made it any nicer!  80 degrees and sunny each day.

Fuji is the community’s fifth release of EdgeX and a minor dot release (version 1.1) to our 1.0 Edinburgh release launched this past spring.  We have developed a well-established cadence that is two years in the making.  We have two releases a year (spring and fall), with the planning of the next release right on the heels of the completed release.

If you are wondering what is in Fuji that is about to be released, the highlights of this release include:

  • New and improved security services – fully integrated with existing micro services (API Gateway, secure storage)
  • Application services and application functions SDK as full replacements for older export services (we expect to deprecate the export services with the next release)
  • System management improvements to include ability to set configuration
  • Improved testing and quality assurance procedures and tools
  • Addition of many more device services (these will actually be released independently in mid-December with the device service SDK release)

The fact that Fuji is a dot release on top of our 1.0 release is important.  As the project matures and stabilizes, more companies are building products on top of EdgeX, and the community wanted to provide them with affirmation that EdgeX is fit for purpose without changing everything each release – while still adding new features in a way that supports backward compatibility.  Companies like IOTech, Beechwoods, RSA and others (see https://www.edgexfoundry.org/edgex-in-market/) continue to leverage EdgeX and need that stability in order to avoid lots of thrashing every six months.  ObjectBox is releasing their version of the core EdgeX services (replacements for the EdgeX reference implementation of these services) that incorporates their embedded database with Fuji this month, to coincide with the EdgeX release.  These are prime examples of the companies that now rely on EdgeX Foundry.

Looking ahead – what’s in the Geneva release?  Based on our planning this past week, Geneva will be version 2.0 – a major release.  It will include many new features and some changes.  Specifically, the Geneva release will feature:

  • Automatic and dynamic device/sensor provisioning and on-boarding – allowing for more zero touch deployments
  • Interoperability testing – allowing the community and users to get a better appreciation that all the micro services are working together properly
  • Replacing the reference implementation rules engine service – the last of the legacy EdgeX Java micro services (allowing the footprint of the overall system to be significantly reduced)
  • Use of Redis as the default reference implementation database (replacing MongoDB)
  • Improved security services (more secret store usage, per service token use, token revocation/expiration/rotation)
  • User guidance on platform needs (more performance statistics, number of devices allowed per service/EdgeX instance recommendations, platform and hardware recommendations per edge use case)
  • Exploration of an alternate message support (offering an alternative to Zero MQ)
  • Archive of Export Services (the new Application Services and App Functions SDK taking their place)
  • Use of Jenkins Pipelines to improve and enhance our CI/CD processes
  • A refactor of the EdgeX APIs to provide more flexible and more nimble request and response bodies in the message exchange, which should improve testing and improve some performance aspects
  • A number of system and sub-system designs are also forecast to be completed with this release. These designs are pre-cursor work toward the Hanoi (fall 2020) and beyond releases

It’s starting to look like winter outside in the northern hemisphere.  The holiday season is right around the corner.  EdgeX is heating up.  It’s a good time to stay inside, curl up next to EdgeX, try out the latest features and explore the future roadmap.  Come join the fastest-growing open source edge platform project.

LF Edge Member Spotlight: ZEDEDA


The LF Edge community is comprised of a diverse set of member companies that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sat down with Aaron Williams, Developer Community Lead, and Erik Nordmark, Chief Architect and Co-Founder, at ZEDEDA to discuss the importance of a growing ecosystem, their IoT framework, the impact LF Edge has made and what the future holds for the company.

Can you tell us a little about your organization?

ZEDEDA delivers visibility, control, and security to enterprise IoT and edge deployments through edge virtualization. Ours is the only cloud service for edge management built on the open source Edge Virtualization Engine (EVE). By bringing virtualization to the edge, we allow businesses to deploy and manage any application on any hardware and connect to any cloud, breaking down IT silos and simplifying IoT strategies. Customers can easily drop-ship gateways at distributed sites without needing on-site IT experts, and can launch greenfield and brownfield applications at scale with a single click of a button.

With ZEDEDA, organizations easily eliminate the complexity of today’s IoT solutions at the edge and gain deeper insights into their operations by more effectively leveraging sensor data, including through AI-powered analysis in the cloud.

Why is your organization adopting an open source approach?

Today at the edge, there is a heterogeneous mix of hardware and applications, which makes it difficult to coordinate an IoT strategy and make the most out of all the available data. As a result, many enterprises can become mired in vendor lock-in. Embracing open standards gives the whole community a common foundation to work from, increasing interoperability, lowering the barriers to entry in this space, and promoting innovation. 

ZEDEDA adopted open source right from the beginning because we saw the value in providing a shared standard for edge virtualization technology. We think of it as being similar to what Android did for mobile phones, in terms of creating a single template for developers to follow that then ensures operability across a variety of hardware. Additionally, by making EVE open to community contributions, we’re committing to building the best possible solution with experts around the world.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

The reason why we joined LF Edge is simple: we believe that the fastest route to innovation and success in edge computing is by working together with other companies to create universal standards that we can all build off of. It’s been a great opportunity to come together with like-minded organizations, contribute our expertise, and work collaboratively to build the best ecosystem possible for the future of edge computing. By hosting several key open source projects and making them available to the community, LF Edge is making it simpler for the industry at large to adopt IoT strategies as part of their IT portfolios. We believe that the rising tide will lift all boats, so to speak.

What do you see as the top benefits of being part of the LF Edge community?

There are many benefits of being part of this community. For one thing, it allows us to be at the table with other edge companies (both large and small) so that we can help shape the future of the edge in a way that benefits everyone. It also provides a learning opportunity when we all come together to better understand the different parts of the edge stack. Additionally, building our solution on top of code (EVE) that is open sourced through the Linux Foundation helps give our customers confidence that we’re working to the highest technical standards. Truly, we feel that we receive much more than we give as active participants in LF Edge.

What business/industry problems are you collaboratively working to solve?

Current solutions for edge deployments often leave several challenges unaddressed, and these are all problems that we help to solve with EVE and the ZEDEDA controller. For instance, typical edge management software has little-to-no interoperability, meaning that customers are locked into using a limited number of compatible apps, hardware, or clouds. By contrast, one of the main benefits of building our solution on top of the open-sourced EVE is that it gives all vendors a common foundation to work from: as long as their apps and APIs are compatible with EVE, they can run on any EVE-approved hardware. In the same vein, modern hardware and firmware aren’t generally suited to running legacy applications; however, many businesses still rely on legacy apps as a key part of their technology stack. By making use of virtual machines (VMs), edge virtualization allows legacy and modern apps to co-exist seamlessly on the same device.

Security is also a critical part of edge deployments, with traditional solutions leaving businesses exposed to many vulnerabilities. By managing their edge deployments with EVE and the ZEDEDA controller, companies can mitigate many of these vulnerabilities: EVE leverages the hardware root of trust to ensure that the device and the data traveling to and from it are secure, and the controller makes it easy to keep firmware and applications up to date, with the latest software patches rolled out with a single push of a button.

 

What sort of contributions has your team made to the community, ecosystem through LF Edge participation?

ZEDEDA has been a major contributor to the Project EVE code base. In addition, we have worked hard to encourage our hardware and software partners to contribute their expertise to build out the hardware devices that EVE runs on.

How will LF Edge help your business?

Building on top of LF Edge’s Project EVE allows us to concentrate on what separates us from our competitors, secure in the knowledge that the foundation of our technology is solid. It also gives our customers confidence in our solution because it is built on code that meets the Linux Foundation’s high standard of technical excellence.

Can you give us an example of your LF Edge project in production and what problem it is solving?

A good example of EVE in production can be found on wind turbine farms. The operators of these farms face many challenges, including that the farms are remote, complex, expensive to maintain, and very large. There is limited IT staff on site, and a truck roll to do unscheduled maintenance can cost over $100,000. At the same time, downtime can cost $1,000 to $2,000 per day, which means that it is very important for the operators to have as much uptime as possible while avoiding unplanned maintenance.

EVE works in conjunction with the ZEDEDA cloud-based controller to allow the operators to overcome these challenges. With EVE shipped on the device, the operator can take advantage of zero-touch provisioning and a single pane of glass through which to manage all devices. Since EVE is open sourced and works across a variety of hardware devices, the operator has the freedom to update the hardware for new installs without making the previous installs obsolete. And with 100% visibility and remote control of the devices, they are able to update their applications on the edge from anywhere at any time.

Project EVE Wind Turbine Demo Video by ZEDEDA

What advice would you give to someone considering joining LF Edge?

To borrow from Nike, “just do it.” Being a member of LF Edge allows you to be part of the conversation that is shaping the IoT revolution.  The Edge is too big and too complex an industry for any one company to dominate, so the only way to create common standards and functionality is by working together. If you are not part of LF Edge, you will be continually following the cutting “edge” of Edge development!

 

Edge Computing at IoT Solutions World Congress 2019


Every year one of the world’s largest Internet of Things trade shows, IoT Solutions World Congress, is held in Barcelona, Spain. It brings together device manufacturers, service providers, AI & ML companies and solutions integrators from around the world to share information about their products and the state of IoT ecosystems. The event fills multiple convention halls at the Fira Barcelona center and features the biggest names in IoT and technology; you can spend days walking the expo hall and talking to vendors.

Crowd at the LF Edge Booth

This wasn’t the first time EdgeX Foundry had a booth at IOTSWC, but this year it was joined by other LF Edge projects, specifically Home Edge and Project EVE, to present solutions across the edge landscape. Our booth was staffed by project contributors from all over the world, from the US and Europe to India and Taiwan, and featured real-world examples of the open source technology that is being developed under the LF Edge umbrella.  Not only did our members get a chance to learn about each other’s projects during this time, they were also able to explain those other projects to visitors to our booth. It was truly a community coming together to support and promote LF Edge as a whole.

EdgeX Smart Building Demo; EVE deployment on a wind turbine

We spoke with thousands of people over the three days of the conference, and gave countless demonstrations. One notable change in conversations from a year ago is that most attendees we spoke to this year already knew and understood the importance of edge computing, and were looking for specific solutions to the problems that they are now facing. And while many vendors at the show offered some of these solutions, only the LF Edge projects offered open, vendor-agnostic platforms that prevent lock-in and promote an ecosystem of third-party development around a commonly developed core.

Selfie of the LF Edge booth staff

If you missed us at IOTSWC, you can join our projects online, where we have a public Slack and mailing lists and host our meetings in the open. You can also look for us at events in 2020!

The Inaugural EdgeX Open Hackathon


Written by Jason Shepherd, LF Edge Board Member

On October 7th and 8th, I had the pleasure of attending the inaugural EdgeX Open Hackathon in Chicago. EdgeX Foundry is focused on facilitating interoperability between devices and applications, regardless of underlying hardware, OS, and connectivity protocol. The project is ultimately about creating a de-facto standard open API that binds together a preferred mix of open source and commercial value-add.

Once EdgeX hit 1.0 status in June with the Edinburgh release, the community started planning the EdgeX Open Hackathon series for leveraging the framework in customer-valued use cases.

The inaugural EdgeX Open at the Tech Nexus coworking space in Chicago, hosted by LF Edge, EdgeX Foundry and RILA

Six teams participated in this inaugural event, representing HAVI, Intel/Intuiface, Johnson Controls, UST Global and Volteo in addition to a small independent team of registered individuals. Pre-work was allowed, but a “secret ingredient” (Iron Chef-style) was introduced the day of the event to level the playing field a bit. In this case, it was a camera using Intel’s OpenVINO computer vision and image recognition model, with its object recognition output exposed through an API. Teams were able to earn extra points by integrating this technology into their solution stack.

Read on for how the hackathon went down!

Day 1:

The first day began with pitches from top sponsoring companies Dell, HP and Intel on what EdgeX Foundry means to the IoT market. Nicholas Ahrens of the Retail Industry Leaders Association (RILA) talked about the importance of innovation in retail and then Brad Corrion of Intel walked the teams through the objectives and introduced the secret ingredient.

Brad Corrion walking through the event rules

During my pitch, I had the pleasure of announcing that the EdgeX Foundry project recently hit a million total microservice downloads since the April 2017 launch. To get a sense of the accelerating momentum, half of these downloads occurred in the past two months!

With intros completed, the developers got to work. Most of the teams brought a plethora of devices to work with, including cameras, sensors, scanners and a thermal printer, and many had done some pre-work. Numerous developer attendees had never used EdgeX before, and all were able to get the framework running quickly. Once the set-up was completed, they started on their integration of various devices and application stacks to build out their use cases.

Teams diligently working on their solutions

The participants had their choice of one of three use cases, or an open category:

  1. Advanced loss prevention – leveraging EdgeX to correlate computer vision events with telemetry from sensors such as RFID and transaction log data from Point-of-Sale systems for improved loss prevention/theft detection
  2. Dynamic personalized retail experience – leveraging computer vision and sensor data to drive an improved in-store customer experience based on individual shopper preference
  3. Inventory management – using data from sensors and scanners (hand-held and/or drone-based) to improve inventory accuracy
  4. Open category – any retail-centric use case deemed valuable by end users

While it appeared that most of the users picked the open category, the reality is that they did variants of the three main use cases. The teams wasted no time getting to work (actually many were working during the intro pitches…) and the room was bustling all day with activity.

From top (counter clockwise): Team Havi, Team Johnson Controls, Team Skanna

The UST Global team came up with an innovative solution for highly perishable food using Augmented Reality (AR) and RFID to enable grocery store associates to point a smartphone at fresh meat and produce displays to see products that are nearing or beyond the expiration date, i.e. “How fresh is the meat?” This would benefit the grocery store in terms of streamlining how they find, sort and discount expiring food, and benefit the consumer by letting them choose either the freshest products or dynamic price markdowns on food nearing the expiration date. They came prepared, complete with props including fake chicken nuggets and lamb chops, making me hungry every time I walked by their table.

The UST Global team working on their solution for “How Fresh is the Meat?”

Day one wound down with a whiskey tasting which reminded me of the face-to-face TSC meeting in Edinburgh a year prior where we also ended the first day with a whiskey tasting. I sense a theme here!  Whiskey sommelier Anthony Dina selected three boutique whiskeys from the “edges of the earth” – one from a small batch distiller in the US, one from Scotland and one from New Zealand.

Whiskey from the edges of the earth

We then had some tasty Chicago deep dish pizza, followed by the “EdgeX Open Mic” – a community music jam.  Tony Espy (Canonical) and I worked up a variety of singalongs and managed to get a handful of folks belting out Neil Diamond’s Sweet Caroline and John Denver’s Country Roads, plus he and I did some pretty decent versions of Radiohead’s High and Dry and Jeff Buckley’s “Hallelujah”… especially considering this was only the second time we’ve played together (the first being an open mic night in Edinburgh during the Fall 2018 EdgeX F2F TSC).

Tony and I belting out the jams

Several folks came up to us afterwards and talked about how they played instruments spanning guitar to saxophone. At the next EdgeX Open, we clearly need to prepare more in advance and get a full band together, or perhaps we should just serve a little more whiskey beforehand! Or, both?

Day 2:

Day two started early with the teams continuing to build out their solutions unified by the EdgeX framework. Around 11am the judges were introduced – Eran Harel from AppCard, Nicholas Ahrens from RILA, Juan Santos from Tavistock Group, Mark Stutzman from Area 15, and Scott Gregory from HP. The judges subsequently spent time with each team to learn about their solutions before deliberating.

Hackathon Judges

Each team was then able to formally pitch to the judges and all participants in a final presentation that walked through their solution, what ingredients they used and how EdgeX was used for integration. The judges deliberated a bit more and then we had the closing ceremony, during which the winners were announced… and… drum roll… here they are!

The Intel/Intuiface team meeting with the judges

1st Place – Team Volteo: An inventory management and loss prevention solution that used RFID sensors and networked cameras to maintain accurate inventory and record door exit events, combining POS transaction data with merchandise movement tracked by the RFID sensors and cameras.

Team Volteo

2nd Place – Team Intel/Intuiface: A predictive self-service checkout assistance solution that used RFID and computer vision to trigger an alert when a customer failed to scan items within a reasonable time and/or based on OpenVINO emotion detection during customer interaction with the kiosk.

Team Intel/Intuiface

3rd Place – Team UST Global: An AR and RFID-based solution to track ultra-perishable goods in grocery stores and allow for dynamic pricing, e.g. how fresh is the meat? They developed an AR mobile app to scan food aisles and provide details on freshness, current price, etc.

Team UST Global

And this wasn’t just for bragging rights – 1st, 2nd and 3rd place winners received $5000, $2500, and $1000 in cash respectively!  As co-founders of the EdgeX Foundry project, Jim White and I had the pleasure of handing out big checks during the award ceremony, summarily checking off a to-do on my bucket list!

Congrats to Team Volteo for winning the Grand Prize!

Thanks to our sponsors!

It has been an honor for me to participate in the EdgeX community over the years and this event was no exception. Big thanks to the LF Edge and the Retail Industry Leaders Association (RILA) for hosting the event, and to sponsors Canonical, Deep Vision, Dell Technologies, HP, Intel, IoTium and ZEDEDA who also provided their commercial offerings for the developers to use in their solutions as well as tech support on-site. Also, a shout-out to the planning committee which included volunteers from Dell, HP, IBM, Intel and the Linux Foundation.

EdgeX Foundry community leaders helped answer questions and more during the hackathon

Looking forward

The concept behind the EdgeX Open Hackathon is for it to be a global franchise series. Other markets in discussion for future events include manufacturing, smart cities, oil and gas, utilities/energy, agriculture, healthcare and beyond.

Stay tuned for more detail on the next installment, for which we also plan to include more about the overall LF Edge value prop by pulling from other enabling projects within the umbrella, combined with sponsor content. We’re thinking manufacturing in the Spring 2020 timeframe, likely aligned with an industry event such as Hannover Messe in a nearby town in Europe.

In the meantime, download EdgeX and build something great!

 

End User Case Study: Monitoring industrial equipment using EdgeX Foundry


By Jason Shepherd, LF Edge Governing Board member and IoT and Edge Computing CTO, Dell Technologies

Founded in 1996, Technotects is an IoT technology consulting firm with broad domain expertise in industrial use cases. When one of their industrial equipment OEM customers independently realized the power of the EdgeX Foundry framework, Technotects planned and executed a Proof-of-Concept (PoC) with one of their customer’s typical process skid use cases.

This blog walks through the successful PoC, including what the solution entailed and Technotects’ experience working with the EdgeX platform. We’re seeing more and more of these types of real-world applications go public in the community on the heels of the EdgeX Foundry “1.0” Edinburgh release.

The use case

The use case for the PoC is monitoring sensor and pump information from a process skid used in a variety of applications, including the agricultural, flavors and fragrances, pharmaceutical, petro-chem and food/beverage industries. Technotects’ goal for the effort was to prove that the EdgeX Foundry platform, combined with commercial value-add from the ecosystem, can provide an Internet of Things (IoT) solution stack that can address the unique interoperability challenges that industrial applications present, such as a mix of connectivity protocols and near real-time I/O processing, all while giving customers freedom of choice by being decoupled from proprietary, single-vendor solutions. In turn, this would allow their customers to improve their overall solution architectures, reduce runtime royalties and accelerate their time-to-market.

EdgeX-enabled Dell gateway utilized in the Technotects PoC

Technotects’ initial interest in EdgeX Foundry was the flexibility offered by the open ecosystem and the potential of reducing excessive runtime licensing fees per deployed host node, based on the combination of a proprietary edge application framework, edge historian and both southbound and northbound connectivity. In addition, they were attracted to the ability to make build-or-buy decisions with EdgeX without being locked into any specific choice for connectivity or applications value-add services.

The PoC basics

For the PoC, Technotects leveraged a Dell Edge Gateway 3002, Photon OS and VMware Pulse IoT Center, Edge Xpert from IOTech, RedisEdge from Redis Labs, Project Iris from RSA Labs, both AWS and Azure cloud platforms (hosting Redis for data backup) and a custom-built edge management console. Refer to the figure below for a block diagram of the setup and read on for more detail on how the effort came together.

IIoT Edge Stack Block Diagram for the Technotects PoC

Solution deep dive

For the EdgeX foundation of the stack, Technotects chose to work with IOTech’s Edge Xpert offering – a commercially-supported variant of the open source code that is available from the project GitHub within the Linux Foundation. Their use of Edge Xpert enabled them to focus on the integration with their customer’s preferred value-add software components rather than dealing with the open source code. They found IOTech’s documentation to be clear and the initial installation to be quick and straightforward – benefits when using a commercial variant that has additional hardening and packaging. Of course, using a commercially-supported variant versus simply downloading the open source code is a matter of personal preference.

EdgeX is completely neutral to OS, underlying hardware, protocol and programming language, and for this PoC, Technotects chose to leverage the Dell Edge Gateway 3002 and both Ubuntu from Canonical and Photon OS. Photon OS is an open source, container-optimized Linux distribution that has been nested within VMware’s vSphere offering for some time. Technotects was able to run their console, communication drivers, Edge Xpert and all the other referenced value-add software components in Docker containers in both operating systems, all without issue. They find that having the flexibility to deploy on any combination of hardware (x86 or ARM) and operating systems (Linux or Windows) in the field depending on customer need is valuable.

For southbound connectivity, Technotects leveraged a hybrid model. The process skids leverage a programmable logic controller for process control, and for this PoC, Technotects used a commercially available Ethernet/IP driver to communicate with it. In turn, they connected this off-the-shelf, licensed driver package into the OPC-UA Device Service native to EdgeX. To connect to other devices on the skid, Technotects used the Modbus TCP Device Service from IOTech’s Edge Xpert offering, written using the native EdgeX Device Service SDK. With the plug-in Device Service model, any combination of devices and protocols can be readily added in the future as needs evolve.

The solution architecture is a great example of both 1) how existing connectivity stacks can be used alongside native EdgeX Device Services in a hybrid model and 2) that in the EdgeX model, even connectivity written with the open EdgeX Device Service SDK can be monetized. Commercially-supported variants of EdgeX Device Services are likely to be attractive to end users with mission-critical use cases that involve bespoke and/or proprietary protocols in that support for this southbound connectivity often requires institutional knowledge gleaned from reverse engineering.

Meanwhile, end users can benefit from a growing number of open source Device Service options available within the community and increasingly Device Services that are supported by sensor makers for greenfield applications, coming with the sensor just like a keyboard comes with a driver for a PC. (Side note: There are numerous additional opportunities in development in the EdgeX ecosystem, although I must be sensitive to NDAs until they’re made publicly available). Net-net, the value of the open, vendor-neutral EdgeX ecosystem is in providing developers and end users with choices based on whatever is most valuable for their business.

Technotects leveraged VMware’s Pulse IoT Center to manage and monitor the underlying gateway hardware, OS and the EdgeX application framework above. VMware Pulse is a massively scalable, platform and application-independent solution for onboarding, managing, securing and monitoring IoT devices and gateways. System update campaigns can be applied in bulk and admins are alerted to any issues with their devices deployed out in the field, all in real-time. While VMware Pulse can be used standalone with its embedded device agent, it’s especially powerful when used in concert with the EdgeX framework. Any preferred console to manage applications and the underlying host system can be used with the EdgeX framework, with application-level functionality being enhanced by taking advantage of the EdgeX System Management Agent (SMA).

Technotects found the northbound connectivity to both Azure and AWS available in IOTech’s Edge Xpert to be very easy to configure. This highlights a key benefit of the EdgeX framework – decoupling investments in southbound data ingestion from any given cloud to enable choice over the long term, including realizing true-multitenancy from the edge. With the addition of support for multiple application services in the recent 1.0 Edinburgh release, data related to infrastructure monitoring and management can be sent to their management console of choice (in this case VMware’s Pulse IoT Center), whereas data related to the process for data analytics and taking action can be sent to any combination of on-prem or cloud-based application stacks of choice.

For local data persistence, Technotects chose RedisEdge over the MongoDB reference database that has been the baseline in the project until the recent 1.0 Edinburgh release. Technotects found it very easy to replace MongoDB with RedisEdge with no functionality differences, thanks to the work of Redis Labs in terms of making it an available plug-in within the EdgeX ecosystem. This is yet another example of how EdgeX is truly open and vendor-neutral, enabling users to leverage any enhancing functionality of their choice.

Finally, the PoC explored RSA Labs’ Project Iris active threat monitoring solution. Iris is a container that plugs into the EdgeX framework (and any other stack that supports containers) to profile baseline behavior for the stack and connected devices and then uses machine learning to detect anomalies. In turn, Iris creates alerts linked back to RSA’s popular NetWitness offering.

Conclusion

In closing, Technotects found EdgeX Foundry easy to work with and was able to successfully replicate their customer’s use case for process skid monitoring by leveraging the commercially supported framework from IOTech and value-add from Canonical, Dell, Redis, RSA and VMware. The flexibility to simply plug value-add into the open, vendor-neutral EdgeX foundation will provide both Technotects and their customers with more options in the future and help mitigate lock-in to proprietary edge platforms.

The EdgeX project has matured over the past two years to the current 1.0 state, and if you’re one of the thousands of end users that has been quietly prototyping with the platform, we welcome you to come forward and share your story on the project website through a blog or simple testimonial statement. The more people that come forward to share their success stories, the faster EdgeX will become the de-facto standard interoperability framework for the IoT Edge and the more we can all focus on innovation rather than reinvention!

Thank you for your time, and now’s your time to download the code or contact any of the providers in the ecosystem and build something great! Perhaps, as many have, even start your own business model around the EdgeX framework – just think of what Android did for scaling out an application and services ecosystem for mobile devices.

If you have questions or comments, visit the EdgeX Foundry Slack Channel and share your thoughts in the #community channel. Or, join the LF Edge Slack Channel and share your thoughts in the #EdgeX channel.

An Akraino Developer Use Case


Written by technical members of the Akraino Edge Stack project, including Bruce Lin, Rutgers; Robert Qiu, Tencent; Hechun Zhang, Baidu; Tina Tsou, Arm; Ciprian Barbu, Enea; Allen Chen, Tencent; Gabriel Yang, Huawei; Feng Yang, Tencent; Jeremy Liu, Tencent; Tapio Tallgren, Nokia; Cristina Pauna, Enea

More intelligent edge nodes are being deployed in the 5G era. This blog shares developer use cases from the Akraino project that the community is working on, specifically the Telco Appliance Blueprint Family, Connected Vehicle Blueprint, ELIOT: Edge Lightweight and IoT Blueprint Family, Micro-MEC, the AI Edge Blueprint Family, and the 5G MEC/Slice System to Support Cloud Gaming, HD Video and Live Broadcasting Blueprint.

Telco Appliance Blueprint Family

The Telco Appliance blueprint family provides a reusable set of modules that will be used to create sibling blueprints for other purpose-tuned appliances.

The appliance model automates the installation, configuration and testing of:

  • Firmware and/or BIOS/UEFI
  • Base Operating System
  • Components for managing containers, performance, fault, logging, networking, CPU

The first blueprint integrated with the Telco Appliance (TA) is the Radio Edge Cloud (REC). Other appliances will be created by combining other applications with the same underlying components to create additional blueprints.

TA produces an ISO that, after being installed in the cluster, provides the underlying tools needed by the applications that come on top of it. The build of this ISO is automated in Akraino’s CI and it includes:

  • Build-tools: Based on OpenStack Disk Image Builder
  • Dracut: Tool for building ISO images for CentOS
  • RPM Builder: Common code for creating RPM packages
  • Specs: the build specification for each RPM package
  • Dockerfiles: the build specifications for each Docker container
  • Unit files: the systemd configuration for starting/stopping services
  • Ansible playbooks: Configuration of all the various components
  • Test automation framework

The image provides the following components, used during installation:

  • L3 Deployer: an OpenStack Ironic-based hardware manager framework
  • Hardware Detector: Used to adapt L3 deployer to specific hardware
  • North-bound REST API framework: For creating/extending blueprint APIs
  • CLI interface
  • AAA server to manage cloud infrastructure users and their roles
  • Configuration management
  • Container image registry
  • Security hardening configuration
  • A distributed lock management framework
  • Remote Installer: Docker image used by Regional Controller to launch deployer

The image provides the following components, usable by the applications:

  • CPU Pooler: Open Source Nokia project for K8s CPU management
  • DANM: Open Source Nokia project for K8s network management
  • Flannel: K8s networking component
  • Helm: K8s package manager
  • etcd: K8s distributed key-value store
  • kubedns: K8s DNS
  • Kubernetes
  • Fluentd: Logging service
  • Elasticsearch: Logging service
  • Prometheus: Performance measurement service
  • OpenStack Swift: Used for container image storage
  • Ceph: Distributed block storage
  • NTP: Network Time Protocol
  • MariaDB, Galera: Database for OpenStack components
  • RabbitMQ: Message Queue for Openstack components
  • Python Peewee: A Python ORM
  • Redis

Radio Edge Cloud (REC)

Akraino Radio Edge Cloud (REC) provides an appliance tuned to support the O-RAN Alliance and O-RAN Software Community‘s Radio Access Network Intelligent Controller (RIC) and is the first example of the Telco Appliance blueprint family.

  • RIC on Kubernetes on “bare metal” tuned for low latency round trip messaging between RIC and eNodeB/gNodeB,
  • Support for telco networking requirements such as SRIOV, dual POD interfaces, IPVLAN
  • Built from reusable components of the “Telco Appliance” blueprint family
  • Automated Continuous Deployment pipeline testing the full software stack (bottom to top, from firmware up to and including application) simultaneously on chassis based extended environmental range servers and commodity datacenter servers
  • Integrated with Regional Controller (Akraino Feature Project) for “zero touch” deployment of REC to edge sites
  • Deployable to multiple hardware models

In Akraino, the work on this blueprint will fully automate the deployment and testing on multiple hardware platforms in a Continuous Deployment system.

SDN Enabled Broadband Access (SEBA)

Here are the SEBA user stories.

As a Service Provider, I want to set up a SEBA environment so that I can use the ONF SEBA platform.

As an Administrator, I want to validate the host OS environment so that I can install Kubernetes/SEBA.

As an Administrator, I want to validate the software environment so that I can install Kubernetes/SEBA.

As an Administrator, I want to validate the virtual machines for my Kubernetes/SEBA environment so that I can validate the environment.

As an Administrator, I want to validate the services for my Kubernetes/SEBA environment so that I can validate a working environment.

SEBA validation on ARM – IEC Type 2

SEBA stands for SDN Enabled Broadband Access. It is an ONF/Opencord project which implements a lightweight platform, based on R-CORD. It supports a multitude of virtualized access technologies at the edge of the carrier network, including PON, G.Fast, and eventually DOCSIS and more.

SEBA is also an Akraino blueprint, but in the context of the IEC blueprint, the SEBA use case is deployed and verified on an IEC Type 2 platform. IEC is also the main Akraino sub-project/blueprint which enables ARM architectures for Telco/Enterprise Edge applications.

For the purpose of validating SEBA on IEC Type 2, a SEBA-compliant ARM64 lab has been set up at the Akraino Community Lab at the University of New Hampshire (UNH). The lab consists of two development PODs, using servers equipped with Marvell ThunderX2 networking processors and Ampere HR330A servers.

The connectivity is provided by Edgecore 7816-64X TOR switches which fulfill the role of Fabric Switches in the SEBA Architecture.

For the validation work we first ported and deployed BBSim on IEC for ARM64, and work is now ongoing to port and verify PONSim.

PONSim is also a central piece of SEBA-in-a-Box (SIAB), which is one of the main testing CI/CD projects in the upstream Opencord Jenkins infrastructure.

There is ongoing work in Akraino IEC for replicating the SIAB setup by utilizing the same testing environments and facilities provided by Opencord.

Finally, recent collaboration with Opencord resulted in forming a Multiarch Brigade with the purpose of making SEBA work seamlessly on multiple architectures beyond x86 servers. So far there has been evident interest from the community and other parties in enabling multiarch support and continuous verification in a CI/CD manner by adapting the Opencord infrastructure to support other architectures, ARM64 in particular.

Connected Vehicle Blueprint

Connected Vehicle Blueprint focuses on establishing an open source MEC platform, which is the backbone for V2X applications.

From a use-case perspective, the following are the use cases we have tested in our companies; more are possible in the future.

  • Accurate Location: Location accuracy is improved by more than 10 times compared with today’s GPS. Today’s GPS is typically 5-10 meters off from your real location; with the help of the Connected Vehicle Blueprint, accuracy of less than 1 meter is possible.
  • Smarter Navigation: Real-time traffic information updates reduce latency from minutes to seconds and help work out the most efficient route for drivers.
  • Safe Drive Improvement: Identify potential risks that can NOT be seen by the driver.
  • Reduced traffic violations: Help the driver understand the traffic rules in a specific area, for instance changing lanes before a narrow street, avoiding driving the wrong way on a one-way road, or avoiding the carpool lane when driving alone.

From a technology architecture perspective, the blueprint consists of four major layers.

  • Hardware Layer: Connected Vehicle Blueprint runs on top of commodity hardware. Both Arm and x86 servers are well supported.
  • IaaS Layer: Connected Vehicle Blueprint can be deployed in virtual environments as well. Virtual machines, containers and other mainstream IaaS software (such as OpenStack, Kubernetes et al.) are supported.
  • PaaS Layer: Tars is the microservice framework of Connected Vehicle Blueprint. Tars provides high-performance RPC calls, deploys microservices efficiently in large scale-out scenarios, and provides an easy-to-understand service monitoring feature.
  • SaaS Layer: The latest V2X applications depicted in the use cases above.

From a network architecture perspective, the blueprint can be deployed in both 4G and 5G networks. Two key points deserve special attention. One is offloading data to the edge MEC platform; the data-offload policy is configurable based on different applications. The other is the ability to let one edge talk to other edges as well as to the remote data center. In some use cases, data processing at a single edge can NOT address the application’s requirements; we need to collect data from different edges and work out a “conclusion” for the application.

ELIOT: Edge Lightweight and IoT Blueprint Family

ELIOT AIoT in Smart Office Blueprint

This blueprint provides edge computing solutions for smart office scenarios. It manages intelligent devices, delivers AI training models and configures the rules engine through cloud-edge collaboration.

Services provided by cloud side:

  • Device management
  • AI model training and delivery
  • Rules engine configuration
  • Third-party applications

Services provided by edge side:

  • Sending/receiving device data
  • Data analysis
  • AI model/function execution

Deployment environment:

  • Kubernetes
  • Container
  • VM
  • Bare Metal

We are still developing this project, and Tencent will keep devoting significant effort to the research and development of this blueprint. In the near future, the project will also be open sourced. Any partners interested in this project are welcome to contact us so we can work together to promote the application of edge computing in the smart office field.

The AI Edge Blueprint Family

The AI Edge Blueprint mainly focuses on establishing an open source MEC platform combined with AI capabilities at the edge, which could be used in the safety, security, and surveillance sectors.

One example use case is education: with this blueprint, teachers and school authorities can evaluate the overall class and the concentration of individual students, helping them fully understand the real-time teaching situation. Based on the concentration data for each course, teachers and school authorities can conduct knowledge tests and reinforce teaching where needed.

5G MEC/Slice System to Support Cloud Gaming, HD Video and Live Broadcasting Blueprint

The purpose of the blueprint is to offer a 5G MEC networking platform to support delay-sensitive, bandwidth-consuming services such as cloud gaming, HD video and live broadcasting. In addition, network slicing will be utilized to guarantee an excellent user experience.

As a service provider of cloud gaming, I want to set up a computing platform to execute game rendering as well as video encoding, and deliver the compressed video to the player via a 4G/5G network.

As cloud gaming is sensitive to latency, the computing platform is required to be as close as possible to the network edge. Besides that, a mechanism to discover the service and direct players’ traffic to the platform is demanded.

To direct players’ traffic to the platform, I install an edge connector which is responsible for enabling flexible traffic offloading by interacting with the EPC or 5GC, and for subscribing to the edge slice, which provisions a certain level of QoS between the UE and the edge application.

To interface with the user plane of 4G or 5G, I install an edge gateway (GW). In 5G SA, the edge GW is connected to a UPF, to manage the traffic from the mobile network. In 5G NSA or 4G, the edge GW is connected to an eNB or SGW, with an additional function of handling GTP-U, the tunneling protocol adopted by 4G and 5G.

When a player accesses a cloud gaming service hosted on the platform, the edge connector will establish an appropriate route between the mobile network and the edge GW. The compressed video and the player’s control data will be transferred between the gaming server and the user equipment, traversing the edge GW.

As a service provider of live broadcasting or HD video delivery, I want to set up a platform to transcode the video source to fit the bandwidth of the transport.

Similar to the cloud gaming use case, an edge connector is installed to direct the traffic to the computing platform, and an edge GW is installed to manage the mobile traffic and terminate GTP-U under 4G or 5G NSA. The computing platform will transfer the format of the video to be adapted to either the wireless link towards the mobile user or the transport towards the internet. The computing platform may be connected to a CDN to optimize the live broadcasting or HD video delivery.

Micro-MEC (µMEC)

µMEC is a low-power, low-cost server at the edge of the network that supports Smart City use cases with different sensors (including cameras), is connected to the Internet and telco networks, and allows third-party developers to create ETSI MEC applications for it.

So the µMEC focuses on Smart City use cases. If you had different smart sensors in a city, collecting data that is available for free, what could you make possible? That was the challenge that we posed at a developer hackathon in May. We used the Akraino µMEC platform and created a model city. Different sensors collected information and made it available through a database.

 

For the next hackathon, we want to make it very easy for developers to create applications that run on the µMEC device itself. For this, we are implementing ETSI MEC-compliant interfaces and leveraging the OpenFaaS serverless platform.
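
To give a feel for the developer workflow that a serverless platform enables, here is a rough sketch of creating and deploying a function with OpenFaaS’ faas-cli tool. The function name is purely illustrative and the exact stack file layout depends on the faas-cli version; this is not part of the µMEC blueprint today.

# scaffold a new function from a language template
faas-cli new sensor-reader --lang python3
# build, push and deploy the function to the target OpenFaaS gateway
faas-cli up -f sensor-reader.yml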

The µMEC platform that we are developing in Akraino will have a trusted computing stack, from trusted hardware to secure networking and signed containers.

Akraino will be hosting a MEC Hackathon on November 18 at Qualcomm. For more information or to register, visit the Akraino MEC Hackathon wiki.  For more information about Akraino or its blueprints, click here.

EdgeX Open Hackathon & Snaps


Snaps are self-contained application packages designed to run on any system that supports them. Each snap is a compressed SquashFS package (bearing a .snap extension), containing all the assets required by an application to run independently, including binaries, libraries, icons, etc.

Snaps are managed by snap, the command-line userspace utility of the snapd service. With the snap command, users can query the Snap Store, install and refresh (update) snaps, remove snaps, and perform many other tasks such as managing application snapshots, setting command aliases, enabling or disabling services, etc.
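
For example, a few of these everyday snap operations look like this (shown against the edgexfoundry snap discussed below):

# search the Snap Store
snap find edgexfoundry
# update an installed snap to the latest revision in its tracked channel
sudo snap refresh edgexfoundry
# remove an installed snap
sudo snap remove edgexfoundry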

The EdgeX Foundry project publishes a snap named edgexfoundry, which includes all of the EdgeX core, export, security and support services, Mongo/Redis, and all of the 3rd party runtime dependencies (Consul, Vault, Kong, …). Hackathon participants should use the version from the latest/stable channel in the Snap Store, which is the 1.0.1 release (Edinburgh). See the snap’s README for details. To install from latest/stable run:

sudo snap install edgexfoundry

After installing the edgexfoundry snap view the services by issuing the snap services command.

sudo snap services edgexfoundry

Notice that after running this one simple snap install command, all of the services required to run EdgeX will have been installed and are already running in the background on your system. Note which services are enabled and which are active by running the snap services command.

Upon installation, the following EdgeX services are automatically enabled:

  • cassandra (persistent storage for Kong)
  • consul (aka ‘the registry’)
  • core-command
  • core-config-seed (one-shot)
  • core-data
  • core-metadata
  • edgexproxy (one-shot)
  • kong-daemon
  • mongod
  • mongo-worker (one-shot)
  • pkisetup (one-shot)
  • sys-mgmt-agent
  • vault
  • vault-worker (one-shot)

Note – some of the above services are “one-shot” services which run once and then exit. These will show up as “enabled” but “inactive”.

The following services are disabled by default:

  • support-notifications
  • support-logging
  • support-scheduler
  • export-client
  • export-distro
  • device-virtual
  • device-random

Any disabled service can be enabled and started using snap set. For example:

sudo snap set edgexfoundry support-notifications=on

To turn a service off (thereby disabling and immediately stopping it) set the service to off:

sudo snap set edgexfoundry support-notifications=off

On install, all services in a snap have corresponding systemd service units dynamically generated by snapd, thus if enabled, each will automatically start running when the system boots.
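
Because these are regular systemd units, you can also inspect them with systemctl. snapd names the generated units snap.<snap>.<app>, so, for example (using the core-data service from the list above):

# list all units generated for the edgexfoundry snap, including inactive ones
systemctl list-units 'snap.edgexfoundry.*' --all
# show the status of a single service
systemctl status snap.edgexfoundry.core-data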

To keep things simple for the EdgeX Open, the security services in the edgexfoundry snap should be disabled by running:

sudo snap set edgexfoundry security-services=off

Currently, all log files for the snaps can be found inside $SNAP_COMMON, which is /var/snap/edgexfoundry/common. Additionally, logs can be viewed using the systemd journal or snap logs. To view the logs for all services in the edgexfoundry snap use:

sudo snap logs edgexfoundry

Individual service logs may be viewed by directly specifying the service name:

sudo snap logs edgexfoundry.consul
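
Since the services are systemd units under the hood, the systemd journal mentioned above works as well; for example, to follow the core-data log in real time (again using the snap.<snap>.<app> unit naming):

sudo journalctl -u snap.edgexfoundry.core-data -f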

Once the EdgeX snap has been installed and the desired services have been successfully started, we are ready to set up an application service and a device service.

As a quick test, the Mosquitto snap can be installed and a test topic can be published using the mosquitto_pub command. Use the following command to install Mosquitto:

sudo snap install mosquitto

Edgex-device-mqtt is a device service which supports importing device/sensor data readings via the MQTT protocol. It requires a running MQTT broker, as well as a peer MQTT application. The device service listens on a specific topic for incoming readings, which it injects into EdgeX via the device service SDK’s asynchronous readings feature. A peer MQTT application will be provided for the hackathon which will generate a steady stream of incoming readings related to the hackathon retail scenarios. This peer application will be set up to publish multiple scenario-specific streams on different topics. The device-mqtt service configuration provides a configuration value called “IncomingTopic” which can be used to change the topic the service listens on for incoming readings. Use the following command to install edgex-device-mqtt from the stable channel:

sudo snap install edgex-device-mqtt

After installation, check the status of the edgex-device-mqtt device service:

sudo snap services edgex-device-mqtt

Notice that the edgex-device-mqtt device service is disabled and inactive. This was done intentionally as the configuration and device profile need to be specific to each installation. You’ll find the configuration and device profile in the writable area of the snap (aka $SNAP_DATA):

/var/snap/edgex-device-mqtt/current/config/device-mqtt/res
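
As a rough sketch only (the section layout and default port shown here are illustrative; check the configuration.toml shipped with the snap for the authoritative names and values), the settings most relevant to the hackathon look something like this:

[Service]
Port = 49982

[Driver]
IncomingTopic = "DataTopic"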

After making the necessary modifications to configuration.toml (e.g. setting the "IncomingTopic" or changing "Port") and mqtt.test.device.profile.yml (e.g. modifying device-resources, device-commands, or core-commands), start and enable edgex-device-mqtt like this:

sudo snap start --enable edgex-device-mqtt

You should now be ready to receive peer MQTT application readings.

Now let's publish a message to DataTopic with a randfloat32 value of 1.2:

mosquitto_pub -t DataTopic -m '{"name":"MQTT test device","cmd":"randfloat32","randfloat32":"1.2"}'

You can check the readings from core-data with the following command:

curl -s localhost:48080/api/v1/reading | jq

Note – jq is a lightweight and flexible command-line JSON processor. In its simplest form, with no command-line arguments specified, it simply outputs the JSON it's been given in a human-readable format. If not available on your system, it can be installed using sudo snap install jq.

If you see something similar to the following output, then edgex-device-mqtt is working correctly. Float values are base64-encoded by default; the value "P5mZmg==" is 1.2 in base64.

[
   {
     "id": "4f922fd7-5c0d-48e0-8a06-6218d4c05fa9",
     "created": 1568926981304,
     "origin": 1568926981290,
     "modified": 1568926981304,
     "device": "MQTT test device",
     "name": "randfloat32",
     "value": "P5mZmg=="
   }
 ]
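
You can verify the encoding yourself: decoding the base64 string yields the four bytes 3f 99 99 9a, which is the IEEE-754 single-precision (big-endian) representation of 1.2.

echo 'P5mZmg==' | base64 -d | xxd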

Once the device service has been started, its configuration is registered in Consul. If you need to make further modifications to either the configuration or device profile, you'll need to make them in Consul, which can be accessed from a browser at http://localhost:8500. After making modifications in Consul, restart the device service like this:

sudo snap restart edgex-device-mqtt

If you would like to manage your EdgeX instance using a web browser (adding devices and device profiles, as well as visualizing data sent through the export service), you can do this through the EdgeX Web management UI. To install the edgex-ui-go snap, run the following:

sudo snap install edgex-ui-go --channel=latest/beta

Now navigate with your browser to http://localhost:4000. The default credentials are username admin and password admin. Add your gateway and you're ready to manage your EdgeX instance through the web UI.

Extending EdgeX

This section gives some hints as to how you can build your hackathon entry by creating or using additional EdgeX services.

App-Service-Configurable

Edgex-App-Service-Configurable – this is a new snap built from the current development release of EdgeX (Fuji). It allows a variety of use cases to be met by simply providing configuration (vs. writing code). For more information about this service, please refer to the README. As with device-mqtt, this service is disabled when first installed, as it requires configuration changes before it can be run. As with the device-mqtt snap, the configuration.toml file is found in the snap’s writable area:

/var/snap/edgex-app-service-configurable/current/config/res/

Profiles

In addition to the base configuration.toml in this directory, there are a number of sub-directories that also contain configuration.toml files. These sub-directories are referred to as profiles. The service's default behavior is to use the configuration.toml file from the /res directory. If you want to use one of the profiles, use the snap set command to instruct the service to read its configuration from one of these sub-directories. For example, to use the push-to-core profile you would run:

sudo snap set edgex-app-service-configurable profile=push-to-core

In addition to instructing the service to read a different configuration file, the profile will also be used to name the service when it registers itself to the system.
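
You can confirm which profile is currently configured at any time with snap get:

sudo snap get edgex-app-service-configurable profile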

Note: as this service is based on the latest development release of EdgeX, not all use cases are supported; in particular, integration with the EdgeX rules engine will not work when used in conjunction with the Edinburgh release of EdgeX. Perform the following steps to install the edgex-app-service-configurable application service, configure it with the mqtt-export profile, and test it with Mosquitto:

sudo snap install edgex-app-service-configurable

sudo snap set edgex-app-service-configurable profile=mqtt-export

sudo snap start --enable edgex-app-service-configurable.app-service-configurable

mosquitto_sub -t "edgex-events"

Multiple Instances

Multiple instances of edgex-app-service-configurable can be installed by using snap Parallel Installs. This is an experimental snap feature and must first be enabled by running this command:

sudo snap set system experimental.parallel-instances=true

Now you can install multiple instances of the edgex-app-service-configurable snap by specifying a unique instance name when you install the snap. The instance name is made up of the name of the snap plus a unique suffix which starts with the character "_". This name only needs to be specified for the second and subsequent instances of the snap.

sudo snap install edgex-app-service-configurable edgex-app-service-configurable_http

or

sudo snap install edgex-app-service-configurable edgex-app-service-configurable_mqtt

Note – any configuration values that might cause conflicts between the multiple instances (e.g. port, log file path, …) must be modified before enabling the snap's service.

New Application or Device Services

Participants can also choose to construct their entry by building new device services (using the device SDKs) or new application services (using the application SDK). As the hackathon is time-boxed, most participants may just build new services natively rather than creating Docker containers and/or snaps. For those interested in creating snaps for their new services, the existing service snaps described above (e.g. edgex-device-mqtt and edgex-app-service-configurable) can be useful as templates.

Running K8s Conformance Tests in Akraino Edge Stack

By Akraino, Blog

By Cristina Pauna, Bin Lu, Howard Zhang

Due to the large scale of its projects and various application scenarios, Akraino Edge Stack is complex when it comes to testing logistics; normal integration or unit tests are not enough for testing the projects. To ensure that the full deployment is both functional and reliable, it needs to be tested at all layers of the stack, from hardware to application. To verify deployments based on Kubernetes at the platform layer, K8s conformance is a good way to do this job.

The K8s conformance tests are used in the CNCF Certified Kubernetes vendor program. The suite integrates Kubernetes end-to-end (e2e) testing, which contains a wide range of test cases, and passing them demonstrates a high level of common functionality in the platform.

K8s Conformance Tests in Akraino

General testing in Akraino is done using the Blueprint Validation Framework. The framework provides tests at different layers of the stack, such as hardware, operating system, cloud infrastructure, security, etc. Each layer has its own container image built by the validation project. The full list of images provided can be found in the project's DockerHub repo. The validation project takes a multi-arch approach: all images currently support both x86_64 and aarch64 architectures, and can be easily extended to other platform architectures in the future. This is implemented using a docker manifest list, so the same image name is used to pull regardless of the architecture of the System Under Test (SUT). Therefore the steps described in this article apply regardless of the architecture of the SUT.

The Kubernetes conformance tests are included in the k8s layer. The test suite can be run in a couple of ways, either by directly calling the e2e tool or through Sonobuoy, a diagnostic tool that gathers logs and stats into one archive. Regardless of how the tests are run, it is possible to customize which test cases are included, which versions are used, and even the container images within the tooling.
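
For context, outside of the Akraino tooling the same suite can be launched directly with the Sonobuoy CLI; a minimal sketch (the Akraino validation container wraps equivalent steps for you) looks like this:

sonobuoy run --mode=certified-conformance
sonobuoy status
sonobuoy retrieve .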

In Akraino, the conformance suite is run from a separate container, using the validation image built for the k8s layer: akraino/validation:k8s-latest. This image contains all of the tooling necessary for running tests on the k8s layer in Akraino, as well as the tests themselves. The tools that go into the image can be found in its Dockerfile. The container is designed to be started on a remote host that has access to the k8s cluster. Check the validation project User guide for more information on the setup.

The conformance suite is customized using the sonobuoy.yaml file. Some of the images used by sonobuoy in Akraino are built by the validation project in order to customize them to fit the community’s needs (e.g. akraino/validation:sonobuoy-plugin-systemd-logs-latest is based on the official gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest but with added aarch64 support).

Prerequisites

Before running the image, copy the folder ~/.kube from your Kubernetes master node to a local folder (e.g. /home/ubuntu/kube) that will later be mounted in the container. Optionally, the results folder can be mounted in order to have the logs stored on the local server.

Run the tests

The Akraino validation project has a variety of tests for the k8s layer. To run just the conformance tests, follow the steps below:

  1. Clone the validation repo.
  2. Create a customized blueprint that will be passed to the container. The blueprint will contain just the conformance suite in it, like in the example below.
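
A minimal sketch of such a blueprint, assuming the bluval YAML layout used by the validation repo (treat the exact field names as illustrative and check the repo for the authoritative format):

blueprint:
    name: custom-k8s
    layers:
        - k8s
    k8s:
        -
            name: conformance
            what: conformance
            optional: "False"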

3. Fill in the volumes.yaml file with the data that applies to your setup. The volumes that don't have a local value set will be ignored. Do not modify the target part of each volume. These paths will be automatically created inside the container when it starts, and are fixed paths that the tools inside the container expect to exist as is.

In the example below, only the following volumes are set:

  • Location to the kube config files needed for access to the cluster
  • Location to the customized blueprint file
  • Location to where to store the results
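
As an illustration only (the authoritative key names and fixed target paths are in the volumes.yaml shipped with the validation repo), the filled-in entries follow this pattern:

volumes:
    # kube config files copied from the K8s master node
    kube_config_dir:
        local: '/home/ubuntu/kube'
        target: '/root/.kube'
    # customized blueprint file created in step 2
    blueprint_dir:
        local: '/home/ubuntu/blueprints'
        target: '/opt/akraino/validation/bluval'
    # where the results will be stored on the host
    results_dir:
        local: '/home/ubuntu/results'
        target: '/opt/akraino/results'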

4. Run the tests

To run the tests, use the blucon.py command, which takes the name of the blueprint as input. The full suite takes around one hour to run, depending on the setup.
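
An illustrative invocation, assuming the customized blueprint from step 2 is named custom-k8s and you are running from the root of the cloned validation repo (the exact path to blucon.py may differ):

python3 bluval/blucon.py custom-k8s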

The sonobuoy containers can be seen running on the cluster in the heptio-sonobuoy namespace.
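
While the suite is running, you can watch them with kubectl:

kubectl get pods -n heptio-sonobuoy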

Get the results

The results are stored inside the container in the /opt/akraino/results folder. If a volume was mounted to this folder (e.g. /home/ubuntu/results) then they can be viewed from the host where the test was run.

The results can be viewed with a browser, using the report.html file that the robot framework provides. In case of failures, the sonobuoy archive can be used to inspect all the logs.

 

EdgeX Open: Training

By Blog, EdgeX Foundry

What is EdgeX Foundry?

The EdgeX Foundry is a Linux Foundation project with the goal of providing an open and extensible platform for IoT edge computing. As a platform, EdgeX doesn’t try to solve business problems directly, but rather to provide a solid base for such a solution to be built using off the shelf components, provided by the EdgeX Foundry project or 3rd parties, which all interoperate using the same open APIs.

EdgeX is built as a collection of microservices which communicate with each other over a set of standard, open REST APIs. This design gives it a few distinct advantages over other approaches to edge computing. First, it allows you to run the microservices on different machines, or even on different parts of the network. For example, you can run device controlling services on the devices themselves, with the data collection and processing services on a computer with more storage and CPU capability. This design also means that individual components can be substituted with alternative implementations that fill a specific need, without having to replace or modify any of the other services. Finally, it allows you to add new services that bring in new capabilities. The most common types of services to add into EdgeX are custom Device and Application services.

How EdgeX fits into a smart edge solution

Every IoT project needs to address the difficulties of connecting multiple, disparate devices to each other and to controlling software, often over slow or unreliable connections. As such, managing that communication while providing fault tolerance, security, and near-device compute capabilities are functions any successful IoT project must provide. This common plumbing is precisely what EdgeX Foundry provides.

By using EdgeX components in your solution, you get a mature, tested, stable foundation on which you can build your own value-add products or services.

  • If you are building smart IoT devices, you can use EdgeX to connect to existing cloud offerings or datacenter software without having to build those connections into your device.
  • If you are developing analytics, automation or other kinds of business intelligence software, EdgeX lets you ingest data from devices that speak any number of protocols and use many different data models, without having to build those translations and transformations yourself.
  • If you provide custom deployments, EdgeX gives you an ecosystem of plug-and-play components that you can use to build up complex solutions quickly and reliably.

Running EdgeX in a development environment

You can build the EdgeX Foundry services from the open source code on GitHub, but more often than not you just need to get these services running so that you can connect your own services to them. To support that, the project publishes Docker images based on the latest stable release of the open source code, as well as docker-compose.yml files that will run all the necessary services together on your development machine.

  1. Install Docker and Docker Compose
  2. Download the latest stable docker-compose.yml file
  3. Run `docker-compose up -d`
  4. Verify that the services are all working with `docker-compose ps`
  5. Test the APIs with http://localhost:48080/api/v1/ping
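
Put together, the last three steps look like this when run from the directory containing the downloaded docker-compose.yml (the ping endpoint should respond with "pong"):

docker-compose up -d
docker-compose ps
curl http://localhost:48080/api/v1/ping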

Once you have the EdgeX services running, you can take our Quick Start guide, or more in-depth API Walkthrough to get a feel for how EdgeX services work together.

Extending EdgeX to hardware with Device Services

In order to connect new devices to your EdgeX instance you will need to use a Device Service. These specialized services are responsible for communicating to devices over their own protocol, sending data from them into the EdgeX core-data service, and taking commands from the EdgeX core-command service and using them to control the device.

The EdgeX Foundry provides reference device services for a number of common IoT and smart device protocols, including MQTT, Modbus and plain HTTP, and you can use those pre-written services if your device supports those protocols. But if you need something more, the EdgeX Foundry Device SDKs make it easy for you to build a new device service capable of supporting your device.

Using either the Go or C SDKs, you can bootstrap a new Device Service that will already handle all of the setup and configuration for your new service, including uploading your Device Profiles into EdgeX core-metadata, provisioning any devices you have pre-defined, and scheduling automatic readings from your devices. The SDK also provides functions for calling all the common EdgeX APIs in a language-friendly way. All you have to do is add your device-specific connection and control code.

Training Tip: You can do end-to-end testing of your new Device Service by connecting your local EdgeX deployment up to a cloud service or NodeRed instance and watching the readings flow in and commands go out.

Extending EdgeX to the cloud with Application Services

Since EdgeX Foundry itself doesn't try to do anything other than efficiently moving data off the edge, you're eventually going to want to send that data to some software for further processing. To do that, EdgeX provides a couple of ways to publish data to software running on the edge, in a data center, or in the cloud.

The first method is to use the pre-built Export Services, which are capable of sending data as JSON or XML over simple HTTP or MQTT to another service. This is a good option if you are using a service that already knows how to handle these common protocols, and can do any data transformations you need.

But if those aren't suitable, the EdgeX Foundry also provides an Application Functions SDK for writing custom handlers, which can do anything from publishing data to a remote host, to performing local transformations, analytics and responses right from the edge. The SDK allows you to trigger your application functions whenever data comes into EdgeX, or on demand, and even provides functions for publishing your data (via HTTP or MQTT) when you're done handling it locally. Application Functions can even be chained together to build complex pipelines, reusing the same Application Functions in different ways.

Training Tip: You can do end-to-end testing of your new Application Service by connecting the Virtual Device Service to your local EdgeX deployment, which can simulate device data coming in.