What is Baetyl?

By Baetyl, Blog

Baetyl is an open edge computing framework under LF Edge that extends cloud computing, data, and services seamlessly to edge devices. It provides low-latency computing services that tolerate temporary offline operation, and includes device connectivity, message routing, remote synchronization, function computing, video access pre-processing, AI inference, device resource reporting, and more.

In its architecture, Baetyl follows a modular, containerized design. Based on the modular design pattern, Baetyl splits the product into multiple modules and ensures that each one is a separate, independent module, so users can deploy exactly the modules they need. Baetyl also uses containerization to build its images; Docker's cross-platform nature keeps the running environment consistent across operating systems. In addition, Baetyl isolates and limits container resources, allocating CPU, memory, and other resources precisely to each running instance to improve resource utilization.

Advantages

  • Shielding Computing Framework: Baetyl provides two official computing modules (Local Function Module and Python Runtime Module) and also supports custom modules, which can be written in any programming language or with any machine learning framework.
  • Simplified Application Production: Baetyl combines with the Cloud Management Suite of BIE and many other Baidu Cloud products (such as CFC, Infinite, EasyEdge, TSDB, and IoT Visualization) to provide data computation, storage, visual display, model training, and many other capabilities.
  • Service Deployment on Demand: Baetyl adopts a containerized, modular design; each module runs independently and in isolation, so developers can choose which modules to deploy based on their own needs.
  • Support for Multiple Platforms: Baetyl supports multiple hardware and software platforms, such as x86 and ARM CPUs and the Linux and Darwin operating systems.

Components

As an edge computing platform, Baetyl not only provides features such as underlying service management, but also provides some basic functional modules, as follows:

  • Baetyl Master is responsible for managing service instances (start, stop, supervise, etc.) and consists of the Engine, API, and Command Line. It supports two modes of running services: native process mode and Docker container mode.
  • The official module baetyl-agent is responsible for communication with the BIE cloud management suite and can be used for application delivery, device information reporting, etc. Certificate authentication is mandatory to ensure transmission security.
  • The official module baetyl-hub provides message subscription and publishing based on the MQTT protocol and supports four access methods: TCP, SSL, WS, and WSS.
  • The official module baetyl-remote-mqtt bridges two MQTT servers for message synchronization and supports configuring multiple message routing rules.
  • The official module baetyl-function-manager provides computing power based on the MQTT messaging mechanism; it is flexible, highly available, scalable, and fast to respond.
  • The official module baetyl-function-python27 provides the Python 2.7 function runtime, which can be dynamically started by baetyl-function-manager.
  • The official module baetyl-function-python36 provides the Python 3.6 function runtime, which can be dynamically started by baetyl-function-manager.
  • The official module baetyl-function-node85 provides the Node 8.5 function runtime, which can be dynamically started by baetyl-function-manager.
  • The SDK (Golang) can be used to develop custom modules.
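The hub's publish/subscribe and the remote module's routing both hinge on MQTT topic filters, where `+` matches exactly one topic level and `#` matches all remaining levels. The following is a minimal, illustrative Python sketch of that matching logic (not Baetyl's actual implementation, which is written in Go; the function names here are invented):

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT topic filter matches a concrete topic.

    '+' matches exactly one level; '#' (valid only as the last level)
    matches the remainder of the topic.
    """
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, level in enumerate(f_parts):
        if level == "#":
            return True          # multi-level wildcard: everything below matches
        if i >= len(t_parts):
            return False         # filter is deeper than the topic
        if level != "+" and level != t_parts[i]:
            return False         # literal level mismatch
    return len(f_parts) == len(t_parts)


def route(rules, topic):
    """Return every destination whose filter matches the published topic."""
    return [dest for filt, dest in rules if topic_matches(filt, topic)]
```

With routing rules such as `[("sensors/+/temp", "cloud"), ("sensors/#", "local-function")]`, a message published to `sensors/dev1/temp` matches both filters and would be forwarded to both destinations.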

Architecture

(Architecture diagram: design_overview.png)

Contributing

If you are passionate about contributing to the open source community, Baetyl welcomes both code and documentation contributions. For details, see: How to contribute code or documentation to Baetyl.

Contact us

As the first open edge computing framework in China, Baetyl aims to build a lightweight, secure, reliable, and scalable edge computing community with a healthy ecosystem. To help Baetyl develop further, if you have advice about Baetyl, please contact us.

LF Edge Member Spotlight: Arm

By Akraino Edge Stack, Blog, Member Spotlight

The LF Edge community comprises a diverse set of member companies and people that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sat down with Tina Tsou, Enterprise Architect at Arm and Co-Chair of the Technical Steering Committee for the Akraino Edge Stack, to discuss the importance of open source software, collaborating with industry leaders in edge computing, security, how they contribute to the Akraino Edge Stack project and the impact of being a part of the LF Edge ecosystem.

Can you tell us a little about your organization?

I work for Arm, a semiconductor IP company, as Enterprise Architect in the Infrastructure line of business. 

What are your interests in open source projects?

The open source approach lets me gather requirements from customers and understand partners' solutions, so I can be well prepared for internal designs and products with open source in mind.

What are your thoughts on LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

I joined the Akraino Edge Stack at a customer's request, and the project was later merged under the LF Edge umbrella. LF Edge is the de facto standard for mainstream products and deployments of edge computing, including edge servers, the network edge, and device gateways.

What do you see as the top benefits of being a community participant of the LF Edge community?

The top benefits include having a direct connection with customers that keeps me updated on the latest trends in edge computing, collaboration across end-to-end ecosystems and the supply chain, and validation in CI/CD environments.

What are your top contributions as a community member?

I serve as an LF Edge TAC member and Akraino TSC co-chair, and I contributed the Integrated Edge Cloud (IEC) platform (an Arm enabler) used by multiple blueprints such as the 5G MEC System and the AI Edge. Sample blueprints I've contributed to include:

  1. The purpose of the Public Cloud Edge Interface (PCEI) Blueprint family is to specify a set of open APIs for enabling interworking between the multiple functional entities or domains that provide the services needed to implement edge capabilities/applications requiring close integration between the mobile edge, the public cloud core and edge, and third-party edge functions. In Release 3 we chose to expose the set of location APIs defined by ETSI MEC to showcase the PCEI capability.
  2. The KubeEdge Edge Service Blueprint showcases an end-to-end solution for edge services with a KubeEdge-centered edge stack. The first release focuses on the ML inference offloading use case and supports multiple architectures.
  3. The Radio Edge Cloud Blueprint is a member of the Telco Appliance blueprint family, which is designed to provide a fully integration-tested appliance tuned to meet the requirements of the RAN Intelligent Controller (RIC). Changes since Release 2 (2019-11-18) include new features such as Arm64 support: REC Release 3 supports Arm-based CPUs for the first time, including Ampere Hawk and Falcon servers.
  4. IEC (Integrated Edge Cloud) is a platform that enables new functionalities and business models on the network edge.
  5. IEC Type 2 mainly focuses on high-performance computing systems for medium and/or large deployments in the data center.
  6. IEC Type 3 mainly focuses on Android applications running on an edge Arm cloud architecture with GPU/vGPU management.

What do you think sets LF Edge apart from other industry alliances?

LF Edge has a pragmatic approach for Practice, Project, Production in a closed loop, which is at production quality and deployable.

What advice would you give to someone considering joining LF Edge?

Start by questioning how you expect LF Edge will be used in your business and then read the end user stories. You will find the right project and perspective that you care about and then take baby steps, for example, try an Akraino blueprint you are interested in, and report any issue you find. The community always has your back. Have fun!

Members of the Akraino TSC at the March 2020 F2F

To learn more about Akraino Edge Stack, click here. To find out more about our members or how to join LF Edge, click here. Additionally, if you have questions or comments, visit the LF Edge Slack Channel and share your thoughts in the #community or #akraino-tsc channels.

EdgeX Foundry’s Geneva Release and Plans for Hanoi

By Blog, EdgeX Foundry

Written by Jim White, Co-Chair of the EdgeX Foundry Technical Steering Committee and CTO of IOTech

Every six months, the EdgeX Foundry technical community gets together to review its recent release (collecting lessons learned, assembling release notes, etc.) and to plan out its next release.  For the last three years, these meetings have been face-to-face events held in venues across the globe.  It has become an event which many in the community have come to enjoy as it allows us to celebrate our success, plan out the next release, and at the same time reconnect with people that have become close professional friends.

Alas, the global pandemic claimed another casualty – albeit a far less important one in relation to world events – when we had to cancel our face-to-face meeting this spring.  We held a virtual event instead and, though different, it was still fun and successful – thanks to the incredible efforts of the development community that has been the lifeblood of the project since its inception.  We had more than 35 members attending the online sessions each day and managed to accomplish everything we planned; we even had time for a mini educational conference (called the "Unconference") that allowed community members to highlight new EdgeX-based solutions and new project ideas.

The Geneva Release

Our planning meeting is always at the tail end of our current release efforts.  We have an EdgeX development “freeze” period, which means no more feature coding can be done as we clean up bugs and tackle documentation.

Codenamed "Geneva," this is EdgeX Foundry's sixth release but the first of 2020. The highlights of the Geneva release include:

  • Dynamic or automatic device on-boarding – for protocols where it makes sense, this allows EdgeX to automatically provision new sensors and have the sensor data start to flow through EdgeX
  • Alternate messaging support – where applicable, allows adopters to use any messaging implementation under the covers of EdgeX to include MQTT, 0MQ, or Redis Streams.
  • Better type information associated with sensor data – allowing analytics packages and other consumers of EdgeX sensor data to more easily digest the data and aiding transformations on it
  • REST device service
  • Batch and send export – allowing sensor data to be sent to cloud, on-prem or enterprise systems at designated times
  • Support for MQTTS and HTTPS export
  • Redis as the default DB – deprecating MongoDB for license, footprint/performance, and ARM support reasons
  • Adding the Kuiper rules engine – a new rules engine that is smaller, faster, and written in Go, replacing EdgeX's last Java microservice
  • Improved Security
  • Interoperability testing
  • Improved DevOps CI/CD – now using Jenkins Pipelines to produce EdgeX Foundry project artifacts
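The batch-and-send export above boils down to a small buffer that flushes either when enough readings accumulate or when the oldest buffered reading gets too old. The sketch below is illustrative only: the class name, `send` callback, and threshold parameters are assumptions, not the actual EdgeX app-functions SDK API (which provides equivalent batching pipeline functions):

```python
import time


class BatchExporter:
    """Buffer readings and flush them when a count or age threshold is hit.

    Illustrative sketch; names and parameters are assumptions, not the
    EdgeX app-functions SDK.
    """

    def __init__(self, send, max_count=100, max_age_s=60.0, clock=time.monotonic):
        self.send = send              # callable that ships a list of readings
        self.max_count = max_count    # flush when this many readings are buffered
        self.max_age_s = max_age_s    # or when the oldest buffered reading is this old
        self.clock = clock
        self.buffer = []
        self.first_at = None          # timestamp of the oldest buffered reading

    def add(self, reading):
        if not self.buffer:
            self.first_at = self.clock()
        self.buffer.append(reading)
        if (len(self.buffer) >= self.max_count
                or self.clock() - self.first_at >= self.max_age_s):
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(self.buffer)    # ship the whole batch to cloud/on-prem target
            self.buffer = []
            self.first_at = None
```

For example, with `max_count=3` the exporter ships readings in groups of three and holds any remainder until the next flush.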

The Hanoi Release

As for our semi-annual planning meeting, we set up the scope of our next release which is codenamed “Hanoi.”

There are some notable features in this upcoming release, which is scheduled to arrive in the October/November 2020 timeframe.

  • Device service to application service message bus – allowing data to “stream” more directly (faster) through EdgeX without persistence.
  • Improve support for running device services on different hosts
  • Secure service to service communications – enabling EdgeX services to live on different hosts and communicate securely
  • Data filters in the device service – allowing repetitive or meaningless sensor data to be discarded sooner in the collection process
  • Performance guidance – giving adopters better understanding of hardware requirements and data flows which can help them architect edge solutions for their use case
  • Improved interoperability testing
  • Hardware Root of Trust abstraction – allowing EdgeX to get its secret store master key from hardware secure storage
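The device-service data filter above can be pictured as an on-change filter: a reading is discarded when its value matches the last one seen for the same device resource, so repetitive data never leaves the edge. This is a hedged Python sketch; the class and method names are assumptions, not the EdgeX device SDK API:

```python
class OnChangeFilter:
    """Drop a reading when its value equals the last one seen for the
    same (device, resource) pair.

    Sketch of the 'discard repetitive sensor data sooner' idea; not the
    actual EdgeX device SDK interface.
    """

    _MISSING = object()  # sentinel so a first reading of None is still accepted

    def __init__(self):
        self.last = {}

    def accept(self, device, resource, value):
        key = (device, resource)
        if self.last.get(key, self._MISSING) == value:
            return False  # repetitive: drop before it leaves the device service
        self.last[key] = value
        return True       # value changed (or first reading): pass it along
```

A sensor reporting a steady 21.5 °C would then produce one reading instead of one per poll, with a new reading emitted only when the value changes.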

EdgeX APIs and the EdgeX 2.0 Future

More importantly, this release will include work on a new API set for our microservices.  We call this the V2 API set.  As the name implies, it will be the EdgeX APIs that will be featured in version 2 of EdgeX in the near future (perhaps as soon as 2021).  A portion of the V2 APIs will be available for exploration with the Hanoi release – allowing developers to experiment and test the new APIs.

During the Hanoi release, as we work on V2 API, we are making the APIs easier to use, decoupling data elements used in the message communication from the operations, and generally allowing for better security, better testing, use of different transport infrastructure, and easier extension of the APIs going forward.  So, while Hanoi is expected to be a minor version, it provides the footing for a new major release (EdgeX 2.0) next year.

We invite you to have a look at the new EdgeX release – Geneva.  Learn more here: https://www.edgexfoundry.org/release-1-2-geneva/. Here is hoping that the next six months bring health and safety back to all of our lives and that we’ll be able to reconvene the community of EdgeX Foundry experts soon.


EdgeX Foundry IoT Innovation Challenge – Phase 1 Complete

By Blog, EdgeX Foundry

Written by Clinton Bonner, VP of Marketing for Topcoder

We recently told you about an awesome challenge series on Topcoder—focused on using on-demand talent for EdgeX Foundry IOT innovation. Often a new tech philosophy needs an enabling catalyst for it to gain traction. For the Industrial IoT, EdgeX Foundry from the Linux Foundation is that catalyst. EdgeX Foundry is a vendor-neutral open source project hosted by LF Edge building a common open framework for IoT edge computing.

In Phase 1 of this series, EdgeX Foundry tapped the Topcoder community to conceptualize new and unique industry use cases that leverage the EdgeX platform; the winning concepts below came out of that work.

Well, the winners are in! LF Edge, EdgeX Foundry and the teams from Dell Technologies, Intel, HP, IOTech, and Wipro were impressed with the wide variety of innovative use case ideas and technical explanations as to how the EdgeX stack would be leveraged.

Here are the top 5 winning submissions:

1. cunhavictor (Brazil) – Fuel Inventory Control for Oil & Gas Downstream Industry

Synopsis: Product losses during fuel tank trucks' loading and offloading operations, as well as thefts, are common problems that translate to costs estimated at around $133 billion for players in the oil & gas downstream industry. The winning idea is a system for inventory control and loss/theft assessment that joins data from different sensors in a distributed processing architecture. Data integration is usually not handled properly by fuel distribution companies, and the system would solve that. This integration not only has the potential to reduce losses but also helps the fuel distribution company, gas station owners (retail outlets), the end user, and even refineries understand how inventory losses occur and protect their assets.

“Excellent articulation of the business problem, instrumentation options, and how the data will be integrated. You have great domain expertise & clarity of thought.” – Challenge judge

2. wiebket (Netherlands)  – Clinic Occupancy Monitoring System

Synopsis: Traveling to a clinic for a checkup is expensive and time consuming for many people in South Africa. Clinic managers have little oversight into patient flow through the clinic. The clinic occupancy monitoring system uses video cameras and door sensors connected to the EdgeX platform to accurately monitor unique patient visits and real time clinic occupancy without sending patient image data to the cloud. The beneficiaries of such a system are patients, clinic managers and the National Department of Health.

3. vivkv (India) – Machinery and Power Consumption Management

Synopsis: Gather sensor data through EdgeX device services and create an application/algorithm for machinery and power consumption management through the use of advanced analytics and machine learning solutions. This would be done in an integrated manner to maximize ROI on the assets and remain closely bound to the overall business budget.

4. Gungz (Indonesia) – Factory Smart Bin

Synopsis: Create an intelligent smart bin that can send the information about its fill level (volume) and weight to EdgeX platform and then, based on the fill level (volume) of the bin, EdgeX platform can send commands to AGV (Assisted Guided Vehicle) to automatically come and empty the trash bin. Using this approach, a factory can automate its waste tracking and waste disposal without human intervention.

5. kavukcutolga (Turkey) – EdgeX Parking Management

Synopsis: This proposal aims to decrease the time, money, and pollution caused by drivers spending time trying to find a parking spot by using IoT Devices such as Object and Distance sensors.
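The Factory Smart Bin flow described above reduces to a threshold rule: when a bin's reported fill level crosses a limit, the platform dispatches an AGV to empty it. The following hypothetical Python sketch illustrates that rule; the field names and command format are invented for illustration and are not an EdgeX API:

```python
def bin_commands(bins, fill_threshold=0.8):
    """Return 'empty bin' dispatch commands for every bin whose reported
    fill level meets or exceeds the threshold.

    Hypothetical rule logic for the smart-bin idea; field names and the
    command format are assumptions.
    """
    return [
        {"command": "empty-bin", "target": b["id"]}  # command an AGV would consume
        for b in bins
        if b["fill_level"] >= fill_threshold
    ]
```

Given bins reporting fill levels of 0.92 and 0.30, only the first would trigger a dispatch command; a real deployment would also factor in weight and AGV availability.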

Congratulations to the top five winners. This concludes Phase 1 of this challenge series, and now we're moving on to Phase 2. The second phase will be a challenge where members pick three of the top ideas and build a Technical Design Document detailing the technical game plan for how an idea would be brought to life.

We’d like to thank LF Edge, EdgeX Foundry, Dell Technologies, Intel, HP, IOTech and Wipro for pushing innovation and collaboration that accelerates the deployment of IoT solutions. If you’d like to use on-demand talent in a similar manner to accelerate your technology goals, contact Topcoder here.

For additional information about LF Edge or EdgeX Foundry:

Winners of the Annual EdgeX Foundry Community Awards

By Blog, EdgeX Foundry

Written by Aaron Williams, LF Edge Developer Advocate

Traditionally, at our Spring Face to Face meeting, LF Edge's EdgeX Foundry honors members of the community who have helped us over the last year.  Well, there is nothing traditional about 2020.  Our Face to Face, scheduled for the end of April in Limerick, Ireland, became a virtual event.  Thus, like most of the world, we switched from meeting in person to having Zoom meetings at home, while trying to balance getting work done, keeping up with our kids' schooling, and doing everything else in our lives without leaving the house. While the meeting was different, we stuck to tradition and honored our 3rd Annual EdgeX Awards at a ceremony yesterday with EdgeX Foundry TSC Co-chairs Keith Steele and Jim White.

From top left to right: Keith Steele, Maemalynn Meanor, Bryon Nevis, Michael Estrin, Lisa Rashidi-Ranjbar, James Gregg, Aaron Williams, Jim White

Congratulations to our winners Michael Estrin, James Gregg, Lisa Rashidi-Ranjbar and Bryon Nevis! Learn more about the awards and their dedication below.

Innovation Awards: The Innovation Award recognizes individuals who have contributed the most innovative solutions back to the open source EdgeX Foundry project over the past year.

Winners: Michael Estrin and James Gregg

Michael was nominated for his bootstrap code effort, which reduced duplication and maintenance effort by using dependency injection.  He was also one of the main contributors to the V2 APIs. Not only did he tirelessly work on the code, he also held one-on-one meetings with members of the community to get their buy-in and made sure that security was always planned in from the beginning.

“My involvement with EdgeX Foundry was driven by my employer — Dell Technologies.  Dell provided the opportunity to fully immerse myself in the open source project — as a committer, the current System Management Working Group Chair, and a member of the project’s Technical Steering Committee.  I’ve enjoyed working on EdgeX and I appreciate the project’s public recognition of my contributions over the past year.” – Michael Estrin

Michael Estrin, Dell Technologies, Winner of the EdgeX Innovation Award

James was nominated for his leadership of the EdgeX DevOps community; he worked tirelessly to improve our CI/CD pipelines, stabilize our development and testing environments, and greatly improve our release processes and procedures.  He and his team added Snyk integration and Nexus cleanup, introduced the GitHub Org Plugin, and improved essentially all parts of our support tools.

“Thank you to the EdgeX Foundry open source community for the Innovation award.  I’m truly honored to receive it and congratulate the other recipients that won EdgeX Foundry awards in 2020. I would also like to recognize the talented Intel DevOps team that worked to transform the build automation and who all worked many hours to make it such a success.  Thanks to Ernesto Ojeda, Lisa Ranjbar, Emilio Reyes, and Bill Mahoney for their dedication and commitment.  My involvement has been largely in support of the Open Retail Initiative where Intel along with other top technology companies collaborate on open source initiatives that will accelerate iteration, flexibility and innovation at scale. This project exposes the contributors to many subject areas (IoT, microservice development, security) so there’s always an opportunity to participate in an area of interest.  Working within the EdgeX Foundry project is very rewarding and I’d encourage others to participate even if it’s the smallest contribution.” – James Gregg

James Gregg, Intel, Winner of the EdgeX Innovation Award

Contribution Awards: The Contribution Award recognizes individuals who have contributed leadership and helped EdgeX Foundry advance and continue momentum in the past year. This includes individuals who have led work groups, special projects, and/or contributed large amounts of code, bug fixes, marketing material, documentation, or otherwise significantly advanced the efforts of EdgeX with their drive and leadership.

Winners:  Lisa Rashidi-Ranjbar and Bryon Nevis

Lisa was nominated for her work improving EdgeX Foundry's build process.  She transformed it so that today the build process uses modern Jenkins Pipelines.  This, along with other build performance optimizations, greatly reduced the time to market for all of EdgeX's microservices and forms the DevOps backbone that allows continuous delivery and automated release of EdgeX artifacts.  She also worked with the developer training team to improve EdgeX's formal education project.

“EdgeX Foundry is the first open source community I’ve been involved with and it’s been an awesome experience. The community is really active and welcomes contributions of all kinds to the project. My contributions are around transforming the continuous integration infrastructure and automating the release process. My motivation is to make the development of the project go more smoothly. I’ve always felt welcome and appreciated in the EdgeX community. That feeling of appreciation really goes a long way when you are solving software problems.” – Lisa Rashidi-Ranjbar

Lisa Rashidi-Ranjbar, Intel, Winner of the EdgeX Contribution Award

Bryon lent his extensive security experience and knowledge to EdgeX and vastly improved our security infrastructure.  Notably, he developed a V2 API security implementation, which will arrive in later releases, and fixed many security issues.  He also worked with the community to turn security issue management into a formal process with clear guidance.  Simply put, EdgeX Foundry is much more secure than it was before he started working with the community, and because of his hard work and leadership in developing these processes, EdgeX's security will continue to improve.

“I am a security architect and developer and have been involved with the EdgeX Security Working Group since Fall of 2018. Security touches everything.  Raising the IoT security bar is an industry-wide challenge that touches all parts of the hardware and software stack.  Adding security to an in-flight project such as EdgeX—without breaking anything—is especially hard and requires a sustained effort of small changes over a long period of time.  For me, the work transcends EdgeX: it is my opportunity to teach my fellow tradesmen (and tradeswomen) by example how to do security.” – Bryon Nevis

Bryon Nevis, Intel, Winner of the EdgeX Contribution Award

Michael, James, Lisa, and Bryon join our previous winners (Andy Foster, Drasko Draskovic, Micael Johanson, Lenny Goodell, Tony Espy, Trevor Conn, and Cloud Tsai) in earning the heartfelt gratitude of the community.  EdgeX Foundry has had more than 100 contributors from dozens of companies working for our community, and this work has enabled EdgeX to continue to grow. In fact, EdgeX Foundry passed 1 million total downloads in October 2019, and just five months later we have more than 6 million.

To learn more about EdgeX Foundry, visit our website or join our Slack to ask questions and engage with community members. If you are not already a member of our community, it is really easy to join.  Simply visit our wiki page and/or check out our GitHub and help us get to the next 6 million downloads and beyond!


Creating the State of the Edge Report

By Blog, State of the Edge

This blog originally ran on Mission.org as part of their IT Visionaries podcast that shares conversations with Fortune 1000 Tech Leaders. In this post, IT Visionaries talks with Matt Trifiro, co-chair of State of the Edge, LF Edge member and CMO at Vapor IO. To listen to the podcast, click here

Do you know the current state of the edge? If you do, it's most likely because of the work that Matthew Trifiro has done in recent years. Matthew is the CMO at Vapor IO and Co-Chair of State of the Edge, and on this episode of IT Visionaries, he tells us how and why he created the State of the Edge report. Plus, he explains what is going on in the world of edge computing and what might come next.

Key Takeaways

  • The State of the Edge report is an authority on what is really going on in the world of edge computing
  • Moving to edge computing is going to be a necessary part of the re-architecture of the internet
  • The speed with which we need to analyze data is increasing, which means we need to use edge computing to analyze in real-time

Creating State of the Edge

Matt and one of his peers were looking for a way to cut through all the noise in the world of edge computing. There was so much going on with edge and people have always defined “edge” in different ways, so Matt wanted to bring experts together to come up with something everyone could agree on. Working with partners and funders, Matt and his team were able to create an unbiased report that many look to as the definitive authority on the true “state of the edge.”

“Edge is kind of in the eye of the beholder. Each person’s edge is different. But what was really important is that if you think of edge computing as an extension of the internet, the edge we really care about is the edge of the last mile network. And nobody was really talking about that.” 

Challenges of creating the report

Originally, the goal was to put out a State of the Edge report every year, but it took 18 months between the first report's publication and the second. The delay was to ensure that the information provided by Matt and everyone working on the report was properly vetted, and as accurate and useful as possible. And this is all being done by Matt and just two employees, all of whom also have day jobs. This is a true passion project for those involved.

“We took a lot of time to have peers in the industry review and evaluate what we were putting out before we published it. And I think the quality report is super high because of that. But it also took a lot of time.”

What’s in the most recent report?

Matt says that this year's report is twice as good as the first, and one reason is that it answers a lot of the questions people ask about the edge all the time. Those questions include: How real is the edge? What is being deployed? How much, and where, are things being deployed? By answering those questions, companies interested in investing capital in the edge can make much more informed decisions. To answer them, Matt hired analysts and researchers who worked to produce real data and models.

The report also reflects what Matt is excited about: a new form of the internet, rearchitected with the edge to achieve things we cannot today. Matt describes the arc of the internet as a three-act play, and the act we're in now is one in which we're moving from humans talking to machines toward machines talking to machines.

“The transformation in the edge industry is moving from people with frothy ideas slapping edge on some old product to make it seem cool to real practitioners deploying real dollars, putting capital to work to build the platforms on which this next generation of compute is going to be built. I’m fond of saying the internet is broken. …There are lots of applications we want to consume or bring about that simply cannot be done with the internet today. And we have to rearchitect it.”

“The real transformation is we are going to have billions and billions and billions of devices that aren’t being looked at by a human that are going to be transmitting data that needs to be analyzed and acted upon in near real-time. We’re moving from a world where the internet needs to function in ones of seconds to a world that it needs to function in tens of milliseconds, ones of milliseconds, and in some cases, microseconds. And that’s just a very different world. And it’s not a billion devices. It’s tens of billions and maybe trillions of devices. It’s a huge transformation.”

State of the Edge merged with LF Edge last month and is now an integral part of the ecosystem. To learn more about the project or join the mailing list, click here.

Exploration and Practices of Edge Computing

By Blog, EdgeX Foundry, Industry Article

Written by Gavin Lu, LF Edge member, EdgeX Foundry China Project Lead and R&D Director in the VMware Office of the CTO

Chapter 1:

Cloud computing and edge computing

After nearly ten years of rapid growth, cloud computing has become a widely accepted and widely used centralized computing model across industries around the world. In recent years, with the rise of "new infrastructure" such as big data, artificial intelligence, 5G, and the Industrial Internet, demand for edge computing has soared. Telecom carriers, public cloud service providers, and enterprise users across industries all hope to create, in some form, the edge computing solution best suited to their own scenarios.

Cloud edge collaboration

Whether it is Mobile Edge Computing (MEC), Cloud Edge, or factory-side Device Edge, users want an edge computing platform that cooperates with the cloud platform. Cloud edge collaboration is not only about opening up the data channel at run time, but also about overall collaboration across the entire life cycle and the full technology stack. On top of the highly fragmented edge computing field, with its many industry platforms, achieving full-stack integration with multiple cloud computing platforms is undoubtedly even more challenging.

The dilemma of edge computing

At the bottom of the edge computing platform is the infrastructure, or device management, layer. Due to the natural characteristics of the edge computing model, available device resources usually come with considerable constraints. Whether in hardware specifications such as CPU, memory, storage, network and accelerators, or in software such as the OS and application frameworks, they are far from comparable to cloud computing platforms or even ordinary PCs. On the other hand, edge applications running on devices often have very different requirements for CPU and OS, due to historical reasons and vendor heterogeneity. All of this adds pressure in terms of space, time, capital and other costs.

Containerization and virtualization

In order to achieve cloud-edge collaboration and solve the problems of device management and edge application management, the most common practice across industries today is to run containerized applications on devices, managed from cloud platforms. The advantage is that it makes full use of proven technology, platforms, personnel and skills from cloud computing. The disadvantage is that it suits modern edge applications best; containerizing legacy systems with different OSes is considerably harder. The representative of the containerization approach is the EdgeX Foundry edge computing framework hosted under the LF Edge umbrella.



For legacy systems with different OSes, the realistic way is to package the application in a virtual machine and run it on a hypervisor platform on the device side. Virtualization platforms for edge applications may be further integrated with containerized platforms to create a unified operating mechanism across OSes. For example, the ACRN project under the Linux Foundation and the EVE project under LF Edge are virtualization platforms designed specifically for devices.

Edge computing operation and monitoring

Whether it is containerization, virtualization or a combination of the two, the operation of edge computing will ultimately be done from the cloud side. Otherwise, for large-scale production deployments, the accumulation of efficiency, safety, cost and other issues becomes an inevitable nightmare. So from the perspective of operation and management, although these devices are not necessarily located on the cloud side, because they are all managed from the cloud side, they are operated and maintained in a manner similar to cloud computing, forming a “Device Cloud” in that sense.
As an industry leader with vast experience and knowledge, Gavin has been writing a series of articles focused on edge computing. These articles are posted on his personal blog and are reposted here with his permission. To read more content from Gavin, visit his website.

Turning your Raspberry Pi 4 into an edge gateway – Part 2

By Blog, EdgeX Foundry

Written by Galem Kayo, EdgeX Foundry contributor and Product Manager for Ubuntu Core at Canonical, an LF Edge member company

This content originally ran on the Ubuntu blog. For more content like this, please visit their website.

In the first part of this tutorial, we installed and configured EdgeX Foundry on a Raspberry Pi 4, turning it into an edge gateway. In this tutorial, the gateway will be connected to dummy IoT devices. These dummy devices will be simulated in software. They will send messages transmitting random measurements to an MQTT broker hosted in the cloud. These messages will be forwarded to MQTT clients that subscribe to receive them.

Connecting the Southbound

The IoT data flow from devices to the edge gateway is referred to as the Southbound. In EdgeX, communication between IoT devices and gateways is assured by device services. A device service is a microservice that implements an interface to an IoT communication protocol (Modbus, OPC-UA, REST, BLE, Zigbee, MQTT, BACnet, SNMP, etc.).

The Virtual Device Service is convenient for configuring and testing your Raspberry Pi 4 gateway. It allows you to carry out tests without any real IoT device, so microservices running on the gateway can be configured and troubleshot without any IoT device connected. The Virtual Device Service simulates dummy IoT devices that generate and transmit random values of different types: integer, unsigned integer, boolean and float.

The Virtual Device Service can be enabled very easily with the following command:

sudo snap set edgexfoundry device-virtual=on

Upon enablement, the service and the virtual devices it simulates become visible in the EdgeX web UI.
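If you prefer the command line, the same information can be pulled from the EdgeX metadata service's REST API. The port and path below follow the EdgeX v1 API defaults and may differ on your installation:

```shell
# List the devices registered with EdgeX core-metadata (v1 API defaults assumed);
# the virtual devices should appear in the returned JSON once the service is enabled
curl http://localhost:48081/api/v1/device
```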

Creating an MQTT server in the cloud

Edge gateways are the juncture between devices and servers. We will now configure the receiving end in the cloud: a server hosting a custom MQTT broker. The MQTT broker will receive the messages sent from the virtual devices. The messages will then be dispatched to subscribing client devices.

The MQTT broker is installed on an Ubuntu 18.04 (Bionic) server instance running in the AWS cloud. Mosquitto is quite easily installed on the Ubuntu cloud instance in a snap, as follows:

sudo snap install mosquitto

The status of the broker can then be checked:

sudo systemctl status snap.mosquitto.mosquitto.service
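Before wiring up the gateway, it is worth confirming that the broker actually relays messages. A minimal loopback test, assuming the mosquitto command-line clients are available on the server (if the snap does not ship them, `sudo apt install mosquitto-clients` provides them):

```shell
# Subscribe in the background for exactly one message, then publish to the same topic;
# the subscriber prints the payload and exits if the broker relays it correctly
mosquitto_sub -h localhost -t "smoke/test" -C 1 &
sleep 1
mosquitto_pub -h localhost -t "smoke/test" -m "broker is up"
wait
```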

Connecting the Northbound

With the MQTT server hosted in the cloud, a Northbound connection needs to be established between the Raspberry Pi 4 gateway and the server. An EdgeX Export Service is a microservice that moves data collected by device services to a server in the cloud. An Export Service can be created as follows:

sudo snap set edgexfoundry export-client=on
sudo snap set edgexfoundry export-distro=on

Upon activation, data transfer to the MQTT server can be configured. Parameters like the server IP address, port, communication protocol, data format, topic and encryption are to be specified as described in the screenshot below.

Once saved, the Export Service will connect to the MQTT broker and create a topic. The Export Service will publish data collected from virtual devices through this topic.

Consuming the virtual IoT data flow

At this point, you have a Raspberry Pi 4 gateway listening to virtual IoT devices, and forwarding the data collected from devices to an MQTT server hosted on a public cloud.

Now, any MQTT client running on any device can consume the IoT data flow originating from virtual devices. Clients only need to subscribe to the associated topic via our MQTT broker.

To read this data flow, we will install an MQTT client on an Ubuntu desktop and subscribe to the topic created above via the Export Service. Note that this data flow can be accessed via any MQTT client software installed on any desktop device.

sudo snap install mosquitto
mosquitto_sub -h <your_ip_address_here> -t <topic_name_here>

The virtual IoT data flow can be read in the terminal. The stream corresponds to the data created by the Virtual Devices, in the format specified by our Export Service.

Next steps

In the third part of this tutorial, the gateway will be connected to a physical IoT device. We will also test the device management capabilities of EdgeX Foundry.

Extending Kubernetes to the IoT Edge

By Blog, Project EVE

Written by Jason Shepherd, LF Edge member, VP of Ecosystem for Zededa and active leader in Project EVE

This post originally ran on the Zededa Medium blog. Click here for more articles like this one. 

When Kubernetes was first released onto the open source scene by Google in 2014, it marked an important milestone in application development for cloud environments. Docker had already begun to show the world the value of containers as a means to create modular, nimble applications that could be built using a microservices architecture. But containers were difficult to deploy at scale — particularly at the massive size of many cloud-native applications. Kubernetes filled that need by providing a comprehensive container orchestration solution.

Now, as the edge is emerging as the next frontier of computing, many businesses and developers are looking to replicate the benefits of cloud-native development closer to the data’s source. Kubernetes will play an important role in that process, but, for reasons we’ll explain in this post, it isn’t as easy as simply copy/pasting the traditional version of Kubernetes across the board.

Which edge are we talking about?
Of course, there are many different “edges” in edge computing, and it’s been easier to extend cloud-native principles to some more than others. For instance, for a service provider that creates a micro data center comprised of a rack of servers at the base of one of their cell towers, it’s fairly straightforward to use the same version of Kubernetes that runs in the cloud. These deployments are very similar to those found in a traditional data center, and so developing and running applications is also similar, including the way containers are used and orchestrated.

At the other extreme of the edge spectrum are devices like microcontrollers, which use deeply embedded code with tight coupling between hardware and software due to resource constraints in areas like memory and processing. These classes of devices also often leverage a Real-time Operating System (RTOS) to ensure determinism in responses. Given these constraints, it’s not technically feasible for these devices to leverage container-based architectures, regardless of whether there’s a benefit for modularity and portability.

In between these two ends of the edge computing spectrum are a large number of devices that make up what we consider to be the “IoT edge”: small- and medium-sized compute nodes that can run Linux and have sufficient memory (>512MB) to be able to support the overhead of virtualization. Compared to embedded devices that tend to be fixed in function, these compute nodes are used to aggregate business-critical data for enterprises and need to be able to run an evolving set of apps that process and make decisions based on that data. As app development for edge use cases continues to evolve and grow, companies will see the most benefit from making these IoT devices as flexible as possible, meaning they’ll want the capability to update software continuously and reconfigure applications as needed. This idea of being able to evolve software on a daily basis is a central tenet of cloud-native development practices, which is why we often talk about extending cloud-native benefits to the IoT edge.

Challenges of extending Kubernetes to the IoT edge
While Docker containers have been widely used in IoT edge deployments for several years, extending Kubernetes-based cloud-native development to the IoT edge isn’t a straightforward process. One reason why is that the footprint of the traditional Kubernetes that runs on servers in the data center is too large for constrained devices. Another challenge is the fact that many industrial operators are running legacy applications that are business-critical but not compatible with containerization. Companies in this situation need a hybrid solution that can still support their legacy apps on virtual machines (VMs), even as they introduce new containerized apps in parallel. There are data center solutions that support both VMs and containers today; however, footprint is still an issue for the IoT edge. Further, IoT edge compute nodes are highly distributed, deployed at scale outside of the confines of traditional data centers and even secure micro data centers, hence they require specific security considerations.

In addition to the technical challenges, there’s also a cultural shift that comes with the move toward a more cloud-native development mindset. Many industrial operators are used to control systems that are isolated from the internet and therefore only updated if absolutely necessary, because an update could bring down an entire process. Others are still accustomed to a waterfall model of development, where apps are updated only periodically, and the entire stack is taken down to do so. In either case, switching to a system of modularized applications and continuous delivery is a big transition, if not premature. It’s not always an easy change to make, and it’s common for organizations to find themselves stuck in a transitory state — especially for those that leverage a combination of partner offerings instead of controlling all aspects of app development and technology management in-house.

Net-net, there’s a lot of buzz today around solutions that promise to make cloud-native development and orchestration at the edge as easy as flipping on a switch, but the reality is that most enterprises aren’t ready to “rip and replace,” and therefore need a solution that provides a transition path.

Enter: Project EVE and Kubernetes
Simplifying the transition to a cloud-native IoT edge is the philosophy that underpins Project EVE, an open source project originally donated by ZEDEDA to LF Edge. EVE serves as an edge computing engine that sits directly on top of resource-constrained hardware and makes it possible to run both VMs and containers on the same device — thereby supporting both legacy and cloud-native apps. Additionally, EVE creates a foundation that abstracts hardware complexities, providing a zero-trust security environment and enabling zero-touch provisioning. The goal of EVE is to serve as a comprehensive and flexible foundation for IoT edge orchestration, especially when compared to other approaches which are either too resource intensive, only support containers, or are lacking security measures necessary for distributed deployments.

With EVE as a foundation, the next step is to bridge the gap between Kubernetes from the data center and the needs of the IoT edge. Companies such as Rancher have been doing great work with K3S, taking a “top-down” approach to shrink Kubernetes and extend the benefits to more constrained compute clusters at the edge. At ZEDEDA, we’ve been taking a “bottom-up” approach, working in the open source community to meet operations customers where they’re at today with the goal of then bridging Project EVE to Kubernetes. Through open source collaboration these two approaches will meet in the middle, resulting in a version of Kubernetes that’s lightweight enough to power compute clusters at the IoT edge while still taking the unique challenges around security and scale into account. Kubernetes will then successfully span from the cloud to the IoT edge.
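For a feel of how small that footprint can get, K3s installs as a single binary; the one-line installer below is Rancher's documented default, shown only as a sketch of bootstrapping a lightweight cluster on a Linux edge node:

```shell
# Install K3s (server, agent and bundled kubectl in one binary)
curl -sfL https://get.k3s.io | sh -

# The bundled kubectl can then confirm the node is up
sudo k3s kubectl get nodes
```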

An exciting road ahead, but it will be a journey
Kubernetes will play an important role in the future of edge computing, including for more constrained devices at the IoT edge. As companies look to continually update and reconfigure the applications running on their edge devices, they’ll want the flexibility and portability benefits of containerized app development, scalability, clustering for availability, and so forth. But for now, many companies also need to continue supporting their legacy applications that cannot be run in containers, which is why a hybrid approach between “traditional” and cloud-native development is necessary.

We at ZEDEDA recognize this need for transition, which is why we developed EVE to make sure that enterprises can support both technology approaches. This flexibility affords companies still working through the technical and cultural shift to cloud-native development the room to manage that transition.

As cloud-native applications begin to make up a greater percentage of workloads at the edge, though, we’ll need the tools to manage and orchestrate them effectively. This is why we feel it’s important to put a strong effort now behind developing a version of Kubernetes that works for the IoT edge. With EVE, we’ve created a secure, flexible foundation from which to bridge the gap, and we’re excited to continue working with the open source community to extend Kubernetes to this space. For more information on EVE or to get involved, visit the project page on the LF Edge site.

Zededa is an LF Edge member and active leader in Project EVE. For more details about LF Edge members, visit here. Join the LF Edge Slack Channel and interact with the project at #eve.

Building an End-to-End NFV & Applications Stack Powered by Kubernetes and ONAP

By Akraino Edge Stack, Blog

Written by Sagar Nangare, an Akraino community member and technology blogger who writes about data center technologies that are driving digital transformation. He is also an Assistant Manager of Marketing at Calsoft.

Introduction

In this article, we’ll look at how open source projects like Kubernetes, ONAP and many others can be stacked to build a Network Function Virtualization (NFV) framework that is identified as an Akraino Integrated Cloud Network (ICN) blueprint.  We’ll outline the step-by-step inclusion of open source projects for various purposes (security, monitoring, orchestration, etc.). Readers will leave with an understanding of the role each open source project will play in the stack.

Glossary

Here are some key terms that I use throughout this article:

NFV: Network Function Virtualization

ONAP: Open Network Automation Framework

CNF: Cloud-native Network Functions

VNF: Virtual Network Functions

Akraino ICN Blueprint: Akraino Integrated Cloud Network. Akraino Edge Stack is a Linux Foundation project for edge computing infrastructure, and ICN is one of its blueprints, dedicated to a cloud-native NFV stack.

CNI: Container Network Interface

What is NFV (Network Function Virtualization)?

NFV is a concept of transforming hardware-based network functions into software-based applications. It simply decouples network functions from proprietary hardware without impacting performance.

Why do we need NFV?

  • Management and orchestration of thousands of network resources from a central location
  • Enable network programmability
  • Allow dynamic scaling of network resources
  • High level automation in the network
  • Monitoring of resources and network connectivity
  • Ability to integrate new services
  • Optimization of network performance

The main reasons for enterprises and service providers to adopt NFV include: managing a vast scale of network resources containing different types of network equipment from different vendors, reducing the CAPEX and OPEX of network infrastructure, delivering services in an agile manner, and enabling the scalability and elasticity of network infrastructure to support rapid technology innovation.

Open Source for NFV

Kubernetes and ONAP are the key open source projects for this NFV stack. Using Kubernetes at the edge is something that leading networking solution companies are evaluating to provide dynamic capabilities managed from a central cloud. If you aren’t familiar with ONAP, it is a platform for real-time, policy-driven orchestration and automation of physical and virtual network functions.

We’ve already seen how Kubernetes is used in NFV as a backbone for cloud-native evolution. This push for Kubernetes into the NFV stack opens up possibilities for other open source projects to make up the NFV backbone of enterprise and telecom networks. Further, Kubernetes will enable more automation and dynamic orchestration of application and infrastructure resources across the NFV-powered network.

Additionally, since various edges or hubs are involved in the typical 5G and enterprise network backed by NFV architecture, open-source projects like ONAP and Akraino blueprints are coming up with specific modules for critical orchestration and monitoring tasks.

Now that we’ve acknowledged the major role of Kubernetes and ONAP, let’s focus on the exact needs for a complete end-to-end NFV stack that will innovate a software-driven network. Let’s also look at how we can address those needs by combining different open source projects to power a 5G network and enterprise WAN.

Why Do We Need an End-to-End NFV Stack?

Currently, the IT infrastructure of enterprises and communication service providers is transitioning from centrally managed to geographically distributed. 5G presents a prominent use case, with edge nodes or hubs acting as first-level data centers that communicate with IoT devices and host application services. And enterprises are deploying SD-WAN with edge-like capabilities.

In such architectures, workload types like microservices, virtual network functions (VNFs) and cloud-native network functions (CNFs) are distributed. All these workload types are orchestrated by different solutions, such as Kubernetes and commercial offerings from VMware, Red Hat and Rancher.

In Figure 1, we can see that it is difficult to manage all the workloads with different orchestrators, for several reasons. Security policy enforcement and monitoring of distributed nodes differ from those of centralized ones. You need to manage all the clouds and the applications deployed on them, and get insights into the performance of resources deployed at different edge sites. Additionally, this transition is costly for enterprises and telecom network providers.

Last year at ONS Europe (Open Networking Summit), Srinivasa Adeppalli (Intel) and Ravi Chunduru (Verizon) discussed using Kubernetes with ONAP4K8S, a sub-project in ONAP. This stack would manage all the applications and network functions spread across multiple Kubernetes clusters hosted on different edge nodes or data center infrastructures. They showed how a single pane of glass can orchestrate different edge sites, multiple clouds and network traffic.

Let’s See How it Works

In the presentation, Srinivas and Ravi showed an SD-WAN enterprise edge with microservices deployed in clusters and VNFs and CNFs present in an NFV stack. A request for data access generates the traffic that steers from pods to the internet through different VNFs, CNFs and external routers. Some of the microservices can be user-facing, with low latency needs. In this case, you need a multi-traffic orchestrator that controls the traffic flow and prioritizes the user-facing applications to deliver optimal performance.

Figure 3 shows the stack with the traffic orchestrator. In this scenario, the Istio service mesh framework couples the microservices deployed at different edge sites. The IPAM Manager ensures that each site is assigned a unique IP subnet, avoiding overlapping addresses across microservices and functions.

As we have seen, microservices, along with VNFs and CNFs, span multiple edge sites or clouds. So, we need a multi-zone manager (Figure 4).

Edge sites may face targeted attacks by external threats that can compromise network communication channels, microservices-based apps, software, and SSD/HDD storage. In Figure 5, we add a new orchestrator, the Multi-security orchestrator, which provides CA service, key distribution and attestation. You can integrate Istio, the Vault project and the Keycloak project with the Multi-security orchestrator to protect sensitive data and manage identity and access. In fact, you can run Istio from the central multi-cluster security orchestrator.

Further, a Multi-Cluster Security Orchestrator lets you monitor the health status and performance of all the applications, VNFs and CNFs through multi-cluster app visibility and monitoring modules. Figure 6 shows how you can integrate Prometheus, Jaeger and Fluentd on each edge site to monitor data and collect logs for analysis at a central location.

Now we have a complete NFV stack, called an Akraino ICN (Integrated Cloud Network) blueprint.

The major frameworks used in the end-to-end NFV stack include ONAP4K8s, OVN4K8sNFV, Kubernetes and Akraino SD-EWAN.

You can use ONAP4K8s as a Multi-Cloud/Cluster orchestrator to perform the following tasks:

  • Single-click applications deployment
  • Auto-configuration of service meshes
  • Auto-configuration of SD-WAN to facilitate connectivity among micro-services in multiple clusters
  • Parent CA and Child CA cert/private key enrollment for each edge/zone

You can use Kubernetes to orchestrate microservices-based applications and NFV-specific components like Multus, OVN4K8sNFV and SR-IOV NIC plugins, as well as application components including Istio and Prometheus. OVN4K8sNFV works as a CNI (Container Network Interface) plugin that supports multiple types of workloads, such as apps, CNFs, VNFs, etc.
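As a rough illustration of how such a secondary network is declared, Multus uses the NetworkAttachmentDefinition custom resource; the resource kind and annotation key below are standard Multus conventions, while the CNI type string and the names are placeholders for this stack:

```shell
# Hypothetical sketch: declare a secondary OVN-backed network for CNF pods
cat <<'EOF' | kubectl apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ovn-secondary
spec:
  config: '{ "cniVersion": "0.3.1", "type": "ovn4nfvk8s-cni", "name": "ovn-secondary" }'
EOF

# A pod then requests the extra interface with the annotation:
#   k8s.v1.cni.cncf.io/networks: ovn-secondary
```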

Finally, Akraino SD-EWAN provides overlay connectivity among the Kubernetes clusters.

Conclusion

In this article, I’ve highlighted several open source projects that can be useful for various NFV tasks. Most of these projects are mature and widely integrated across different domains at different scales. Each open source project addresses different features of the NFV stack, so you need to match the key capabilities of each project to the NFV features you require. You can get more information about implementation details and ongoing development from Akraino ICN and learn more about the NFV stack there.

For more information about Akraino Blueprints, click here: https://wiki.akraino.org/. Or, join the conversation on the LF Edge Slack Channel. #akraino-blueprints #akraino-help #akraino-tsc