
eKuiper Issues 1.10 Release Empowering Edge Computing with Advanced Analytics and Enhanced Features

By Blog, eKuiper, Project Release

eKuiper—a lightweight IoT data analytics and streaming software—is now available in its 1.10 release. eKuiper, an LF Edge project, migrates real-time cloud streaming analytics frameworks such as Apache Spark, Apache Storm and Apache Flink to the edge. eKuiper uses these cloud streaming frameworks as references, incorporates the special requirements of edge analytics, and introduces a rule engine based on Source, SQL (business logic) and Sink, which is used to develop streaming applications at the edge.

We are thrilled to announce the highly anticipated release of eKuiper version 1.10.0, marking a significant milestone as our first double-digit release. This remarkable release boasts an extensive array of features and a growing community of contributors, for whom we express our deepest gratitude. 

In this release, our top priority was enhancing the product’s core dependencies. Notably, we upgraded the Go language version to 1.20, ensuring optimal performance and compatibility. Additionally, eKuiper now supports the latest major version of EdgeX, Minnesota (v3), further expanding its interoperability capabilities. While driving advancements in expressiveness, connectivity, and usability, we focused on maintaining a careful balance between feature richness and suitability for edge deployments. Our commitment to delivering a streamlined experience for edge computing remains unwavering.

The 1.10 release introduces several significant new features, including:

  • Rule scheduling:
    • Empower the edge to autonomously manage rule execution, allowing for enhanced flexibility and efficiency in edge computing environments (see the sketch after this list).
  • EdgeX Minnesota support:
    • Seamlessly integrate with the latest major version of EdgeX.
  • Enhanced file sink:
    • Expand the capabilities of the file sink by supporting a wider range of file types, introducing rolling and compression features for efficient data storage and management.
  • Kafka sink:
    • Introduce a new sink to write rule results directly to Kafka, enabling seamless integration with Kafka-based data pipelines and stream processing workflows.
  • SQL source/sink enhancement:
    • Enhance the SQL source/sink functionality by introducing support for the new ClickHouse driver. Additionally, enable the configuration of maximum connections to optimize performance in scenarios with high connection volumes.
  • Batch strategy for sink:
    • Introduce common properties for sinks to configure batch strategies, enabling efficient data handling and reducing I/O operations as needed.
  • Complex data extraction for sink:
    • Simplify data transformation and processing by introducing common properties for sinks to extract specific data fields and optimize column selection, resulting in improved data processing performance.
  • Array type payload support:
    • Enhance the source to support array type payloads, allowing for seamless handling and processing of complex data structures.
  • Unnest function:
    • Introduce the unnest function, enabling the conversion of array types into multiple lines of data, facilitating more granular and detailed data processing.
  • Array and object functions:
    • Expand the array and object functions library with over 10 additional functions, providing extensive support for complex data processing scenarios.
  • Dot notation for nested field access:
    • Introduce dot notation to access nested fields within data structures, simplifying data retrieval and manipulation within complex nested hierarchies.
  • External state access in Redis:
    • Enable the ability to read external state information from Redis, facilitating efficient data retrieval and integration with external applications.
  • SQL syntax enhancements:
    • Enhance the SQL syntax by introducing support for expressions in array indexing, enabling more flexible and dynamic array operations within SQL queries.
  • Rule-related functions:
    • Introduce additional functions to retrieve the current rule ID and enable pipeline delay, providing greater control and flexibility in rule execution and pipeline management.
  • Graph API for lookup table:
    • Enable efficient lookup tables and lookup joins in the Graph API.
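To make several of these features concrete, here is a minimal, hypothetical sketch of creating a stream and a scheduled rule through eKuiper’s REST API (default port 9081). The MQTT topic, field names, scheduling options ("cron", "duration") and sink batching/extraction properties ("batchSize", "lingerInterval", "fields") are assumptions for illustration, not authoritative; check the 1.10 documentation for the exact names and semantics.

```python
# Minimal sketch: create a stream and a scheduled rule via eKuiper's REST API.
# Requires the "requests" package and a running eKuiper instance.
import requests

EKUIPER = "http://127.0.0.1:9081"  # default REST port; adjust for your deployment

# 1. Register a stream over a hypothetical MQTT topic that carries an array payload.
stream_sql = 'CREATE STREAM sensors() WITH (DATASOURCE="devices/+/data", FORMAT="JSON", TYPE="mqtt")'
requests.post(f"{EKUIPER}/streams", json={"sql": stream_sql}).raise_for_status()

# 2. A rule that uses dot notation and unnest(), runs on a schedule, and batches its sink output.
rule = {
    "id": "scheduled_demo",
    # "payload.readings" and "payload.device.id" are invented field names
    "sql": "SELECT unnest(payload.readings) AS reading, payload.device.id AS device FROM sensors",
    "options": {
        "cron": "0 * * * *",   # assumed scheduling option: start every hour...
        "duration": "30m",     # ...and run for 30 minutes
    },
    "actions": [
        {
            "mqtt": {
                "server": "tcp://127.0.0.1:1883",
                "topic": "analytics/out",
                "batchSize": 100,                 # assumed batch-strategy properties
                "lingerInterval": 5000,
                "fields": ["device", "reading"],  # assumed column-extraction property
            }
        }
    ],
}
requests.post(f"{EKUIPER}/rules", json=rule).raise_for_status()
print(requests.get(f"{EKUIPER}/rules/scheduled_demo").json())
```

The same rule definition can also be managed from the eKuiper CLI or a management console; the REST form is shown here only because it is the easiest to script.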

With this release, we reaffirm our dedication to empowering organizations with a cutting-edge, reliable, and user-friendly solution for harnessing the full potential of edge analytics and stream processing. Learn more about these and other features of eKuiper’s 1.10 release in the release notes.

What’s Next

In our upcoming version, we are thrilled to prioritize further advancements in the streaming and SQL runtime of eKuiper. Our focus will revolve around enriching the window functionality, particularly empowering the event window to cater to a wider range of scenarios. To ensure seamless integration and compatibility, we are committed to adding and refining the SQL syntax to align with standard SQL specifications. This will provide users with a familiar and intuitive experience when working with eKuiper, while enabling greater flexibility and ease of use.

Stay tuned for more updates and join us on this exciting journey as we continue to shape the future of edge computing with eKuiper.

Key New Usability and Security Features in Major EdgeX V3 Release Help It Stand Out as the Leading Open Source Edge Data Platform

By Blog, EdgeX Foundry, Project Release

I am delighted to announce the release of the latest and greatest version of EdgeX Foundry, the open source edge data platform. Version 3.0 is our most significant upgrade to date, and one that I firmly believe will bring much value to edge solution developers in a variety of industrial sectors.

As the EdgeX TSC chair and product manager of IOTech’s commercial EdgeX-based solutions, I’d like to explain what EdgeX V3 brings and describe how the platform continues to be the key framework upon which companies rely when developing their commercial edge software solutions.

A major version jump is significant for EdgeX, not only because it brings a new set of key features for its user base, but also because it shows the mature, yet still evolving, outlook of the base platform and of the associated commercial edge products available in the ecosystem.

EdgeX Reminder

EdgeX Foundry is an open source edge data platform, managed and hosted under the Linux Foundation Edge umbrella. More specifically, EdgeX is a highly flexible and scalable architecture that allows easy interoperability between edge devices and applications. It is made possible thanks to an ecosystem of community collaborators (including Intel, IOTech, Canonical, Eaton, Beechwoods and many others) representing a variety of industries.

Minnesota

To help in its identification, each release of EdgeX is provided a codename based on a geographical location. The previous version was named Levski (located in Bulgaria). The next stopping point on our virtual world tour is Minnesota! Aptly named, in part for being a key industrial center of the United States, EdgeX Minnesota helps to further serve the needs of OT/IT integration at the edge.

The major jump in version number lets us make more significant updates to the framework and allows us the freedom to make changes that are otherwise prohibited by our minor version compatibility rules. Of course, we try to be backwards compatible where possible, but after two years of V2 minor releases, there are some key updates and usability benefits we can now bring in…

Easier, Common Configuration

The main theme of Minnesota is usability. In particular, the way the user configures EdgeX has been much simplified and improved. Version 3.0 of EdgeX introduces a common configuration pattern for the EdgeX microservices. Since many config options (logging, telemetry, security, database and message bus settings) apply across multiple microservices, we’ve added the ability to configure these in a single common location. Some microservices are likely to have common config requirements, e.g., the southbound device services or northbound application services, so we’ve provided a layered design where users can configure settings at their chosen level.

This will save developers and deployers of EdgeX systems a lot of time and reduce maintenance effort. It will be much easier to make config changes, rerun the system and be confident that the settings have all taken effect appropriately. Take a look at the latest EdgeX docs for the details.
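As a purely illustrative sketch of what “layered” means here, a service-specific layer only needs to carry the values that differ from the shared common layer. The keys below are made up for the example and are not the actual EdgeX configuration schema.

```python
# Illustrative only: a deep-merge where service-level settings override the shared
# common layer. The keys are invented for the example, not the EdgeX schema.
from copy import deepcopy

def merge_layers(common: dict, service: dict) -> dict:
    """Return the effective configuration: service values layered over common ones."""
    merged = deepcopy(common)
    for key, value in service.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_layers(merged[key], value)
        else:
            merged[key] = value
    return merged

common_layer = {
    "Writable": {"LogLevel": "INFO"},
    "MessageBus": {"Type": "redis", "Host": "localhost", "Port": 6379},
}
device_service_layer = {
    "Writable": {"LogLevel": "DEBUG"},  # the only value this service needs to change
}

print(merge_layers(common_layer, device_service_layer))
# -> LogLevel is DEBUG for this service; the MessageBus settings are inherited unchanged.
```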

Reduced Config Formats

The jump in major version has also allowed us to reduce the number of different config file formats in EdgeX. Removing TOML and using only YAML and JSON going forward helps reduce what a new user has to learn. YAML and JSON can be derived from each other, so we let users pick their favorite and work with that. Again, usage of the platform becomes easier and more maintainable.
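Since YAML and JSON are interchangeable representations of the same data, converting a configuration file from one to the other is mechanical. A minimal sketch using PyYAML, with placeholder file names:

```python
# Convert a YAML configuration file to JSON and back; file names are placeholders.
# Requires the PyYAML package.
import json
import yaml

with open("configuration.yaml") as f:
    config = yaml.safe_load(f)

with open("configuration.json", "w") as f:
    json.dump(config, f, indent=2)

with open("configuration.json") as f:
    print(yaml.safe_dump(json.load(f), sort_keys=False))
```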

Service Authentication

Another big feature for 3.0 is the addition of microservice authentication within EdgeX. When run in secure mode, the microservices now require a validated authentication token before they are allowed to communicate. Thanks to the efforts of the EdgeX Security Working Group, this is all built into the platform with tokens issued by the Secret Store service. The security advancements here also allow version 3 to switch to a much lighter-weight API Gateway service than was used previously, which helps a lot with our ongoing effort to minimize the footprint of the platform.
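For example, once a token has been issued by the Secret Store, a client simply presents it as a bearer token on each REST call. The sketch below assumes the token is already available in an environment variable and that core-metadata is listening on its usual default port; both are assumptions to adjust for your deployment.

```python
# Minimal sketch of calling an EdgeX microservice with a bearer token in secure mode.
# Assumes the JWT has already been issued by the Secret Store and exported to EDGEX_JWT,
# and that core-metadata is on its usual default port (59881); adjust for your setup.
import os
import requests

token = os.environ["EDGEX_JWT"]
resp = requests.get(
    "http://localhost:59881/api/v3/ping",
    headers={"Authorization": f"Bearer {token}"},  # required when security is enabled
    timeout=5,
)
resp.raise_for_status()
print(resp.json())
```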

Other V3 Updates

Of course, we also have V3 versions of the EdgeX APIs. We do have some minor changes there including updates to help with the automatic discovery of devices, but the APIs are mature so there aren’t the major updates we had with the previous major jump. You can check out the full set of changes in the EdgeX V3 migration guide.  Full release notes are here.

Using EdgeX V3 and Joining in

We’re expecting that the next version after this (3.1, or Napa), planned for Q4 this year, will become the new Long Term Support (LTS) version of EdgeX. Users wishing to jump from the previous LTS (2.1, or Jakarta) should start to become familiar with the new features. There are some breaking changes, so we encourage you to get to know V3 sooner rather than later. The features put into EdgeX Minnesota will serve as the basis for other features over the next few years. For example, using URLs to point to configuration and profiles that can come from anywhere and be shared is only possible because of the common configuration feature added now.

So please do go ahead and download EdgeX V3. A good place to get going is the Getting Started Guide and the lively EdgeX discussions board is a great place to chat and leave feedback.

LF Edge Member Spotlight: Edgenesis

By Blog, LF Edge, Member Spotlight

Author: Yongli Chen, founder & CEO at Edgenesis

About Edgenesis

Edgenesis is an open-source IoT interoperability and edge computing solution provider. 

Edgenesis is a technology company specializing in industrial edge solutions. We aim to drive standardization in device-driven IoT applications and offer flexible, efficient solutions to optimize production processes and reduce costs. Our team includes top talent from companies like Microsoft, McKinsey, Google, and Amazon, and we pride ourselves on our professionalism and efficiency. 

Using a Kubernetes-native development framework, we help clients deploy and manage devices and applications for various use cases. Our solutions provide critical data on devices and production processes, empowering clients to make data-driven decisions and stay competitive in the manufacturing industry. At Edgenesis, we are dedicated to innovation, constantly striving to improve our products and services to offer cutting-edge solutions. Our goal is to create a smarter, more connected world by leveraging our expertise in industrial edge solutions.

Why is your organization adopting an open source approach?

The mobile Internet was mostly about mobile phones connecting to the Internet to consume content. We are now transitioning to a hyper-connected era where the digital world intersects with every aspect of the physical world around us; therefore, digital solutions should mimic physical interactions. We believe in an ecosystem and in the ability of solutions to interact across different vendors and providers. Now more than ever, collaboration across the community matters. That is why we embrace an open source model for all of our offerings.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

We want to work with the LF Edge umbrella to facilitate the development of edge computing in a structured, collaborative way.

What do you see as the top benefits of being part of the LF Edge community?

One of the many benefits we see in being part of the LF Edge project community is the opportunity to collaborate with like-minded professionals across the globe. 

What sort of contributions has your team made (or plans to make) to the community, ecosystem through LF Edge participation?

We contributed Shifu, an open source, Kubernetes-native industrial edge framework that enables IoT interoperability. Shifu virtualizes IoT devices into Kubernetes pods and gives you an all-in-one Kubernetes cluster that orchestrates IoT devices and applications at the same time.

What sets LF Edge apart from other industry alliances?

It’s the only organization that focuses solely on all aspects of edge computing across verticals. 

 

Fledge’s Journey: Celebrating its Graduation to Impact Stage in LF Edge

By Blog, Fledge

Fledge was first introduced in 2017 as an open source project under the Linux Foundation. The Fledge project, an open source framework for Industrial Internet of Things (IIoT) applications, was initially created by Dianomic with the goal of developing a lightweight and scalable platform for collecting and analyzing data from industrial devices. Dianomic contributed the Fledge codebase to the Linux Foundation to establish an open and collaborative community around the project and to address the challenges faced by IIoT applications, such as the need for real-time data processing and analysis, secure and reliable data transport, and a scalable and extensible platform, among others.

Three Maturity Stages for LF Edge Projects

Since its initial contribution, the Fledge community has continued to support, develop and grow Fledge as an open source project under LF Edge. The LF Edge Technical Advisory Council (TAC) defines three maturity stages for projects:

  • Stage 1: At Large Projects: projects the TAC believes are, or have the potential to be, important to the edge ecosystem as a whole. They are typically early-stage efforts looking to add capabilities to the LF Edge open edge platform as a whole in exchange for community support. 
  • Stage 2: Growth Stage: projects that are interested in reaching the Impact Stage and have identified a growth plan for doing so. The Growth Stage is meant to harbor projects still working on their product or service and working towards supporting adopters at scale. Growth Stage projects are expected to have well-formed and documented processes, procedures and practices for planning, designing, implementing and documenting their intended edge product or service, as well as the resources (leadership, people, tools, infrastructure) necessary to deliver it.
  • Stage 3: Impact Stage: projects on a self-sustaining cycle of development, maintenance, and long-term support. Impact Stage projects are widely used in production environments with a significant number of public use cases, and they have broad, well-established communities with many diverse contributors. Expectations include publicly known end-user production deployments, active participation in TAC proceedings (including a binding vote on TAC matters such as the election of a TAC Chair), publicly documented release cycles, and the ability to attract committers on the basis of the project’s production usefulness.

Fledge is the first and only project to graduate all the way from the At Large Stage to the Impact Stage under LF Edge.

Today, Fledge is widely used in IIoT applications across a variety of industries and continues to evolve and grow with contributions from a thriving community of developers and adopters. The Fledge project and community focus on industrial data pipelines to and from industrial assets and systems, edge applications and edge machine learning. Our community, users and contributors are suppliers and integrators to industrial markets as well as industrial companies, including AVEVA, Schneider Electric, OSIsoft, RTE, JEA, Google, Neuman-Aluminum, FLIR and Dianomic. The Fledge project has expanded into LF Energy’s Project FledgePower, with 67 energy companies and suppliers, and into OSDU (an Open Group project), with 167 oil and gas companies and suppliers. Fledge is deployed in both process and discrete manufacturing, helping produce military drone aircraft, engines, aluminum car parts, processed foods, chemical polymers, energy, oil and gas, paper products, premium wines, professional auto racing digital twins and more.

While Fledge is open source and freely available for use, there are companies that offer commercial support and services for Fledge. For example, Dianomic, one of the original contributors to the project, provides commercial support, training, and consulting services for Fledge. 

FLIR Systems, Inc., a company that specializes in the development and production of thermal imaging cameras and systems, has contributed to the development of Fledge by integrating its thermal imaging technology with the Fledge platform. This integration enables industrial customers to analyze thermal data in real time, allowing for early detection of potential issues and proactive maintenance. It has also given rise to several use cases for the platform, including monitoring the temperature of equipment in manufacturing plants, detecting anomalies in electrical systems, and monitoring the thermal performance of buildings and infrastructure. Overall, FLIR’s involvement in the development of Fledge highlights the potential of combining edge computing and industrial IoT technologies with advanced imaging and sensing capabilities to enable early detection of potential issues, increased operational efficiency, and improved safety.

AVEVA, a leading provider of industrial automation and software solutions, announced a strategic partnership with Dianomic to enable the integration of FogLAMP with AVEVA’s industrial software solutions. As part of this partnership, AVEVA became a reseller of FogLAMP software, providing its customers with access to the open source platform for use in their industrial applications. The integration of FogLAMP with AVEVA’s industrial software solutions enables customers to collect, process, and analyze data from industrial assets in real time, providing insights into performance and enabling predictive maintenance. The partnership also gives customers access to a broad range of industrial automation and software solutions, combined with the flexibility and customization of the open source Fledge platform. Overall, the partnership between AVEVA and Dianomic aims to provide customers with a comprehensive solution for their industrial automation and IIoT needs, leveraging the strengths of both companies in the industrial software market.

Fledge is Not a General Purpose Edge Platform  

Fledge was developed by and for industrial suppliers and companies to address the specific needs of industrial markets. Developers adopting Fledge get all the time-to-market, licensing and community advantages of Linux Foundation projects. They get a community with deep industrial, data processing and machine learning understanding. And, most importantly, they get a final product or service that meets the exacting requirements of industrial users.

AccuKnox joins mimik Technologies, IBM as Open Horizon project partner

By Blog, Open Horizon

The Open Horizon project, contributed by IBM to the Linux Foundation, developed a solution to automate complex edge computing workload and analytics placement decisions. Open Horizon also provides end-to-end security for the deployment process using security best practices. As a result of its rigorous adherence to recommended procedures, the Open Horizon project recently earned the OpenSSF Best Practices badge.

While Open Horizon provides secure container deployment, it cannot guarantee that a container is free of flawed code or other vulnerabilities that could put the system at risk, nor that a container is inherently safe from someone else’s malicious workloads running on the same host. That’s where a dynamic runtime security solution like KubeArmor comes in.

KubeArmor is an open source project at the Cloud Native Computing Foundation (CNCF) that secures containerized workloads. Until recently, however, it only did so within Kubernetes clusters. AccuKnox, working with the KubeArmor and Open Horizon communities, extended KubeArmor to secure deployed workloads both on Kubernetes clusters and on bare Linux hosts running a container engine such as Docker or Podman.

KubeArmor provides deep visibility into the behavior of the deployed workload, including network, process, and file operations. This information is vital when making policy decisions related to workload security. In the context of Open Horizon, it was interesting to observe the runtime behavior of the anax agent and the containerized edge workloads that it deployed.
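As an illustration of the kind of guardrail this visibility feeds into, the sketch below builds a hypothetical KubeArmor policy that blocks shell execution inside a deployed edge workload. The workload labels and paths are invented, and the field layout should be checked against the KubeArmor policy reference rather than taken from here.

```python
# Hypothetical KubeArmor policy that blocks shell execution in an edge workload,
# built as a plain dict and written out as a Kubernetes manifest.
# Requires the PyYAML package; labels and paths are invented for the example.
import yaml

policy = {
    "apiVersion": "security.kubearmor.com/v1",
    "kind": "KubeArmorPolicy",
    "metadata": {"name": "block-shells", "namespace": "default"},
    "spec": {
        "selector": {"matchLabels": {"app": "edge-workload"}},  # invented workload label
        "process": {"matchPaths": [{"path": "/bin/sh"}, {"path": "/bin/bash"}]},
        "action": "Block",
    },
}

with open("block-shells.yaml", "w") as f:
    yaml.safe_dump(policy, f, sort_keys=False)
# Apply with: kubectl apply -f block-shells.yaml
```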

After thorough evaluation and approval by IBM developers, AccuKnox contributed the integration code to the Open Horizon project. The contribution was significant enough to qualify AccuKnox for membership in the Open Horizon project as a partner and voting member. The Technical Steering Committee then voted to invite AccuKnox to join based on the value of their work and the strength of their contribution.

To learn more about the Open Horizon project and how anax automates workload placement, consider attending the project’s Agent Working Group meetings.  The KubeArmor integration code is available in the GitHub repository.

Leaders in LF Edge: Interview with Michael Maxey

By Blog, LF Edge

Gartner predicts that by 2025, more than 50% of enterprise-managed data will be created and processed outside the data center or cloud, and the edge computing market is expected to grow by more than $500 billion by 2030. As edge computing becomes a significant revenue opportunity for the technology and telecom industries, it is more important than ever to have effective leaders advancing the future of the edge computing industry.

Today we sat down with Michael Maxey, Vice President of Business Development at ZEDEDA. Maxey tells us how he got involved in the edge computing industry and why leaders must plan for the growth of IoT and edge.

How did you get involved in the LF Edge community and what is your role now?

I joined ZEDEDA in 2022 as the Vice President of Business Development to drive the company’s efforts in building out a rich ecosystem of partners, and ultimately to give customers an easy way to assemble the disparate solutions their projects require. Getting involved with LF Edge was a natural extension of these efforts. Today I sit on the LF Edge Governing Board.

What is your vision for the edge computing industry? Tell us briefly how you see the edge market developing over the next few years.

What we’re seeing today within the edge computing industry is just scratching the surface of what is possible. Edge deployments are largely bespoke, with each enterprise building a unique stack to solve specific business needs. We’re going to see this change as the industry develops. Deployment patterns will lead to standardization of the lower levels of the technology stack, providing a common set of services to help build and deploy applications, much like the LAMP stack in the early days of computing. This standardization of services will unlock massive developer communities, who have been building in the cloud for the past 10 years, and as a result, will drive a massive transformation of computing outside of the data center.

What impact do you see open source playing in the evolution of the edge market? And how has it shaped where we are today?

Open source brings a lot of benefits but the biggest one I see within the edge market is that of standardization. It’s hard to get your head around scale at the edge – it’s not just the sheer number of deployed devices and sites, but it’s that number split across many different types of hardware, running many different applications, and all in service to an endless number of use cases.  Standardization is key to enabling interoperability across all of this heterogeneity and provides an open foundation that ensures organizations can build something without falling into the silo trap that hinders the growth of so many projects.

Why is LF Edge important to advance the future of edge computing?

LF Edge is important for a number of reasons. First, the organization is driving the standardization that the heterogeneity and scale of the edge requires. Second, it has taken the fragmented edge landscape and given people a common taxonomy and framework to discuss, define, and understand it. Third, it has brought together existing efforts to address different parts of the edge landscape under a common umbrella, not only enabling interoperability between those projects but illustrating how the complexity of the edge requires an ecosystem of solutions in order to solve individual problems.

What is ZEDEDA’s role in edge computing and LF Edge?

ZEDEDA delivers a distributed, cloud-native edge management and orchestration solution, simplifying the security and remote management of edge infrastructure and applications at scale. Through ZEDEDA’s ecosystem of partners, customers can easily deploy any application at the edge via a Marketplace, enabling customers ease of use as they use different solutions for their edge projects. ZEDEDA leverages EVE, an LF Edge project, to provide an open, flexible and secure foundation while abstracting the complexity of the diverse hardware, connectivity and software at the distributed edge and eliminating any vendor lock-in. ZEDEDA customers include Fortune 500 companies from industries like energy, automotive, and manufacturing with thousands of edge nodes in production today across the globe.

ZEDEDA was a founding member of LF Edge, and in May of 2019 it donated the original code for what became EVE to LF Edge. Today ZEDEDA is active within LF Edge at multiple levels: from ongoing code development within EVE, to sitting on the LF Edge Governing Board, the Technical Advisory Council and the Outreach Committee, to continually evangelizing LF Edge to analysts, the media, customers, and more.

What advice do you give to organizations who want to get involved in the LF Edge community?

Now is the time to get involved! There’s a real need across the industry to develop architectural blueprints, build solutions, and in general demonstrate the transformative value that can be gained by making sense of all of the data that organizations generate across their distributed environments. By getting involved now, organizations are able to have real impact driving standardization, developing recommendations, and helping to create solutions applicable across industries and use cases.

 

Efficiently Collect, Transform and Transit Your Data With eKuiper 1.9 Release

By Blog, eKuiper, Project Release

eKuiper—a lightweight IoT data analytics and streaming software—is now available in its 1.9.0 release. eKuiper, an LF Edge project, migrates real-time cloud streaming analytics frameworks such as Apache Spark, Apache Storm and Apache Flink to the edge. eKuiper uses these cloud streaming frameworks as references, incorporates the special requirements of edge analytics, and introduces a rule engine based on Source, SQL (business logic) and Sink, which is used to develop streaming applications at the edge.

The eKuiper 1.9 release continues to enhance the source/sink connectors to make it easier to connect and transmit data with lower bandwidth. The community has also enhanced the data transformation capabilities so that any part of your data can be flexibly encoded and compressed. The 1.9 release adds a number of significant new features, among them:

  • Multiple Neuron connections, to analyze data collected from multiple IoT gateways together
  • MQTT sink/source compression/decompression support, saving bandwidth for edge-to-cloud communication (see the sketch after this list)
  • Dynamic token-based authentication for the HTTPPull source and REST sink, to connect to more services out of the box
  • More transformation and compression functions, to handle your data flexibly
  • Partial data export/import, to migrate only the rules of interest and their dependencies
  • Running Python plugins in a conda virtual environment, to isolate the Python runtime environment
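As a small, hypothetical example of the compression support, the rule below publishes its results through an MQTT sink with payload compression enabled. The broker address, topic, compression value, and the assumption that a stream named "sensors" already exists are all placeholders; consult the 1.9 sink documentation for the supported algorithms and property names.

```python
# Hypothetical rule whose MQTT sink compresses payloads before publishing.
# Assumes a stream named "sensors" is already registered and that "gzip" is an
# accepted value for the compression property; requires the "requests" package.
import requests

EKUIPER = "http://127.0.0.1:9081"  # default REST port; adjust for your deployment

rule = {
    "id": "compressed_out",
    "sql": "SELECT * FROM sensors",
    "actions": [
        {
            "mqtt": {
                "server": "tcp://broker.example.com:1883",
                "topic": "edge/compressed",
                "compression": "gzip",  # assumed property name/value; saves edge-to-cloud bandwidth
            }
        }
    ],
}
requests.post(f"{EKUIPER}/rules", json=rule).raise_for_status()
```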

Learn more about these and other features of eKuiper’s 1.9 release in the release notes.

What’s next

In the next release, the community will adapt to EdgeX Foundry’s Minnesota version, while exploring the use of external states, such as Redis state, to achieve persistent state.

Akraino and CAMARA Communities Join Forces to Boost API Integration in Edge Computing

By Akraino, Blog

At the Hefei High-tech Integrated Circuit Incubation Center in Anhui Province, a panel discussion moderated by Guanyu Zhu from Huawei’s Cloud Network OSDT team brought together experts from the Akraino and CAMARA communities, along with industry professionals, to explore the significance of APIs in edge computing and the potential for collaboration between the two communities. Prominent panelists included Tina Tsou, LF Edge Board Chair; Leo Li from the Akraino TSC; Gao Chen, a senior engineer from China Unicom; and Shuting Qing, an open source ecosystem expert from Huawei’s Cloud Network OSDT.

Shuting Qing provided insights into the origins of the CAMARA project, explaining that “Capability exposure aims to encapsulate these capabilities into a simple interface and expose it to developers for everyone to use.” CAMARA is a vital initiative designed to generate new revenue streams for European operators that have made substantial investments in 4G/5G networks. Qing emphasized the need to clarify the value scenarios and focus on monetization.

Leo Li discussed Akraino’s eagerness to collaborate with emerging communities like CAMARA, emphasizing the need for standardized edge computing hardware modules and the adoption of novel hardware interface technologies such as Ethernet, PCIe, and UCIe. Li highlighted the importance of integrating standardized hardware with an API-based business development model, stating, “We hope to further enhance the business flexibility of operators while promoting hardware standardization, and reduce costs and power consumption through standardization in conjunction with the trend of service APIization.”

The panelists also shared their views on the commercial realization of telecom edge cloud. Gao Chen emphasized the standardization of east-west interfaces, highlighting the importance of providing a unified and user-friendly API to access network capabilities, while Tina Tsou pointed out the potential benefits for various stakeholders, noting that “Edge computing can provide better performance and services, thereby increasing customer loyalty and revenue.”

Addressing potential challenges in implementing commercialization ideas, Leo Li indicated that “We believe that cost will be a crucial factor in the monetization process of edge cloud.” He advocated for standardizing hardware specifications and implementation methods and adopting an API-based business development model to minimize costs.

Tina Tsou expressed her expectations for the CAMARA community, emphasizing the need for more flexible and customizable edge computing capabilities, stronger equipment and data management capabilities, and enhanced security and credibility guarantees. She also called for increased cooperation with other open-source projects.

Shuting Qing and Tina Tsou shared their thoughts on how Akraino and CAMARA could work together to create a more open, collaborative, and innovative edge computing ecosystem. Qing suggested that, since CAMARA is discussing capability exposure, it would be worth exploring the integration of edge computing open capabilities, such as ETSI MP1, into CAMARA. Meanwhile, Tsou proposed combining work on the edge computing framework, openness, security and privacy protection, and the expansion of application scenarios.

The panel discussion offered valuable insights into the role of APIs in edge computing and the potential collaboration between Akraino and CAMARA. As the industry continues to develop, the joint efforts of these two communities could result in a more open, collaborative, and innovative edge computing ecosystem that benefits all stakeholders.

Baetyl New Release Integrates With eKuiper and Delivers Edge to More Devices

By Baetyl, Blog, eKuiper, Project Release

Baetyl—an LF Edge project that extends cloud computing, data and services seamlessly to edge devices—has issued its v2.4.3 release. Thanks to the efforts of many active contributors, new functions have been added and existing functions have been continuously optimized since the previous v2.3.0 release. These new features continue to follow the cloud-native philosophy and build an open, secure, scalable, and controllable intelligent edge computing platform.

Compared to the previous Baetyl v2.3.0 release, the new features and optimizations in v2.4.3 include:

  • Device management functionality has been refactored, with the addition of a device template interface, support for calculating OT data collection values, support for the IEC-104 protocol, updated OPC-UA and Modbus drivers, and updated driver-node binding logic;
  • Support for the Windows platform has been added, with the ability to generate Windows-platform images for the Baetyl edge main module;
  • Remote invocation has been implemented, allowing remote access to specified edge services with results returned from the cloud;
  • A new container mode with eKuiper as an optional system application;
  • The new baetyl-rule module supports HTTP targets;
  • Adaptation to K3s versions up to 1.24.4;
  • A fix for a workload type creation failure bug.

These new features are available now, and you can view the release notes here. Other features can be further explored by developers, and Baetyl will continue to improve and optimize its functionality.

Integration with eKuiper

Baetyl uses eKuiper as an optional system application on the edge side for stream processing and data analytics. The collaboration and integration with eKuiper increase the linkage between LF Edge projects and promote innovation. Platform and version adaptation enables Baetyl to run on more devices. The optimization of device management and new driver support prepare for access by more devices using different protocols. From eKuiper’s point of view, this means eKuiper can be deployed more quickly and conveniently.

In the new version, the integration between Baetyl and eKuiper introduces the following changes:

  • Set the MQTT client to use the Baetyl broker;
  • Mount eKuiper’s data files to the host to ensure no configuration is lost;
  • Add a Kubernetes service to the eKuiper application so that eKuiper’s open API can be called from the host as well as from within the cluster, enabling edge configuration changes (see the sketch after this list);
  • Build eKuiper in as an optional application in the Baetyl framework, so that eKuiper can be installed directly through Baetyl, eliminating the need for a separate installation.
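For illustration, once the Kubernetes service is in place, another workload in the cluster can drive eKuiper’s open REST API directly, for example to register a stream over the Baetyl broker. The service DNS name, namespace, port, and topic below are assumptions for a hypothetical deployment, not the names Baetyl actually generates.

```python
# Hypothetical example: from inside the cluster, call eKuiper's open REST API through
# the Kubernetes service added by Baetyl to register a stream over the Baetyl broker.
# The service DNS name, namespace, port, and topic are placeholders.
import requests

EKUIPER_SVC = "http://ekuiper-service.baetyl-edge.svc.cluster.local:9081"

stream_sql = 'CREATE STREAM baetyl_in() WITH (DATASOURCE="baetyl/up", FORMAT="JSON", TYPE="mqtt")'
resp = requests.post(f"{EKUIPER_SVC}/streams", json={"sql": stream_sql}, timeout=5)
resp.raise_for_status()
print(resp.text)
```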

After the integration, the Baetyl framework gains enhanced edge message processing capabilities, while users can leverage Baetyl to deploy and use eKuiper more easily and quickly.

What’s next?

For future releases, the project is working on strengthening the management of non-intelligent devices at the edge, including providing:

  • more comprehensive management functions for device models, devices, and drivers on the cloud management platform, 
  • a software gateway management module on the edge to support the ability of devices to connect through various industrial drivers,
  • a unified northbound connection protocol, blink, for access implementation.

The Baetyl project is also further expanding and optimizing the implementation of cloud storage at the underlying level, providing support for database storage outside of Kubernetes CRDs. It will also enhance integration with Kubernetes cloud-native capabilities, providing more edge cloud-native features for access, such as the ability to view edge description information.

Webinar Recap: How LF Edge Projects Track CO2 Footprint with Secure Monitoring at the Edge

By Alvarium, Blog, Project EVE

With community members from over 50 organizations gathered on LinkedIn and Zoom last week, LF Edge kicked off its first webinar this year. This webinar is a continuation of the “On the Edge with LF Edge” webinar series where we invite community members and industry leaders to share production case studies, project demos, and the latest updates from the LF Edge project communities! 

For this webinar, we had distinguished speakers Mathew Yarger, Advisor at IOTA and Co-Founder of DigitalMRV; Steve Todd, VP of Data Innovation and Strategy at Dell Technologies; and Kathy Giori, Global Partnerships and Outreach at MicroBlocks, who shared their insights on the LF Edge use case of using Project Alvarium and Project EVE to monitor the carbon footprint of the world’s first biogas plant that uses harvest waste as its only fuel.

To kick off the webinar, the speakers addressed the challenge of inaccurate emissions reporting in sustainability. In fact, “85% of organizations are concerned about reducing their emissions, but only 9% are able to measure their emissions comprehensively,” said Yarger. The VSPT Wine Group in Chile required a solution to process data from various sensors measuring water, solids, gases, and anaerobic digestion processes in real time, to provide reliable insights into their carbon footprint. This challenge was tackled by leveraging the Data Confidence Fabric (DCF) framework of Project Alvarium and the edge computing capabilities of Project EVE.

You can read the published use case on the LF Edge case studies page and watch the webinar recording below to learn more about how LF Edge projects enable organizations to take more informed and effective steps toward reducing their environmental impact.

Love this webinar? Make sure to subscribe to LF Edge on LinkedIn, so you won’t miss our next webinar and the opportunity to engage with the speakers live!

Get involved:

If you’re interested in getting involved in Project Alvarium and Project EVE, you can find the communities on the LF Edge Slack channels #eve and #alvarium (and related channels).

Project Alvarium:

You can learn more about Project Alvarium by visiting its wiki and GitHub. Have questions about the project? Subscribe to the project mailing list and Technical Steering Committee (TSC) mailing list and attend the TSC meetings every two weeks at 11 AM Eastern Time.

Project EVE:

You can learn more about Project EVE by visiting its wiki and GitHub documentation. Have questions about the project? Subscribe to the project mailing list and attend the TSC meetings that occur every four weeks on Thursday at 11:30 AM Eastern Time.

The developer program offered by ZEDEDA lets industry adopters run proof-of-concept (PoC) distributed edge orchestration programs at no cost. The Alvarium/IOTA teams have developed their applications and tools to be ready to deploy on EVE, so that you can remotely manage them no matter where your EVE edge node is located.