State of the Edge and Edge Computing World Announce Top Finalists For The Edge Woman of the Year Award 2021


Edge Computing Industry Comes Together to Recognize Top Ten Women Shaping the Future of Edge for 2021

“I am thrilled to participate in announcing this year’s Edge Woman of the Year 2021 finalist categories; together we have much to accomplish and the women nominated for this year inspire us all.” — Edge Woman 2020 winner, Fay Arjomandi, Founder of mimik Tech

CEDAR PARK, TX, UNITED STATES, August 25, 2021 /EINPresswire.com/ — Edge computing leaders from State of the Edge and Edge Computing World today announced the top finalists for the Third Annual Edge Woman of the Year Award.

The Edge Woman of the Year 2021 nominees reflect a group of qualified industry leaders in roles impacting the direction of their organizations' strategy, technology or communications around edge computing, edge software, edge infrastructure or edge systems. The finalists were selected for their outstanding nominations and referred to the final panel of reviewers by the organizers. The final winner will be chosen by a panel of industry judges, including the previous Edge Woman of the Year 2020 winner, Fay Arjomandi, Founder of mimik Technology Inc. The winner of the Edge Woman of the Year 2021 will be announced during this year's Edge Computing World, being held virtually October 12-15, 2021.

“The Edge Woman of the Year Award 2021 was created as part of an industry commitment of time and resources to highlight the growing importance of the contributions and accomplishments of women in edge computing,” said Candice Digby, Partnerships and Events Manager at Vapor IO. “The award is presented annually at the Edge Computing World event, which offers the finalists and winners one of the most visible platforms for the entire edge computing ecosystem to highlight the advancements of their efforts in this field.”

The State of the Edge and Edge Computing World are proud to sponsor the annual Edge Woman of the Year Award, which is presented to an outstanding female and/or non-binary professional in edge computing across network, cloud, applications, developers, and infrastructure end-users.

“I was honored to have been chosen as Edge Woman of the Year 2020 and to be recognized alongside many inspiring and innovative women across the industry,” said Fay Arjomandi, Founder and CEO, mimik Technology Inc. “I am thrilled to participate in announcing this year’s Edge Woman of the Year 2021 finalist categories; together we have much to accomplish and the women nominated for this year inspire us all to continue our work in building a sustainable digital economy.”

The annual Edge Woman of the Year Award recognizes outstanding female and non-binary professionals in edge computing for exceptional performance in roles elevating the edge. The 2021 award committee selected the following seven finalists for their excellent work in the named categories:

Leadership in Edge Startups
Eva Schonleitner, CEO at Crate.io

Leadership in Edge Open Source Contributions
Dr. Stefanie Chiras, Senior Vice President, Platforms Business at Red Hat

Leadership in Hyperscale Edge
Prajakta Joshi, Group Product Manager, Edge Cloud for Enterprise and Telecom at Google

Leadership in Network Edge
Rita Kozlov, Director of Product at Cloudflare, Inc.

Leadership in Edge Innovation and Research
Azimeh Sefidcon, Research Director at Ericsson

Leadership in Edge Best Practices
Lily Yusupova, Strategic Account Executive, Schneider Electric

Leadership in Rural Edge
Nancy Shemwell, Chief Operating Officer, Trilogy Networks, Inc.

The 2021 submissions continue to be incredibly impressive and the list of Edge Woman of the Year finalists represents a premier group of women taking the reins of leadership across the edge computing ecosystem. Edge computing continues to be one of the fastest growing industries, and we hope these women inspire the industry as well as encourage more women to pursue careers in Edge.

“Visibility of female leadership is so important to the potential growth and innovation in Edge Computing,” said Gavin Whitechurch of Topio Networks and Edge Computing World, “Recognizing this group of elite technologists inspires and encourages continuous forward thinking in our diverse industry.”

For more information on the Women in Edge Award visit: http://www.edgecomputingworld.com/edgewomanoftheyear.

About State of the Edge
State of the Edge (http://stateoftheedge.com) is a member-supported research organization that produces free reports on edge computing and was the original creator of the Open Glossary of Edge Computing, which was donated to The Linux Foundation's LF Edge. State of the Edge welcomes additional participants, contributors and supporters. If you have an interest in participating in upcoming reports or submitting a guest post to the State of the Edge blog, feel free to reach out by emailing info@stateoftheedge.com.

About Edge Computing World
Edge Computing World is the only industry event that brings together the entire edge ecosystem. The event will explore how a diverse range of high-growth application areas – including AI, IoT, NFV, augmented reality, video, cloud gaming and self-driving vehicles – is creating new demands that cannot be met by existing infrastructure. The theme will cover edge as a new solution required to deal with low latency, application autonomy, data security and bandwidth thinning, all of which require greater capability closer to the point of consumption.

Join us at Edge Computing World October 12-15, 2021 for the world’s largest virtual edge computing event.

Jessica Rees
Publicity.IM
+1 4158897444

It’s Here! Announcing EdgeX 2.0 – the Ireland Release


By Jim White, EdgeX Foundry Technical Steering Committee Chair

When people think of Ireland, most envision bright green, fresh landscapes – an island washed regularly by plentiful rain and always made new. The name is fitting for our latest release, which is EdgeX made fresh: a major overhaul.

EdgeX Foundry 2.0 is a release over a year in the making!  

This is the EdgeX community's second major release and our eighth release overall. The Ireland release contains both new features and changes that are not backward compatible with any EdgeX 1.x release. In general, this release took a huge step toward eliminating, or at least significantly reducing, about four years of accumulated technical debt while also adding new capabilities.

Why Move to EdgeX 2.0?

Whether you are an existing EdgeX user or new to the platform and considering adoption, there are several reasons to explore the new EdgeX 2.0 Ireland release:

  • First, we’ve completely rewritten and improved the microservice APIs.  I’ve provided some more details on the APIs below, but in general, the new APIs allow for protocol independence in communications, better message security, and better tracking of data through the services.  They are also more structured and standardized with regard to API request / response messages.
  • EdgeX 2.0 is more efficient and lighter (depending on your use case).  For example, Core Data is now entirely optional.
  • EdgeX 2.0 is more reliable and supports better quality of service (depending on your setup and configuration).  For example, you can use a message bus to move data from the sensor-collecting Device Services to the exporting Application Services.  This will reduce your reliance on the REST protocol and allow you to use a message broker with QoS levels elevated when needed.
  • EdgeX 2.0 has eliminated a lot of technical debt and includes many bug fixes.  As an example, EdgeX micro service ports are now within the Internet Assigned Numbers Authority (IANA) dynamic/private port range so as not to conflict with well-known system or registered ports.

Introducing the V2 APIs

During one of our semi-annual planning meetings back in 2019, the community agreed that we needed to address some challenges in the way that EdgeX microservices communicated – both with each other and with third-party systems.  We identified some technical debt and knew that it had to be addressed in order to give the platform the base from which new features, better security, and additional means of communication could be supported in the future.  This meant changing the entire API set for all EdgeX microservices.

What was “wrong” with the old V1 APIs?  Here were some of the issues:

  • Intertwined data model/object information with poor URI paths that exposed internal implementation details and made the APIs difficult to use.  Here is an example comparing the v1 and v2 calls to update a device.

V1: PUT /device/name/{name}/lastconnected/{time}

V2:  PATCH /device [with JSON body to specify the device info]

  • Requests often lacked any formal response object; the HTTP status code was all that was returned.
  • The APIs and model objects were REST specific and didn’t support other communication protocols (such as message bus communications via MQTT or the like).
  • Used a model that was very heavy – often containing full embedded model objects instead of model object references.  For example, in v1, when requesting device information, the device profile and Device Service were also returned.
  • Used inconsistent naming and included unnecessary elements.  This made it hard for users to figure out how to call on the APIs without checking the documentation.  
  • Lacked a defined header that could be used to provide security credentials/tokens in the future.
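To make the first point concrete, the v2 PATCH call shown above carries the device changes in a JSON body rather than in the URI. That body might look roughly like this (an illustrative sketch; the exact field names and request envelope should be verified against the v2 API reference):

```json
[
  {
    "apiVersion": "v2",
    "device": {
      "name": "temperature-sensor-01",
      "lastConnected": 1628000000
    }
  }
]
```

Only the fields being changed need to be supplied, which is part of what makes PATCH with a body a better fit than the v1 URI-encoded update.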

Changing the entire API set of a platform is a monumental task – one that required the collective community to get behind it and support it.  Given the size of the task, the community decided to spread the work over a couple of release cycles.  We began the work on the new APIs (which we called the V2 APIs) last spring while working on the Hanoi release and completed the work with this release.

The new APIs come with common-sense request and response objects.  These objects share new features such as a correlation ID, which associates requests and responses among services and allows the flow of data through the system to be tracked more easily.  When a piece of sensor data is collected by a Device Service, the correlation ID on that data stays the same all the way through the export of that data to north-side systems.  Below is a simple example of the new request and responses – in this case, to add a new Device Service.
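For a flavor of the new style, a request to add a Device Service and its response might look roughly like this (an illustrative sketch; the service name, port, and field names are hypothetical and should be verified against the v2 API reference):

```
POST /api/v2/deviceservice

Request body:
[
  {
    "apiVersion": "v2",
    "service": {
      "name": "example-device-service",
      "baseAddress": "http://localhost:59990",
      "adminState": "UNLOCKED"
    }
  }
]

Response:
[
  {
    "apiVersion": "v2",
    "statusCode": 201
  }
]
```

Note the common envelope: every request and response carries an API version, and responses return a structured status rather than a bare HTTP code.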

 

Importantly, the V2 APIs set us up for new features in the future while also making EdgeX easier to interact with.

EdgeX 2.0 Feature Highlights

Technical debt removal and the new V2 APIs aside, what else was added in EdgeX 2.0?  Plenty!  Here is a sample of some of the more significant features.

Message Bus and Optional Core Data

In EdgeX 1.0, all communication from the Device Services (the services that communicate with and collect data from “things”) to Core Data was over REST.  In EdgeX 2.0, this communication can be performed via message bus (using either Redis Pub/Sub or MQTT implementations).  In addition to enabling more “fire and forget” communications, the underlying brokers used in message bus communications offer guarantees of message delivery.

Moreover, EdgeX 2.0 takes the message bus implementation a step further by allowing Device Services to send data directly to Application Services via message bus.  Core Data becomes an optional secondary subscriber in this instance.  For organizations that do not need to persist sensor data at the edge, this option allows the entire Core Data service to be removed from deployment, helping to lighten EdgeX's footprint and resource needs.

REST communications from Device Services to Core Data can still be done, but it is not the default implementation in EdgeX 2.0.  The diagrams below depict the old versus new message bus service communications.
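The publish/subscribe pattern described above can be illustrated with a toy in-memory bus (plain Python for illustration only, not EdgeX code; in a real deployment the bus is Redis Pub/Sub or an MQTT broker):

```python
from collections import defaultdict
from typing import Callable

class MessageBus:
    """Toy in-memory message bus, standing in for Redis Pub/Sub or MQTT."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Deliver the event to every subscriber on the topic.
        for handler in self.subscribers[topic]:
            handler(event)

bus = MessageBus()
exported, persisted = [], []

# An Application Service subscribes directly to device events...
bus.subscribe("edgex/events", exported.append)
# ...and Core Data is just another, optional subscriber for persistence.
bus.subscribe("edgex/events", persisted.append)

bus.publish("edgex/events", {"device": "thermostat-1", "reading": 20.5})
```

Dropping the second `subscribe` call models a deployment with Core Data removed: device data still flows to the Application Service, nothing is persisted at the edge.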

Improved security

In the 2020 IoT developer survey, security remained one of the top 3 concerns of people building and fielding IoT/edge solutions.  It also remains a prime concern of the members of our community.  The EdgeX project, with each release, has worked to improve the security of the platform.  In this release, there were several additions.

  • We created a module and a pattern that provides a common means for all the services to retrieve secrets from the secret store (EdgeX uses Vault as the secret store by default).  We call this feature “secret provider for all.”
  • EdgeX uses Consul for its service registry and configuration.  In this release, the API Gateway is used to broker access to Consul's APIs.  In EdgeX 1.0, access to the Consul APIs was denied, making changes in Consul difficult.
  • Consul is now bootstrapped and started with its access control list system enabled – offering better authentication/authorization of the service and better protection of the key/value configuration it stores.
  • Fewer processes and services are required to run as root in Docker containers.
  • The API Gateway (Kong) setup has been improved and simplified.
  • EdgeX now prevents Redis, the persistent store for EdgeX, from running in insecure mode.

Simplified Device Profiles

Device profiles are the way that users define the characteristics of the sensors/devices connected to EdgeX – that is, the data they provide and how to command them.  For example, a device profile for BACnet thermostats describes the data a BACnet thermostat sends, such as current temperature and humidity level.  It also defines which types of commands or actions EdgeX can send to the thermostat, such as setting the cooling or heating point.

Device profiles are specified in YAML or JSON and uploaded to EdgeX.  As such, they are the critical descriptions that make EdgeX work.  Device profiles allow users to describe their sensors and devices to EdgeX without having to write much programming code.

Writing device profiles in previous releases could be long and tedious depending on the device and what type of data it provided.  We greatly simplified and shortened device profiles in EdgeX 2.0.  As an example, here is the same essential profile in EdgeX 1 and EdgeX 2.
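For a flavor of the simplified format, a minimal v2-style profile for a temperature device might look something like this (an illustrative sketch only; the profile name, vendor, and exact schema are hypothetical and should be checked against the EdgeX 2.0 documentation):

```yaml
name: "Simple-Thermostat-Profile"   # hypothetical profile for illustration
manufacturer: "Example Corp"
model: "T-100"
deviceResources:
  - name: "Temperature"
    properties:
      valueType: "Float32"
      readWrite: "R"
      units: "degrees Celsius"
deviceCommands:
  - name: "ReadTemperature"
    readWrite: "R"
    resourceOperations:
      - deviceResource: "Temperature"
```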

New Device Services

The growth of an open-source community can often be measured by how well it attracts new contributors to the project.  During this release, thanks to some new contributors, we added several new Device Services.  Here are the new Device Services you can now find in the project:

  • Constrained Application Protocol (CoAP) is a web transfer protocol used in resource constrained nodes and networks.
  • GPIO (or general pin input/output) is a standard interface used to connect microcontrollers to other electronic devices.  It is also very popular with Raspberry Pi adopters given its availability on the devices.
  • Low Level Reader Protocol (LLRP) is a standardized network interface to many RFID readers.
  • Universal Asynchronous Receiver/Transmitter (UART) is a serial data communications interface used in modems; it can also be paired with USB via a USB-to-UART bridge.

These Device Services were contributed to EdgeX 1, but are being updated this summer to EdgeX 2.0.

New Graphical User Interface

In this release, a new and complete interface using Angular.JS (with some Go code for backend needs) has been added.  This GUI, complete with EdgeX branding, aligns with industry standards and technologies for user interfaces and should provide a platform for future GUI needs.

In addition to providing the ability to work with and call on EdgeX services, it provides a GUI “wizard” to help provision a new device (with associated Device Service and device profile).  What would typically take a user several complex REST API calls can now be done with a wizard interface that carefully walks the user through the creation process providing simple fill-in forms (with context help) to create the device.

The GUI also provides a visual status display that allows users to track the memory and CPU usage of the EdgeX services.  In the future, additional visualization will be added to be able to display sensor data.

Application Services

Application Services are used to export EdgeX data off the edge to enterprise and cloud systems and applications.  Application Services filter, prepare and transform the sensor data for easy consumption.  Several additions were made to the Application Services and the SDK that helps create them (called the application functions SDK).  The following is a list of some of the new features and functions:

  • New functions to filter sensor readings by device profile name and device resource name before exporting
  • Allowing multiple HTTP endpoints to be specified for export by one Application Service
  • Subscriptions to multiple message bus topics (enabling separate filters per subscription)
  • Provided a new template for easier/faster creation of custom Application Services
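The filter-and-export idea above can be sketched as a simple pipeline (plain Python for illustration, not the actual application functions SDK; function and field names are hypothetical):

```python
def filter_by_device_name(events, names):
    # Keep only readings from devices we care about.
    return [e for e in events if e["device"] in names]

def transform_to_export(event):
    # Prepare a minimal payload for a northbound system.
    return {"device": event["device"], "value": event["reading"]}

def export(events, names):
    # Filter first, then transform each surviving event for export.
    return [transform_to_export(e) for e in filter_by_device_name(events, names)]

readings = [
    {"device": "thermostat-1", "reading": 20.5},
    {"device": "camera-1", "reading": 0.0},
]
exported = export(readings, {"thermostat-1"})
# exported contains only the thermostat event
```

The real SDK composes functions like these into a pipeline triggered by incoming events; the template mentioned above gives developers a starting point for writing their own.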

Additionally, a new LLRP inventory Application Service was contributed to help correlate data coming from RFID readers to backend inventory systems.

EdgeX Ready

EdgeX Ready is a program designed to highlight member organizations that demonstrate the ability to work with EdgeX.  The program was launched with the EdgeX 2.0 release to promote awareness of users and organizations that have EdgeX expertise.  It is a first step toward a potential EdgeX certification program.

Today, the EdgeX Ready self-assessment process requires an EdgeX adopter to:

  • Get and run an instance of EdgeX (and associated services).
  • Create and validate a device profile.
  • Use the profile with an EdgeX Device Service to provision a device and send sensor data into EdgeX.

Organizations that complete the EdgeX Ready process signal to other community members that they have successfully demonstrated EdgeX knowledge and experience.  In return, the EdgeX community highlights EdgeX Ready members on the EdgeX Foundry website with an EdgeX Ready badge (shown below) next to their logo.

Additionally, the program hopes to promote sharing of EdgeX device connectivity elements and sample data sets, which are central to the current submission process and helpful to others learning EdgeX.  Future EdgeX Ready programs may highlight additional levels of ability and experience. 

Under the Hood and Behind the Scenes

Efforts by DevOps, testing, quality control and outreach teams often go unnoticed in any software project.  These teams are the unsung heroes of EdgeX 2.0, as they contributed substantially to this massive undertaking.  In addition to their normal roles of building, testing and marketing EdgeX, they managed to further the project in many ways.

Our DevOps team cleaned up the EdgeX Docker image names (removing unnecessary image prefix/suffix names) and provided descriptions and appropriate tags on all of our images in Docker Hub – helping adopters more quickly and easily find the EdgeX Docker images they need.  They also added informative badges to our GitHub repositories to help community developers and adopters understand the disposition and quality of the code in the repositories.

Our Test/QA team added new integration tests, performance tests and “smoke” tests to give the project an early indicator when there is an issue with a new version of a 3rd party library or component such as Consul – allowing the community to address incompatibilities or issues more quickly.

Finally, our marketing team revamped and upgraded our www.edgexfoundry.org website and created a Chinese version of it in support of our ever-growing Chinese community of adopters.  The welcome mat of our project now has a clean new look, better organization, and a lot more information for existing and potential adopters.  The website should help EdgeX adopters find the information and artifacts they need more quickly and easily, while also highlighting the accomplishments of the community and its membership.

If all this wasn’t enough, here is a list of some of the other accomplishments the project achieved during this release cycle:

  • Improved the Device Service contribution review process.
  • Incorporated use of conventional commits in the code contribution process.
  • Started a program to vet third-party dependencies, ensuring they have appropriate licenses and sufficient quality and development activity to support our project.
  • Helped launch the Open Retail Reference Architecture project to foster the development of an LF Edge reference architecture for retail adopters.
  • Entered into liaison agreements with the Digital Twin Consortium and AgStack (a new LF project) to figure out how EdgeX can be better integrated into digital twin systems and help facilitate solutions in the agricultural ecosystem.

More on Ireland

Find more information about the release at these locations.

On to Jakarta

As always, I want to take this opportunity to thank the wonderful EdgeX team and community.  EdgeX exists because of the incredibly smart and dedicated group of people in the community.  I also want to thank all the fine people at the Linux Foundation.  This release was especially long and difficult, but this team was up to the challenge and they are providing you another great release.  I am proud of their work and privileged to be the chairman of this group.

But our work is not done.  The EdgeX community has already held its planning session for our next release – codenamed Jakarta – that will release around November of 2021.  Look for more details on our planning meeting and the Jakarta release in another blog post I’ll have out soon.

Enjoy the summer, stay safe and healthy (let’s get this pandemic behind us), and please give EdgeX 2.0 a try.

EdgeX Foundry Releases the Most Modern, Secure, and Production-Ready Open Source IoT Framework


Four-plus years of collaboration, 190+ contributors, 8M+ container downloads, new retail project ORRA, EdgeX Ready and foundation for future, long-term support pave the way for Ireland release

SAN FRANCISCO, August 3, 2021 – EdgeX Foundry, a project under the LF Edge umbrella organization within the Linux Foundation, today announced its Ireland release. Focused on edge/IoT solutions, EdgeX Foundry's second major release overhauls API sets, removes technical debt, provides more message-based communications, and simplifies and secures interfaces for adopters and developers, making the platform significantly easier to use and more reliable.

“As a leading stage 3 project under LF Edge, the EdgeX Ireland release has expanded use cases across retail, building automation, smart cities, process control, and manufacturing,” said Arpit Joshipura, general manager, Networking, Edge & IoT, at the Linux Foundation. “It’s a key to standardizing IoT frameworks across market verticals.”

“This release sets in motion the opportunity for EdgeX to offer its first ever LTS, or long-term support, release as soon as the fall. This is a significant commitment on the part of our open-source community to all adopters that says we stand with you, prepared to help support your use of EdgeX in real-world, scalable, production deployments,” said Jim White, chief technical officer, IOTech, and EdgeX Foundry Technical Steering Committee Chair.

Ireland Feature Highlights

  • Standardized and modernized northbound and southbound APIs enrich ease of interoperability across the IoT framework
  • Advanced security is built into the APIs, message bus, and internal architecture of EdgeX
  • New device services (southbound) and new app services (northbound) included in Ireland are also inherently secure (e.g., GPIO, CoAP, LLRP, UART)

Commercialization & Use Case Highlights

  • Open Retail Reference Architecture (ORRA): a new sub-project that provides a common deployment platform for edge-based  solutions and IoT devices. ORRA is a collaboration with fellow LF Edge projects Open Horizon and Secure Device Onboard, incubated by EdgeX Foundry.
  • The new EdgeX Ready program highlights users and organizations that have integrated their offerings with solutions leveraging EdgeX; it is a precursor to a community certification program. Learn how to become EdgeX Ready through the project's wiki page.

Learn more about Ireland’s feature enhancements in this blog post

Plans for the next EdgeX release, codenamed ‘Jakarta’, are expected in Q4 2021.

For more information about LF Edge and its projects, visit https://www.lfedge.org/

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Additional Quotes and Community Support

“Beechwoods Software has been a contributing member of EdgeX Foundry since its inception and chairs the Certification Working Group. EdgeX technology is at the core of our EOS IoT Edge platform offering, for which we are readying our version 2 release based on the latest EdgeX code base. Beechwoods is pleased with the growing momentum of EdgeX Foundry and looks forward to continuing our support and collaboration,” said Michael Daulerio, Vice President of Marketing and Business Development at Beechwoods Software, Inc.

“Canonical is a founding member of the EdgeX Foundry project and has provided technical leadership in the technical steering committee from day one. The Ireland (aka 2.0) release of EdgeX introduces much improved V2 REST APIs, a transition to a secure message bus for data ingestion, and many additional improvements to the security of EdgeX. The cross-company cooperation that contributed to the success and timeliness of this release once again demonstrates the power of open source development. Snaps of the Ireland release of EdgeX are available from the Snap Store using the new 2.0 track, and can be used to build secure enterprise-grade EdgeX deployments using Ubuntu Core 20,” said Tony Espy, technical architect / IoT & Devices, Canonical, and at-large  EdgeX Foundry TSC member. 

“EdgeX Foundry continues to serve as the basis for our Edge Xpert product.  As such, we see the release of EdgeX 2.0 as critical to our company’s success in support of our customers.  It provides the ability for IOTech to add new features and add more value given the new APIs, support for more messaging and overall simplifications of the platform.  On top of that, the move toward an LTS release in the fall based on EdgeX 2.0 is an important milestone of support shown by the EdgeX community.  LTS tells adopters like IOTech that the EdgeX ecosystem stands behind them and is there to provide a scalable, reliable, and robust platform that can be used in production ready solutions,” said Keith Steele, CEO, IOTech Systems. 

Resources:

 

###

 

Edge Primer: Distributed Cloud – The power of public cloud at the edge

By Utpal Mangla, VP & Senior Partner, Global Leader, IBM's Telecom Media Entertainment Industry Center of Competence;
Ashek Mahmood, IoT Edge AI Partner & Practice Lead, IBM; and Luca Marchi, Associate Partner, Innovation Leader, IBM's Telecom Media Entertainment Industry Center of Competence

Today, many enterprises benefit from using public cloud to build and run their applications.  Consuming IT as a service, through APIs, at scale, and as needed, has transformed how applications are built. Public cloud enables companies to innovate in new, faster ways. But the reality is that only somewhere between 5% and 20% of enterprise workloads have moved to the cloud.

That first wave of applications focused mostly on new workloads, or workloads that could easily relocate. Many applications have requirements around security, compliance, latency, regulation, and performance that prevent them from easily moving into a public cloud. Enterprises need a way to gain the benefits of public cloud anywhere they are running their applications.

The solution that helps is called “Distributed Cloud.” Distributed Cloud is a new cloud computing model that extends public cloud services to any location, even at the edge. This means companies can now deploy cloud-native applications not only in their primary cloud provider’s infrastructure but on premises, within other cloud providers’, in third-party data centers or colocation centers, or at the edge – in factories, distribution centers, stores, hospitals and ports – with everything managed from a single control plane. 

Distributed Cloud gives an organization greater control of its hybrid cloud ecosystem. It mitigates dependency on any one provider and gives organizations the flexibility to choose based on specific business requirements. With Distributed Cloud, developers can take advantage of the catalog of services from public cloud providers, with industry-optimized security and compliance and leading AI capabilities, anywhere they are needed – all with a common API, user experience, management dashboard and consumption model.

With 5G use cases across industries becoming a reality, massive growth in IoT devices (numbering in the trillions) is expected. Control and audit of network traffic is critical for maintaining data protection, privacy, and compliance. Management from a single control plane enables zero-trust edge security across all devices and users, irrespective of which network they are on. Distributed Cloud extends these capabilities through multiple cloud satellites across edge networks.

To get a better idea of how Distributed Cloud works, let’s look at some actual applications from different industries.

  • The first application comes from the financial services industry. In order to compete in a market being disrupted by newcomers, financial institutions need to create exceptional user experiences powered by technology. To do that, they leverage AI and trading algorithms to stay ahead of the market. The competitiveness of financial services organizations is based on quick iteration and deployment of cloud-native tools and best practices. However, the financial industry is heavily regulated. So, they need to be able to keep that data secure and compliant – which often means there are restrictions on where that data can live. 
    • Distributed Cloud can help solve this conflict. These financial services companies can now extend public cloud services in their data centers, allowing them to leverage cloud native best practices and meet their security and compliance obligations, all at the same time. So, Distributed Cloud enables financial institutions to take advantage of a public cloud consumption model, but in many different locations – and they do not need a high level of skills to run this software outside of the public cloud. 
  • Another example is the construction industry managing its worker health and safety obligations. One key player in the industry is leveraging distributed analytics to keep workers safe. In this scenario, a builder might have an office building under construction and need video analytics to tell whether someone is wearing a hard hat – and warn them before they move into a potentially dangerous area of the building. They want to use video analytics to automatically detect and alert on this situation, but latency could be a real problem – a delayed alert could fail to prevent an injury.
    • To solve this safety problem, the construction company needs to analyze video feeds from many cameras all over the office building, but it is impractical to send all of that data back to the cloud to be processed remotely. It would be better if the video processing happened close to the actual devices. Traditionally, that would require installing servers and software and managing them on-prem.
    • With Distributed Cloud, the AI video processing is extended into the office, so cloud services can be leveraged to run this application close to the device – which is critical to reducing latency and ensuring workers are warned before they enter a dangerous area.
    • That same company had to adjust to challenges brought by the Covid pandemic. They were able to quickly iterate on their application and modify the use-case for AI models and rapid deployment to distributed locations. Now, instead of warning workers about wearing hard hats, they can make sure that workers are wearing masks and ensure that the masks are being worn correctly. Additionally, they can even leverage thermal devices to take temperatures. 
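As a concrete illustration of the kind of logic that runs at the edge in this scenario, here is a minimal sketch of a hard-hat safety rule. The names (`Detection`, `check_frame`) are hypothetical, not from any vendor SDK; the point is only that the decision happens next to the cameras, with no cloud round trip on the alert path.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One worker detected in a video frame (illustrative model)."""
    worker_id: str
    has_hard_hat: bool
    in_danger_zone: bool

def check_frame(detections):
    """Return the workers who must be warned: in a danger zone without a hard hat."""
    return [d.worker_id
            for d in detections
            if d.in_danger_zone and not d.has_hard_hat]

alerts = check_frame([
    Detection("w1", has_hard_hat=True,  in_danger_zone=True),
    Detection("w2", has_hard_hat=False, in_danger_zone=True),
])
print(alerts)  # ['w2']
```

The same rule structure would cover the pandemic use case mentioned above by swapping the hard-hat flag for a mask-worn flag.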

Distributed Cloud enables organizations to quickly address unforeseen challenges, leverage cloud benefits in any location, and innovate quickly. 

See how the Akraino project’s StarlingX Far Edge Distributed Cloud blueprint can serve as a starting point: https://wiki.akraino.org/display/AK/StarlingX+Far+Edge+Distributed+Cloud

####

 

EdgeX Foundry Launches Chinese Website

By Blog, EdgeX Foundry

By Melvin Sun (Sr. marketing manager in Intel IoT Group, EdgeX TSC member, co-founder & maintainer of EdgeX China Project) and Kenzie Lan (Operations specialist of EdgeX China community)

EdgeX Foundry, a project under the LF Edge umbrella, is a vendor-neutral open source project on edge computing. 

To support the growing community of EdgeX Chinese developers and ecosystem partners, the project established the EdgeX China project in Q4’19 to drive collaboration in the region.  Since then,  the project community in China has set up local meetups, working groups, and events. In addition, local and language-specific developer and social media channels were created to generate awareness about the project and improve the adoption and experience of Chinese users and developers. 

As of today, the EdgeX China community has gained more than 300,000 followers on the mainstream developer platforms CSDN and OSChina, as well as six WeChat groups! In addition to developers, users and ecosystem partners, there are ~10 commercial offerings from China based on EdgeX.

To further support the EdgeX China community growth, the project officially launched its Chinese website (https://cn.edgexfoundry.org/) on June 30th, 2021 to provide awareness of EdgeX Foundry for new developers and users and to keep the local community up to date on events and news. 

The EdgeX Foundry Chinese site is divided into “Home Page”, “Why EdgeX”, “Getting Started”, “Software”, “Ecosystem”, “Community”, “Latest”, and “EdgeX Challenge China 2021”. The site provides a systematic introduction to EdgeX’s architecture, methods of use, services, community, and latest news, plus an overview of and information on the ongoing 2021 China EdgeX Challenge, in which EdgeX users compete for prizes by creating solutions based on EdgeX. Visitors can easily jump from the English website (https://www.edgexfoundry.org/) to the Chinese one via the link between them.

CTOs, architects, developers, students, and business folks all are welcome to learn and exchange practical experience in the EdgeX China community. The EdgeX community will continue to grow and improve in collaboration with members around the world.

Thanks to all involved in creating and maintaining this China-focused version of the EdgeX Foundry website!

Baetyl Issues 2.2 Release, Adds EdgeX Support, New APIs, Debugging, and More

By Baetyl, Blog

Baetyl, which seamlessly extends cloud computing, data and services to edge devices, enabling developers to build light, secure and scalable edge applications, has issued its 2.2 release. Baetyl 2.2, based on community contributions, now contains more advanced functions. Still built on cloud native functionality, Baetyl’s new features continue to build an open, secure, scalable, and controllable intelligent edge computing platform.

Specific new features in the 2.2 release include:

Support for working with EdgeX Foundry

Baetyl v2.2 has updated compatibility with its LF Edge sister project, EdgeX Foundry. Through Baetyl’s remote management suite, “Baetyl-cloud,” users can deliver all 14 EdgeX services to the edge. The delivered EdgeX services are submitted by Baetyl to local Kubernetes clusters for deployment, with monitoring synchronized to the cloud.

New API definition, which is needed to support edge cluster environments

In industrial IoT scenarios, many industrial control boxes often form an edge cluster. Baetyl defines open multi-cluster management APIs. By implementing these APIs, the entire cluster can be reflected on the cloud console. Users can easily deploy applications to defined edge clusters, and specify edge node affinity within those clusters.

Support for DaemonSet load type applications

In a cluster context, workloads such as monitoring the status of each node require a new load type, so Baetyl now also supports DaemonSet. With this load type, the service runs as a single replica on every node in the matched cluster, and it is automatically scaled as nodes are added or removed.
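The DaemonSet behavior described above can be sketched as a simple reconcile step. This is an illustrative model of the scheduling idea, not Baetyl’s actual implementation:

```python
def reconcile(nodes, running):
    """Given the nodes currently in the cluster and the nodes where a replica
    is already running, return (nodes_to_start, nodes_to_stop) so that every
    matched node runs exactly one replica of the service."""
    nodes, running = set(nodes), set(running)
    to_start = sorted(nodes - running)   # new nodes joined the cluster
    to_stop = sorted(running - nodes)    # nodes that left the cluster
    return to_start, to_stop

# Node n4 left the cluster; n2 and n3 joined and need a replica.
start, stop = reconcile(nodes={"n1", "n2", "n3"}, running={"n1", "n4"})
print(start, stop)  # ['n2', 'n3'] ['n4']
```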

New API definitions for remote debugging and remote log viewing of deployed applications

To facilitate debugging or log viewing operations on edge devices, Baetyl has established an open remote debugging API that can connect with multiple cloud control systems in the future.

New API definitions for GPU monitoring and sharing functions

GPU support mainly covers two aspects: monitoring GPU usage, and GPU sharing. Through the GPU monitoring module, Baetyl-core can obtain current GPU memory usage, temperature, energy consumption and other information in real time. With the GPU sharing function, multiple applications can share a device’s GPU resources. At present, the GPU support interface has been defined; a module implementing the GPU sharing function only needs to be provided on the device side to use it.

More official modules

More official system modules are also provided:

  1. baetyl-init: Responsible for activating edge nodes to the cloud and for initializing and guarding baetyl-core; after those tasks are completed, it continues to report and synchronize the core state.
  2. baetyl-rule: Implements edge-side message routing for the Baetyl framework, exchanging messages among baetyl-broker (the edge-side message center), function services, and the IoT Hub (the cloud MQTT broker).

In addition to these new features, Baetyl 2.2 delivers many smaller optimizations and mechanism improvements, such as a streamlined installation process, system applications that can now be configured as needed, and newly defined transaction execution and task queue interfaces.

All these new features will be available immediately with the release of Baetyl 2.2. More information is available here: https://github.com/baetyl/baetyl.

Where the Edges Meet and Apps Land: Akraino Release 4 Public Cloud Edge Interface

By Akraino, Blog

Written by Oleg Berzin, Ph.D., Co-chair of the Akraino Technical Steering Committee and Fellow, Technology and Architecture at Equinix

Introduction

The Public Cloud Interface (PCEI) blueprint was introduced in our last blog. This blog describes the initial implementation of PCEI in Akraino Release 4. In summary, PCEI R4 is implemented based on Edge Multi-Cluster Orchestrator (EMCO, a.k.a ONAP4K8S) and demonstrates

  • Deployment of Public Cloud Edge (PCE) Apps from two Clouds: Azure (IoT Edge) and AWS (GreenGrass Core),
  • Deployment of a 3rd-Party Edge (3PE) App: an implementation of ETSI MEC Location API Server.
  • End-to-end operation of the deployed Apps using simulated Low Power Wide Area (LPWA) IoT client and Edge software.

Before describing the specifics of the implementation, it is useful to revisit PCEI architectural concepts.

The purpose of Public Cloud Edge Interface (PCEI) is to implement a set of open APIs and orchestration functionalities for enabling Multi-Domain interworking across Mobile Edge, Public Cloud Core and Edge, the 3rd-Party Edge functions as well as the underlying infrastructure such as Data Centers, Compute Hardware and Networks. Interfaces between the functional domains are shown in the figure below:

The detailed PCEI Reference Architecture is shown in the figure below. For the full description of the PCEI Reference Architecture please refer to the PCEI Architecture Document.

There are some important points to highlight regarding the relationships between the Public Cloud Core (PCC) and the Public Cloud Edge (PCE) domains. These relationships influence where the orchestration tasks apply and how the edge infrastructure is controlled. Both domains have some fundamental functions such as the underlying platform hardware, virtualization environment as well as the applications and services. In the Public Cloud context, we distinguish two main relationship types between the PCC and PCE functions: Coupled and Decoupled. The table below shows this classification and provides examples.

The PCC-PCE relationships also involve interactions such as Orchestration, Control and Data transactions, messaging and flows. As a general framework, we use the following definitions:

  • Orchestration: Automation and sequencing of deployment and/or provisioning steps. Orchestration may take place between the PCC service and PCE components and/or between an Orchestrator such as the PCEI Enabler and PCC or PCE.
  • Control: Control Plane messaging and/or management interactions between the PCC service and PCE components.
  • Data: Data Plane messaging/traffic between the PCC service and the PCE application.

The figure below illustrates PCC-PCE relationships and interactions. Note that the label “O” designates Orchestration, “C” designates Control and “D” designates Data as defined above.

The PCEI implementation in Akraino Release 4 shows examples of two App-Coupled PCC-PCE relationships: Azure IoT Edge and AWS GreenGrass Core, and one Fully Decoupled PCE-PCC relationship: an implementation of ETSI MEC Location API Server.

In Release 4, PCEI Enabler does not support Hardware (bare metal) or Kubernetes orchestration capabilities.

PCEI in Akraino R4

Public Cloud Edge Interface (PCEI) is implemented based on Edge Multi-Cluster Orchestrator (EMCO, a.k.a ONAP4K8S). PCEI Release 4 (R4) supports deployment of Public Cloud Edge (PCE) Apps from two Public Clouds: Azure and AWS, deployment of a 3rd-Party Edge (3PE) App: an implementation of ETSI MEC Location API App, as well as the end-to-end operation of the deployed PCE Apps using simulated Low Power Wide Area (LPWA) IoT client and Edge software.

Functional Roles in PCEI Architecture

Key features and implementations in Akraino Release 4:

  • Edge Multi-Cloud Orchestrator (EMCO) Deployment
    • Using ONAP4K8S upstream code
  • Deployment of Azure IoT Edge PCE App
    • Using Azure IoT Edge Helm Charts provided by Microsoft
  • Deployment of AWS Green Grass Core PCE App
    • Using AWS GGC Helm Charts provided by Akraino PCEI BP
  • Deployment of PCEI Location API App
    • Using PCEI Location API Helm Charts provided by Akraino PCEI BP
  • PCEI Location API App Implementation based on ETSI MEC Location API Spec
    • Implementation is based on the ETSI MEC ISG MEC012 Location API described using OpenAPI.
    • The API is based on the Open Mobile Alliance’s specification RESTful Network API for Zonal Presence
  • Simulated IoT Client Code for end-to-end validation of Azure IoT Edge
  • Azure IoT Edge Custom Software Module Code for end-to-end validation of Azure IoT Edge
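For a sense of what a Location API server returns, here is an illustrative sketch of a zone query response shaped after the ETSI MEC012 / OMA Zonal Presence `zoneInfo` resource. The helper name and the example values are hypothetical; this is not the blueprint’s actual code.

```python
def zone_status(zone_id, num_access_points, num_users):
    """Build a MEC012-style zoneInfo response body for a single zone."""
    return {
        "zoneInfo": {
            "zoneId": zone_id,
            "numberOfAccessPoints": num_access_points,
            "numberOfUsers": num_users,
            "resourceURL": f"/location/v1/zones/{zone_id}",
        }
    }

resp = zone_status("zone01", num_access_points=3, num_users=7)
print(resp["zoneInfo"]["resourceURL"])  # /location/v1/zones/zone01
```

A real server would serve this JSON over HTTP per the OpenAPI description referenced above.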

 

End-to-End Validation

The below description shows an example of the end-to-end deployment and validation of operation for Azure IoT Edge. The PCEI R4 End-to-End Validation Guide provides examples for AWS GreenGrass Core and the ETSI MEC Location API App.

 

Description of components of the end-to-end validation environment:

  • EMCO – Edge Multi-Cloud Orchestrator deployed in the K8S cluster.
  • Edge K8S Clusters – Kubernetes clusters on which PCE Apps (Azure IoT Edge, AWS GGC), 3PE App (ETSI Location API Handler) are deployed.
  • Public Cloud – IaaS/SaaS (Azure, AWS).
  • PCE – Public Cloud Edge App (Azure IoT Edge, AWS GGC)
  • 3PE – 3rd-Party Edge App (PCEI Location API App)
  • Private Interconnect / Internet – Networking between IoT Device/Client and PCE/3PE as well as connectivity between PCE and PCC.
  • IoT Device – Simulated Low Power Wide Area (LPWA) IoT Client.
  • IP Network (vEPC/UPF) – Network providing connectivity between the IoT Device and PCE/3PE.
  • MNO DC – Mobile Network Operator Data Center.
  • Edge DC – Edge Data Center.
  • Core DC – Public Cloud.
  • Developer – an individual/entity providing PCE/3PE App.
  • Operator – an individual/entity operating PCEI functions.

The end-to-end PCEI validation steps are described below (the step numbering refers to Figure 5):

  1. Deploy EMCO on K8S
  2. Deploy Edge K8S clusters
  3. Onboard Edge K8S clusters onto EMCO
  4. Provision Public Cloud Core Service and Push Custom Module for IoT Edge
  5. Package Azure IoT Edge and AWS GGC Helm Charts into EMCO application tar files
  6. Onboard Azure IoT Edge and AWS GGC as a service/application into EMCO
  7. Deploy Azure IoT Edge and AWS GGC onto the Edge K8S clusters
  8. Verify that all pods come up and register with the Azure IoT Hub and AWS IoT Core
  9. Deploy a custom LPWA IoT module into Azure IoT Edge on the worker cluster
  10. Successfully pass LPWA IoT messages from a simulated IoT device to Azure IoT Edge, decode the messages, and send them to the Azure IoT Hub

For more information on PCEI R4: https://wiki.akraino.org/x/ECW6AQ

PCEI Release 5 Preview

PCEI Release 5 will feature expanded capabilities and introduce new features as shown below:

For a short video demonstration of PCEI Enabler Release 5 functionality based on ONAP/CDS with GIT integration, NBI API Handler, Helm Chart Processor, Terraform Plan Executor and EWBI Interconnect please refer to this link:

https://wiki.akraino.org/download/attachments/28968608/PCEI-ONAP-DDF-Terraform-Demo-Audio-v5.mp4?api=v2

Acknowledgements

Project Technical Lead: 

Oleg Berzin, Equinix

Committers:

Jian Li, China Mobile
Oleg Berzin, Equinix
Tina Tsou, Arm

Contributors: 

Mehmet Toy, Verizon
Tina Tsou, Arm
Gao Chen, China Unicom
Deepak Vij, Futurewei
Vivek Muthukrishnan, Aarna Networks
Kavitha Papanna, Aarna Networks
Amar Kapadia, Aarna Networks

Embedded IoT 2021 – Intro of the research insight from the presentation based on LF Edge HomeEdge

By Blog, Home Edge

The blog post initially appeared on the Samsung Research blog

By Suresh LC and Sunchit Sharma, of SRI-Bangalore

Introduction

Every home is flooded with consumer electronic devices capable of connecting to the internet. These devices can be accessed from anywhere in the world, making the home smart. According to one survey, the smart home market will reach $6 billion by 2022, and a Gartner report projects that 80% of all IoT projects will include AI as a major component.

In a conventional cloud-based setup, the following two factors need to be considered with utmost importance:

Latency – Studies indicate a latency of roughly 0.82 ms for every 100 miles travelled by data

Data Privacy – The average total cost of a data breach is $3.92 million

With devices of heterogeneous hardware capabilities, processing that was traditionally performed in the cloud is being moved to devices where possible, as shown below.

This led us to work towards developing an edge platform for smart home systems. As an initial step, the Edge Orchestration project was developed under the umbrella of LF Edge.

Before getting into the details of Edge Orchestration, let us define a few terms commonly used in this context.

Edge computing – a distributed computing paradigm that brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth

Devices – Our phones, tablets, smart TVs, and other electronics that are now a part of our homes

Inferencing – act or process of reaching a conclusion from known facts.

ML perspective – the act of making decisions / giving predictions / aiding the user based on a pre-trained model.

Home Edge – Target

The Home Edge project defines use cases and technical requirements for a smart home platform. The project aims to develop and maintain features and APIs catering to the above needs through open source collaboration. The minimum viable features of the platform are listed below:

Dynamic device/service discovery at “Home Edge”

Service Offloading

Quality of Service guarantee in various dynamic conditions

Distributed Machine Learning

Multi-vendor Interoperability

User privacy

Edge Orchestration aims to achieve easy connection and efficient resource utilization among edge devices. Currently, REST-based device discovery and service offloading are supported. A score is calculated from resource information (CPU/memory/network/context) by each edge device. The score is used to select a device for service offloading when more than one device offers the same service. Home Edge also supports Multi Edge Communication for device discovery across NATs.

Problem Statement

As mentioned above, service offloading is based on the scores of all the devices. The score is in turn calculated using CPU/memory/network/context. These values are populated in the database when Home Edge is first installed on the device, and hence are essentially static.

Static Score is a function of

Number of CPU cores in a machine

Bandwidth of each CPU core

Network bandwidth

Round trip time in communication between devices

For example, say at time t1 device B has available memory ‘X’, and this value is stored in the database. At time t2, device A requests device B’s score, but the available memory on device B is now ‘X-Y’. The score, however, is still calculated using the ‘X’ stored initially.

In another scenario, device A requests service offloading to device B. Device B starts executing the service but moves out of the network mid-execution. In such cases, the offloading fails.

We can see that in the above situations the main purpose of Home Edge is defeated. This led to the proposal of dynamic score calculation, which is more effective and efficient.

Properties of a Home-Based Distributed Inferencing System

We propose an effective method for score calculation. The proposed method for score calculation is based on following points:

1. Model Specific Device Selection

2. Efficient Data Distribution

3. Efficient Churn Prediction

4. Non Rigid and Self Improving

Based on the above four properties we defined a feedback based system that can help us achieve optimal data distribution for parallel inferencing in a heterogeneous system.

High Level System

The following diagram shows the high-level architecture of the system. Let us take a use case and walk through the flow of tasks. Suppose a user asks the speaker, “Were there any visitors today?” The query is ingested by the query manager to understand its intent: identify the number of visitors who came today. The data source needed to execute the query is then identified, say the doorbell camera or the backyard CCTV. Next, the model required for this purpose is identified, along with the devices that offer that service (model). Devices are selected based on their past performance, static score, trust, and the number of times the model has previously run on them. Device priorities are calculated, and the data is split among the devices accordingly. The results from the devices are aggregated to produce the final output.

Understanding Score

Total Score is a function of Static score and Memory usage of the device

Changes in Score can help us predict estimated runtimes at the current time

Understanding Time Estimates

Time Estimates are calculated based on previous time records, average and current score

Data points which are stored:

1. Average Score

2. Average time taken per inference

3. Number of times a model has been deployed on a device

4. Success probability of a model running on a device – Trust

Estimating runtime on Devices

Runtime estimation is done once sufficient data points have been collected in the system

Delta Factor – A linear relationship factor that approximates the change in runtime when the score changes

Estimated time – calculated using the delta factor, the average score, the average runtime, and the current score of the device
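Under that linear assumption, the estimate can be sketched as follows. The exact formula and delta value used by Home Edge are not given in this post; this is an illustrative reading of the description above:

```python
def estimate_runtime(avg_runtime, avg_score, current_score, delta=1.0):
    """Estimate the runtime at the current score from the stored averages.
    A score below the average implies a proportionally slower run,
    scaled by the linear 'delta' factor (assumed value)."""
    return avg_runtime + delta * avg_runtime * (avg_score - current_score) / avg_score

# Device averaged 2.0 s per inference at score 80; its score has dropped to 60.
t = estimate_runtime(avg_runtime=2.0, avg_score=80, current_score=60)
print(round(t, 2))  # 2.5
```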

Understanding Churn Probability Estimation

Logical buckets for every 5-minute frame are created to track the availability of devices

If a bucket polls a device for the nth time and the device was available k times before, the new probability would be:

1. (k+1)/(n+1), if the device is available

2. k/(n+1), if the device is unavailable
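The update rule above transcribes directly into code:

```python
def churn_update(k, n, available):
    """Update a device's availability probability after its nth poll in a
    5-minute bucket, given it was available k times before."""
    return (k + 1) / (n + 1) if available else k / (n + 1)

# Device seen 3 times in 4 polls; the 5th poll either finds it or not.
print(churn_update(k=3, n=4, available=True))   # 0.8
print(churn_update(k=3, n=4, available=False))  # 0.6
```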

Calculating Priority

After runtimes are estimated, the probability of each device leaving the system is checked, and devices whose availability probability falls too low are dropped from the consideration list

All devices in the consideration list are sorted by the product of their trust and the inverse of their estimated runtime

The top K of the N devices are selected and the data is distributed among them
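A minimal sketch of this selection step. The device tuples, threshold, and K are made up for illustration:

```python
def select_devices(devices, min_availability=0.5, k=2):
    """devices: list of (name, trust, est_runtime, availability_prob).
    Drop devices likely to churn, rank the rest by trust * (1/runtime),
    and keep the top k."""
    candidates = [d for d in devices if d[3] >= min_availability]
    candidates.sort(key=lambda d: d[1] / d[2], reverse=True)
    return [d[0] for d in candidates[:k]]

picked = select_devices([
    ("tv",     0.9, 2.0, 0.9),   # trust/runtime = 0.45
    ("phone",  0.8, 1.0, 0.8),   # 0.80
    ("tablet", 0.9, 1.5, 0.3),   # dropped: likely to leave the network
    ("fridge", 0.5, 5.0, 0.9),   # 0.10
])
print(picked)  # ['phone', 'tv']
```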

Data Distribution

Data is distributed among the devices in inverse ratio to their estimated runtimes, to minimize the total time the system takes to complete the inference
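A sketch of the inverse-ratio split. How Home Edge handles rounding is not specified; assigning the remainder to the first device is an assumption made here:

```python
def split_data(total_items, runtimes):
    """Split total_items among devices in inverse proportion to their
    estimated runtimes, so faster devices receive more data."""
    weights = [1.0 / r for r in runtimes]
    total_weight = sum(weights)
    shares = [int(total_items * w / total_weight) for w in weights]
    shares[0] += total_items - sum(shares)  # hand the rounding remainder to the first device
    return shares

# Three devices with estimated runtimes of 1 s, 2 s, and 4 s per item.
print(split_data(100, [1.0, 2.0, 4.0]))  # [58, 28, 14]
```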

Feedback

After execution, the actual runtimes and scores of the devices are used to adjust the averages in the database

In case of failure, the next device is selected and the trust value of the failed device is adjusted in the database

Test Run on Simulator

Aim – To demonstrate heuristic-based data distribution among devices
To Test – Note which devices work best for a model, checking the selection irrespective of device scores

A web-based simulator was developed to demonstrate the proposed method. Below is a sample screenshot of the simulator.

 

In the course of our work we also found that some models execute better on specific devices. To examine this, we ran two use cases – camera anomaly detection and person identification – and found that the models used in the two scenarios performed best on different devices, as shown below.

AgStack joins with LF Edge and EdgeX Foundry to Create Agriculture IoT Solutions

By Blog, EdgeX Foundry

This post originally published on the AgStack website

The newest Linux Foundation project, AgStack, announces a liaison with LF Edge and EdgeX Foundry to dock existing EdgeX software for IoT into AgStack’s IoT Extension. AgStack has also agreed to participate in the EdgeX Foundry Vertical Solutions Working Group to explore the adoption and use of the EdgeX IoT/edge platform as an extension to its design of digital infrastructure for agriculture stakeholders.  Through mutual LF project cooperation, the two projects hope to create a complete edge-to-enterprise open-source solution as part of AgStack that can help the global agricultural ecosystem.

The goals of the cooperation between the two projects are to:

  • Utilize the existing EdgeX Foundry framework to quickly accelerate AgStack’s reach into the agriculture edge – providing a universal platform for communicating with the ag industry’s sensors, devices and gateways.
  • Extend the EdgeX framework to handle agricultural edge use cases and unique ag ecosystem protocols, models, data formats, etc.
  • Jointly work on edge-to-enterprise market ready solutions that can be easily demonstrated and used as the foundation for real world products benefiting ag industry creators and consumers.
  • Setup an exchange (as fellow LF projects) to mutually assist and share lessons learned in areas such as project governance, devops, software testing, security, etc.

“We are in the early stages of defining and building the AgStack platform and we prefer not to have to start from scratch or reinvent the wheel as we build our industry leading open-source platform” said Sumer Johal, Executive Director of AgStack.  “EdgeX Foundry gives us the opportunity to leapfrog our IoT / edge efforts by several years and take advantage of the ecosystem, edge expertise, and lessons learned that EdgeX has acquired in the IoT space.”

“As a versatile and horizontal IoT/edge platform, we are excited to partner with AgStack who can help to highlight how EdgeX can be used in  agriculture IoT use cases and how the AgStack and EdgeX communities can collaborate to scale digital transformation for the agriculture and food industries” said Jim White, Technical Steering Committee Chairman of EdgeX Foundry.  “Even though a fellow LF project, we view AgStack as one of our vertical customers – applying EdgeX to solve real world problems – and what better place to demonstrate that than in an industry that feeds the world.”

 About AgStack

  AgStack was launched by the Linux Foundation in May 2021.  AgStack seeks to improve global agriculture efficiency through the creation, maintenance and enhancement of free, re-usable, open and specialized digital infrastructure for data and applications.  Founding members of AgStack include Hewlett Packard Enterprise, Our Sci, bscompany, axilab, Digital Green, Farm Foundation, Open Team, NIAB and Product Marketing Association.  To learn more visit https://agstack.org.

 

Summary: Akraino’s Internet Edge Cloud (IEC) Type 4 AR/VR Blueprint Issues 4th Update

By Akraino, Blog

By Bart Dong

The IEC Type 4 focuses on AR VR applications running on the edge. In general, the architecture consists of three layers:

  • We use IEC as the IaaS layer.
  • We deploy the TARS Framework as the PaaS layer to govern backend services.
  • And for the SaaS layer, an AR/VR application backend is needed.

There are multiple use cases for AR/VR, itemized below. For Release 4, we focused on building the infrastructure and the virtual classroom application, highlighted here:

Let’s have a quick look at the Virtual Classroom app. Virtual Classroom is a basic app that lets you experience virtual reality by simulating a classroom with teachers and students.

Generally, it has two modes, the Teacher mode, and the Student mode.

  • In Teacher mode
    • You will see the classroom from the teacher’s view.
    • You can see some students in the classroom listening to your presentation.
  • In Student mode
    • You will see the classroom from a student’s view.
    • You can see the teacher and other students on the remote side.

The whole architecture, shown below, consists of three nodes: Jenkins Master, TARS Master, and TARS Agent with the AR/VR Blueprint.

  • For the Jenkins Master, we deployed a Jenkins Master in our private lab for testing.
  • For the TARS Master, we deployed the TARS platform for serverless use case integration.
  • For the TARS Agent, we deployed the Virtual Classroom back-end on this node, plus two front-end clients on KVM acting as the Virtual Classroom teacher and student.

It’s not a very complex architecture. As mentioned before, the TARS Framework plays an important role as the PaaS layer that governs backend services. Let’s go a little further with TARS.

TARS is a high-performance microservice framework based on a name service and the TARS protocol, with an integrated administration platform and a hosting service implemented via flexible scheduling. TARS supports Arm and x86, and multiple platforms including macOS, Linux, and Windows.

TARS can quickly build systems and automatically generate code, taking into account ease of use and high performance. At the same time, TARS supports multiple programming languages, including C++, Golang, Java, Node.js, PHP, and Python. TARS can help developers and enterprises quickly build their own stable and reliable distributed applications in a microservices manner to focus on business logic to improve operational efficiency effectively.

TARS is working in PaaS. It can run on physical machines, virtual machines, and containers, including Docker and Kubernetes. You can store TARS service data in Cache, DB, or a file system.

TARS framework supports Tars protocol, TUP, SSL, HTTP 1/2, protocol buffers, and other customized protocols.
 TARS protocol is IDL-based, binary, extensible, and cross-platform. These features allow servers to use different languages to communicate with each other using RPC. And TARS supports different RPC methods like synchronous, asynchronous, and one-way requests.

TARS has multiple functions to govern services and can integrate other DevOps Tools, such as Jenkins as the IEC Type 4.

For Release 4, we updated TARS to version 2.4.13, which supports multiple new features, such as an API gateway (named TarsGateway) that supports translating HTTP to the TARS protocol.

We contributed a series of TARS APIs to the Akraino API map, including application management APIs and service management APIs.

And last year, on March 10th, TARS evolved into a non-profit microservices foundation under the Linux Foundation umbrella.

The TARS Foundation is a neutral home for open source microservices projects that empower any industry to quickly turn ideas into applications at scale. It is not only TARS but a microservices ecosystem. It solves microservices problems with TARS characteristics, such as:

  • Agile Development with DevOps best practices
  • Built-in comprehensive service governance
  • Multiple languages supported
  • High performance with Scalability

Here is a summary of what we have done in release 4:

  • We focused on building the infrastructure and virtual classroom application, which I mentioned before.
  • We deployed the AR/VR Blueprint in Parserlabs, and used Jenkins to make CI/CD available in the Blueprint.
  • For Release 4, we updated TARS to version 2.4.13, which supports multiple new features, such as an API gateway (named TarsGateway) that supports translating HTTP to the TARS protocol.
  • We then passed the security scan and the validation lab check. 

For the coming Release 5,  we are planning to deploy IEC Type 4 AR/VR on Kubernetes by K8STARS.

K8STARS is a convenient solution for running TARS services in Kubernetes.

It maintains the native development capabilities of TARS and provides automatic registration and deregistration with the TARS name service.

It supports the smooth migration of existing TARS services to Kubernetes and other container platforms.

K8STARS is non-intrusive by design, with no coupling to the operating environment.

A service can be run with a single command:

kubectl apply -f simpleserver.yaml

If you are interested in the TARS Foundation or TARS, please feel free to contact us via email at tars@tarscloud.org

Anyone is welcome to join us in building the IEC Type 4 AR/VR blueprint and microservices ecosystem!