Akraino R5: Spotlight on Integrated Edge Cloud, Type2 — LightWeight Multi-Access Edge Cloud (MEC) on AWS

By Akraino, Blog

By Vinothini Raju

Integrated Edge Cloud (IEC) is a family of blueprints within the Akraino project, which is developing a fully integrated edge infrastructure solution. This open source software stack provides critical infrastructure to enable high performance, reduce latency, improve availability, lower operational overhead, provide scalability, address security needs, and improve fault management within edge computing. The IEC project addresses multiple edge use cases and industries, not just telco; the group is developing solutions and support for carriers, providers, and IoT networks. The IEC Type2 blueprint, specifically, focuses on medium-scale deployments of edge clouds.

Meanwhile, Multi-Access Edge Cloud (MEC) offers cloud computing capabilities at the edge of the network. Collecting and processing data closer to subscribers reduces latency and data congestion, and improves the subscriber experience through real-time updates. A cloud native implementation at the edge realises the full potential of the cloud, allowing developers to focus on writing scalable and highly reliable applications instead of worrying about operational constraints. Developing applications at the edge goes beyond scalability requirements: edge native applications need real-time processing, are latency-sensitive, and are hungry for high bandwidth.

Modern developer platforms should offer a unified experience for building both cloud and edge native applications seamlessly. Using cloud and edge native sandbox environments, developers should be able to simulate a real-time environment to test applications for mobility, caching, performance, and so on. Out-of-the-box integrations, such as AI/ML frameworks like Kubeflow, data processing frameworks like EdgeX Foundry to synthesize data at the edge, and monitoring and logging stacks like Prometheus, Grafana, and EFK/ELK, along with a no code/low code experience, can accelerate time to delivery and promote developer innovation.

Apart from providing a rich developer experience, these sandbox environments should be lightweight, so they can be provisioned quickly and later extended to a production-ready environment.

Additionally, leveraging public cloud providers like AWS for a distributed cloud at the edge can greatly reduce the CAPEX/OPEX to set up a MEC Cloud.

Considering these requirements, the gopaddle team has proposed and developed an Akraino blueprint that provisions a lightweight Multi-Access Edge Cloud on AWS, leveraging microk8s.

A word about microk8s

Microk8s is a lightweight Kubernetes distribution from Canonical. It uses the Snap package manager to spin up a Kubernetes cluster in less than a minute. The Snap installer consumes as little as 192 MB of RAM, and the Kubernetes distribution consumes as little as 540 MB. It is fairly simple to spin up a single-node cluster, which can later be extended to a multi-node cluster. Once there are 3+ nodes, a high availability mode can be enabled, making the cluster production-ready. Microk8s offers out-of-the-box tool chains such as Kubeflow for machine learning workloads and Prometheus for monitoring, as well as a built-in image registry.
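As a sketch of how quick this is in practice (standard MicroK8s commands on Ubuntu; the add-on names assume the defaults shipped with MicroK8s):

```shell
# Install MicroK8s via snap; the single-node cluster is up in under a minute.
sudo snap install microk8s --classic
microk8s status --wait-ready          # block until the node is ready

# Enable out-of-the-box add-ons: DNS, the built-in registry, Prometheus.
microk8s enable dns registry prometheus

# Print a join command for additional nodes; once 3+ nodes have joined,
# MicroK8s can run in high-availability mode.
microk8s add-node
```

The same cluster that starts as a sandbox can thus be grown node by node into a production-ready deployment.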

While there are other lightweight Kubernetes distributions, microk8s offers better CPU/memory/disk utilization. Thus, microk8s stands out as a good candidate for a developer sandbox environment that can be scaled to a production environment.

Reference: Sebastian Böhm and Guido Wirtz, Distributed Systems Group, University of Bamberg, Bamberg, Germany

AWS Wavelength – Bringing Cloud Experience to the Edge

AWS has partnered with a few telecom carrier providers like Verizon, Vodafone, etc. to offer edge environments called AWS Wavelength zones in a few selected regions/zones. AWS VPCs can be extended to AWS Wavelength zones, bringing the AWS experience to the edge environment. 

Using the AWS carrier gateway, developers can connect to the carrier network and use its 5G network for developing AI/ML or embedded applications. This gives them a real-time development experience to validate application performance, latency, and caching. It also provides a unified experience across enterprise cloud development and edge development.

gopaddle – No Code for Cloud & Edge Native Development

gopaddle is a no code platform to build, deploy, and maintain cloud and edge native applications across hybrid environments spanning cloud and edge. The platform helps in easy onboarding of applications using intelligent scaffolding, provides out-of-the-box DevSecOps automation by integrating with 30+ third-party tools, offers pre-built, ready-to-use application templates like EdgeX Foundry, and provides centralized provisioning and governance of multi-cloud and hybrid Kubernetes environments.

LightWeight MEC Blueprint

The blueprint leverages three main building blocks (microk8s, AWS, and gopaddle) to provision a lightweight MEC that acts as a developer sandbox and can be extended for production deployments. Provisioning of the microk8s-based MEC environment on AWS is automated using Terraform templates. These templates can be uploaded to and centrally managed through gopaddle, and multiple environments can be provisioned from them.
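The provisioning workflow follows the standard Terraform lifecycle; the variable names below are illustrative placeholders, not taken from the blueprint's actual templates:

```shell
# Initialize the working directory and download the AWS provider.
terraform init

# Preview the resources; region and instance size are hypothetical variables.
terraform plan -var="region=us-west-2" -var="instance_type=t3.medium"

# Create the EC2 instance(s) and bootstrap microk8s on them.
terraform apply -auto-approve
```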

For more information on the IEC Type 2 Akraino blueprint available with the R5 release, visit this page:

Akraino R5: Spotlight on the AI Edge — Federated Machine Learning at the Edge

By Akraino, Blog

This is the first in a series of brief posts focused on Akraino use cases and blueprints. Thanks to the many community members for creating the blueprints and providing these helpful summaries! 

By: Enzo Zhang, Zifan Wu, Haihui Wang, and Ryan Anderson

In Federated Machine Learning (ML), the attributes of each row of data come from more than one edge provider. No single side can hold all of the required data on its own to function as designed, and specific edges can be quite limited in edge computing capability for a variety of reasons.

The application must cover more than one edge while also collecting and exchanging data between edges. Meanwhile, a method must be applied to make sure the communication between these edges is secure enough for all participants.

Here are some characteristics of AI edge application use cases that illustrate the challenge:

  • Deployment across more than one edge
  • No extra storage space
  • Few computing units
  • Data processed locally
  • Network capability
  • Data shared between edges
  • Edges working together on ML
  • Data security

Applications group data from edges based on security methods, but data privacy is still managed by the Federated Machine Learning framework. Data, as an input to an ML model, resides at a central site, and the exchange of data occurs after the ML processes are done. The required data is then sent back to the edge for its next process.

Several key components are required to make the use case work well and machine learning run successfully:

  1. Data transfer and grouping
  2. Data collection and distribution
  3. A Federated Machine Learning framework
  4. A security encryption algorithm

This functionality, integrated with the framework, enables edge applications to run correctly and securely.

For more information on the Federated Machine Learning blueprint, please visit this section of the Akraino wiki.

LF Edge Member Spotlight: mimik

By Blog

The LF Edge community comprises a diverse set of member companies and people that represent the IoT, Enterprise, Cloud and Telco Edge, and beyond. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we highlight our newest general member, mimik. Read on to learn more about the organization, their plans for collaborating across LF Edge, and more. 

Please tell us a little about your organization: 

mimik is a group of dedicated individuals who understand that sharing our unique approach to edge cloud computing with other organizations can change the world for the better. mimik has pioneered a software-only hybrid edgeCloud platform, originating from Founder & CEO Fay Arjomandi’s realization that the future of digital modernization is occurring closer to where data is created and where insights are actionable.

As IT becomes increasingly decentralized and sensors and endpoints evolve rapidly in their sophistication, mimik’s Hybrid edgeCloud development platform enables devices to function beyond their current role as mere endpoints and instead act as powerful components of a system, serving as cloud servers when needed. mimik’s platform supports standard microservice development, deployment, and communication on any device/OS, including iOS and Android. It also allows developers to create applications that interact locally with the microservices on the same device and collaborate and communicate with other microservices in dynamic clusters.

As a result, data can be processed and analyzed at the source, and knowledge can be exchanged within and across clusters with other nodes and/or with the cloud. This unique architectural approach to the edge, whereby any connected computing device can act as a cloud server, reduces reliance on the central cloud for every data stream/transaction. It also offers several compounding benefits: 100% control over privacy, up to 90% reduction in cloud hosting costs and energy consumption, and operation even in the absence of an internet connection.

Why is your organization adopting an open-sourced approach?

Mobile Internet was mostly about mobile phones connecting to the Internet to consume content. We’re now transitioning to the era of the hyper-connected world, where digital intersects with every aspect of the physical world around us. Therefore, digital solutions should mimic physical interactions. We believe in an ecosystem and in the ability of solutions to interact across different vendors and providers. Now more than ever, collaboration across the community is important, which is why we embrace an open-source model for our offerings.

Why did you join LF Edge, and what impact do you think LF Edge has on the edge, networking, and IoT industries?

mimik believes in a collaborative approach and recognizes LF Edge as the most widely recognized global organization of platform and solution provider leaders. We’re just at the beginning of the journey to the hyper-connected world. It’s important for this like-minded community of providers to collaborate with each other and with the wider community to build the next generation of digital infrastructure, solutions, and applications. The open and inclusive approach that LF Edge takes to galvanizing like-minded individuals in projects, development, and innovative solutions is key to harmonization at the edge, a core value that mimik embraces. We firmly believe in the wide-reaching impact that LF Edge has across industries and sub-sectors of the broad landscape of edge and cloud computing; it is a central driving force for innovative leadership in digital transformation. Multiple aspects of LF Edge’s community-driven philosophy are vital to growth and expansion, and we look forward to becoming active members of this important group of innovative leaders.

What do you see as the top benefits of being part of the LF Edge community?

Multiple aspects of LF Edge’s community-driven philosophy coincide with how mimik sees our company growing and thriving amongst peers and like-minded individuals looking to push the boundaries of what is possible at the edge. In addition to being closely aligned with partners such as IBM regarding Open Horizon, mimik looks forward to the opportunity to expand and grow our network with a broad range of individuals who are passionate about the fundamental role that it plays in digital transformation. The possibilities that the LF Edge community offers its members are priceless. To add to the invaluable points of connection and access to leaders on the cusp of true transformation, LF Edge will also provide mimik with a stage on which we can showcase what we believe is a unique and differentiating series of solutions to partners as well as potential customers. mimik aspires to work with others to “edgifi” the world and believes that LF Edge will be an important piece of the puzzle to move the needle from concept to fruition.

What sort of contributions has your team made (or plans to make) to the community, ecosystem through LF Edge participation?

The following are the types of contributions mimik would like to make to the LF Edge community:

  • Attending LF Thought Leadership Committee meetings
  • Being part of LF Edge Speakers Bureau
  • Attending Member Summit (November) and potential speaking slot
  • Attending Marketing meetings
  • Being elected to the Governing Board
  • Contributing to different membership levels and benefits as an expansion to our standard membership status
  • Publishing Blogs
  • Code submissions, either as updates to our own contributions or on the projects themselves
  • Contributing solution architecture
  • Shaping a special interest group around practical use cases of edge cloud computing
  • Leading/participating in training sessions on
    • principles of edge cloud computing
    • microservice-driven architecture
    • API-first architecture
    • service mesh
    • scalable solution design
    • mobile application development
    • etc.

What sets LF Edge apart from other industry alliances?

LF Edge is the only consortium of pioneers of open source innovations at the edge. It’s not just a community of people exchanging contacts and ideas, LF Edge is also an innovation hive that goes beyond responding to market demands and instead creates them by redefining entire markets and industries. The opportunities to guide the future of digital transformation as part of this group are seemingly endless and are constantly evolving versus the static benefits offered by traditional industry alliances.

How will LF Edge help your business?

mimik views LF Edge as an essential consortium of like-minded individuals who help each other do the right thing rather than simply sell solutions. We are passionate about raising awareness of microservice development at the edge, and about collaborating with the community to help build out actual use cases and drive adoption. LF Edge’s platform provides its members tangible opportunities to educate and connect people in multiple forms, from training to blogs to events. By being a member of LF Edge, mimik will strengthen its relationships with existing partners, explore new ones, and learn and expand the scope of our mandate to “edgifi the world.” Moreover, LF Edge will give us an opportunity to learn from other members’ experiences and feedback, to ensure that our roadmap is aligned with the requirements of our ecosystem so we can jointly build a sustainable internet of the future.

What advice would you give to someone considering joining LF Edge?

mimik’s advice is to dive right into the sea of opportunities at your fingertips. Get engaged as soon as possible to be part of this collaborative ecosystem.

State of the Edge and Edge Computing World Announce Top Finalists For The Edge Woman of the Year Award 2021

By Blog, State of the Edge

Edge Computing Industry Comes Together to Recognize Top Ten Women Shaping the Future of Edge for 2021

“I am thrilled to participate in announcing this year’s Edge Woman of the Year 2021 finalist categories; together we have much to accomplish and the women nominated for this year inspire us all.” — Edge Woman 2020 winner, Fay Arjomandi, Founder of mimik Tech

CEDAR PARK, TX, UNITED STATES, August 25, 2021 / — Edge computing leaders from State of the Edge and Edge Computing World announce the top finalists for the third annual Edge Woman of the Year award.

The Edge Woman of the Year 2021 nominees reflect a group of qualified industry leaders in roles impacting the direction of their organizations’ strategy, technology, or communications around edge computing, edge software, edge infrastructure, or edge systems. The finalists were selected for their outstanding nominations and referred to the final panel of reviewers by the organizers. The final winner will be chosen by a panel of industry judges, including the previous Edge Woman of the Year 2020 winner, Fay Arjomandi, Founder of mimik Technology Inc. The winner of the Edge Woman of the Year 2021 will be announced during this year’s Edge Computing World, being held virtually October 12-15, 2021.

“The Edge Woman of the Year Award 2021 was created as part of an industry commitment of time and resources to highlight the growing importance of the contributions and accomplishments of women in edge computing,” said Candice Digby, Partnerships and Events Manager at Vapor IO. “The award is presented annually at the Edge Computing World event, which offers the finalists and winners one of the most visible platforms for the entire edge computing ecosystem to highlight the advancements of their efforts in this field.”

The State of the Edge and Edge Computing World are proud to sponsor the annual Edge Woman of the Year Award, which is presented to an outstanding female and/or non-binary professional in edge computing across network, cloud, applications, developers, and infrastructure end-users.

“I was honored to have been chosen as Edge Woman of the Year 2020 and to be recognized alongside many inspiring and innovative women across the industry,” said Fay Arjomandi, Founder and CEO, mimik Technology Inc. “I am thrilled to participate in announcing this year’s Edge Woman of the Year 2021 finalist categories; together we have much to accomplish and the women nominated for this year inspire us all to continue our work in building a sustainable digital economy.”

The annual Edge Woman of the Year Award is presented to outstanding female and non-binary professionals in edge computing for outstanding performance in their roles elevating Edge. The 2021 award committee selected the following seven finalists for their excellent work in the named categories:

Leadership in Edge Startups
Eva Schonleitner, CEO at

Leadership in Edge Open Source Contributions
Dr. Stefanie Chiras, Senior Vice President, Platforms Business at Red Hat

Leadership in Hyperscale Edge
Prajakta Joshi, Group Product Manager, Edge Cloud for Enterprise and Telecom at Google

Leadership in Network Edge
Rita Kozlov, Director of Product at Cloudflare, Inc.

Leadership in Edge Innovation and Research
Azimeh Sefidcon, Research Director at Ericsson

Leadership in Edge Best Practices
Lily Yusupova, Strategic Account Executive, Schneider Electric

Leadership in Rural Edge
Nancy Shemwell, Chief Operating Officer, Trilogy Networks, Inc.

The 2021 submissions continue to be incredibly impressive and the list of Edge Woman of the Year finalists represents a premier group of women taking the reins of leadership across the edge computing ecosystem. Edge computing continues to be one of the fastest growing industries, and we hope these women inspire the industry as well as encourage more women to pursue careers in Edge.

“Visibility of female leadership is so important to the potential growth and innovation in Edge Computing,” said Gavin Whitechurch of Topio Networks and Edge Computing World, “Recognizing this group of elite technologists inspires and encourages continuous forward thinking in our diverse industry.”

For more information on the Women in Edge Award visit:

About State of the Edge
The State of the Edge is a member-supported research organization that produces free reports on edge computing and was the original creator of the Open Glossary of Edge Computing, which was donated to The Linux Foundation’s LF Edge. The State of the Edge welcomes additional participants, contributors, and supporters. If you have an interest in participating in upcoming reports or submitting a guest post to the State of the Edge blog, feel free to reach out by email.

About Edge Computing World
Edge Computing World is the only industry event that brings together the entire edge ecosystem. The event will explore how a diverse range of high-growth application areas, including AI, IoT, NFV, augmented reality, video, cloud gaming, and self-driving vehicles, are creating new demands that cannot be met by existing infrastructure. The theme will cover the edge as a new solution for low latency, application autonomy, data security, and bandwidth thinning, all of which require greater capability closer to the point of consumption.

Join us at Edge Computing World October 12-15, 2021 for the world’s largest virtual edge computing event.

Jessica Rees
+1 4158897444

It’s Here! Announcing EdgeX 2.0 – the Ireland Release

By Blog, EdgeX Foundry

By Jim White, EdgeX Foundry Technical Steering Committee Chair

When people think of Ireland, most envision bright green, fresh landscapes: an island cleansed regularly by plentiful rain and always made new. The Ireland name is appropriate for our latest release, which is EdgeX made fresh, a major overhaul.

EdgeX Foundry 2.0 is a release over a year in the making!  

This is the EdgeX community’s second major release and our 8th release overall. Indeed, the Ireland release contains both new features and changes that are not backward compatible with any EdgeX 1.x release. In general, this release took a huge step to eliminate, or at least significantly reduce, about four years of accumulated technical debt, while also adding new capabilities.

Why Move to EdgeX 2.0?

Whether you are an existing EdgeX user or new to the platform and considering adoption, there are several reasons to explore the new EdgeX 2.0 Ireland release:

  • First, we’ve completely rewritten and improved the microservice APIs.  I’ve provided some more details on the APIs below, but in general, the new APIs allow for protocol independence in communications, better message security, and better tracking of data through the services.  They are also more structured and standardized with regard to API request / response messages.
  • EdgeX 2.0 is more efficient and lighter (depending on your use case).  For example, Core Data is now entirely optional.
  • EdgeX 2.0 is more reliable and supports better quality of service (depending on your setup and configuration).  For example, you can use a message bus to shuffle data from the sensor collecting Device Services to exporting Application Services.  This will reduce your reliance on the REST protocol and allow you to use a message broker with QoS levels elevated when needed.
  • EdgeX 2.0 has eliminated a lot of technical debt and includes many bug fixes.  As an example, EdgeX micro service ports are now within the Internet Assigned Numbers Authority (IANA) dynamic/private port range so as not to conflict with well-known system or registered ports.

Introducing the V2 APIs

During one of our semi-annual planning meetings back in 2019, the community agreed that we needed to address some challenges in the way EdgeX microservices communicated, both with each other and with third-party systems. We identified some technical debt and knew it had to be addressed to give the platform a base from which new features, better security, and additional means of communication could be supported in the future. This meant changing the entire API set for all EdgeX microservices.

What was “wrong” with the old V1 APIs?  Here were some of the issues:

  • Intertwined data model/object information with poor URI paths that exposed internal implementation details and made the APIs difficult to use.  Here is an example comparing the v1 and v2 calls to update a device.

V1: PUT /device/name/{name}/lastconnected/{time}

V2:  PATCH /device [with JSON body to specify the device info]

  • In many cases, requests often lacked any formal response object.  The HTTP status code was all that was returned.
  • The APIs and model objects were REST specific and didn’t support other communication protocols (such as message bus communications via MQTT or the like).
  • Used a model that was very heavy – often containing full embedded model objects instead of model object references.  For example, in v1, when requesting device information, the device profile and Device Service were also returned.
  • Used inconsistent naming and included unnecessary elements.  This made it hard for users to figure out how to call on the APIs without checking the documentation.  
  • Lacked a defined header that could be used to provide security credentials/tokens in the future.

Changing the entire API set of a platform is a monumental task.  One that required the collective community to get behind and support.  Given the size of the task, the community decided to spread the work over a couple of release cycles.  We began the work on the new APIs (which we called the V2 APIs) last spring while working on the Hanoi release and completed the work with this release.

The new APIs come with common sense request and response objects.  These objects share new features such as a correlation ID which allows requests and responses among services to be associated, and it also allows the flow of data through the system to be tracked more easily.  When a piece of sensor data is collected by a Device Service, the correlation ID on that data is the same all the way through the export of that data to the north side systems.  Below is a simple example of the new request and responses – in this case to add a new Device Service.
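The screenshot of that example is not reproduced here; as a hedged illustration, a V2 "add Device Service" exchange looks roughly like the following (field names follow general EdgeX V2 API conventions, but the exact schema should be checked against the official API documentation, and all values below are invented):

```json
[
  {
    "apiVersion": "v2",
    "requestId": "e6e8a2f4-eb14-4649-9e2b-175247911369",
    "service": {
      "name": "my-device-service",
      "baseAddress": "http://my-device-service:59990",
      "adminState": "UNLOCKED"
    }
  }
]
```

A matching response echoes the identifiers, so callers can associate it with the request:

```json
[
  {
    "apiVersion": "v2",
    "requestId": "e6e8a2f4-eb14-4649-9e2b-175247911369",
    "statusCode": 201
  }
]
```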


Importantly, the V2 APIs set us up for new features in the future while also making EdgeX easier to interact with.

EdgeX 2.0 Feature Highlights

Technical debt removal and the new V2 APIs aside, what else was added in EdgeX 2.0?  Plenty!  Here is a sample of some of the more significant features.

Message Bus and Optional Core Data

In EdgeX 1.0, all communication from the Device Services (the services that communicate with and collect data from “things”) to Core Data was via REST. In EdgeX 2.0, this communication can be performed via message bus (using either Redis Pub/Sub or MQTT implementations). In addition to enabling “fire and forget” communications, the underlying message brokers can offer guarantees of message delivery.

Moreover, EdgeX 2.0 takes the message bus implementation a step further by allowing Device Services to send the data directly to Application Services via message bus.  Core data becomes an optional secondary subscriber in this instance.  For organizations that do not need to persist sensor data at the edge, this option allows the entire Core Data service to be removed from deployment helping to lighten EdgeX’s footprint and resource needs.

REST communications from Device Services to Core Data can still be done, but it is not the default implementation in EdgeX 2.0.  The diagrams below depict the old versus new message bus service communications.
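As an illustrative configuration fragment (section and key names based on typical EdgeX 2.0 service configuration; verify them against the documentation for your specific service), pointing a Device Service at the Redis Pub/Sub message bus looks roughly like:

```toml
[MessageQueue]
Protocol = "redis"    # Redis Pub/Sub implementation; MQTT is the alternative
Host = "localhost"
Port = 6379
Type = "redis"
PublishTopicPrefix = "edgex/events/device"  # readings are published under this prefix
```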

Improved security

In the 2020 IoT developer survey, security remained one of the top 3 concerns of people building and fielding IoT/edge solutions.  It also remains a prime concern of the members of our community.  The EdgeX project, with each release, has worked to improve the security of the platform.  In this release, there were several additions.

  • We created a module and a pattern that provides a common means for all the services to retrieve secrets from the secret store (EdgeX uses Vault as the secret store by default).  We call this feature “secret provider for all.”
  • EdgeX uses Consul for service registry and configuration.  In this release, the API Gateway is used to allow access to Consul’s APIs; in EdgeX 1.0, access to the Consul APIs was denied outright, making changes in Consul difficult.
  • Consul is now bootstrapped and started with its access control list system enabled – offering better authentication/authorization of the service and better protection of the key/value configuration it stores.
  • Fewer processes and services are required to run as root in Docker containers.
  • The API Gateway (Kong) setup has been improved and simplified.
  • EdgeX now prevents Redis, the persistent store for EdgeX, from running in insecure mode.

Simplified Device Profiles

Device profiles are the way users define the characteristics of the sensors/devices connected to EdgeX: the data they provide, and how to command them.  For example, a device profile for BACnet thermostats describes the data a BACnet thermostat sends, such as current temperature and humidity level.  It also defines which types of commands or actions EdgeX can send to the thermostat, such as the ability to set the cooling or heating point.

Device profiles are specified in YAML or JSON and uploaded to EdgeX.  As such, they are the critical descriptions that make EdgeX work.  Device profiles allow users to describe their sensors and devices to EdgeX without having to write lots of programming code.

Writing device profiles in previous releases could be long and tedious depending on the device and what type of data it provided.  We greatly simplified and shortened device profiles in EdgeX 2.0.  As an example, here is the same essential profile in EdgeX 1 and EdgeX 2.
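The before/after profile screenshots are not reproduced here, but a minimal EdgeX 2.0-style profile sketch (the device, resource names, and values are invented for illustration; check the exact field names against the EdgeX documentation) looks roughly like:

```yaml
name: "Simple-Thermostat"        # hypothetical profile name
manufacturer: "Acme"
model: "T-100"
labels: ["thermostat"]
deviceResources:
  - name: "Temperature"
    description: "Current temperature reading"
    properties:
      valueType: "Float32"
      readWrite: "R"
      units: "degrees Celsius"
  - name: "HeatingSetPoint"
    description: "Desired heating point"
    properties:
      valueType: "Float32"
      readWrite: "RW"
```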

New Device Services

The growth of an open-source community can often be measured by how well it attracts new contributors to the project.  During this release, thanks to some new contributors, we added several new Device Services.  Here are the new Device Services you can now find in the project:

  • Constrained Application Protocol (CoAP) is a web transfer protocol used in resource constrained nodes and networks.
  • GPIO (or general pin input/output) is a standard interface used to connect microcontrollers to other electronic devices.  It is also very popular with Raspberry Pi adopters given its availability on the devices.
  • Low Level Reader Protocol (LLRP) is a standardized network interface to many RFID readers.
  • Universal Asynchronous Receiver/Transmitter (UART) is a serial data communication interface; it is used in modems and can be paired with USB through a USB-to-UART bridge.

These Device Services were contributed to EdgeX 1, but are being updated this summer to EdgeX 2.0.

New Graphical User Interface

In this release, a new and complete interface using Angular.JS (with some Go code for backend needs) has been added.  This GUI, complete with EdgeX branding, aligns with industry standards and technologies for user interfaces and should provide a platform for future GUI needs.

In addition to providing the ability to work with and call on EdgeX services, it provides a GUI “wizard” to help provision a new device (with associated Device Service and device profile).  What would typically take a user several complex REST API calls can now be done with a wizard interface that carefully walks the user through the creation process providing simple fill-in forms (with context help) to create the device.

The GUI also provides a visual status display that allows users to track the memory and CPU usage of the EdgeX services.  In the future, additional visualization will be added to be able to display sensor data.

Application Services

Application Services are used to export EdgeX data off the edge to enterprise and cloud systems and applications.  Application Services filter, prepare and transform the sensor data for easy consumption.  Several additions were made to the Application Services and the SDK that helps create them (called the application functions SDK).  The following is a list of some of the new features and functions:

  • New functions to filter sensor readings by device profile name and device resource name before exporting
  • Allowing multiple HTTP endpoints to be specified for export by one Application Service
  • Subscription to multiple message buses (enabling multiple filters by subscription)
  • Provided a new template for easier/faster creation of custom Application Services
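The new filter functions can be illustrated with a short sketch. The real application functions SDK is written in Go; the Python below is only a hedged illustration of the filtering behavior, and the profile and resource names are hypothetical.

```python
# Illustrative sketch (not the real Go SDK): filter readings by
# device profile name and device resource name before export.
def filter_readings(events, profile_names=None, resource_names=None):
    """Keep only readings whose profileName/resourceName match."""
    kept = []
    for event in events:
        readings = [
            r for r in event["readings"]
            if (not profile_names or r["profileName"] in profile_names)
            and (not resource_names or r["resourceName"] in resource_names)
        ]
        if readings:
            kept.append({**event, "readings": readings})
    return kept

# Hypothetical sample data shaped like EdgeX v2 events
events = [
    {"deviceName": "thermo-01", "readings": [
        {"profileName": "Thermostat", "resourceName": "Temperature", "value": "21.5"},
        {"profileName": "Thermostat", "resourceName": "Humidity", "value": "40"},
    ]},
]
print(filter_readings(events, resource_names={"Temperature"}))
```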

Additionally, a new LLRP inventory Application Service was contributed to help correlate data coming from RFID readers to backend inventory systems.

EdgeX Ready

EdgeX Ready is a program designed to highlight member organizations that demonstrate the ability to work with EdgeX.  The program was launched with the EdgeX 2.0 release to promote awareness of users and organizations that have EdgeX expertise.  It is a first step, and a potential precursor, to exploring an EdgeX certification program.

Today, the EdgeX Ready self-assessment process requires an EdgeX adopter to:

  • Get and run an instance of EdgeX (and associated services).
  • Create and validate a device profile.
  • Use the profile with an EdgeX Device Service to provision a device and send sensor data into EdgeX.
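A minimal device profile of the kind the self-assessment asks for might look like the sketch below. All names, and the single resource, are illustrative rather than from an official sample; the exact schema is defined by the EdgeX device profile documentation.

```yaml
# Illustrative EdgeX v2 device profile (names are hypothetical)
name: "Sample-Temperature-Profile"
manufacturer: "Example Corp"
model: "TEMP-01"
labels: ["temperature", "sample"]
description: "Example profile for a simple temperature sensor"
deviceResources:
  - name: "Temperature"
    description: "Ambient temperature in degrees Celsius"
    properties:
      valueType: "Float32"
      readWrite: "R"
      units: "degrees Celsius"
```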

When an organization  completes the EdgeX Ready process, they signal to other community members that they have successfully demonstrated they have EdgeX knowledge and experience.  In return, the EdgeX community highlights EdgeX Ready members on the EdgeX Foundry website with an EdgeX Ready badge (shown below) next to their logo.

Additionally, the program hopes to promote sharing of EdgeX device connectivity elements and sample data sets, which are central to the current submission process and helpful to others learning EdgeX.  Future EdgeX Ready programs may highlight additional levels of ability and experience. 

Under the Hood and Behind the Scenes

Efforts by DevOps, testing, quality control, and outreach teams often go unnoticed in any software effort.  These teams are the unsung heroes of EdgeX 2.0, having contributed substantially to this massive undertaking.  In addition to their normal roles of building, testing, and marketing EdgeX, they furthered the project in many ways.

Our DevOps team cleaned up the EdgeX Docker image names (removing unnecessary image prefix/suffix names) and provided descriptions and appropriate tags on all of our images in Docker Hub – helping adopters more quickly and easily find the EdgeX Docker images they need.  They also added informative badges to our GitHub repositories to help community developers and adopters understand the disposition and quality of the code in the repositories.

Our Test/QA team added new integration tests, performance tests and “smoke” tests to give the project an early indicator when there is an issue with a new version of a 3rd party library or component such as Consul – allowing the community to address incompatibilities or issues more quickly.

Finally, our marketing team revamped and upgraded our website and created a Chinese version of it in support of our ever-growing Chinese community of adopters.  The welcome mat of our project now has a clean new look, better organization, and a lot more information to offer existing and potential adopters.  The website should help EdgeX adopters find the information and artifacts they need more quickly and easily, while also highlighting the accomplishments of the community and its membership.

If all this wasn’t enough, here is a list of some of the other accomplishments the project achieved during this release cycle:

  • Improved the Device Service contribution review process.
  • Incorporated use of conventional commits in the code contribution process.
  • Started a program to vet 3rd party dependencies, ensuring they are of sufficient license quality and development activity to support our project.
  • Helped launch the Open Retail Reference Architecture project to foster the development of an LF Edge reference architecture for retail adopters.
  • Entered into liaison agreements with the Digital Twin Consortium and AgStack (a new LF project) to figure out how EdgeX can be better integrated into digital twin systems and help facilitate solutions in the agricultural ecosystem.
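Conventional commits follow a simple `type(scope): subject` pattern. The sketch below is an illustration of how such a message can be checked, not the project's actual CI tooling:

```python
import re

# Minimal conventional-commit check (illustrative, not the project's
# actual tooling): a type, an optional scope, then a subject line.
PATTERN = re.compile(
    r"^(build|chore|ci|docs|feat|fix|perf|refactor|style|test)"
    r"(\([a-z0-9-]+\))?(!)?: .+"
)

def is_conventional(message: str) -> bool:
    # Only the first line (the subject) needs to match the pattern
    return bool(PATTERN.match(message.splitlines()[0]))

print(is_conventional("feat(device-sdk): add UART device service"))  # True
print(is_conventional("updated some files"))                         # False
```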

More on Ireland

Find more information about the release at these locations.

On to Jakarta

As always, I want to take this opportunity to thank the wonderful EdgeX team and community.  EdgeX exists because of the incredibly smart and dedicated group of people in the community.  I also want to thank all the fine people at the Linux Foundation.  This release was especially long and difficult, but this team was up to the challenge and they are providing you another great release.  I am proud of their work and privileged to be the chairman of this group.

But our work is not done.  The EdgeX community has already held its planning session for our next release – codenamed Jakarta – that will release around November of 2021.  Look for more details on our planning meeting and the Jakarta release in another blog post I’ll have out soon.

Enjoy the summer, stay safe and healthy (let’s get this pandemic behind us), and please give EdgeX 2.0 a try.

EdgeX Foundry Releases the Most Modern, Secure, and Production-Ready Open Source IoT Framework

By Announcement, EdgeX Foundry

Four-plus years of collaboration, 190+ contributors, 8M+ container downloads, new retail project ORRA, EdgeX Ready and foundation for future, long-term support pave the way for Ireland release

SAN FRANCISCO, August 3, 2021: EdgeX Foundry, a project under the LF Edge umbrella organization within the Linux Foundation, today announced its Ireland release. Focused on edge/IoT solutions, EdgeX Foundry’s second major release overhauls API sets, removes technical debt, provides more message-based communications, and simplifies and secures interfaces for adopters and developers, making the platform significantly easier to use and more reliable. 

“As a leading stage 3 project under LF Edge, the EdgeX Ireland release has expanded use cases across retail, building automation, smart cities, process control, and manufacturing,” said Arpit Joshipura, general manager, Networking, Edge & IoT, at the Linux Foundation. “It’s a key to standardizing IoT frameworks across market verticals.”

“This release sets in motion the opportunity for EdgeX to offer its first ever LTS or long-term support release as soon as the fall.  This is a significant commitment on the part of our open-source community to all adopters that says we stand with you, prepared to help support your use of EdgeX in real world, scalable, production deployments,” said Jim White, chief technical officer,  IoTech,  and EdgeX Foundry Technical Steering Committee Chair. 

Ireland Feature Highlights

  • Standardized and modernized northbound and southbound APIs enrich ease of interoperability across the IoT framework
  • Advanced security is built into the APIs, message bus, and internal architecture of EdgeX
  • New device services (southbound) and new app services (northbound) included in Ireland are also inherently secure (e.g., GPIO, CoAP, LLRP, UART)

Commercialization & Use Case Highlights

  • Open Retail Reference Architecture (ORRA): a new sub-project that provides a common deployment platform for edge-based  solutions and IoT devices. ORRA is a collaboration with fellow LF Edge projects Open Horizon and Secure Device Onboard, incubated by EdgeX Foundry.
  • The new EdgeX Ready program highlights users and organizations that have integrated their offerings with solutions leveraging EdgeX; it is a precursor to a community certification program. Learn how to become EdgeX Ready through the project’s Wiki page.

Learn more about Ireland’s feature enhancements in this blog post

Plans for the next EdgeX release, codenamed ‘Jakarta’, are expected in Q4 2021. 

For more information about LF Edge and its projects, visit

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page. Linux is a registered trademark of Linus Torvalds.

Additional Quotes and Community Support

”Beechwoods Software has been a contributing member of EdgeX Foundry since its inception and chairs the Certification Working Group. EdgeX technology is at the core of our EOS IoT Edge platform offering for which we are readying our version 2 release based on the latest EdgeX code base. Beechwoods is pleased with the growing momentum of EdgeX Foundry and look forward to continuing our support and collaboration,” said Michael Daulerio, Vice President of Marketing and Business Development at Beechwoods Software, Inc.

“Canonical is a founding member of the EdgeX Foundry project and has provided technical leadership in the technical steering committee from day one. The Ireland (aka 2.0) release of EdgeX introduces much improved V2 REST APIs, a transition to a secure message bus for data ingestion, and many additional improvements to the security of EdgeX. The cross-company cooperation that contributed to the success and timeliness of this release once again demonstrates the power of open source development. Snaps of the Ireland release of EdgeX are available from the Snap Store using the new 2.0 track, and can be used to build secure enterprise-grade EdgeX deployments using Ubuntu Core 20,” said Tony Espy, technical architect / IoT & Devices, Canonical, and at-large  EdgeX Foundry TSC member. 

“EdgeX Foundry continues to serve as the basis for our Edge Xpert product.  As such, we see the release of EdgeX 2.0 as critical to our company’s success in support of our customers.  It provides the ability for IOTech to add new features and add more value given the new APIs, support for more messaging and overall simplifications of the platform.  On top of that, the move toward an LTS release in the fall based on EdgeX 2.0 is an important milestone of support shown by the EdgeX community.  LTS tells adopters like IOTech that the EdgeX ecosystem stands behind them and is there to provide a scalable, reliable, and robust platform that can be used in production ready solutions,” said Keith Steele, CEO, IOTech Systems. 





Edge Primer: Distributed Cloud – The power of public cloud at the edge

By Blog
By Utpal Mangla VP & Senior Partner; Global Leader: IBM’s Telecom Media Entertainment Industry Center of Competence; 
Ashek Mahmood, IoT Edge AI – Partner & Practice Lead, IBM; and Luca Marchi, Associate Partner – Innovation Leader – IBM’s Telecom Media Entertainment Industry Center of Competence

Today, many enterprises benefit from using public cloud to build and run their applications.  Consuming IT as a service, through APIs, at scale, and as needed, has transformed how applications are built. Public cloud enables companies to innovate in new, faster ways. But in reality, only somewhere between 5% and 20% of enterprise workloads have moved to the cloud. 

That first wave of applications was focused mostly on new workloads, or workloads that can easily relocate. Many applications have requirements on security, compliance, latency, regulation, and performance and cannot easily move into a public cloud. Enterprises need a way to gain the benefits of public cloud anywhere they are running their applications.

The solution that helps is called “Distributed Cloud.” Distributed Cloud is a new cloud computing model that extends public cloud services to any location, even at the edge. This means companies can now deploy cloud-native applications not only in their primary cloud provider’s infrastructure but on premises, within other cloud providers’, in third-party data centers or colocation centers, or at the edge – in factories, distribution centers, stores, hospitals and ports – with everything managed from a single control plane. 

Distributed Cloud gives an organization greater control over its hybrid cloud ecosystem. It mitigates dependency on a single provider and gives organizations the flexibility to choose based on specific business requirements. With Distributed Cloud, developers can take advantage of the catalog of services from public cloud providers, with industry-optimized security and compliance and leading AI capabilities, anywhere it’s needed, all with a common API, user experience, management dashboard, and consumption model. 

With 5G use cases across industries becoming a reality, massive growth of IoT devices (numbering in the trillions) is expected. Control and audit of network traffic is critical for maintaining data protection, privacy, and compliance. Management from a single control plane enables zero-trust edge security across all devices and users, irrespective of which network they are on. Distributed Cloud extends these capabilities through multiple cloud satellites across edge networks. 

To get a better idea of how Distributed Cloud works, let’s look at some actual applications from different industries.

  • The first application comes from the financial services industry. In order to compete in a market being disrupted by newcomers, financial institutions need to create exceptional user experiences powered by technology. To do that, they leverage AI and trading algorithms to stay ahead of the market. The competitiveness of financial services organizations is based on quick iteration and deployment of cloud-native tools and best practices. However, the financial industry is heavily regulated. So, they need to be able to keep that data secure and compliant – which often means there are restrictions on where that data can live. 
    • Distributed Cloud can help solve this conflict. These financial services companies can now extend public cloud services in their data centers, allowing them to leverage cloud native best practices and meet their security and compliance obligations, all at the same time. So, Distributed Cloud enables financial institutions to take advantage of a public cloud consumption model, but in many different locations – and they do not need a high level of skills to run this software outside of the public cloud. 
  • Another example is the construction industry managing worker health and safety regulations. One key player in the industry is leveraging distributed analytics to keep workers safe. In this scenario, a builder might have an office building under construction and need video analytics to tell whether someone is wearing a hard hat, and warn them before they move into a potentially dangerous area of the building. They want to use video analytics to automatically detect and alert on this situation, but latency could be a real problem: a delay in such an alert could fail to prevent an injury. 
    • To solve this safety problem, the construction company needs to analyze video feed from many cameras all over the office building, but it is impractical to send all of that data back to the cloud to be processed remotely. It would be better if the video processing happened close to the actual device. Traditionally, that would require you to install servers and software and manage them on-prem.   
    • With Distributed Cloud, the AI video processing is extended into the office, so cloud services can be leveraged to run this application close to the device – and that is critical in ensuring that latency is reduced and workers are effectively warned before they enter a dangerous area. 
    • That same company had to adjust to challenges brought by the Covid pandemic. They were able to quickly iterate on their application and modify the use-case for AI models and rapid deployment to distributed locations. Now, instead of warning workers about wearing hard hats, they can make sure that workers are wearing masks and ensure that the masks are being worn correctly. Additionally, they can even leverage thermal devices to take temperatures. 

Distributed Cloud enables organizations to quickly address unforeseen challenges, leverage cloud benefits in any location, and innovate quickly. 

See how the Akraino project’s StarlingX Far Edge Distributed Cloud blueprint can serve as a starting point:



EdgeX Foundry Launches Chinese Website

By Blog, EdgeX Foundry

By Melvin Sun (Sr. marketing manager in Intel IoT Group, EdgeX TSC member, co-founder & maintainer of EdgeX China Project) and Kenzie Lan (Operations specialist of EdgeX China community)

EdgeX Foundry, a project under the LF Edge umbrella, is a vendor-neutral open source project on edge computing. 

To support the growing community of EdgeX Chinese developers and ecosystem partners, the project established the EdgeX China project in Q4’19 to drive collaboration in the region.  Since then,  the project community in China has set up local meetups, working groups, and events. In addition, local and language-specific developer and social media channels were created to generate awareness about the project and improve the adoption and experience of Chinese users and developers. 

As of today, the EdgeX China community has gained more than 300,000 followers on mainstream developer platforms of CSDN and OSChina, as well as six WeChat groups! In addition to developers, users and ecosystem partners, there are  ~10 commercial offerings from China based on EdgeX. 

To further support the EdgeX China community growth, the project officially launched its Chinese website on June 30, 2021 to provide awareness of EdgeX Foundry for new developers and users and to keep the local community up to date on events and news. 

The EdgeX Foundry Chinese site is divided into “Home Page”, “Why EdgeX”, “Getting Started”, “Software”, “Ecosystem”, “Community”, “Latest”, and “EdgeX Challenge China 2021”. The site provides a systematic introduction to EdgeX’s architecture, methods of use, services, community, and latest news, as well as an overview of the ongoing 2021 China EdgeX Challenge, in which EdgeX users compete for prizes by creating solutions based on EdgeX. Visitors can easily jump between the English website and the Chinese one through the link connecting them.

CTOs, architects, developers, students, and business folks all are welcome to learn and exchange practical experience in the EdgeX China community. The EdgeX community will continue to grow and improve in collaboration with members around the world.

Thanks to all involved in creating and maintaining this China-focused version of the EdgeX Foundry website!

Baetyl Issues 2.2 Release, Adds EdgeX Support, New APIs, Debugging, and More

By Baetyl, Blog

Baetyl, which seamlessly extends cloud computing, data, and services to edge devices, enabling developers to build light, secure, and scalable edge applications, has issued its 2.2 release. Baetyl 2.2, based on community contributions, now contains more advanced functions. Still grounded in cloud native functionality, Baetyl’s new features continue to build an open, secure, scalable, and controllable intelligent edge computing platform.

Specific new features in the 2.2 release include:

Support for working with EdgeX Foundry

Baetyl v2.2 has updated compatibility with the LF Edge sister project EdgeX Foundry. Through Baetyl’s remote management suite, “Baetyl-cloud,” users can deliver all 14 EdgeX services to the edge. The delivered EdgeX services are submitted by Baetyl to local Kubernetes clusters for deployment, with monitoring synchronized to the cloud.

New API definition, which is needed to support edge cluster environments

In industrial IoT scenarios, many industrial control boxes often together form an edge cluster. Baetyl defines open multi-cluster management APIs. By implementing these APIs, the entire cluster can be reflected on the cloud console. Users can easily deploy applications to defined edge clusters and specify edge node affinity within those clusters.

Support for DaemonSet load type applications

In a cluster context, a new workload type is needed for deployments such as monitoring the status of each node in the cluster, so Baetyl now also supports DaemonSets. With this workload type, a single replica of the service is launched on every node in the matched cluster, and replicas are automatically added to newly joined nodes and removed from departed ones.
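In Kubernetes terms, this workload type corresponds to a DaemonSet. A minimal sketch looks like the following; the image and label names are hypothetical, not from Baetyl itself:

```yaml
# Illustrative DaemonSet: one replica of a node-monitoring service
# per matched node (image and names are hypothetical)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor
spec:
  selector:
    matchLabels:
      app: node-monitor
  template:
    metadata:
      labels:
        app: node-monitor
    spec:
      containers:
        - name: node-monitor
          image: example/node-monitor:1.0
```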

New API definitions for remote debugging and remote log viewing of deployed applications

To facilitate debugging or log viewing operations on edge devices, Baetyl has established an open remote debugging API that can connect with multiple cloud control systems in the future.

New API definitions for GPU monitoring and sharing functions

Support for the GPU covers two aspects: monitoring of GPU usage, and GPU sharing. Through the GPU monitoring module, Baetyl-core can obtain the current GPU memory usage, temperature, energy consumption, and other information in real time. With the GPU sharing function, multiple applications can share the GPU resources of a device. At present, the definition of the GPU support interface is complete; to use GPU sharing, only a module implementing the sharing function needs to be provided on the device side. 
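One common way such a monitoring module can gather these metrics is by querying `nvidia-smi` in CSV mode and parsing the result. The sketch below is an assumption about how that might look, not Baetyl's actual implementation; the query fields and sample line are illustrative.

```python
import subprocess

# Illustrative GPU metrics collection (not Baetyl's actual code):
# query nvidia-smi in CSV mode and parse memory, temperature, power.
QUERY = "index,memory.used,temperature.gpu,power.draw"

def parse_gpu_stats(csv_line: str) -> dict:
    index, mem_used, temp, power = [f.strip() for f in csv_line.split(",")]
    return {
        "index": int(index),
        "memory_used_mib": int(mem_used),
        "temperature_c": int(temp),
        "power_w": float(power),
    }

def read_gpu_stats():
    # Requires an NVIDIA GPU and driver to actually run
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}",
         "--format=csv,noheader,nounits"], text=True)
    return [parse_gpu_stats(line) for line in out.strip().splitlines()]

# Parsing a sample line of the kind nvidia-smi emits with nounits:
print(parse_gpu_stats("0, 1234, 45, 60.50"))
```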

More official modules

More official system modules are now provided:

  1. baetyl-init: responsible for activating edge nodes to the cloud and for initializing and guarding baetyl-core; after these tasks are completed, it continues to report and synchronize the core state.
  2. baetyl-rule: routes messages on the edge side of the Baetyl framework, exchanging messages among baetyl-broker (the edge message center), function services, and the IoT Hub (cloud MQTT broker).

In addition to these new features, Baetyl 2.2 also provides many other optimizations and mechanism improvements, such as a streamlined installation process; system applications can now be configured as needed, and transaction execution and task queue interfaces have been defined.

All these new features will be available immediately with the release of Baetyl 2.2. More information is available here:

Where the Edges Meet and Apps Land: Akraino Release 4 Public Cloud Edge Interface

By Akraino, Blog

Written by Oleg Berzin, Ph.D., Co-chair of the Akraino Technical Steering Committee and Fellow, Technology and Architecture at Equinix


The Public Cloud Edge Interface (PCEI) blueprint was introduced in our last blog. This blog describes the initial implementation of PCEI in Akraino Release 4. In summary, PCEI R4 is implemented based on Edge Multi-Cluster Orchestrator (EMCO, a.k.a. ONAP4K8S) and demonstrates:

  • Deployment of Public Cloud Edge (PCE) Apps from two Clouds: Azure (IoT Edge) and AWS (GreenGrass Core).
  • Deployment of a 3rd-Party Edge (3PE) App: an implementation of the ETSI MEC Location API Server.
  • End-to-end operation of the deployed Apps using simulated Low Power Wide Area (LPWA) IoT client and Edge software.

Before describing the specifics of the implementation, it is useful to revisit PCEI architectural concepts.

The purpose of Public Cloud Edge Interface (PCEI) is to implement a set of open APIs and orchestration functionalities for enabling Multi-Domain interworking across Mobile Edge, Public Cloud Core and Edge, the 3rd-Party Edge functions as well as the underlying infrastructure such as Data Centers, Compute Hardware and Networks. Interfaces between the functional domains are shown in the figure below:

The detailed PCEI Reference Architecture is shown in the figure below. For the full description of the PCEI Reference Architecture please refer to the PCEI Architecture Document.

There are some important points to highlight regarding the relationships between the Public Cloud Core (PCC) and the Public Cloud Edge (PCE) domains. These relationships influence where the orchestration tasks apply and how the edge infrastructure is controlled. Both domains have some fundamental functions such as the underlying platform hardware, virtualization environment as well as the applications and services. In the Public Cloud context, we distinguish two main relationship types between the PCC and PCE functions: Coupled and Decoupled. The table below shows this classification and provides examples.

The PCC-PCE relationships also involve interactions such as Orchestration, Control and Data transactions, messaging and flows. As a general framework, we use the following definitions:

  • Orchestration: Automation and sequencing of deployment and/or provisioning steps. Orchestration may take place between the PCC service and PCE components and/or between an Orchestrator such as the PCEI Enabler and PCC or PCE.
  • Control: Control Plane messaging and/or management interactions between the PCC service and PCE components.
  • Data: Data Plane messaging/traffic between the PCC service and the PCE application.

The figure below illustrates PCC-PCE relationships and interactions. Note that the label “O” designates Orchestration, “C” designates Control and “D” designates Data as defined above.

The PCEI implementation in Akraino Release 4 shows examples of two App-Coupled PCC-PCE relationships: Azure IoT Edge and AWS GreenGrass Core, and one Fully Decoupled PCE-PCC relationship: an implementation of ETSI MEC Location API Server.

In Release 4, PCEI Enabler does not support Hardware (bare metal) or Kubernetes orchestration capabilities.

PCEI in Akraino R4

Public Cloud Edge Interface (PCEI) is implemented based on Edge Multi-Cluster Orchestrator (EMCO, a.k.a ONAP4K8S). PCEI Release 4 (R4) supports deployment of Public Cloud Edge (PCE) Apps from two Public Clouds: Azure and AWS, deployment of a 3rd-Party Edge (3PE) App: an implementation of ETSI MEC Location API App, as well as the end-to-end operation of the deployed PCE Apps using simulated Low Power Wide Area (LPWA) IoT client and Edge software.

Functional Roles in PCEI Architecture

Key features and implementations in Akraino Release 4:

  • Edge Multi-Cloud Orchestrator (EMCO) Deployment
    • Using ONAP4K8S upstream code
  • Deployment of Azure IoT Edge PCE App
    • Using Azure IoT Edge Helm Charts provided by Microsoft
  • Deployment of AWS Green Grass Core PCE App
    • Using AWS GGC Helm Charts provided by Akraino PCEI BP
  • Deployment of PCEI Location API App
    • Using PCEI Location API Helm Charts provided by Akraino PCEI BP
  • PCEI Location API App Implementation based on ETSI MEC Location API Spec
    • Implementation is based on the ETSI MEC ISG MEC012 Location API described using OpenAPI.
    • The API is based on the Open Mobile Alliance’s specification RESTful Network API for Zonal Presence
  • Simulated IoT Client Code for end-to-end validation of Azure IoT Edge
  • Azure IoT Edge Custom Software Module Code for end-to-end validation of Azure IoT Edge
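As a rough illustration of what a client request to such a Location API server could look like, the sketch below builds request URLs in the style of the OMA Zonal Presence API that the spec is based on. The host, port, and zone name are assumptions, not values from the blueprint.

```python
from urllib.parse import urlencode

# Illustrative request URLs for a Zonal Presence style Location API
# (host, port, and zone name are assumptions, not from the blueprint)
API_ROOT = "http://mec-host:8080/location/v2"

def zone_info_url(zone_id: str) -> str:
    # Information about a single zone
    return f"{API_ROOT}/zones/{zone_id}"

def users_in_zone_url(zone_id: str) -> str:
    # Users (terminals) currently located within a zone
    return f"{API_ROOT}/users?" + urlencode({"zoneId": zone_id})

print(zone_info_url("zone01"))
print(users_in_zone_url("zone01"))
```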


End-to-End Validation

The below description shows an example of the end-to-end deployment and validation of operation for Azure IoT Edge. The PCEI R4 End-to-End Validation Guide provides examples for AWS GreenGrass Core and the ETSI MEC Location API App.


Description of components of the end-to-end validation environment:

  • EMCO – Edge Multi-Cloud Orchestrator deployed in the K8S cluster.
  • Edge K8S Clusters – Kubernetes clusters on which PCE Apps (Azure IoT Edge, AWS GGC), 3PE App (ETSI Location API Handler) are deployed.
  • Public Cloud – IaaS/SaaS (Azure, AWS).
  • PCE – Public Cloud Edge App (Azure IoT Edge, AWS GGC)
  • 3PE – 3rd-Party Edge App (PCEI Location API App)
  • Private Interconnect / Internet – Networking between IoT Device/Client and PCE/3PE as well as connectivity between PCE and PCC.
  • IoT Device – Simulated Low Power Wide Area (LPWA) IoT Client.
  • IP Network (vEPC/UPF) – Network providing connectivity between IoT Device and PCE/3PE.
  • MNO DC – Mobile Network Operator Data Center.
  • Edge DC – Edge Data Center.
  • Core DC – Public Cloud.
  • Developer – an individual/entity providing PCE/3PE App.
  • Operator – an individual/entity operating PCEI functions.

The end-to-end PCEI validation steps are described below (the step numbering refers to Figure 5):

  1. Deploy EMCO on K8S
  2. Deploy Edge K8S clusters
  3. Onboard Edge K8S clusters onto EMCO
  4. Provision Public Cloud Core Service and Push Custom Module for IoT Edge
  5. Package Azure IoT Edge and AWS GGC Helm Charts into EMCO application tar files
  6. Onboard Azure IoT Edge and AWS GGC as a service/application into EMCO
  7. Deploy Azure IoT Edge and AWS GGC onto the Edge K8S clusters
  8. Verify that all pods come up and register with the Azure IoT Hub and AWS IoT Core
  9. Deploy a custom LPWA IoT module into Azure IoT Edge on the worker cluster
  10. Successfully pass LPWA IoT messages from a simulated IoT device to Azure IoT Edge, decode the messages, and send them to Azure IoT Hub

For more information on PCEI R4:

PCEI Release 5 Preview

PCEI Release 5 will feature expanded capabilities and introduce new features as shown below:

For a short video demonstration of PCEI Enabler Release 5 functionality based on ONAP/CDS with GIT integration, NBI API Handler, Helm Chart Processor, Terraform Plan Executor and EWBI Interconnect please refer to this link:


Project Technical Lead: 

Oleg Berzin, Equinix


Jian Li, China Mobile
Oleg Berzin, Equinix
Tina Tsou, Arm


Mehmet Toy, Verizon
Tina Tsou, Arm
Gao Chen, China Unicom
Deepak Vij, Futurewei
Vivek Muthukrishnan, Aarna Networks
Kavitha Papanna, Aarna Networks
Amar Kapadia, Aarna Networks