Akraino Release 4 Enables Kubernetes Across Multiple Edges, Integrates across O-RAN, Magma, and More

By Akraino, Announcement
  • 7 New Akraino R4 Blueprints (total of 25+)  
  • Akraino is Kubernetes-ready with K8s-enabled blueprints across 4 different edge segments (Industrial IoT, ML, Telco, and Public Cloud)
  • New and updated blueprints also target ML, Connected Car, Telco Edge, Enterprise, AI, and more 

SAN FRANCISCO – February 25, 2021 – LF Edge, an umbrella organization within the Linux Foundation that creates an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system, today announced the availability of Akraino Release 4 (“Akraino R4”). Akraino’s fourth release enables additional blueprints that support various deployments of Kubernetes across the edge, from Industrial IoT, to Public Cloud, Telco, and Machine Learning (ML). 

Launched in 2018, and now a Stage 3 (or “Impact” stage) project under the LF Edge umbrella, Akraino delivers an open source software stack that supports a high-availability cloud stack optimized for edge computing systems and applications. Designed to improve the state of carrier edge networks, edge cloud infrastructure for enterprise edge, and over-the-top (OTT) edge, it provides the flexibility to scale edge cloud services quickly, maximizes the applications and functions supported at the edge, and improves the reliability of systems that must be up at all times. 

“Now on its fourth release, Akraino is a fully-functioning open edge stack,” said Arpit Joshipura, general manager, Networking, Edge and IoT, the Linux Foundation. “This latest release brings new blueprints that focus on Kubernetes-enablement across edges, as well as enhancements to other areas like connected cars, ML, telco and more. This is the power of open source in action and I am eager to see even more use cases of blueprint deployments.”  

About Akraino R4

Building on the success of R3, Akraino R4 again delivers a fully functional open source edge stack that enables a diversity of edge platforms across the globe. With R4, Akraino blueprints enable additional use cases, deployments, and PoCs, delivering community-vetted and tested edge cloud blueprints that quickly scale Public Cloud Edge, Cloud Native Edge, 5G, Industrial IoT, Telco, and Enterprise Edge Cloud services.  

In addition, new use cases and new and existing blueprints provide an edge stack for Industrial Edge, Public Cloud Edge Interface, Federated ML, KubeEdge, Private LTE/5G, SmartDevice Edge, Connected Vehicle, AR/VR, AI at the Edge, Android Cloud Native, SmartNICs, Telco Core and Open-RAN, NFV, IOT, SD-WAN, SDN, MEC, and more.

Akraino R4 includes seven new blueprints, for a total of 25+, all tested and validated in real hardware labs supported by users and community members, along with an API map.

The 25+ “ready and proven” blueprints include both updates and long-term support for existing R1, R2, and R3 blueprints, as well as the introduction of seven new blueprints.

More information on Akraino R4, including links to documentation, code, and installation guides for all Akraino blueprints from R1 through R4, can be found here. For details on how to get involved with LF Edge and its projects, visit https://www.lfedge.org/

Looking Ahead

The community is already planning R5, which will include more implementation of public cloud and telco synergy, more platform security architecture, increased alliance with upstream and downstream communities, and development of IoT Area and Rural Edge for tami-COVID19. Additionally, the community is expecting new blueprints as well as additional enhancements to existing blueprints. 

Upcoming Events

The Akraino community is hosting its developer-focused Spring Virtual Technical Meetings March 1-3. To learn more, visit https://events.linuxfoundation.org/akraino-technical-meetings-spring/ 

Don’t miss the Open Networking and Edge Executive Forum (ONEEF), a virtual event happening March 10-12, where subject matter experts from Akraino and other LF Edge communities will present on the latest open source edge developments. Registration is now open! 

Join Kubernetes on Edge Day, co-located with KubeCon + CloudNativeCon, to get in on the ground floor and shape the future intersection of cloud native and edge computing. The virtual event takes place May 4 and registration is just $20 US. 

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

 

Kaloom’s Startup Journey (Part 1)

By Akraino, Blog

Written by Laurent Marchand, Founder and CEO of Kaloom, an LF Edge member company

This blog originally ran on the Kaloom website. To see more content like this, visit their website.

When we founded Kaloom, we embarked upon our journey with the vision of creating a truly P4-programmable, cloud- and open standards-based software-defined networking fabric. Our name, Kaloom, comes from the Latin word “caelum,” which means “cloud.”

Armed solely with our intrepid crew of talented and experienced engineers, we left our good-paying, secure jobs at one of the world’s largest incumbent networking vendors and set off on our adventure. This two-part blog is the story of why we founded Kaloom and where we stand today.

In our previous blog, “SDN’s Struggle to Summit the Peak of Inflated Expectations,” Sonny outlined the challenges posed by the initial vision of SDN and the attempt to solve them with VM-based overlays. Upcoming 5G deployments will only amplify those challenges, disruptively creating an extreme need for lightweight container-based solutions that satisfy 5G’s cost, space and power requirements.

5G Amplifying SDN’s Challenges

With theoretical peak data rates up to 100 times faster than current 4G networks, 5G enables incredible new applications and use cases such as autonomous vehicles, AR/VR, IoT/IIoT and the ability to share real-time medical imaging – just to name a few. Telecom, cloud and data center providers alike look forward to 5G forming a robust edge “backbone” for the industrial internet to drive innovative new consumer apps, enterprise use cases and increased revenue streams.

These new 5G-enabled apps require extremely low latency, which demands a distributed edge architecture that puts applications close to their data source and end users. When we founded Kaloom, incumbents were still selling the same technology platforms and architectural approach as they had with 4G; however, the economics of 5G are dramatically different.

The Pain Points are the Main Points – Costs, Revenues and Latency

For example, service providers’ revenues from smartphone users running 3G/4G were about $50 per month. However, connected cars will only generate about $1 or $2 per month, and installing 5G requires an extreme amount of upfront investment (CAPEX), not only in antennas but also in the backend servers, storage and networking switches required to support these apps. So, it was clear to us that the cost point for deploying 5G had to be much lower than for 4G. Even if 5G costs were the same as 4G rollouts, service providers still could not economically justify revenue models throttling down to $1 or $2 per month.

The low latency requirement is another pain point, mainly because 5G-enabled high-speed apps won’t work properly with low throughput and high latency. Many of these new applications require an end-to-end latency below 10 milliseconds; unfortunately, typical public clouds are unable to fulfill such requirements.

To take advantage of lower costs via centralization, most hyperscale cloud providers have massive regional deployments, with only about six to 10 major facilities serving all of North America. If your cloud hub is 1,000 miles away, even the speed of light (in fiber optics) does not transfer data fast enough to meet the low latency requirements of 5G apps such as connected cars. For example, the latency from New York to Amazon Web Services or Microsoft Azure in Northern Virginia is greater than 20 milliseconds.
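A quick back-of-envelope check of that propagation claim (illustrative only, assuming light in fiber travels at roughly two-thirds the speed of light, about 200,000 km/s):

```python
# Rough propagation check, not a measurement: ~200,000 km/s in fiber means each
# kilometre costs about 5 microseconds one way.

def fiber_rtt_ms(distance_km: float, km_per_ms: float = 200.0) -> float:
    """Round-trip propagation delay in milliseconds over a fiber path."""
    return 2 * distance_km / km_per_ms

# ~1,000 miles is roughly 1,600 km of fiber (usually more, since routes are not straight)
print(fiber_rtt_ms(1600))  # ~16 ms of pure propagation, before any routing,
                           # queuing or processing delay is added
```

Propagation alone already exceeds a sub-10 ms end-to-end budget for a 1,000-mile path; routing, queuing and processing add more on top, which is how a much shorter New York to Northern Virginia path still measures above 20 milliseconds.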

Today, VM-based network overlays run over dumb, or unmanaged, switches; all the upper-layer functions such as firewalling, load balancing and the 5G UPF run in VMs on servers. Because these functions previously could not run at Tbps speeds on switches, they were put on x86 servers. High-end servers cost about $25,000 and consume 1,000 watts of power each. Given x86 servers’ costs and power consumption, we could see that the numbers were far from what was needed to run a business-viable environment for 5G providers. Rather than have switching and routing run in VMs on x86, Kaloom’s vision was to run all networking functions at Tbps speeds and meet the power and cost efficiency points needed for 5G deployments to generate positive revenue. This is why Kaloom decided to use containers, specifically open source Kubernetes, and the programmability of P4-enabled chips as the basis for our solutions.

Solving the latency problem required a paradigm shift to the distributed edge, which places compute, storage and networking resources closer to the end user. Because of these constraints, we were convinced the edge would become far more important. As seen in the figure below, there are many edges.

Many Edges

Kaloom’s Green Logo and Telco’s Central Office Constraints

While large regional cloud facilities were not built for the new distributed edge paradigm, service providers’ legacy Central Office (CO) architectures are even more ill-suited for the shift. Containing an odd mishmash of old and new equipment from each decade going back at least 50 years, these facilities are also typically near the limits of their space, power and cooling capacity.

There are about 25,000 legacy COs in the US, and they need to be converted, or updated in some way, to sustain the low latency networks that support 5G’s shiny new apps. Today, perhaps 10 percent of data is processed outside the CO and data center infrastructure. In a decade, this will grow to 25 percent, which means telcos have their work cut out for them.

This major buildup of required new 5G-supporting edge infrastructure will have an extremely negative impact on the environment in terms of energy consumption. According to the Linux Foundation’s State of the Edge 2020 report, edge IT infrastructure and data center facilities will consume 102,000 megawatts of power by 2028, and over $700 billion in cumulative CAPEX will be spent on them within the next decade. The power consumption of a single 5G base station alone is three times that of its 4G LTE predecessor; 5G needs three times as many base stations for the same coverage as LTE due to higher frequencies; and 5G base stations cost four times the price of LTE, according to mobile executives and an analyst quoted in this article at Light Reading. Upgrading a CO’s power supply involves permits, closing and digging up streets, and running new high-capacity wiring to the building, which is very costly. Unless all of society’s power sources suddenly turned sustainable, our founding team could see we were going to have to create a technology that could dramatically and rapidly reduce the power needed to provide these new 5G services and apps to consumers and enterprises.

Have you noticed that Kaloom’s logo is green?

This is the reason why. We have set out to solve all the issues facing telcos as they work to bring their infrastructure into the 21st century, including power.

P4 and Containers vs. ASICs and VM-Overlays

Where we previously worked, we used a lot of switches that were not programmable. We were changing hardware every three to four years to keep pace with changes in the mobile, cloud and networking industries that constantly require more network, compute and storage resources. So, we were periodically throwing away and replacing very expensive hardware. We thought: why not bring the same flexibility to the network by building a 100 percent programmable networking fabric? This, again, was driven by costs.

Before we left our jobs, we were part of a team running 4G Evolved Packet Gateway (EPG) using virtual machines (VMs) on Red Hat’s software platforms. Fortunately, we were asked to investigate the difference in the performance and other characteristics of containers versus VMs. That’s when we realized that, by using containers, the same physical servers could sustain several times more applications as compared to those running VM-based overlay networks.

This meant that the cost per user and cost per Gbps could be lowered dramatically. Knowing the economics of 5G would require this vastly reduced cost point, containers were the clear choice. The fact that they were more lightweight in terms of code meant that we could use less storage to house the networking apps and less compute to run them. They’re also more easily portable and quicker to instantiate. When a VM crashes it can take four or five minutes to resume, but you can restore a container in a few seconds.
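As a back-of-envelope illustration of that cost argument (the server price and power figures are the ones quoted above; the per-server application densities are assumptions standing in for “several times more applications,” not Kaloom measurements):

```python
# Illustrative only. Server cost and power come from the text above; the
# VM and container densities are assumed for the sake of the comparison.

SERVER_COST_USD = 25_000            # high-end x86 server (from the text)
SERVER_POWER_W = 1_000              # per-server power draw (from the text)
APPS_PER_SERVER_VMS = 10            # assumed density with VM-based overlays
APPS_PER_SERVER_CONTAINERS = 40     # assumed density with containers ("several times more")

for label, density in (("VM overlay", APPS_PER_SERVER_VMS),
                       ("containers", APPS_PER_SERVER_CONTAINERS)):
    print(f"{label}: ${SERVER_COST_USD / density:,.0f} and "
          f"{SERVER_POWER_W / density:.0f} W per application instance")
```

With the same assumptions, the per-application share of both capital cost and power shrinks by the same factor, which is the heart of the cost-per-user and cost-per-Gbps argument.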

Additionally, because VM-based overlays sit on top of the fabric, they provide no direct interaction with the network underlay below in terms of visibility and control. If there is congestion or a bottleneck in the underlying hardware fabric, the overlays may not know when, where, or how bad the problem is. Delivering Quality of Service (QoS) for service providers and their end users is a key consideration.

Kaloom’s Foundations

As a member of LF Edge, Kaloom has demonstrated innovative containerized Edge solutions for 5G along with IBM at the Open Networking and Edge Summit (ONES) 2020. Kaloom also presented further details of this optimized edge solution to the Akraino TSC as an invited presentation. Kaloom is participating in the Public Cloud Edge Interface Blueprint work as it is interesting and well aligned with our technology directions and customer requirements.

We remain committed to addressing the various pain points: networking costs, the need to reduce 5G’s growing energy footprint so it has less negative impact on the environment, and the need to deliver much-needed software control and programmability. Looking to solve all the networking challenges facing service providers’ 5G deployments, we definitely had our work cut out for us.

What do LF Edge, Akraino, OPNFV, ONAP, Fishing and Fairy Tales Have in Common?

By Akraino, Blog

Written by Aaron Williams, LF Edge Developer Advocate

What do LF Edge, Akraino, OPNFV, ONAP, fishing and fairy tales have in common? They all came up during our interview with the new Chair and Co-Chair of LF Edge’s Akraino project.

LF Edge’s Akraino project recently wrapped up our semi-annual face-to-face meeting and welcomed newly elected Technical Steering Committee chairs.

After two terms of serving as a co-chair, Tina Tsou, Enterprise Architect at Arm, was elected to be the Chair this year and Oleg Berzin, Technology Innovation Fellow under the Office of the CTO at Equinix, was elected Co-Chair by the Akraino TSC members.  We sat down with Tina and Oleg to learn more about them, what they are looking forward to next year, and how they see Akraino growing in 2021.

Tina Tsou

Oleg Berzin

How did you first get involved with Akraino?  

[Tina Tsou] My introduction to Akraino happened when I was working on an Edge use case for one of our customers at Arm. But I’m no stranger to the open source communities and working groups. Before Akraino, I directed the OPNFV (Open Platform for Network Function Virtualization) Auto project as PTL, integrating ONAP onto OPNFV (upon both x86 and Arm architecture and hardware) with a focus on edge cloud use cases. I was also the Chair of the Open Source SDN Breckenridge Working Group.

I am currently an active member of the Linux Foundation Networking (LFN) Technical Advisory Committee (TAC) and also lead the VPP/AArch64 activities in FD.io representing Arm.

[Oleg Berzin] Akraino aligned with my interest in the Edge and more specifically in the multi-domain nature of the Edge (spanning from devices to networks, to aggregation, to data centers, to clouds). I was involved in the ONAP (Open Network Automation Platform) project in the past (as an operator/user). With the diverse and complex nature of edge deployments, we need a community-supported set of capabilities integrated as blueprints so customers/users can deploy these solutions with minimum friction.

What was the first blueprint that you worked on?

[Tina Tsou] For Akraino, I worked closely on the Integrated Edge Cloud (IEC) blueprint family, which is part of the Akraino Edge Stack. The blueprint intends to develop a fully integrated edge infrastructure solution for edge computing. This open source software stack provides critical infrastructure to enable high performance, reduce latency, improve availability, lower operational overhead, provide scalability, address security needs, and improve fault management. The IEC project will address multiple edge use cases beyond the telco industry. IEC intends to develop solutions and support the needs of carriers, service providers, and IoT networks.

[Oleg Berzin] The first blueprint I worked on was the Public Cloud Edge Interface (PCEI). The idea behind PCEI is to enable interworking between mobile operators, public clouds and edge infrastructure/application providers so that operators have a systematic way to enable their customers access to public and 3rd party edge compute resources and applications via open APIs that facilitate deployment of telco and edge compute functions, interconnection of these functions as well as intelligent orchestration of workloads so that the expected performance characteristics can be achieved.

What is a Blueprint that you find interesting (I know that you don’t have a “favorite”)?

[Tina Tsou] You’re right. It is hard to choose a favorite blueprint since all of them are interesting and serve purposeful needs for many use cases. But here are two that I find very interesting:

School/Education Video Security Monitoring Blueprint: This belongs to the AI Edge Blueprint family and focuses on establishing an open source MEC platform combined with AI capabilities at the edge. In this blueprint, the latest technologies and frameworks, such as a microservice framework, Kata Containers, 5G acceleration, and open APIs, have been integrated to build an industry-leading edge cloud architecture that can provide comprehensive computing acceleration support at the edge. This blueprint is a life saver, and hence my favorite, since it improves safety and engagement in places such as factories, industrial parks, catering services, and classrooms that rely on AI-assisted surveillance.

IIoT at the Smart Device Edge Blueprint family: This blueprint family targets devices that live at the Smart Device Edge, which are characterized by having a small footprint yet being powerful enough to compute tasks at the edge. They tend to have a minimum of 256 MB for a single node and can grow to the size of a small cluster. These resources could be an accessible router, hub, server, or gateway. Since these device types vary heavily based on the form factor and use case served, they have very fragmented security and device standards for how the OS and firmware are booted. This is where Arm-backed initiatives like Project Cassini and PARSEC help to enable the standardization of device booting and platform security. I’m excited for this blueprint to see successful deployments of edge-based compute.

[Oleg Berzin] One thing that surprised me when I joined Akraino was the diversity of use cases (IoT, AI, Private 5G, Radio Edge Cloud) that reinforce the notion that the edge is everywhere and that it is very complex. I am honored to work with the Akraino community and to contribute to the development of blueprints. At this point in time my goal is to make progress on the PCEI blueprint.

Is there a Blueprint that you are looking forward to seeing develop?

[Tina Tsou] I prefer IEC Type 3: Android cloud native applications on Arm servers in edge, part of the Integrated Edge Cloud (IEC) Blueprint Family. It evolved from an Anbox-based single instance in R3 to Robox-based multiple instances in R4, and my hope is to add support for vGPU in the near future.

[Oleg Berzin] I think the Radio Edge Cloud is a very interesting blueprint. If developed, it has the potential to revolutionize how the radio infrastructure is managed and adapted to diverse use cases.

What is the biggest misconception that people have about Akraino?

[Tina Tsou] As with many open source projects and technologies, the common perception that users have is that these projects serve a very narrow use case. I believe Akraino suffers from a similar misconception that it is a Telco-oriented project only. The reality is quite different. Akraino project blueprints can be applied in many facets of the industry verticals from Edge, Cloud, Enterprises, and IoT.

[Oleg Berzin] I am relatively new to Akraino, and my own misconception was that it only focused on small edge devices and IoT. As a Co-Chair and now having been exposed to the breadth and depth of use cases, I can now see that Akraino is involved in a very diverse set of blueprints targeting enterprise, telco and clouds while also interworking with other organizations and communities, such as ORAN, 3GPP, CNCF, LF Networking, TIP.

What are your and Akraino’s priorities for 2021?

[Tina Tsou & Oleg Berzin] There are multiple priorities we could list, but we would point to these top three for 2021.

  1. Akraino Blueprints for O-RAN specifications (e.g., REC integration with RIC)
  2. Akraino Blueprint to support Public Cloud Edge interface
  3. Akraino Edge APIs

What do you like to do in your free time?

[Tina Tsou] I live in the sunny California Bay Area and I love fishing on weekends. Anyone who is interested in joining me for a chat about the Akraino project can contact me. 🙂

[Oleg Berzin] Apart from being involved in the technology and networking industry for many years, I enjoy learning new languages and finding common roots in different cultures. I sometimes find inspiration and time to translate children’s fairy tales from Russian into English – you can find the tales that I translated on Amazon.

Anything that you want people to know about Akraino?

[Tina Tsou] Ever since its launch in 2018, Akraino has found great community support for the innovative creation of deployable edge solutions, with work going on in more than 30 blueprints. These Akraino blueprints are now globally deployed to address several edge use cases. It is a vast community with many active users and contributors, and here are a few things to know:

  • Akraino hosts sophisticated communities and multiple user labs to speed the edge innovation.
  • Akraino delivered fully functional new Blueprints for deployment in R3 to address edge use cases such as 5G MEC, AI Edge, Cloud Gaming at Edge, Android in Cloud, Micro-MEC and Hardware acceleration at the edge.
  • Akraino created a framework for defining and standardizing APIs across stacks, via upstream/downstream collaboration, and published a whitepaper.
  • Akraino introduced tools for automated blueprint validation, security tools for blueprint hardening, and edge APIs in collaboration with other LF Edge projects.
  • The Akraino community has participated in several industry outreach events to foster collaboration and engagement on edge projects across the entire ecosystem.

[Oleg Berzin] The most important fact I want people to know about Akraino is the dedication and professionalism of the individuals who make up our community. The work they do on creating and proving the blueprints is done on a volunteer basis in addition to their primary jobs. It takes long hours, patience, respect for others and true trust to work together and move the edge technology forward.

XGVela: Bring More Power to 5G with Cloud Native Telco PaaS

By Akraino, Blog

Written by Sagar Nangare, an Akraino community member and technology blogger who writes about data center technologies that are driving digital transformation. He is also an Assistant Manager of Marketing at Calsoft.

A Platform-as-a-Service (PaaS) model of cloud computing brings lots of power to the end-users. It provides a platform to the end customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an application. In the roadmap to build a 5G network, telecom networks need such breakthrough models that help them focus more on the design and development of network services and applications.

XGVela is a 5G cloud-native open-source framework that introduces Platform-as-a-Service (PaaS) capabilities for telecom operators. With this PaaS platform, telecom operators can quickly design, develop, and innovate telco network functions and services. This way, telecom companies can focus more on seizing business opportunities rather than diving into the complexities of telecom infrastructure.

China Mobile initiated this project internally; it was later hosted and announced by the Linux Foundation as part of its initiative to accelerate telco cloud adoption. Apart from China Mobile, other supporters of XGVela are China Unicom, China Telecom, ZTE, Red Hat, Ericsson, Nokia, H3C, CICT, and Intel.

Initially, XGVela’s PaaS functionality was drawn from existing open-source PaaS projects such as Grafana, Envoy, and ZooKeeper, and then enhanced to meet telco requirements.

Why does the 5G network need XGVela’s PaaS capabilities?

  • XGVela helps telecom operators align with the fast-changing requirements of building a cloud-native 5G network. They need faster upgrades to network functions and services, along with agility in new service deployments.
  • Telecom operators need to focus more on microservices-driven, container-based network functions (CNFs) rather than VM-based network functions (VNFs), but they need to continue using both concurrently. XGVela can support both CNFs and VNFs in the underlying platform.
  • Telecom operators want to reduce network construction costs by adopting an open and healthy technology ecosystem such as ONAP, Kubernetes, OpenDaylight, Akraino, OPNFV, etc. XGVela adopted this approach to reduce the barriers to end-to-end orchestrated 5G telecom networks.
  • XGVela simplifies the design and innovation of 5G network functions by allowing developers to focus on application development and service logic rather than dealing with the underlying complex infrastructure. XGVela provides standard APIs to tie together many internal projects.

XGVela integrates CNCF projects, based on telco requirements, to form a General PaaS. It is architected with the microservices design method in mind for network function development and is further integrated into the telco PaaS.

XGVela is integrated with OPNFV and Akraino for integration and testing. In a recent presentation, Srinivasa Adepalli and Kuralamudhan Ramakrishnan showed that the feature set of the Akraino ICN blueprints is similar to XGVela’s. Many users want to deploy CNFs and applications on the same cluster. In those environments, there is a need to figure out what additional capabilities are required on top of Kubernetes to enable CNF deployment, especially telco CNFs alongside ordinary CNFs and application deployments, and to evaluate how they work together. This is where XGVela and the Akraino ICN blueprint family align with each other.
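To make “CNFs and applications on the same cluster” concrete, here is a minimal sketch using the standard kubernetes Python client; the namespace and container images are placeholders, and the additional platform capabilities a real telco CNF needs on top of plain Kubernetes are deliberately not shown:

```python
# Minimal sketch: a telco CNF and an ordinary application deployed side by side
# in the same cluster and namespace. Images and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config()
apps_api = client.AppsV1Api()

def deployment(name: str, image: str) -> client.V1Deployment:
    labels = {"app": name}
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name, labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name=name, image=image)]
                ),
            ),
        ),
    )

apps_api.create_namespaced_deployment("edge", deployment("upf-cnf", "example.org/upf:latest"))
apps_api.create_namespaced_deployment("edge", deployment("web-frontend", "nginx:stable"))
```

Everything a telco CNF needs beyond this minimal deployment is exactly the “additional things on top of K8s” that the communities are working through.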

You can subscribe to the XGVela mailing list here to track the project’s progress. For more information about Akraino blueprints, click here: https://wiki.akraino.org/. Or, join the conversation on the LF Edge Slack in the #akraino-blueprints, #akraino-help, and #akraino-tsc channels.

LF Edge Member Spotlight: Zenlayer

By Akraino, Akraino Edge Stack, Blog, LF Edge, Member Spotlight

The LF Edge community comprises a diverse set of member companies and people that represent the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sit down with Jim Xu, Principal Engineer at Zenlayer, to discuss the importance of open source, collaborating with industry leaders in edge computing, their contributions to Akraino and the impact of being a part of the LF Edge ecosystem.

Can you tell us a little about your organization?

Zenlayer is an edge cloud service provider and global company headquartered in Los Angeles, Shanghai, Singapore, and Mumbai. Businesses utilize Zenlayer’s platform to deploy applications closer to the users and improve their digital experience. Zenlayer offers edge compute, software defined networking, and application delivery services in more than 180 locations on six continents.

Why is your organization adopting an open source approach?

Zenlayer has always relied on open source solutions. We strongly believe that open source is the right ecosystem for the edge cloud industry to grow. We connect infrastructure all over the globe. If each data center and platform integrates open-source software, it is much easier to integrate networks and make literal connections than in a milieu of proprietary systems. Some of the open source projects we benefit from and support are Akraino blueprints, ODL, Kubernetes, OpenNESS, DPDK, Linux, MySQL, and more.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

We are a startup company in the edge cloud industry. LF Edge is one of the best open-source organizations both advocating for and building open edge platforms. The edge cloud space is developing rapidly, with continuous improvements in cloud technology, edge infrastructure, disaggregated compute, and storage options. Both impact and complexity go far beyond just cloud service providers, device vendors, or even a single traditional industry. LF Edge has helped build a community of people and companies from across industries, advocating for an open climate to make the edge accessible to as many users as possible.

What do you see as the top benefits of being part of the LF Edge community?

Our company has been a member of the LF Edge community for over a year now. Despite the difficulties presented by COVID-19, we have been able to enjoy being part of the Edge community. We interacted with people from a broad spectrum of industries and technology areas and learned some exciting use cases from the LF Edge community. This has helped us build a solid foundation for Zenlayer’s edge cloud services. 

What sort of contributions has your team made to the community, ecosystem through LF Edge participation?

We are proud to be part of the edge cloud community. Zenlayer is leading the Upstream subcommittee within Akraino and has invited multiple external communities, such as ONF CORD, CNTT, TIP OCN, O-RAN and TARS, to share our common interest in building the edge. We also contributed to the upstream requirements and reviews for Akraino releases.

What do you think sets LF Edge apart from other industry alliances?

LF Edge has a clear focus on edge cloud and a very healthy and strong governing board to ensure unbiased technological drive toward open systems. 

How will LF Edge help your business?

We hope LF Edge will continue to empower rapid customer innovation during the drive to edge cloud for video streaming, gaming, enterprise applications, IoT, and more. As a member of a fast-growing community, we also look forward to more interactions via conferences and social events (digital or in person as is safe) so we can continue to get to know and better understand each other’s needs and how we can help solve them. 

What advice would you give to someone considering joining LF Edge?

LF Edge is a unique community pushing for the best future for edge cloud. The group brings together driven people, a collaborative culture, and fast momentum. What you put in you receive back tenfold. Anyone interested in the future of the edge should consider joining, even if they do not yet know much about open source and its benefits. The community will value their inputs and be happy to teach or collaborate in return.

To find out more about LF Edge members or how to join, click here. To learn more about Akraino, click here. Additionally, if you have questions or comments, visit the  LF Edge Slack Channel and share your thoughts in the #community or #akraino-tsc channels.

Akraino White Paper: Cloud Interfacing at the Telco 5G Edge

By Akraino, Akraino Edge Stack, Blog

Written by Aaron Williams, LF Edge Developer Advocate

As cloud computing migrates from large centralized data centers to the Service Provider Edge’s Access Edge (i.e., server-based devices at the telco tower) to be closer to the end user and reduce latency, more applications and use cases become available. When you combine this with the widespread rollout of 5G and its speed, the last mile instantly becomes insanely fast. Yet, if this new edge stack is not constructed correctly, the promise of these technologies will not be realized.

LF Edge’s Akraino project has dug into these issues and created its second white paper called Cloud Interfacing at the Telco 5G Edge.  In this just released white paper, Akraino analyzes the issues, proposes detailed enabler layers between edge applications and 5G core networks, and makes recommendations for how Telcos can overcome these technical and business challenges as they bring the next generation of applications and services to life.

Click here to view the complete white paper: https://bit.ly/34yCjWW

To learn more about Akraino, blueprints or end user case stories, please visit the wiki: https://wiki.akraino.org.

On the “Edge” of Something Great

By Akraino, Announcement, Baetyl, Blog, EdgeX Foundry, Fledge, Home Edge, LF Edge, Open Horizon, Project EVE, Secure Device Onboard, State of the Edge

As we kick off Open Networking and Edge Summit today, we are celebrating the edge by sharing the results of our first-ever LF Edge Member Survey and insight into what our focuses are next year.

LF Edge, which will celebrate its 2nd birthday in January 2021, sent the survey to our more than 75 member companies and liaisons. The survey featured about 15 questions that collected details about open source and edge computing, how members of the LF Edge community are using edge computing and what project resources are most valuable. 

Why did you choose to participate in LF Edge?

The Results Are In

The Top 3 reasons to participate in LF Edge are market creation and adoption acceleration, collaboration with peers and industry influence. 

  • More than 71% joined LF Edge for market creation and adoption acceleration
  • More than 57% indicated they joined LF Edge for business development
  • More than 62% have either deployed products or services based on LF Edge projects or plan to do so later this year, next year, or within the next 3-5 years

Have you deployed products or services based on LF Edge Projects?

This feedback corresponds with what we’re seeing in some of the LF Edge projects. For example, our Stage 3 Projects Akraino and EdgeX Foundry are already being deployed. Earlier this summer, Akraino launched its Release 3 (R3) that delivers a fully functional open source edge stack that enables a diversity of edge platforms across the globe. With R3, Akraino brings deployments and PoCs from a swath of global organizations including Aarna Networks, China Mobile, Equinix, Futurewei, Huawei, Intel, Juniper, Nokia, NVIDIA, Tencent, WeBank, WiPro, and more. 

Additionally, EdgeX Foundry hit more than 7 million container downloads last month, and its global ecosystem of complementary products and services continues to grow. As a result, EdgeX Foundry is seeing more end-user case studies from big companies like Accenture, ThunderSoft and Jiangxing Intelligence.

Have you gained insight into end user requirements through open collaboration?


Collaboration with peers

The edge today is a solution-specific story. Equipment and architectures are purpose-built for specific use cases, such as 5G and network function virtualization, next-generation CDNs and cloud, and streaming games. That is why collaboration is key, and more than 70% of respondents said they joined LF Edge to collaborate with peers. Here are a few activities at ONES that showcase cross-project and member collaboration. 

Additionally, LF Edge created a LF Edge Vertical Solutions Group that is working to enable easily-customized deployments based on market/vertical requirements. In fact, we are hosting an LF Edge End User Community Event on October 1 that provides a platform for discussing the utilization of LF Edge Projects in real-world applications. The goal of these sessions is to educate the LF Edge community (both new and existing) to make sure we appropriately tailor the output of our project collaborations to meet end user needs. Learn more.

Industry Influence

More than 85% of members indicated they have gained insights into end user requirements through open collaboration. A common definition of the edge is gaining momentum. Community efforts such as LF Edge and State of the Edge’s assets, the Open Glossary of Edge Computing, and the Edge Computing Landscape are providing cohesion and unifying the industry. In fact,  LF Edge members in all nine of the projects collaborated to create an industry roadmap that is being supported by global tech giants and start-ups alike.

 

 

Where do we go from here? 

When asked, LF Edge members didn’t hold back. They want more. They want to see more of everything – cross-project collaboration, end user events and communication, use cases, and open source collaboration with other liaisons. As we head into 2021, LF Edge will continue to lay the groundwork for markets like cloud native, 5G, and edge to enable more open deployments and collaboration.  

 

Where the Edges Meet: Public Cloud Edge Interface

By Akraino, Akraino Edge Stack, Blog

Written by Oleg Berzin, Ph.D., a member of the Akraino Technical Steering Committee and Senior Director Technology Innovation at Equinix

Introduction

Why 5G

5G will provide significantly higher throughput than existing 4G networks. Currently, 4G LTE is limited to around 150 Mbps. LTE Advanced increases the data rate to 300 Mbps and LTE Advanced Pro to 600 Mbps-1 Gbps. 5G downlink speeds can be up to 20 Gbps. 5G can use multiple spectrum options, including low band (sub-1 GHz), mid-band (1-6 GHz) and mmWave (28, 39 GHz). The mmWave spectrum has the largest available contiguous bandwidth capacity (~1000 MHz) and promises dramatic increases in user data rates. 5G enables advanced air interface formats and transmission scheduling procedures that decrease access latency in the Radio Access Network by a factor of 10 compared to 4G LTE.

The Slicing Must Go On

Among advanced properties of the 5G architecture, Network Slicing enables the use of 5G network and services for a wide variety of use cases on the same infrastructure. Network Slicing (NS) refers to the ability to provision a common physical system to provide resources necessary for delivering service functionality under specific performance (e.g. latency, throughput, capacity, reliability) and functional (e.g. security, applications/services) constraints.

Network Slicing is particularly relevant to the subject matter of the Public Cloud Edge Interface (PCEI) Blueprint. As shown in the figure below, there is a reasonable expectation that applications enabled by the 5G performance characteristics will need access to diverse resources. This includes conventional traffic flows, such as access from mobile devices to the core clouds (public and/or private) as well as the general access to the Internet, edge traffic flows, such as low latency/high speed access to edge compute workloads placed in close physical proximity to the User Plane Functions (UPF), as well as the hybrid traffic flows that require a combination of the above for distributed applications (e.g. online gaming, AI at the edge, etc). One point that is very important is that the network slices provisioned in the mobile network must extend beyond the N6/SGi interface of the UPF all the way to the workloads running on the edge computing hardware and on the Public/Private Cloud infrastructure. In other words, “The Slicing Must Go On” in order to ensure continuity of intended performance for the applications.


The Mobile Edge

The technological capabilities defined by the standards organizations (e.g. 3GPP, IETF) are the necessary conditions for the development of 5G. However, the standards and protocols are not sufficient on their own. The realization of the promises of 5G depends directly on the availability of the supporting physical infrastructure as well as the ability to instantiate services in the right places within the infrastructure.

Latency can be used as a very good example to illustrate this point. One of the most intriguing possibilities with 5G is the ability to deliver very low end to end latency. A common example is the 5ms round-trip device to application latency target. If we look closely at this latency budget, it is not hard to see that to achieve this goal a new physical aggregation infrastructure is needed. This is because the 5ms budget includes all radio/mobile core, transport and processing delays on the path between the application running on User Equipment (UE) and the application running on the compute/server side. Given that at least 2ms will be required for the “air interface”, the remaining 3ms is all that’s left for the radio/packet core processing, network transport and the compute/application processing budget. The figure below illustrates an example of the end-to-end latency budget in a 5G network.
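As a numeric sketch of that budget (the split between core processing, transport and compute below is an illustrative assumption, not a figure from the blueprint):

```latex
% Illustrative decomposition of the 5 ms round-trip budget described above.
\begin{align*}
  t_{\mathrm{e2e}} &= t_{\mathrm{air}} + t_{\mathrm{RAN/core}} + t_{\mathrm{transport}} + t_{\mathrm{compute}} \le 5\,\mathrm{ms}\\
  t_{\mathrm{air}} &\approx 2\,\mathrm{ms}
    \;\Rightarrow\; t_{\mathrm{RAN/core}} + t_{\mathrm{transport}} + t_{\mathrm{compute}} \le 3\,\mathrm{ms}\\
  \text{e.g. } t_{\mathrm{RAN/core}} &\approx 1.5\,\mathrm{ms},\ t_{\mathrm{compute}} \approx 1\,\mathrm{ms}
    \;\Rightarrow\; t_{\mathrm{transport}} \le 0.5\,\mathrm{ms}\ \text{(round trip)}\\
  d_{\mathrm{fiber}} &\le \frac{0.5\,\mathrm{ms}}{10\,\mu\mathrm{s/km}} = 50\,\mathrm{km}
    \quad \text{(at $\approx 5\,\mu$s/km one way in fiber)}
\end{align*}
```

Under a split like this, the compute and aggregation infrastructure can sit at most a few tens of kilometres of fiber away from the radio, which is the point the figure makes.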

The Edge-in and Cloud-out Effect

Public Cloud Service Providers and 3rd-Party Edge Compute (EC) Providers are deploying edge instances to better serve their end-users and applications. A multitude of these applications require close inter-working with the Mobile Edge deployments to provide predictable latency, throughput, reliability, and other requirements.

The need to interface and exchange information through open APIs will allow competitive offerings for Consumers, Enterprises, and Vertical Industry end-user segments. These APIs are not limited to providing basic connectivity services but will include the ability to deliver predictable data rates, predictable latency, reliability, service insertion, security, AI and RAN analytics, network slicing, and more.

These capabilities are needed to support a multitude of emerging applications such as AR/VR, Industrial IoT, autonomous vehicles, drones, Industry 4.0 initiatives, Smart Cities, Smart Ports. Other APIs will include exposure to edge orchestration and management, Edge monitoring (KPIs), and more. These open APIs will be the foundation for service and instrumentation capabilities when integrating with public cloud development environments.
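To make the idea of such open APIs more concrete, here is a minimal sketch of what a request for an edge service with latency, throughput and slicing constraints might look like. The endpoint, field names and token are entirely hypothetical; PCEI motivates the need for these APIs rather than defining this exact schema:

```python
import requests

# Hypothetical endpoint and payload: the sketch only illustrates the kind of
# parameters (location, latency, throughput, slice, compute) such an API would carry.
PCEI_API = "https://edge-provider.example.com/v1/edge-services"

order = {
    "service_type": "edge-connectivity",
    "location": {"metro": "ashburn"},                        # place the workload near the UPF
    "qos": {"max_latency_ms": 5, "min_throughput_mbps": 200},
    "network_slice": {"type": "URLLC"},                      # carry the slice end to end
    "compute": {"arch": "arm64", "vcpus": 8, "memory_gb": 16},
}

resp = requests.post(PCEI_API, json=order, headers={"Authorization": "Bearer <token>"})
resp.raise_for_status()
print(resp.json())  # e.g. an order identifier and the edge site selected
```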

Public Cloud Edge Interface (PCEI)

Overview

The purpose of the Public Cloud Edge Interface (PCEI) Blueprint family is to specify a set of open APIs for enabling multi-domain inter-working across functional domains that provide edge capabilities/applications and require close cooperation between the Mobile Edge, the Public Cloud Core and Edge, the 3rd-Party Edge functions, and the underlying infrastructure such as data centers, compute hardware and networks. The compute hardware is optimized and power-efficient for the edge, for example the Arm64 architecture.

The high-level relationships between the functional domains are shown in the figure below:

The Data Center Facility (DCF) Domain. The DCF Domain includes Data Center physical facilities that provide the physical location and the power/space infrastructure for other domains and their respective functions.

The Interconnection of Core and Edge (ICE) Domain. The ICE Domain includes the physical and logical interconnection and networking capabilities that provide connectivity between other domains and their respective functions.

The Mobile Network Operator (MNO) Domain. The MNO Domain contains all Access and Core Network Functions necessary for signaling and user plane capabilities to allow for mobile device connectivity.

The Public Cloud Core (PCC) Domain. The PCC Domain includes all IaaS/PaaS functions that are provided by the Public Clouds to their customers.

The Public Cloud Edge (PCE) Domain. The PCE Domain includes the PCC Domain functions that are instantiated in the DCF Domain locations that are positioned closer (in terms of geographical proximity) to the functions of the MNO Domain.

The 3rd-Party Edge (3PE) Domain. The 3PE domain is in principle similar to the PCE Domain, with the distinction that the 3PE functions may be provided by 3rd parties (with respect to the MNOs and Public Clouds) as instances of edge computing resources/applications.

Architecture

The PCEI Reference Architecture and the Interface Reference Points (IRP) are shown in the figure below. For the full description of the PCEI Reference Architecture please refer to the PCEI Architecture Document.

Use Cases

The PCEI working group identified the following use cases and capabilities for Blueprint development:

  1. Traffic Steering/UPF Distribution/Shunting capability — distributing User Plane Functions in the appropriate Data Center Facilities on qualified compute hardware for routing the traffic to desired applications and network/processing functions/applications.
  2. Local Break-Out (LBO) – Examples: video traffic offload, low latency services, roaming optimization.
  3. Location Services — location of a specific UE, or identification of UEs within a geographical area, facilitation of server-side application workload distribution based on UE and infrastructure resource location.
  4. QoS acceleration/extension – provide low latency, high throughput for Edge applications. Example: provide continuity for QoS provisioned for subscribers in the MNO domain, across the interconnection/networking domain for end-to-end QoS functionality.
  5. Network Slicing provisioning and management – providing continuity for network slices instantiated in the MNO domain, across the Public Cloud Core/Edge as well as the 3rd-Party Edge domains, offering dedicated resources specifically tailored to application and functional (e.g. security) needs.
  6. Mobile Hybrid/Multi-Cloud Access – provide multi-MNO, multi-Cloud, multi-MEC access for mobile devices (including IoT) and Edge services/applications
  7. Enterprise Wireless WAN access – provide high-speed Fixed Wireless Access to enterprises with the ability to interconnect to Public Cloud and 3rd-Party Edge Functions, including the Network Functions such as SD-WAN.
  8. Distributed Online/Cloud Gaming.
  9. Authentication – provided as service enablement (e.g., two-factor authentication) used by most OTT service providers 
  10. Security – provided as service enablement (e.g., firewall service insertion)

The initial focus of the PCEI Blueprint development will be on the following use cases:

  • User Plane Function Distribution
  • Local Break-Out of Mobile Traffic
  • Location Services

User Plane Function Distribution and Local Break-Out

The UPF Distribution use case distinguishes between two scenarios:

  • UPF Interconnection. The UPF/SPGW-U is located in the MNO network and needs to be interconnected on the N6/SGi interface to 3PE and/or PCE/PCC.
  • UPF Placement. The MNO wants to instantiate a UPF/SPGW-U in a location that is different from their network (e.g. Customer Premises, 3rd Party Data Center)

UPF Interconnection Scenario

UPF Placement Scenario

UPF Placement, Interconnection and Local Break-Out Examples

Location Services (LS)

This use case targets obtaining geographic location of a specific UE provided by the 4G/5G network, identification of UEs within a geographical area as well as facilitation of server-side application workload distribution based on UE and infrastructure resource location.

 

Acknowledgements

Project Technical Lead: Oleg Berzin

Committers: Suzy Gu, Tina Tsou, Wei Chen, Changming Bai, Alibaba; Jian Li, Kandan Kathirvel, Dan Druta, Gao Chen, Deepak Kataria, David Plunkett, Cindy Xing

Contributors: Arif, Jane Shen, Jeff Brower, Suresh Krishnan, Kaloom, Frank Wang, Ampere

LF Edge Member Spotlight: HPE

By Akraino, Akraino Edge Stack, Blog, Member Spotlight

The LF Edge community represents a diverse set of member companies and people from the IoT, Enterprise, Cloud and Telco Edge. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source edge solutions. Today, we sat down with Rohit Arora, Enterprise Architect at Hewlett Packard Enterprise (HPE), to discuss the importance of open source, leading Multi-Access Edge Computing (MEC) initiatives, participating in the Technical Advisory Committee (TAC) and collaborating with the LF Edge ecosystem.

Can you tell us a little about your organization?

HPE is a global, edge-to-cloud Platform-as-a-Service company. HPE solutions connect, protect, analyze, and act on data and applications wherever they live, from edge to cloud, so insights can be turned into outcomes at the speed required to thrive in today’s complex world.

Why is your organization adopting an open source approach?

We at HPE believe in innovation, and open source encourages innovation by bringing communities together to build a common platform. HPE has been involved in various open source projects.

Why did you join LF Edge and what sort of impact do you think LF Edge has on the edge, networking, and IoT industries?

We joined LF Edge because it aligns with HPE’s edge-to-cloud direction. Edge computing is creating a major transformation in most industries, and we believe the initiatives driven by LF Edge are critical for this digital transformation.

What do you see as the top benefits of being part of the LF Edge community?

There are many benefits to being part of LF Edge, but we believe the biggest is being part of a community that is driving the innovation for next-generation networks at the edge.

What sort of contributions has your team made to the community, ecosystem through LF Edge participation?

HPE has representation on the LF Edge Governing Board and TAC, and has also contributed to the infrastructure requirements for LF Edge. HPE is also actively involved in LF Edge projects such as Akraino and in process adoption.

What do you think sets LF Edge apart from other industry alliances?

There are two main reasons LF Edge is different from other industry alliances

  1. A wide set of community members: There is a wide variety of community members in LF Edge, from telco service providers and NEPs to chip manufacturers. This brings different viewpoints and the right level of expertise.
  2. Project execution: The community really believes in executing, and we have seen projects go from idea to development and then testing at a very fast pace.

How will LF Edge help your business?

HPE is a leading infrastructure provider and has a wide variety of solutions for the edge. We are also leading MEC (Multi-Access Edge Computing) initiatives with some major telcos. By being part of LF Edge we get access to the latest innovations and resources in edge computing. This can help us build our solutions to fit industry needs.

What advice would you give to someone considering joining LF Edge?

There are so many projects LF Edge is driving; the best place to start would be to pick a project that aligns with your company’s direction and see how you can drive innovation with your contributions to the project. There are many resources available, and all the community members are very helpful in providing any info you need.

To find out more about LF Edge members or how to join, click here.

Additionally, if you have questions or comments, visit the  LF Edge Slack to share your thoughts and engage with community members. 

 

Akraino’s AI Edge-School/Education Video Security Monitoring Blueprint

By Akraino, Akraino Edge Stack, Blog, Use Cases

Written by Hechun Zhang, Staff Systems Engineer, Baidu; Akraino TSC member, and PTL of the AI Edge Blueprint; and Tina Tsou, Enterprise Architect, Arm and Akraino TSC Co-Chair

In order to support end-to-end edge solutions from the Akraino community, Akraino uses blueprint concepts to address specific edge use cases. A blueprint is a declarative configuration of the entire stack, i.e., an edge platform that can support edge workloads and edge APIs. In order to address specific use cases, a reference architecture is developed by the community.

The School/Education Video Security Monitoring Blueprint belongs to the AI Edge Blueprint family. It focuses on establishing an open source MEC platform combined with AI capabilities at the edge. In this blueprint, the latest technologies and frameworks, such as a microservice framework, Kata Containers, 5G acceleration, and open APIs, have been integrated to build an industry-leading edge cloud architecture that can provide comprehensive computing acceleration support at the edge. With this MEC platform, Baidu has expanded AI implementation across products and services to improve safety and engagement in places such as factories, industrial parks, catering services, and classrooms that rely on AI-assisted surveillance.

Value Proposition

  • Establish an open-source edge infrastructure on which each member company can develop its own AI applications, e.g. video security monitoring.
  • Contribute use cases which help customers adopt video security monitoring, AI city, 5G V2X, and Industrial Internet applications.
  • Collaborate with members who can work together to figure out the next big thing for the industry.

Use cases

Improved Student-Teacher Engagement

 

Using deep learning models trained on classroom video data, school management can evaluate class engagement and analyze individual students’ concentration levels to improve teaching in real time.

Enhanced Factory Safety and Protection

Real-time monitoring helps detect factory workers who might forget safety gear, such as helmets and safety gloves, to prevent hazardous accidents in the workplace. Companies can monitor safety in a comprehensive and timely way, and use the findings as a reference for strengthening safety management.

Reinforced Hygiene and Safety in Catering

Through monitoring staff behavior in the kitchen, such as smoking breaks and cell phone use, this solution ensures the safety and hygiene of the food production process.

Advanced Fire Detection and Prevention

Linked and networked smoke detectors in densely populated places, such as industrial parks and community properties, can help quickly detect and alert authorities to fire hazards and accidents.

Network Architecture

OTE-Stack is an edge computing platform for 5G and AI. Through virtualization it can shield heterogeneous characteristics and give unified access to the cloud edge, mobile edge and private edge. For AI it provides low-latency, high-reliability and cost-optimal computing support at the edge through the cluster management and intelligent scheduling of multi-tier clusters. At the same time, OTE-Stack makes device-edge-cloud collaborative computing possible.

Baidu implemented the video security monitoring blueprint on Arm infrastructure, including cloud-edge servers, hardware accelerators, and custom CPUs designed for world-class performance. Arm and Baidu are members of the Akraino project and use an edge cloud reference stack of networking platforms and cloud-edge servers built on Arm Neoverse. The Arm Neoverse architecture supports a vast ecosystem of cloud-native applications and combines with the AI Edge blueprint to form an open source mobile edge computing (MEC) platform optimized for sectors such as safety, security, and surveillance.

“Open source has now become one of the most important culture and strategies respected by global IT and Internet industries. As one of the world’s top Internet companies, Baidu has always maintained an enthusiastic attitude in open source, actively contributing the cutting edge products and technologies to the Linux foundation. Looking towards the future, Baidu will continue to adhere to the core strategy of open source and cooperate with partners to build a more open and improved ecosystem.” — Ning Liu, Director of AI Cloud Group, Baidu

In the 5G era, OTE-Stack has obvious advantages in the field of edge computing:

  • Large-scale and hierarchical cluster management
  • Support for third-party clusters
  • Lightweight cluster controller
  • Cluster autonomy
  • Automatic disaster recovery
  • Global scheduling
  • Support for multiple runtimes
  • Native Kubernetes support

For more information about this Akraino Blueprint, click here.  For general information about Akraino Blueprints, click here.