Working towards moving the industry forward together

By Akraino, Akraino Edge Stack, Blog

By Alex Reznik, Chair of MEC ISG, HPE Distinguished Technologist and LF Edge member

This content originally ran on the ETSI blog.

Quite some time has passed since my last blog entry, and while I thought about a new blog a number of times, a good topic – i.e. one appropriate for discussion in a short, informal and public format – just did not seem to present itself. That’s not for lack of interest or activity in MEC. 2019 is shaping up to be a critical year in which many operators are now public about their plans for edge computing, initial deployments are appearing and, as expected, holes in what the industry has been working on are beginning to be found (witness the much-publicized and excellent Telefonica presentation at last month’s Edge Compute Congress). It’s just that it’s hard to blog about ongoing work, even when it is very active, much less about the internal efforts of various players in MEC. After all, what would that look like? “This is hard and we are working hard on it…”

Nevertheless, the time has come. Those of you who follow my random MEC thoughts on a semi-regular basis might recall the subject of that last post ages ago (I mean in February): the need for both a vibrant Open Source community and Standards development in a space like MEC; and how the two are complementary in that their focus is, by definition, on complementary problems. And, if you don’t follow me religiously, here is the link: https://www.etsi.org/newsroom/blogs/entry/do-we-still-need-standards-in-the-age-of-open-source, grab a coffee, tea, or… whatever… and read up!

In a significant way, that post was in part a response to comments and questions like: “There is a lot of confusion in the market; what’s up with ETSI and Akraino and EdgeX Foundry? You guys all seem to compete.” To that, the post provides a rather direct answer: “no, we do not – we address different needs.” However, a related question often followed: “OK, but how well DO YOU actually work together?” To date, one failing of the various players in the MEC space has been the lack of a more convincing answer than “well, we all talk and we are all aware of each other.”

That’s now changed in a very significant way. On the heels of a formal cooperation agreement between ETSI and LF Edge (see, e.g. https://www.linuxfoundation.org/press-release/2019/04/etsi-and-the-linux-foundation-sign-memorandum-of-understanding-enabling-industry-standards-and-open-source-collaboration/), the MEC ISG within ETSI and LF Edge’s Akraino project have been working towards moving the industry forward together. The first fruit of this labor is about to ripen – an Akraino Mini-Hackathon, endorsed by ETSI, to be held in San Diego the day before KubeCon.

This event was designed to highlight the work Akraino is doing in putting forward solutions that take advantage of ETSI’s standards, and to give developers hands-on experience developing for MEC. The most notable thing about this Hackathon is the model of cooperation – Akraino (an open source community) provides implementations to the industry, while the use of ETSI MEC (a standards organization) standards ensures interoperability with other standards-compliant implementations.

So… that’s it then. We have arrived at a working, standardized solution for MEC, right? Well, no. If you are looking for that, the mini-hackathon will sorely disappoint you. However, it is a step – an important first step in a growing cooperation between two organizations which should, over time, deliver those operational, standardized components for MEC. The work ahead of ETSI MEC and Akraino is significant, and much of it will lack opportunities for fanfare – as work in our industry often does. Still, there will be more coming from our two organizations, so stay tuned…

An Akraino Developer Use Case

By Akraino, Akraino Edge Stack, Blog

Written by technical members of the Akraino Edge Stack project, including Bruce Lin, Rutgers; Robert Qiu, Tencent; Hechun Zhang, Baidu; Tina Tsou, Arm; Ciprian Barbu, Enea; Allen Chen, Tencent; Gabriel Yang, Huawei; Feng Yang, Tencent; Jeremy Liu, Tencent; Tapio Tallgren, Nokia; Cristina Pauna, Enea

In the 5G era, more intelligent edge nodes are being deployed. This blog shares developer use cases from the Akraino project that the community is working on, specifically the Telco Appliance Blueprint Family, Connected Vehicle Blueprint, ELIOT: Edge Lightweight and IoT Blueprint Family, Micro-MEC, the AI Edge Blueprint Family, and the 5G MEC/Slice System to Support Cloud Gaming, HD Video and Live Broadcasting Blueprint.

Telco Appliance Blueprint Family

The Telco Appliance blueprint family provides a reusable set of modules that will be used to create sibling blueprints for other purpose-tuned appliances.

The appliance model automates the installation, configuration and testing of:

  • Firmware and/or BIOS/UEFI
  • Base Operating System
  • Components for managing containers, performance, fault, logging, networking, CPU

The first blueprint integrated with Telco Appliance (TA) is the Radio Edge Cloud (REC). Other appliances will be created by combining other applications with the same underlying components to create additional blueprints.

TA produces an ISO that, after being installed in the cluster, provides the underlying tools needed by the applications that come on top of it. The build of this ISO is automated in Akraino’s CI and it includes:

  • Build-tools: Based on OpenStack Disk Image Builder
  • Dracut: Tool for building ISO images for CentOS
  • RPM Builder: Common code for creating RPM packages
  • Specs: the build specification for each RPM package
  • Dockerfiles: the build specifications for each Docker container
  • Unit files: the systemd configuration for starting/stopping services
  • Ansible playbooks: Configuration of all the various components
  • Test automation framework

The image provides the following components, used during installation:

  • L3 Deployer: an OpenStack Ironic-based hardware manager framework
  • Hardware Detector: Used to adapt L3 deployer to specific hardware
  • North-bound REST API framework: For creating/extending blueprint APIs
  • CLI interface
  • AAA server to manage cloud infrastructure users and their roles
  • Configuration management
  • Container image registry
  • Security hardening configuration
  • A distributed lock management framework
  • Remote Installer: Docker image used by Regional Controller to launch deployer

The image provides the following components, usable by the applications:

  • CPU Pooler: Open Source Nokia project for K8s CPU management
  • DANM: Open Source Nokia project for K8s network management
  • Flannel: K8s networking component
  • Helm: K8s package manager
  • etcd: K8s distributed key-value store
  • kubedns: K8s DNS
  • Kubernetes
  • Fluentd: Logging service
  • Elasticsearch: Logging service
  • Prometheus: Performance measurement service
  • OpenStack Swift: Used for container image storage
  • Ceph: Distributed block storage
  • NTP: Network Time Protocol
  • MariaDB, Galera: Database for OpenStack components
  • RabbitMQ: Message queue for OpenStack components
  • Python Peewee: A Python ORM
  • Redis

Radio Edge Cloud (REC)

Akraino Radio Edge Cloud (REC) provides an appliance tuned to support the O-RAN Alliance and O-RAN Software Community’s Radio Access Network Intelligent Controller (RIC) and is the first example of the Telco Appliance blueprint family.

  • RIC on Kubernetes on “bare metal” tuned for low-latency round-trip messaging between RIC and eNodeB/gNodeB
  • Support for telco networking requirements such as SRIOV, dual POD interfaces, IPVLAN
  • Built from reusable components of the “Telco Appliance” blueprint family
  • Automated Continuous Deployment pipeline testing the full software stack (bottom to top, from firmware up to and including application) simultaneously on chassis based extended environmental range servers and commodity datacenter servers
  • Integrated with Regional Controller (Akraino Feature Project) for “zero touch” deployment of REC to edge sites
  • Deployable to multiple hardware models

In Akraino, the work on this blueprint will fully automate the deployment and testing on multiple hardware platforms in a Continuous Deployment system.

SDN Enabled Broadband Access (SEBA)

Here are the SEBA user stories.

As a Service Provider, I want to set up a SEBA environment so that I can use the ONF SEBA platform.

As an Administrator, I want to validate the host OS environment so that I can install Kubernetes/SEBA.

As an Administrator, I want to validate the software environment so that I can install Kubernetes/SEBA.

As an Administrator, I want to validate the virtual machines for my Kubernetes/SEBA environment so that I can validate the environment.

As an Administrator, I want to validate the services for my Kubernetes/SEBA environment so that I can validate a working environment.

SEBA validation on ARM – IEC Type 2

SEBA stands for SDN Enabled Broadband Access. It is an ONF/Opencord project which implements a lightweight platform, based on R-CORD. It supports a multitude of virtualized access technologies at the edge of the carrier network, including PON, G.Fast, and eventually DOCSIS and more.

SEBA is also an Akraino Blueprint, but in the context of the IEC Blueprint, the SEBA use case is deployed and verified on an IEC Type 2 platform. IEC is also the main Akraino sub-project/blueprint which enables ARM architectures for Telco/Enterprise Edge applications.

For the purpose of validating SEBA on IEC Type 2, a SEBA-compliant ARM64 lab has been set up at the Akraino Community Lab at the University of New Hampshire (UNH). The lab consists of two development PODs, using servers equipped with Marvell ThunderX2 networking processors and Ampere HR330A servers.

The connectivity is provided by Edgecore 7816-64X TOR switches which fulfill the role of Fabric Switches in the SEBA Architecture.

For the validation work, we first ported and deployed BBSim on IEC for ARM64, and work is now ongoing to port and verify PONSim.

PONSim is also a central piece of SEBA-in-a-Box (SIAB), which is one of the main testing CI/CD projects in the upstream Opencord Jenkins infrastructure.

There is ongoing work in Akraino IEC for replicating the SIAB setup by utilizing the same testing environments and facilities provided by Opencord.

Finally, recent collaboration with Opencord resulted in forming a Multiarch Brigade with the purpose of making SEBA work seamlessly on multiple architectures beyond x86 servers. So far there has been evident interest from the community and other parties in enabling multiarch support and perpetual verification in a CI/CD manner by adapting the Opencord infrastructure to support other architectures, and ARM64 in particular.

Connected Vehicle Blueprint

The Connected Vehicle Blueprint focuses on establishing an open source MEC platform, which is the backbone for V2X applications.

From a use-case perspective, the following are the use cases we have tested in our companies. More are possible in the future.

  • Accurate Location: location accuracy is improved by more than 10x over today’s GPS. Today’s GPS is typically 5-10 meters off from your real location; with the help of the Connected Vehicle Blueprint, accuracy of under 1 meter is possible.
  • Smarter Navigation: real-time traffic information updates cut update latency from minutes to seconds and help figure out the most efficient route for the driver.
  • Safe Drive Improvement: identifies potential risks that cannot be seen by the driver.
  • Reduced Traffic Violations: helps the driver understand the traffic rules in specific areas; for instance, changing lanes before a narrow street, avoiding driving the wrong way on a one-way road, and staying out of the carpool lane when driving alone.

From a technology architecture perspective, the blueprint consists of four major layers.

  • Hardware Layer: the Connected Vehicle Blueprint runs on top of commodity hardware. Both Arm and x86 servers are well supported.
  • IaaS Layer: the Connected Vehicle Blueprint can also be deployed in a virtual environment. Virtual machines, containers, and other mainstream IaaS software (such as OpenStack and Kubernetes) are supported.
  • PaaS Layer: Tars is the microservice framework of the Connected Vehicle Blueprint. It provides high-performance RPC calls, deploys microservices efficiently in large scale-out scenarios, and offers easy-to-understand service monitoring.
  • SaaS Layer: the latest V2X applications, as described in the use cases above.

From a network architecture perspective, the blueprint can be deployed in both 4G and 5G networks. Two key points deserve special attention. One is offloading data to the edge MEC platform; the data offload policy is configurable for different applications. The other is the ability to let one edge talk to other edges as well as to the remote data center. In some use cases, data processing at a single edge cannot address the application’s requirements; we need to collect data from different edges and derive a “conclusion” for the application.

ELIOT: Edge Lightweight and IoT Blueprint Family

ELIOT AIoT in Smart Office Blueprint

This blueprint provides edge computing solutions for smart office scenarios. It manages intelligent devices, delivers AI training models and configures the rules engine through cloud-edge collaboration.

Services provided by cloud side:

  • Device management
  • AI model training and delivery
  • Rules engine configuration
  • Third-party applications

Services provided by edge side:

  • Sending/receiving device data
  • Data analysis
  • AI model/function execution

Deployment environment:

  • Kubernetes
  • Container
  • VM
  • Bare Metal

We are still developing this project, and Tencent will keep devoting significant effort to the research and development of this blueprint. In the near future, the project will also be open source. Any partners interested in this project are welcome to contact us, and let’s work together to promote the application of edge computing in the field of smart office.

The AI Edge Blueprint Family

The AI Edge Blueprint mainly focuses on establishing an open source MEC platform combined with AI capabilities at the edge, which could be used in the safety, security, and surveillance sectors.

One example application is the classroom: with this blueprint, teachers and school authorities can conduct a full evaluation of the overall class and of the concentration of individual students, helping them fully understand the teaching situation in real time. Based on the concentration data for each course, teachers and school authorities can conduct knowledge tests and targeted reinforcement.

5G MEC/Slice System to Support Cloud Gaming, HD Video and Live Broadcasting Blueprint

The purpose of the blueprint is to offer a 5G MEC networking platform to support delay sensitive, bandwidth consuming services, such as cloud gaming, HD video and live broadcasting. In addition, network slicing will be utilized to guarantee excellent user experience.

As a service provider of cloud gaming, I want to set up a computing platform to execute game rendering as well as video encoding, and deliver the compressed video to the player via a 4G/5G network.

As cloud gaming is sensitive to latency, the computing platform is required to be as close as possible to the network edge. Besides that, a mechanism to discover the service and direct players’ traffic to the platform is demanded.

To direct players’ traffic to the platform, I install an edge connector, which is responsible for enabling flexible traffic offloading by interacting with the EPC or 5GC, and for subscribing to the edge slice, which provisions a certain level of QoS between the UE and the edge application.

To interface with the user plane of 4G or 5G, I install an edge gateway (GW). In 5G SA, the edge GW is connected to a UPF, to manage the traffic from the mobile network. In 5G NSA or 4G, the edge GW is connected to an eNB or SGW, with an additional function of handling GTP-U, the tunneling protocol adopted by 4G and 5G.

When a player accesses a cloud gaming service hosted on the platform, the edge connector will establish an appropriate route between the mobile network and the edge GW. The compressed video and the player’s control data will be transferred between the gaming server and the user equipment, traversing the edge GW.

As a service provider of live broadcasting or HD video delivery, I want to set up a platform to transcode the video source to fit the bandwidth of the transport.

Similar to the cloud gaming use case, an edge connector is installed to direct the traffic to the computing platform, and an edge GW is installed to manage the mobile traffic and terminate GTP-U under 4G or 5G NSA. The computing platform transcodes the video into a format adapted to either the wireless link towards the mobile user or the transport towards the internet. The computing platform may be connected to a CDN to optimize the live broadcasting or HD video delivery.

Micro-MEC (µMEC)

µMEC is a low-power, low-cost server at the edge of the network that supports Smart City use cases with different sensors (including cameras), is connected to the Internet and telco networks, and allows third-party developers to create ETSI MEC applications for it.

The µMEC focuses on Smart City use cases. If you had different smart sensors in a city, collecting data that is freely available, what could you make possible? That was the challenge we posed at a developer hackathon in May. We used the Akraino µMEC platform and created a model city. Different sensors collected information and made it available through a database.

For the next hackathon, we want to make it very easy for developers to create applications that run on the µMEC device itself. For this, we are implementing ETSI MEC compliant interfaces and leveraging the OpenFaaS serverless platform.
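
As a sketch of what that developer experience might look like, here is a minimal OpenFaaS workflow using the standard faas-cli; the function name, language template and gateway address are hypothetical, and the actual µMEC interfaces may differ:

    # Scaffold a new Python function from a template (name is illustrative)
    faas-cli new sensor-reader --lang python3

    # Build, push and deploy it to the OpenFaaS gateway on the uMEC node
    faas-cli up -f sensor-reader.yml --gateway http://umec-node:8080

    # Invoke the deployed function with a sample payload
    echo '{"sensor": "camera-1"}' | faas-cli invoke sensor-reader --gateway http://umec-node:8080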

The µMEC platform that we are developing in Akraino will have a trusted computing stack, from trusted hardware to secure networking and signed containers.

Akraino will be hosting a MEC Hackathon on November 18 at Qualcomm. For more information or to register, visit the Akraino MEC Hackathon wiki. For more information about Akraino or blueprints, click here.

Running K8s Conformance Tests in Akraino Edge Stack

By Akraino, Blog

By Cristina Pauna, Bin Lu, Howard Zhang

Due to the large scale of its projects and various application scenarios, Akraino Edge Stack is complex when it comes to testing logistics; normal integration or unit tests are not enough for testing the projects. To ensure that the full deployment is both functional and reliable, it needs to be tested at all layers of the stack, from hardware to application. To verify Kubernetes-based deployments at the platform layer, K8s conformance testing is a good way to do this job.

The K8s conformance tests are used in the CNCF Certified Kubernetes program for vendors. The suite integrates end-to-end (e2e) testing in Kubernetes and contains a wide range of test cases; passing them proves a high level of common functionality of the platform.

K8s Conformance Tests in Akraino

General testing in Akraino is done using the Blueprint Validation Framework. The framework provides tests at different layers of the stack, such as hardware, operating system, cloud infrastructure, security, etc. Each layer has its own container image built by the validation project. The full list of images provided can be found in the project’s DockerHub repo. The validation project has a multi-arch approach: all images currently support both x86_64 and aarch64 architectures, and can be easily extended to other platform architectures in the future. This is implemented using a docker manifest list, so the same image name can be pulled regardless of the architecture of the System Under Test (SUT). Therefore, the steps described in this article apply regardless of the architecture of the SUT.

The Kubernetes conformance tests are included in the k8s layer. The test suite can be run in a couple of ways: either by directly calling the e2e tool, or through sonobuoy, a diagnostic tool that gathers logs and stats in one archive. Regardless of how the tests are run, it is possible to customize which test cases are included, which versions are used, and even the container images within the tooling.
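
The Akraino tooling described below wraps all of this, but for reference, running the suite directly with sonobuoy looks roughly like this sketch (flag names vary between sonobuoy versions, so treat it as illustrative):

    # Launch the conformance tests against the cluster in the current kubeconfig context
    sonobuoy run --mode=certified-conformance

    # Check progress; the full suite typically takes an hour or more
    sonobuoy status

    # When finished, download the archive containing all logs and results
    sonobuoy retrieve

    # Remove the sonobuoy pods and namespaces from the cluster
    sonobuoy delete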

In Akraino, the conformance suite is run from a separate container, using the validation image built for the k8s layer: akraino/validation:k8s-latest. This image contains all of the tooling necessary for running tests on the k8s layer in Akraino, as well as the tests themselves. The tools that go into the image can be found in its Dockerfile. The container is designed to be started on a remote host that has access to the k8s cluster. Check the validation project User guide for more information on the setup.

The conformance suite is customized using the sonobuoy.yaml file. Some of the images used by sonobuoy in Akraino are built by the validation project in order to customize them to fit the community’s needs (e.g. akraino/validation:sonobuoy-plugin-systemd-logs-latest is based on the official gcr.io/heptio-images/sonobuoy-plugin-systemd-logs:latest but with added aarch64 support).

Prerequisites

Before running the image, copy the ~/.kube folder from your Kubernetes master node to a local folder (e.g. /home/ubuntu/kube) that will later be mounted in the container. Optionally, the results folder can also be mounted in order to have the logs stored on the local server.
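
For example, assuming SSH access to the master node (the host name is illustrative):

    # Copy the cluster access files from the Kubernetes master to this host
    scp -r ubuntu@k8s-master:~/.kube /home/ubuntu/kube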

Run the tests

The Akraino validation project has a variety of tests for the k8s layer. To run just the conformance tests, follow the steps below:

  1. Clone the validation repo.
  2. Create a customized blueprint that will be passed to the container. The blueprint will contain just the conformance suite, as in the example below.
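
A minimal sketch of such a blueprint, following the bluval blueprint format used by the validation project (the exact keys and layout may differ between releases, so compare against the templates in the validation repo):

    blueprint:
        name: custom-k8s-conformance
        layers:
            - k8s
        k8s: &k8s
            -
                name: conformance
                what: conformance
                optional: "False"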

3. Fill in the volumes.yaml file with the data that applies to your setup. Volumes that don’t have a local value set will be ignored. Do not modify the target part of each volume: these paths are automatically created inside the container when it starts, and are fixed paths that the tools inside the container expect to exist as-is.

In the example below, only the following volumes are set:

  • Location to the kube config files needed for access to the cluster
  • Location to the customized blueprint file
  • Location to where to store the results
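
A sketch of how those three entries might look in volumes.yaml; the local paths are examples, and the key names and target paths are assumptions based on the validation repo’s template (only the results target, /opt/akraino/results, is confirmed in the results section below):

    volumes:
        # kube config files copied from the k8s master node
        kube_config_dir:
            local: '/home/ubuntu/kube'
            target: '/root/.kube'
        # folder containing the customized blueprint file
        blueprint_dir:
            local: '/home/ubuntu/blueprints'
            target: '/opt/akraino/validation/bluval'
        # where the results will be stored on the local host
        results_dir:
            local: '/home/ubuntu/results'
            target: '/opt/akraino/results'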

4. Run the tests

To run the tests, use the blucon.py command, which takes the name of the blueprint as input. The full suite takes around one hour to run, depending on the setup.
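
Assuming the customized blueprint above was named custom-k8s-conformance and the container was set up per the user guide, the invocation would look something like this (the script’s path inside the repo may differ):

    # Run all tests listed in the customized blueprint
    python3 bluval/blucon.py custom-k8s-conformance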

The sonobuoy containers can be seen running on the cluster in the heptio-sonobuoy namespace.

Get the results

The results are stored inside the container in the /opt/akraino/results folder. If a volume was mounted to this folder (e.g. /home/ubuntu/results), the results can also be viewed from the host where the test was run.

The results can be viewed with a browser, using the report.html file that the Robot Framework provides. In case of failures, the sonobuoy archive can be used to inspect all the logs.

IoT World Today: Now a Part of LF Edge, EdgeX Foundry Gains Momentum

By Akraino, Akraino Edge Stack, EdgeX Foundry, In the News

When grappling with the enormity of IoT platforms, a sort of herd mentality has emerged, leading scores of vendors to create unique IoT platforms. But the problem is, no single IoT platform can accommodate all potential enterprise and industrial IoT use cases, according to Jason Shepherd, former chair of the EdgeX Foundry governing board. So organizations can become overwhelmed by the complexity of platform integration on the one hand or creating a platform from scratch on the other, Shepherd said. “I liken it to a riptide current. Your natural inclination is to swim into the current, but you risk drowning if you do that,” added Shepherd, who is the IoT and edge chief technology officer at Dell Technologies. “What you’re supposed to do, which is not intuitive, is to swim sideways.”

The EdgeX Foundry was created to sidestep the IoT platform battles. “While most people were trying to create their own platforms, we went open,” Shepherd said. “We swam sideways. And that’s what’s actually going to win.”

The EdgeX Foundry recently announced growing momentum with its latest release, known as “Edinburgh.” The product of a global ecosystem, Edinburgh is the latest example of the EdgeX Foundry’s open source microservices framework. The approach enables users to plug and play components from a growing number of third-party offerings.

In other LF Edge–related news, LF Edge’s Akraino Edge Stack initiative launched its first release in June to establish a framework for the 5G and IoT edge application ecosystem. Known as Akraino R1, it brings together several edge disciplines and offers deployment-ready blueprints.

Kandan Kathirvel, a director at AT&T and Akraino technical steering committee chair, invokes the early days of cloud computing to explain the mission behind the initiative. “In cloud computing, one of the pain points many users had when deploying the cloud was integrating multiple open source projects together,” Kathirvel said. “A user might need to work with hundreds of different open source communities.” And after deploying a cloud project, sometimes gaps were evident. Many organizations found themselves individually in this situation without realizing other users were essentially doing the same. “And this situation increases the cost and deployment time.”

Read more about EdgeX Foundry’s Edinburgh release and Akraino Edge Stack’s R1 release in this IoT World Today article here.

Using DockerHub in Akraino Edge Stack & Other Linux Foundation Projects

By Akraino, Blog

By Kaly Xin, Eric Ball,  and Cristina Pauna

DockerHub is the world’s largest library and community for container images. It offers a huge repository for storing container images, and it is available worldwide. It can automatically build container images from GitHub and Bitbucket and push them to Docker Hub. These are just a few of the features it provides, but perhaps one of the best is its seamless support for multi-arch images through fat manifests.

Why Docker Hub is recommended for Multi-Arch

The Docker Hub registry is able to store manifest lists (or “fat manifests”). A manifest list acts as a pointer to other images built for specific architectures, thus making it possible to use the same name for images that are built on hardware with different architectures.

Figure 1: Docker registry storing amd64, arm64 images and their fat manifest

In the picture above, akraino/validation:k8s-latest is the fat manifest, and its name can be used to reference both the akraino/validation:k8s-amd64-latest and akraino/validation:k8s-arm64-latest images. Inspecting the manifest shows which images it contains, for which hardware architectures, and for which OS.

Figure 2: Docker fat manifest details

How does it work?

When building an image for a specific arch, the arch is added in the tag of the image (akraino/validation:k8s-amd64-latest and akraino/validation:k8s-arm64-latest).

After the images are pushed to the Docker Hub repo, the manifest can be created from the two images. Its name will be the same as that of the two images, but with the arch removed from the tag (akraino/validation:k8s-latest).
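
Concretely, with the standard Docker CLI (the manifest subcommands required enabling experimental CLI features at the time), the flow looks roughly like this:

    # Push the per-architecture images, built on their respective hosts
    docker push akraino/validation:k8s-amd64-latest
    docker push akraino/validation:k8s-arm64-latest

    # Create a manifest list that points at both images
    docker manifest create akraino/validation:k8s-latest \
        akraino/validation:k8s-amd64-latest \
        akraino/validation:k8s-arm64-latest

    # Annotate the arm64 entry with its architecture and OS, then publish the list
    docker manifest annotate akraino/validation:k8s-latest \
        akraino/validation:k8s-arm64-latest --arch arm64 --os linux
    docker manifest push akraino/validation:k8s-latest

    # Inspect the manifest to see the per-architecture entries
    docker manifest inspect akraino/validation:k8s-latest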

To do this in CI with Jenkins, the Jenkins slave has to have docker and a couple of other LF tools installed. The connection to Docker Hub is done through LF scripts (see releng-global-jjb for more info), and all you need to do is define the JJB jobs.
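
A JJB project definition for such jobs might look like the sketch below; the job-group and parameter names here are assumptions modeled on releng-global-jjb conventions, so check that repo and ci-management/jjb/validation for the real template names:

    # ci-management/jjb/validation/validation.yaml (illustrative)
    - project:
        name: validation-docker
        project: validation
        project-name: validation
        stream: master
        # name of the image to build and push (assumed parameter name)
        docker-name: 'akraino/validation'
        jobs:
          # assumed global-jjb job group providing docker verify/merge jobs
          - '{project-name}-gerrit-docker-jobs'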

The Akraino validation project is already pushing to DockerHub, so if you would like to check out some template code, take a look at ci-management/jjb/validation. The docker images are pushed to the official repo, and the docker build jobs run daily.

In the figure below, the main Jenkins job (validation-master-docker) triggers two parallel jobs that build and push into the Docker Hub registry the amd64 (akraino/validation:k8s-amd64-latest) and arm64 (akraino/validation:k8s-arm64-latest) images. At the end, the fat manifest (akraino/validation:k8s-latest) is created in a separate job.

When pulling the image, the name of the manifest is used (akraino/validation:k8s-latest); the correct image will be pulled based on the architecture of the host from which the pull is made.
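
On the consuming side, the same image name works on either architecture:

    # On an x86_64 host this resolves to the amd64 image; on aarch64, to the arm64 one
    docker pull akraino/validation:k8s-latest

    # Confirm which architecture was actually pulled
    docker inspect --format '{{.Architecture}}' akraino/validation:k8s-latest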

Figure 4: Pulling a docker image from two different hardware architecture servers using the same name

What’s next

Docker Hub has been integrated into LF projects like OPNFV from the beginning and is now integrated into Akraino too, so other open source projects can refer to this successful experience to integrate Docker Hub into their own pipelines.

Your Path to Edge Computing: Akraino Edge Stack’s Release 1

By Akraino, Blog

By Kandan Kathirvel, Akraino Edge Stack TSC-Chair and Tina Tsou, Akraino Edge Stack TSC Co-Chair

The Akraino community was proud to announce the availability of its Release 1 on June 6th. The community has experienced extremely rapid growth over the past year, in terms of both membership and community activity: Akraino includes broad contributions from across LF Edge, with 60% of LF Edge’s 60+ members contributing to the project, as well as several other developers across the globe.

Before Akraino, developers had to download multiple open source software packages and integrate/test on deployable hardware, which prolonged innovation and increased cost. The Akraino community came up with a brilliant way to solve this integration challenge with the Blueprint model.

An Akraino Blueprint is not just a diagram; it’s real code that brings everything together so users can download and deploy the edge stack in their own environment to address a specific edge use case. Example use cases include IoT gateway, MEC for connected car, and a RAN intelligent controller that enables 5G infrastructure.

The Blueprints address interoperability, packaging, and testing under open standards, which reduces both overall deployment costs and integration time by users. The Akraino community will supply Blueprints across the LF Edge portfolio of projects, with plans to address 5G, IoT and a range of other edge use cases.

The key strength of the Akraino community is its well-defined process to welcome new Blueprints, new members, users and developers. The technical community is governed by a Technical Steering Committee (TSC), which consists of representatives from across member companies. The TSC acts as a “watchdog” to set process, monitor the community, and ensure open collaboration. In addition to the TSC, the Akraino community has seven sub-committees focused on much-needed areas such as security, edge APIs, CI and validation labs, upstream collaborations, documentation, process and community. Regular meetings are scheduled to ensure broader collaboration and accelerate progress on the various projects. The community calendar can be found here. It is not necessary to be a member to join the community calls; we invite anyone interested in learning more to join!

The Release 1 Blueprints cover everything from a larger deployment in a telco-based edge cloud to a smaller deployment, such as in a public building like a stadium. Each Blueprint is validated via community standards on real physical lab hardware, hosted by either the community or the users.

Akraino Edge Stack prides itself on continuous refinement and development to ensure the success of Blueprints and projects. The community is already planning R2, which will include both new Blueprints and enhancements to existing Blueprints, tools for automated Blueprint validation, defined edge APIs, new community lab hardware, and much more. For future events and meetings please visit: https://wiki.akraino.org/display/AK/Akraino+TSC+Group+Calendar.

TelecomTV: Akraino Edge Stack makes its formal debut with telco-specific architecture blueprints for 5G and IoT

By Akraino, In the News
  • Inaugural release unifies multiple sectors of the edge;
  • Release 1 delivers tested and validated deployment-ready blueprints;
  • Creating a framework for defining and standardising APIs across stacks;
  • Akraino is part of the LF Edge organisation

Forget the expanding universe, for telcos it is all about the ever-expanding network edge, which just got a little bit bigger yesterday with the news that the Akraino open source project has published its first software release. Although it was only launched in February 2018, Akraino had a rich pedigree with seed code from AT&T and has enjoyed plenty of support during its short time under the Linux Foundation’s LF Edge umbrella.

Akraino Edge Stack, to give the project its full name, is focused on creating an open source software stack that supports a high-availability cloud stack optimised for edge computing systems and applications. It is being designed to improve the state of edge cloud infrastructure for enterprise edge, OTT edge, as well as telecoms edge networks. The project promises to give users new levels of flexibility to scale their edge cloud services quickly, to maximise the applications and functions supported at the edge, and to help ensure the reliability of critical systems.

“Akraino Release 1 represents the first milestone towards creation of a much-needed common framework for edge solutions that address a diverse set of edge use cases,” said Arpit Joshipura, general manager, Networking, Automation, Edge and IoT, the Linux Foundation. “With the support of experts from all across the industry, we are paving the way to enable and support future technology at the edge.”

Read the full article here.

LinuxGuizmos: LF Edge announces first Akraino release for open edge computing

By Akraino, EdgeX Foundry, In the News, Project EVE

The Linux Foundation’s LF Edge project announced the first release of the Akraino Edge Stack with 10 “blueprints” for different edge computing scenarios. Also: LF Edge recently announced new members and the transfer of seed code from Zededa to Project EVE.

The Akraino Edge Stack project, which earlier this year was folded into the Linux Foundation’s LF Edge umbrella initiative for open source edge computing, announced the availability of Akraino Edge Stack Release 1 (Akraino R1). Last month, LF Edge announced new members and further momentum behind its Project EVE edge technology. More recently Linux Journal’s Doc Searls published a piece on the LF’s 5G efforts and argued for more grass-roots involvement in LF Edge (see farther below).

Read the full article here.

Converge! Network Digest: LF Edge marks first release of Akraino

By Akraino, In the News

LF Edge marked the first release of Akraino Edge Stack Release 1, which is an open source software stack for edge computing systems and applications.

LF Edge is part of the Linux Foundation.

“Akraino R1 represents the first milestone towards creation of a much-needed common framework for edge solutions that address a diverse set of edge use cases,” said Arpit Joshipura, general manager, Networking, Automation, Edge and IoT, the Linux Foundation. “With the support of experts from all across the industry, we are paving the way to enable and support future technology at the edge.”

Akraino currently comprises 11+ blueprint families with 19+ specific blueprints under development to support a variety of edge use cases.

Read the full article here.

FierceWireless: Akraino Edge stack emerges from LF Edge to provide framework for 5G, IoT

By Akraino, In the News

LF Edge, an umbrella organization within the Linux Foundation, announced the availability of Akraino Edge Stack Release 1, setting a framework to address 5G, IoT and a range of edge use cases.

LF Edge was first launched in January with a mission to create a unified open source framework for the edge. It started with more than 60 founding members and grew from there. At that time, Akraino Edge Stack was announced as one of its projects.

Arpit Joshipura, general manager, Networking, Automation, Edge and IoT at the foundation, said the Akraino project continues its momentum with operators’ network edge and the enterprise/IoT edge that emanates out of it.

Read the full article here.