Innovations and Key Implementations of Kubernetes for Edge: A summary of Kubernetes on Edge Day at KubeCon Europe 2021


By Sagar Nangare

Kubernetes is a key component in data centers that are modernizing and adopting cloud-native development architectures to deliver applications using containers. Capabilities like orchestrating VMs and containers together make Kubernetes the go-to platform for modern application infrastructure. Telecom operators also use Kubernetes to orchestrate their applications in distributed environments involving many edge nodes.

But because telco networks are large in scale and include disparate cloud systems, Kubernetes adoption requires different architectures for different use cases. Looking specifically at use cases where Kubernetes orchestrates edge workloads, various frameworks and managed public cloud Kubernetes solutions are available, each offering different benefits and giving telecom operators the choice of a best fit.

At the recent Kubernetes on Edge Day sessions at KubeCon Europe 2021, many new use cases of Kubernetes for the edge were discussed, along with showcases of cross-platform integrations that may help enterprises adopt 5G edge and help telecom operators scale it.

 

Here is a high-level overview of some of the key sessions.

The Edge Concept

Different concepts of the edge have been discussed by different communities and technology solution experts. But when Kubernetes comes into the infrastructure, IT operators need to clearly understand the key pillars on which a Kubernetes deployment will seamlessly deliver low-latency performance in telco or private 5G use cases. First, there should be a strong implementation of Kubernetes management at scale. Second, operators need to choose a lightweight Kubernetes-for-edge solution, preferably one certified by the CNCF. And third, a lightweight OS should be deployed at every node, from the cloud to the far edge.

Microsoft’s Akri Project: Microsoft’s Akri project is an innovation that will surely find its way into multiple Kubernetes-based edge implementations. It discovers and monitors far-edge brownfield devices that lack their own compute and therefore cannot join a Kubernetes cluster as nodes. The Akri platform exposes these devices to the Kubernetes cluster as resources.
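The discover-then-expose pattern behind Akri can be sketched in a few lines of plain Python. This is illustrative only: the function and field names below are hypothetical, loosely modeled on Akri's notion of a discovery handler and per-device Instance records, not on its actual API.

```python
# Illustrative sketch of Akri's pattern: a discovery handler finds leaf
# devices that cannot run their own compute, and a controller publishes one
# instance-like record per device so cluster workloads can request them.
# All names here are hypothetical, not Akri's real API.

def discover_onvif_cameras(network):
    """Hypothetical discovery handler: return device URIs found on the network."""
    return [uri for uri in network if uri.startswith("rtsp://")]

def build_instances(config_name, device_uris, capacity=1):
    """Mimic the controller: one instance record per discovered device."""
    return [
        {
            "name": f"{config_name}-{i}",
            "brokerProperties": {"DEVICE_URI": uri},
            "capacity": capacity,  # how many pods may share the device
        }
        for i, uri in enumerate(device_uris)
    ]

network = ["rtsp://10.0.0.7/stream", "http://10.0.0.9", "rtsp://10.0.0.8/stream"]
instances = build_instances("akri-onvif", discover_onvif_cameras(network))
```

The point of the pattern is that the devices themselves never join the cluster; only lightweight records describing them do, and brokers are scheduled against those records.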

AI/ML with TensorFlow: TensorFlow is a machine learning platform that takes inputs to generate insights. It can be deployed on the cloud, on-premises, or on edge nodes where ML operations need to run. In one of the sessions, it was shown that Kubernetes clusters deployed in the cloud and at the edge can host an analytics toolset (Prometheus, EnMasse/MQTT, Apache Camel, AlertManager, Jupyter, etc.) to process ML requests with the lowest latency.

Architectures for Kubernetes on the edge: When deploying Kubernetes at the edge, there are many architecture choices, and they vary per use case. Each architecture poses new challenges. The bottom line is that there is no one-size-fits-all solution: various workloads have different requirements, and IT teams are focusing on the connections between network nodes. The overall architecture therefore needs to accommodate both centralized and distributed control planes.

Robotics: Kubernetes has also been implemented in robotics. Sony engineers showcased how K8s cluster systems can be used for the distributed system integration of robots that perform specific tasks collaboratively.

Laser-based Manufacturing: Another interesting use case, discussed by Moritz Kröger, a researcher at the RWTH Chair for Laser Technology, leverages a Kubernetes-based distributed system. Kubernetes features like automated configuration management and the flexibility to move workloads between clusters give operational benefits to laser manufacturing machines.

OpenYurt + EdgeX Foundry: OpenYurt is yet another open-source framework that extends the orchestration features of upstream Kubernetes to the edge. It was showcased integrating with EdgeX Foundry in 5G IoT edge use cases, where EdgeX Foundry manages the IoT devices and OpenYurt handles the server environments using its plugin set.

Using GitOps: Kubernetes supports cloud-native application orchestration as well as declarative configuration. It is possible to apply the GitOps approach to achieve zero-touch provisioning at multiple edges from a central data center.
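The core of the GitOps idea is a reconcile loop: the desired state lives in Git, and each edge site continuously diffs it against what is actually running and converges. A minimal sketch, with purely illustrative application names and a flat name-to-version model standing in for real manifests:

```python
# Hedged sketch of the GitOps reconcile loop behind zero-touch provisioning.
# `desired` represents state pulled from a Git repository; `actual` is what
# the edge node is currently running. Names and versions are illustrative.

def reconcile(desired, actual):
    """Return the actions needed to make `actual` match `desired`."""
    actions = []
    for name, version in desired.items():
        if name not in actual:
            actions.append(("install", name, version))
        elif actual[name] != version:
            actions.append(("upgrade", name, version))
    for name, version in actual.items():
        if name not in desired:
            actions.append(("remove", name, version))
    return actions

desired = {"sensor-agent": "1.4", "mqtt-broker": "2.0"}  # from the Git repo
actual = {"sensor-agent": "1.3", "legacy-app": "0.9"}    # on the edge node
plan = reconcile(desired, actual)
```

Because the loop is driven entirely by the repository, bringing up a brand-new edge site is just running the same loop against an empty `actual`, which is what makes provisioning "zero touch."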

Hong Kong-Zhuhai-Macao Bridge: Another use case discussed is Kubernetes implemented in edge infrastructure to manage the applications that handle the sensors on the Hong Kong-Zhuhai-Macao Bridge. The use case is unique in that it focuses on how to define the sensor devices on the bridge as CRDs in Kubernetes, how to associate each device with CI/CD, and how to manage and operate the applications deployed on edge nodes.
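Modeling a physical sensor as a custom resource means each device gets a declarative manifest the cluster can version, validate, and reconcile. A hedged sketch of what such per-sensor resources might look like; the API group, kind, and spec fields below are hypothetical, not taken from the talk:

```python
# Illustrative "sensor as custom resource" sketch: each physical sensor on
# the bridge becomes a manifest (shown here as a Python dict). The group
# `bridge.example.com`, kind `BridgeSensor`, and spec fields are invented
# for illustration.

def sensor_manifest(sensor_id, sensor_type, edge_node):
    return {
        "apiVersion": "bridge.example.com/v1",
        "kind": "BridgeSensor",
        "metadata": {"name": sensor_id, "labels": {"edge-node": edge_node}},
        "spec": {"sensorType": sensor_type, "reportIntervalSeconds": 30},
    }

manifests = [
    sensor_manifest("strain-gauge-017", "strain", "tower-east"),
    sensor_manifest("anemometer-002", "wind", "mid-span"),
]
```

Once sensors are resources, the usual Kubernetes machinery applies: a controller watches them, CI/CD pipelines update their manifests, and the label ties each device to the edge node responsible for it.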

Node Feature Discovery: A vast number of end devices can be part of the thousands of edge nodes connected to data centers. Similar to the Akri project, the Node Feature Discovery (NFD) add-on can be used to detect hardware features and advertise them to the Kubernetes cluster, so that workloads can be orchestrated across edge servers as well as cloud systems.
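NFD works by publishing detected hardware features as node labels (under the `feature.node.kubernetes.io/` prefix), which scheduling then matches against. A small sketch of that selection step, with illustrative node data; the exact label keys on a real cluster depend on the NFD version and hardware:

```python
# Sketch of how NFD-advertised labels drive placement: NFD labels each node
# with its detected features, and a nodeSelector-style match picks suitable
# nodes. The node inventory below is illustrative.

PREFIX = "feature.node.kubernetes.io/"

def select_nodes(nodes, required):
    """Return names of nodes whose NFD labels satisfy every requirement."""
    return [
        name
        for name, labels in nodes.items()
        if all(labels.get(PREFIX + key) == val for key, val in required.items())
    ]

nodes = {
    "edge-1": {PREFIX + "cpu-cpuid.AVX2": "true", PREFIX + "kernel-version.major": "5"},
    "edge-2": {PREFIX + "cpu-cpuid.AVX2": "false"},
}
avx_nodes = select_nodes(nodes, {"cpu-cpuid.AVX2": "true"})
```

This is the same mechanism whether the nodes are cloud servers or constrained edge boxes, which is what lets one cluster orchestrate both.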

Kuiper and KubeEdge: EMQ’s Kuiper is open-source data analytics/streaming software that runs on edge devices with low resource requirements. It can integrate with KubeEdge to form a combined solution that pairs KubeEdge’s application orchestration capabilities with streaming analytics. The combined solution delivers low latency, saves bandwidth costs, and eases the implementation of business logic, and operators can manage and deploy Kuiper applications from the cloud.

EdgeX Foundry Reaches Four Years!


In four short years, EdgeX has become the global standard at the IoT edge; this blog examines why, and looks to the future.

By Keith Steele, CEO of IOTech and member of the LF Edge Board and the EdgeX Technical Steering Group.

The emergence of Edge Computing

Back in 2017, “edge computing” was a forecast in a Gartner report that called it “the mother of all infrastructures.” A new computing model, edge computing promised to alter how businesses interact with the physical world.

With edge computing, data from physical devices—whether it be a drone, sensor, robot, HVAC unit, autonomous car, or other intelligent device—is acquired, processed, analyzed, and acted upon by dedicated edge computing platforms. The processed data can be acted upon locally and then sent to the cloud as required for further action and analysis.

Edge computing helps businesses very rapidly and inexpensively store, process, and analyze portions of their data closer to where it is needed, reducing security risks and reaction times, and making it an important complement to cloud computing. It is, however, a complex problem.

As more devices generate more data, existing infrastructure, bandwidth restrictions, and organizational roadblocks can stand in the way of extracting beneficial insights. What is more, there is no one-size solution that fits everyone. Different applications require different types of compute and connectivity and need to meet a variety of compliance and technology standards.

EdgeX – A Strategic Imperative

This inherent complexity at the edge was recognized as a major barrier to market take-up; in the same way that common standards and platforms are applied across most of the IT stack, there was recognition that a common ‘horizontal’ software foundation was needed at the edge.

In June 2017, in Boston, MA, under the auspices of the Linux Foundation, around 60 people from many technology companies gathered from around the world to constitute the EdgeX Foundry open-source project; the attendees had one aim: to create EdgeX Foundry as the global open edge software platform standard!


At the outset, the EdgeX team saw an open edge software platform as a strategic imperative.  EdgeX enables successful edge implementation strategies and makes the IT/OT boundary a key value-add in building new end-to-end IoT systems. Platforms like EdgeX support heterogeneous systems and ‘real time’ performance requirements on both sides of the boundary, promote choice and flexibility, and enable collaboration across multiple vendors.

Four years later, with literally millions of downloads, thousands of deployments across multiple vertical markets, and a truly global ecosystem of technology and commercialization partners, we can justifiably claim to have achieved our goal.

Over the years the project has had some 200 code contributors, from companies as diverse as Dell, Intel, HP, IOTech, VMware, Samsung, Beechwoods, Canonical, Cavium Networks, and Jiangxing Intelligence. Some made small contributions, while others have stayed for the whole journey. This blog is a tribute to all who have contributed to the project’s continued success.

The Importance of Ecosystem

Open-source projects without a global ecosystem of partners to develop, productize, test, and even deploy stand little chance of success.

The EdgeX ecosystem was greatly enhanced in January 2019 when EdgeX, along with project Akraino, became one of two founding projects in LF Edge, which is an umbrella organization created by the Linux Foundation that “aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system.” This project aims to bring together complementary open-source technologies.

EdgeX standalone, pre-2019, was already of great value. However, as a founding project under the umbrella of LF Edge, it proved even more valuable as momentum increased with additional global collaboration. The additional amplification and support across LF Edge projects, community, and members helped turn EdgeX into a truly high-velocity project.

     

EdgeX Fundamentals

There are several fundamental technical and business tenets we set out to address with EdgeX:

Leveraging the power of Edge and Cloud Computing

Our starting point is that edge and cloud complement one another.

Cloud computing and data centers offer excellent support for large data storage and computing workloads that do not demand real-time insights. For example, a company may choose cloud computing to offload long-term data processing and analysis, or resource-intensive workloads like AI training.

However, the latency and potential costs of connections between the cloud and edge devices make cloud computing impractical for some scenarios, particularly those in which enterprises need faster or real-time insights, or in which the edge produces massive amounts of data that should first be filtered to reduce cloud costs. In these cases, an edge computing strategy offers unique value.

Open vs. Proprietary

The idea behind EdgeX was to maximize choice so users did not have to lock themselves into proprietary technologies that, by design, limit choice.

Given the implicit heterogeneity at the edge, ‘open’ at a minimum means the EdgeX platform had to be silicon, hardware, operating system, analytics engine, software application, and cloud agnostic. It would seem odd to embrace IoT diversity at the edge, but then be tied to a single cloud vendor, hardware supplier, application vendor, or chip supplier.

Secure, Pluggable and Extensible Software Architecture

To offer choice and flexibility we chose a modern, distributed, microservices-based software architecture, which we believe supports the inherent complexities at the edge. The other really big thing we defined was a set of open, standard APIs that enable ‘plug and play’ of any software application at the edge. Coming back to ‘product quality’, we also wanted these maintained in a way that changes to the APIs did not mean huge rewrites for application providers.

Edge Software Application ‘Plug and Play’

A key promise of EdgeX is that it provides a standard open framework around which an ecosystem can emerge, offering interoperable plug-and-play software applications and value-add services. This gives users real choice, rather than leaving them to deal with siloed applications that may require huge systems-integration efforts.

EdgeX ensures that any application can be deployed and run on any hardware or operating system, and with any combination of application environments. This enables real flexibility to deliver interoperability between connected devices, applications, and services across a wide range of use cases.

Time-Critical Performance and Scalability

Many of the applications we want to run at the edge, including specialist AI and analytics applications, need access to ‘real-time’ data. These can be very challenging performance constraints, e.g., millisecond or even microsecond response times, often with absolute real-time predictability requirements. Edge systems can also be very large-scale and highly distributed.

The hardware available to run time-critical Edge applications is often highly constrained in terms of memory availability or the need to run at low power. This means edge computing software may need to be highly optimized and have a very small ‘footprint’.

Access to real time data is a fundamental differentiator between the edge and cloud computing worlds. With EdgeX we decided to focus on applications that required round trip response times in the milliseconds rather than microseconds.

Our target operating environments are server- and gateway-class computers running standard Windows or Linux operating systems. We decided to leave it to the ecosystem to address time-critical edge systems, which require ultra-low footprint, microsecond performance, and even hard real-time predictability. (My company, IOTech, has just filled that gap with a product called Edge XRT.) It is important that real-time requirements are understood in full, as the decisions taken can significantly impact the success or failure of edge projects.

Connectivity and Interoperability

A major difference between the edge and cloud is inherent heterogeneity and complexity at the edge. This is best illustrated in relation to connectivity and interoperability requirements, south and northbound:

  • Southbound: The edge is where the IT computer meets the OT ‘thing’, and there is a multitude of ‘things’ with which we will want to communicate, using a range of different connectivity protocols at or close to real time. Many of these ‘things’ are legacy devices deployed with older systems (brownfield). EdgeX provides reference implementations of some key protocols, north- and southbound, along with SDKs that readily allow users to add new protocols where they do not already exist, ensuring acquired data is interoperable despite differences in device protocols. The commercial ecosystem also provides many additional connectors, making connectivity a configuration task rather than a programming task.
  • Northbound: Across industry, we also have multiple cloud and other IT endpoints; therefore, EdgeX provides flexible connectivity to and from these different environments. In fact, many organizations today use multi-cloud approaches to manage risk, take advantage of technology advances, avoid obsolescence, obtain leverage over cloud price increases, and support organizational and supply-chain integration. EdgeX software provides for this choice by being cloud agnostic.
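The southbound story above boils down to adapters: protocol-specific readers normalize raw device data into one common event shape that everything northbound can consume. A minimal sketch, loosely modeled on the idea of EdgeX device connectors; the field names and payload formats here are illustrative, not the actual EdgeX event schema:

```python
# Illustrative device-connector sketch: each protocol-specific reader
# normalizes its raw input into a common reading shape, so northbound
# consumers never see protocol differences. Formats below are invented.

def from_modbus(register_value, scale=0.1):
    """A Modbus-style register holds a scaled integer; convert to engineering units."""
    return {"resource": "temperature", "value": register_value * scale, "units": "C"}

def from_mqtt(payload):
    """An MQTT-style text payload such as 'temperature=21.5'."""
    key, _, val = payload.partition("=")
    return {"resource": key, "value": float(val), "units": "C"}

# two different protocols, one interoperable shape
readings = [from_modbus(215), from_mqtt("temperature=21.5")]
```

Adding a new protocol then means writing one more small reader, which is why the text can describe connectivity as configuration rather than programming.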

How does the EdgeX Vendor Ecosystem deliver customer solutions?

There are many companies offering value-add products and services on top of the baseline open-source product, including mine, IOTech. There are also many examples of live deployments in vertical markets such as manufacturing and process automation, retail, transportation, smart venues and cities, etc. See the EdgeX Adopter Series presentations for some examples.

Where Next for EdgeX?

The EdgeX project goes from strength to strength, with huge momentum behind its V1 release. We will soon release EdgeX 2.0, a major release that includes an all-new and improved API set (eliminating technical debt that has accrued over four years), more message-bus communication between services (replacing REST communication where more quality of service and asynchronous behavior are needed), enhanced security services, and new device/sensor connectors. The EdgeX 2.0 release will also emphasize outreach, including much more focus on users as well as developers. With this release, the community launches the EdgeX Ready program, a means for organizations and individuals to demonstrate their ability to work with EdgeX.

Some Closing Thoughts

The full promise of IoT will be achieved when you combine the power of cloud and edge computing: delivering real value that allows businesses to analyze and act on their data with incredible agility and precision, giving them a critical advantage against their competitors.

The key challenges at the edge related to latency, network bandwidth, reliability, security, and OT heterogeneity cannot be addressed in cloud-only models – the edge needs its own answers.

EdgeX and the LF Edge ecosystem maximize user choice and flexibility and enable effective collaboration across multiple vertical markets at the edge, helping to power the next wave of business transformation. Avoid the risk of getting left behind. To learn more, please visit the EdgeX website and LF Edge website and get involved!

 

Akraino’s Connected Vehicle Blueprint Gets an Update


By Tao Wang

Tencent Future Network Lab, one of Tencent’s laboratories focused on 5G and intelligent transport research, standardization, and commercial rollout, has provided a commercialized 5G Cooperative Vehicle Infrastructure System (CVIS) that relies on road-side units (RSUs), road sensing equipment (such as cameras), and 5G base stations to realize a closed information loop among its connected vehicles. In addition, the edge cloud computing of the system connects to a central cloud (a traditional data center) through public or dedicated network lines (such as 4G/5G, CPE, or carriers’ dedicated lines), which brings large-scale network coverage via 5G base stations, makes up for the lack of network coverage in roadside infrastructure, and leverages vehicle-road-cloud-network integration by using the computing power of the central cloud.

Edge cloud computing infrastructure can improve the data processing capacity of existing roadside infrastructure. With the deployment of an AI processing workstation, the edge cloud infrastructure can analyze all the objects (i.e., traffic participants and events) in the video captured by road-side cameras and other sensor equipment, including vehicles, pedestrians, traffic accidents, etc.

The main functions are as follows:

1) Object recognition and detection: recognizes and detects road traffic participants and events, including vehicles, pedestrians, scattered objects, traffic accidents, etc.

2) Object tracking: tracks road objects accurately, and recognizes and excludes duplicate tracking when multiple cameras capture the same target within an overlapping area.

3) Location, velocity, and direction: locates objects in real time and calculates their speed and direction.

4) GNSS high-precision positioning: based on the original GNSS position measured by the vehicle, the edge cloud can realize high-precision positioning without a special RTK terminal. Additionally, the positioning result can be delivered to connected vehicles and other modules of the edge cloud to determine the scenes that trigger certain CVIS and service functions. Compared to the conventional approach of using GPS as a positioning reference, accuracy is greatly improved without increasing cost.

5) Abnormal scene recognition: recognizes scenes, as shown in the following figures, including emergency braking, parking, abnormal lane changes, left turns, reverse driving, large trucks, vulnerable traffic participants, scattered objects, and lane-level congestion.

visual recognition of emergency braking

visual recognition of vehicle left turn

visual recognition of vulnerable traffic
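Item 3 above (location, velocity, and direction) reduces to simple geodesy: given two timestamped GNSS fixes for the same tracked object, the edge cloud can estimate its speed and heading. A minimal sketch using the haversine distance and forward azimuth (the actual system's method is not described in the post; this is a standard textbook approach):

```python
# Estimate speed and heading from two timestamped GNSS fixes using the
# haversine distance and forward azimuth. Standard geodesy, shown here as
# an illustration of what "calculates its speed and direction" involves.
import math

R = 6_371_000  # mean Earth radius, meters

def speed_and_heading(fix1, fix2, dt_seconds):
    """fix = (lat, lon) in degrees; returns (speed in m/s, heading in degrees from north)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*fix1, *fix2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    # haversine great-circle distance
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    distance = 2 * R * math.asin(math.sqrt(a))
    # forward azimuth (initial bearing)
    y = math.sin(dlon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    heading = math.degrees(math.atan2(y, x)) % 360
    return distance / dt_seconds, heading

# one degree of longitude along the equator, covered in one hour
speed, heading = speed_and_heading((0.0, 0.0), (0.0, 1.0), 3600)
```

At the equator one degree of longitude is roughly 111 km, so this example works out to about 31 m/s due east; in a real deployment the fixes would arrive at sub-second intervals, making the per-step math identical but the distances tiny.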

 

For Tencent’s Cooperative Vehicle Infrastructure System’s open platform for edge cloud and central cloud systems, we built a vehicle-road cooperative edge computing platform with good ecological openness, based on Tencent’s cloud-edge collaborative system, coined TMEC (Tencent MEC). TMEC complies with the latest 5G standards specified by 3GPP, to which Tencent Future Network Lab has contributed more than 400 standards contributions on 5G V2X, MEC, and other technical areas like cloud-based multimedia services since 2018.

Functionality includes:

1) Flexible integration of third-party systems: the platform is completely decoupled from, but friendly to integration with, third-party systems. Within the system, the raw data collected by roadside infrastructure can be delivered directly to third-party systems via transparent forwarding. In addition, the edge cloud and central cloud connect with third-party systems (such as a traffic monitoring center) through specific open data interfaces. The edge cloud can share structured V2X information with third-party systems, while the central cloud can provide a big data interface to third-party systems and also provide big data processing and analysis functions for third-party applications.

2) Seamless connection to 5G/V2X networks: the platform supports connected-vehicle services deployed in the edge cloud seamlessly connecting to a 5G/V2X network, and it can simultaneously adapt V2X messages into a standardized data format for transmission over 5G base stations (Uu interface) and C-V2X RSUs (PC5 interface). Additionally, the platform can support 5G Internet of Vehicles and cloud gaming business in high-speed vehicle scenarios, using the latest 5G network slicing technology, which can be activated based on application and device location. AI-based weak-network detection, MEC resource scheduling for Internet services, and other technologies have been adopted in the platform to achieve seamless 5G/V2X integration. The platform adopts Tencent’s innovative microservice framework to realize service-oriented governance of edge cloud businesses for third parties. The microservice framework has been adopted by 3GPP as one of the four main solutions for 5G core network service architecture evolution (eSBA) (the other three are traditional telecom equipment provider solutions) and has been partially adopted in 3GPP standards TS 23.501 and TS 23.502.

3) Low cost: thanks to the openness of the platform, it can make full use of existing infrastructure and flexibly aggregate information from third-party systems, reducing costs for customers. The current system integrates Intel’s low-cost computing hardware in the edge cloud, which reduces computing power cost by 50% and edge-cloud energy consumption by 60%. In addition, using a cloud-based GNSS high-precision positioning approach instead of a traditional RTK solution reduces the cost of connected-vehicle terminals by about 90%. The solution slightly reduces positioning accuracy but still meets the requirements of lane-level positioning.

To learn more about Akraino’s Connected Vehicle blueprint, visit the wiki here: https://wiki.akraino.org/pages/viewpage.action?pageId=9601601.

Charting the Path to a Successful IT Career


Originally posted on the Linux Foundation Training Blog.

So, you’ve chosen to pursue a career in computer science and information technology – congratulations! Technology careers not only continue to be some of the fastest growing today, but also some of the most lucrative. Unlike many traditional careers, there are multiple paths to becoming a successful IT professional. 

What credentials do I need to start an IT career?

While certain technology careers, such as research and academia, require a computer science degree, most do not. Employers in the tech industry are typically more concerned with ensuring you have the required skills to carry out the responsibilities of a given role. 

What you need is a credential that demonstrates that you possess the practical skills to be successful; independently verifiable certifications are the best way to accomplish this. This is especially true when you are just starting out and do not have prior work experience. 

We recommend the Linux Foundation Certified IT Associate (LFCA) as a starting point. This respected certification demonstrates expertise and skills in fundamental information technology functions, especially in cloud computing. Cloud computing has not traditionally been included in entry-level certifications, but it has become an essential skill regardless of what further specialization you may pursue.

How do I prepare for the LFCA?

The LFCA tests basic knowledge of fundamental IT concepts. It’s good to keep in mind which topics will be covered on the exam so you know how to prepare. The domains tested on the LFCA, and their scoring weight on the exam, are:

  • Linux Fundamentals – 20%
  • System Administration Fundamentals – 20%
  • Cloud Computing Fundamentals – 20%
  • Security Fundamentals – 16%
  • DevOps Fundamentals – 16%
  • Supporting Applications and Developers – 8%

Of course if you are completely new to the industry, no one expects you to be able to pass this exam without spending some time preparing. Linux Foundation Training & Certification offers a range of free resources that can help. These include free online courses covering the topics on the exam, guides, the exam handbook and more. We recommend taking advantage of these and the countless tutorials, video lessons, how-to guides, forums and more available across the internet to build your entry-level IT knowledge. 

I’ve passed the LFCA exam, now what?

Generally, LFCA alone should be sufficient to qualify for many entry-level jobs in the technology industry, such as a junior system administrator, IT support engineer, junior DevOps engineer, and more. It’s not a bad idea to try to jump into the industry at this point and get some experience.

If you’ve already been working in IT for a while, or you want to aim for a higher level position right off the bat, you will want to consider more advanced certifications to help you move up the ladder. Our 2020 Open Source Jobs Report found the majority of hiring managers prioritize candidates with relevant certifications, and 74% are even paying for their own employees to take certification exams, up from 55% only two years earlier, showing how essential these credentials are. 

We’ve developed a roadmap that shows how coupling an LFCA with more advanced certifications can lead to some of the hottest jobs in technology today. Once you have determined your career goal (if you aren’t sure, take our career quiz for inspiration!), this roadmap shows which certifications from across various providers can help you achieve it. 

How many certifications do I really need?

This is a difficult question to answer and really varies depending on the specific job and its roles and responsibilities. No one needs every certification on this roadmap, but you may benefit from holding two or three depending on your goals. Look at job listings, talk to colleagues and others in the industry with more experience, read forums, etc. to learn as much as you can about what has worked for others and what specific jobs or companies may require. 

The most important thing is to set a goal, learn, gain experience, and find ways to demonstrate your abilities. Certifications are one piece of the puzzle and can have a positive impact on your career success when viewed as a component of overall learning and upskilling. 

Want to learn more? See our full certification catalog to dig into what is involved in each Linux Foundation certification, and suggested learning paths to get started!

Telco Cooperation Accelerates 5G and Edge Computing Adoption


Re-posted with permission from section.io

Cooperation among telco providers has been ramping up over the last decade in order to catalyze innovation in the networking industry. A variety of working groups have formed and in doing so, have successfully delivered a host of platforms and solutions that leverage open source software, merchant silicon, and disaggregation, thereby offering a disruptive value proposition to service providers. The open source work being developed by these consortiums is fundamental in accelerating the development of 5G and edge computing.

Telco Working Groups

 

Working groups in the telco space include:

Open Source Networking

The Linux Foundation hosts over twenty open source networking projects in order to “support the momentum of this important sector”, helping service providers and enterprises “redefine how they create their networks and deliver a new generation of services”. Projects supported by Open Source Networking include Akraino (working toward an open source software stack that supports high-availability cloud services built for edge computing systems and applications), FRRouting (an IP routing protocol suite developed for Linux and Unix platforms, which includes protocol daemons for BGP, IS-IS, and OSPF), and the IO Visor Project (which aims to help developers create, innovate, and share IO and networking functions). The ten largest networking vendors worldwide are active members of the Linux Foundation.

LF Networking (LFN)

LFN focuses on seven of the top Linux Foundation networking projects with the goal of increasing “harmonization across platforms, communities, and ecosystems”. The Foundation does so by helping facilitate collaboration and high-quality operational performance across open networking projects. The seven projects, all open to participation (pending acceptance by each group), are:

  • FD.io – The Fast Data Project, focused on data IO speed and efficiency;
  • ONAP – a platform for real-time policy-driven orchestration and automation of physical and virtual network functions (see more below);
  • Open Platform for NFV – OPNFV – a project and community that helps enable NFV;
  • PNDA – a big data analytics platform for services and networks;
  • Streaming Networks Analytics System – a framework that enables the collection, tracking and accessing of tens of millions of routing objects in real time;
  • OpenDaylight Project – an open, extensible and scalable platform built for SDN deployments;
  • Tungsten Fabric Project – a network virtualization solution focused on connectivity and security for virtual, bare-metal or containerized workloads.

ONAP

One of the most significant LFN projects is the Open Network Automation Platform (ONAP). Fifty of the world’s largest tech providers, network operators, and cloud operators make up ONAP, together representing more than 70% of worldwide mobile subscribers. The goal of the open source software platform is to help facilitate the orchestration and management of large-scale network workloads and services based on Software Defined Networking (SDN) and Network Functions Virtualization (NFV).

Open Networking Foundation (ONF)

The Open Networking Foundation is an independent operator-led consortium, which hosts a variety of projects with its central goal being “to catalyze a transformation of the networking industry” via the promotion of networking through SDN and standardizing the OpenFlow protocol and related tech. Its hosted projects focus on white box economics, network disaggregation, and open source software. ONF recently expanded its reach to include an ecosystem focus on the adoption and deployment of disruptive technologies. The ONF works closely with over 150 member companies; its Board includes members of AT&T, China Unicom, and Google. It merged with the ON.Lab in 2016. The group has released open source components and integrated platforms built from those components, and is working with providers on field trials of these technologies and platforms. Guru Parulkar, Executive Director of ONF, gave a really great overview of the entire ONF suite during his keynote at the Open Networking Summit.

Open Mobile Evolved Core (OMEC)

One of the most notable projects coming out of ONF is the Open Mobile Evolved Core (OMEC). OMEC is the first open source Evolved Packet Core (EPC) that has a full range of features, and is scalable and capable of high performance. The EPC connects mobile subscribers to the infrastructure of a specific carrier and the Internet.

Major service providers, such as AT&T, SK Telecom, and Verizon, have already shown support for CORD (Central Office Re-architected as a Datacenter) which combines NFV, SDN, and the elasticity of commodity clouds to bring datacenter economics and cloud agility to the Telco Central Office. This allows the operator to manage their Central Offices using declarative modeling languages for agile, real-time configuration of new customer services. OMEC has been specifically developed to cope with the “ensuing onslaught of devices coming online as part of the move to 5G and IoT”. It can be used either as a stand-alone EPC or via the COMAC platform.

A Shift Toward Container Network Functions

Network functions virtualization (NFV) has been an important factor in enabling the move towards a 5G future by helping to virtualize the network’s different appliances, including firewalls, load balancers, and routers. Specifically, it looks to enable network slicing, allowing multiple virtual networks to be created atop a shared physical infrastructure, empowering numerous use cases and new services. NFV has also been seen as important in the enabling of the distributed cloud, aiding in the creation of programmable, flexible networks for 5G.

Over the last couple of years, however, there has been a shift towards cloud-native network functions (CNF) and containerization as a lighter-weight alternative to NFV software. This follows a decision made by ONAP and Kubernetes working groups within The Linux Foundation to work together on a CNF architecture.

According to a report in Container Journal, “Arpit Joshipura, general manager for networking at The Linux Foundation, says that while telecommunications providers have been making extensive investments in NFV software based on virtual machines, it’s become apparent to many that containers offer a lighter-weight approach to virtualizing network services.”

Joshipura says CNFs offer greater portability and scalability and many organizations are beginning to load VNFs in Docker containers or deploy them using Kubernetes technologies like KubeVirt to make them more portable.
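As a concrete illustration of that approach, a minimal KubeVirt `VirtualMachine` manifest can wrap a VNF's virtual machine image so Kubernetes schedules it alongside containers. This is a sketch only; the VNF name and container disk image below are hypothetical placeholders.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vnf                 # hypothetical VNF name
spec:
  running: true
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi             # sized for the VNF workload
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: registry.example.com/vnf-image:latest  # hypothetical image
```

Once applied with `kubectl apply -f`, the VM is lifecycle-managed by Kubernetes like any other workload, which is the portability Joshipura describes.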

This has gone even further recently, according to ONF, with telecom operators increasingly adopting cloud-native solutions in their data centers and moving their operations away from traditional Central Office (CO). In a recent blog post, ONF says, “3GPP standardized 5G core will be cloud-native at start, which means telcos should be able to run and operate Containerized Network Functions (CNF) in their CO in the near future.” At a recent ONOS/CORD/K8s tech seminar in Seoul, eight telcos, vendors and research institutions demonstrated this in action, sharing stories about their containerization journeys.

Kubernetes Enabling 5G

News out of AT&T in February has led to industry speculation that Kubernetes is now set to become the de facto operating system for 5G. The carrier said it would be working in conjunction with open source infrastructure developer Mirantis on Kubernetes-controlled infrastructure to underlie its 5G network.

In an interview with Telecompetitor, Mirantis Co-Founder and Chief Marketing Officer Boris Renski said he believes that other carriers will adopt a similar approach as they deploy 5G, and described Kubernetes as the next logical step after cloud computing and virtual machines. Large companies like Google need to run applications such as search across many servers, and Kubernetes enables exactly that.

Renski said that Kubernetes is becoming the go-to standard for telco companies looking to implement mobile packet core and functionality like VNFs because it is open source. A further key appeal of Kubernetes and containerization is that it offers the potential for a flexible, affordable and streamlined backend, essential factors to ensuring scalable and deployable 5G economics.

There are still a number of areas to be worked out as the shift to Kubernetes happens, as pointed out by AvidThink’s Roy Chua in Fierce Wireless recently. Network functions will need to be rethought to fit the architecture of microservices, for instance, and vendors will have to explore complex questions, such as how to best deploy microservices with a Kubernetes scheduler to enable orchestration and auto-scaling of a range of services.
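The auto-scaling piece Chua mentions is already standard Kubernetes machinery. As a sketch, a HorizontalPodAutoscaler can scale a containerized network function on CPU load; the deployment name here is a hypothetical stand-in for a real CNF.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: upf-autoscaler              # hypothetical name for a user-plane function
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: upf                       # hypothetical CNF deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # add pods above 70% average CPU
```

The open question for vendors is less the mechanism itself than how to decompose network functions so that scaling a pod is actually meaningful.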

Democratizing the Network Edge

Nonetheless, the significant amounts of cooperation between telcos is indeed helping accelerate the move towards 5G and edge computing. The meeting point of cloud and access technologies at the access-edge offers plenty of opportunities for developers to innovate in the space. The softwarization and virtualization of the access network democratizes who can gain access – anyone from smart cities to manufacturing plants to rural communities. Simply by establishing an access-edge cloud and connecting it to the public Internet, the access-edge can open up new environments, and in doing so enable a whole new wave of innovation from cutting-edge developers.

There are several reasons to be hopeful about the movement towards this democratization: first, there is demand for 5G and edge computing – enterprises in the warehouse, factory, and automotive space, for instance, increasingly want to use private 5G networks for physical automation use cases; secondly, as we have looked at here, the open-sourcing of hardware and software for the access network offers the same value to anyone and is a key enabler of 5G and edge computing for the telcos, cable and cloud companies; finally, spectrum is becoming increasingly available, with countries worldwide starting to make unlicensed or lightly licensed spectrum available for 5G deployments. The U.S. and Germany are leading the way.

How Edge computing enables Artificial Intelligence

By Blog

 

By Utpal Mangla (VP & Senior Partner; Global Leader: IBM’s Telecom Media Entertainment Industry Center of Competence), 
Ashek Mahmood (IoT Edge AI – Partner & Practice Lead, IBM), and Luca Marchi (Associate Partner – Innovation Leader – IBM’s Telecom Media Entertainment Industry Center of Competence)

Edge computing as an enabler

Artificial Intelligence is the engine of the most recent changes in business. Enterprises are infusing AI into their processes to digitally transform the way they operate.

Operations-intensive industries are transforming with AI-powered IoT applications running at the edge of physical environments. For example, the number of connected, intelligent devices already runs into the billions, generating more data every day. These devices are, or soon will be, connected via 5G, enabling them to share enriched data like videos or live streams.

These potential use cases are raising some concerns in terms of latency, network bandwidth, privacy and user experience. Edge computing, as an enabler of dynamic delivery of computing power, is an answer to these concerns.

Edge is faster, allowing near-instantaneous computation even in the absence of 5G. Low latency is fundamental in all mission-critical use cases, such as self-driving cars, remote healthcare and certain manufacturing applications. AI alone would not be sufficient without the ability to make a decision in a matter of milliseconds. Edge also improves the efficiency of some use cases: local computation reduces the load on network bandwidth, generating savings for the enterprise.

Beyond efficiency, edge gives architects the flexibility to deploy computational power and AI where they make the most sense on the network: for example, directly on an oil rig to support a worker-safety use case.

Finally, edge computing strengthens data security: deploying computational capabilities at a specific location on the edge of the network makes it possible to leverage data that must not leave that location and cannot be processed on a public cloud.

Applying edge computing to mission critical situations

One good example of an edge application is the monitoring of public venues, for instance a train station. A train station is a complex and critical piece of infrastructure: a large, densely crowded area where security is paramount. Edge computing allows the implementation of an innovative protection system based on real-time video analysis and data processing, designed to recognize specific situations such as unattended objects, people lying on the ground, or abnormal flows of people. This information is critical for triggering a prompt reaction from the authorities in case of an incident or when people need help.

The use case entails 24/7 video streams from several cameras spread across the station. Centralizing the AI models in the cloud would prove very inefficient: streaming and storing the videos would be very expensive. Edge computing, instead, enables the deployment of the visual recognition machine learning model on a network node in the station: this way the data never leaves the station, the need for storage and bandwidth is reduced, and the use case is financially viable. Also, keeping the AI models at the edge reduces latency and speeds up the alert and reaction processes.
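The bandwidth argument can be sketched in a few lines of Python. The "detector" below is a trivial stand-in for a real vision model; the point is simply that only flagged alert metadata crosses the network, while the raw video stays in the station.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: str
    size_bytes: int
    suspicious: bool  # stand-in for a real model's verdict

def edge_filter(frames):
    """Run detection at the edge; forward only alert metadata upstream."""
    alerts = [f"{f.camera_id}: alert" for f in frames if f.suspicious]
    raw_bytes = sum(f.size_bytes for f in frames)    # what the cloud would ingest
    sent_bytes = sum(len(a.encode()) for a in alerts)  # what actually leaves the edge
    return alerts, raw_bytes, sent_bytes

# 100 frames of ~500 KB each, one of which trips the detector
frames = [Frame("cam-01", 500_000, False) for _ in range(99)]
frames.append(Frame("cam-02", 500_000, True))

alerts, raw, sent = edge_filter(frames)
print(len(alerts), raw, sent)  # one alert; ~50 MB of video never leaves the station
```

In a real deployment the filter would be a trained model served on the station's edge node, but the economics are the same: alerts are bytes, streams are gigabytes.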

Continuing with the theme of the rail industry, safety inspection for trains is being automated with AI-powered vision systems running at the edge of the railway tracks. When a train passes through a portal of cameras, it can generate 10,000+ images capturing a 360° view of the cars, and there can be 200+ visual inspection use cases. Complex cases may require object detection, measurement, image stitching and augmentation through deep learning neural networks. An intelligent edge solution for this case will have the AI systems connected in close proximity to the cameras and will require a full-stack technology architecture to optimize data- and compute-intensive workloads.

The image below gives a high-level view of the architecture components that come into play here.

An integrated pipeline of edge AI applications is required to run autonomous aggregation and scheduling of these AI models. Hardware acceleration and GPU/CPU optimization practices are often required to achieve the desired performance and throughput. Different technology solutions will provide distinct competitive advantages; this is as true in the world of AI at the edge as it is in high-precision computing and transaction processing. The future is one of heterogeneous compute architectures, orchestrated in an ecosystem based on open standards, helping to unleash the potential of data and advanced computation to create immense economic value.

A step forward

A robust edge computing architecture will develop and support a hybrid AI Ops architecture to manage workloads running on heterogeneous systems of edge, core, and cloud nodes. These capabilities will allow artificial intelligence to take a step forward towards adoption in mission-critical applications. These next-generation use cases will strengthen the partnership between humans and AI towards a sustainable future.

Authors

Utpal Mangla: VP & Senior Partner; Global Leader: IBM’s Telecom Media Entertainment Industry Center of Competence
Ashek Mahmood: IoT Edge AI – Partner & Practice Lead, IBM
Luca Marchi: Associate Partner – Innovation Leader – IBM’s Telecom Media Entertainment Industry Center of Competence

 

How Do We Prevent Breaches Like Verkada, Mirai, Stuxnet, and Beyond? It Starts with Zero Trust.

By Blog, Project EVE

By Jason Shepherd, LF Edge Governing Board Chair and VP of Ecosystem at ZEDEDA

This blog originally ran on the ZEDEDA Medium blog. Click here for more content like this.

News of the Verkada hack that breached 150,000 surveillance cameras sent shockwaves through the security world last week. For those of us in the edge computing industry, it simply underscored what we already knew: securing distributed devices is hard. As intelligent systems increasingly expand into the field, we’ll see more and more attacks of this sort if we continue to leverage the same security stance and tools that we’ve used for decades within perimetered locations like data centers and secure telecommunication facilities.

In this article, I will go through a few of the widely-cited edge breaches in the industry and highlight how a zero trust security model optimized for the unique challenges of the edge, such as employed by ZEDEDA, would have helped prevent them from happening.

In a hack reminiscent of Verkada, the 2016 Mirai virus infected millions of cameras and turned them into bots that together launched a massive DDoS attack on upstream networks, briefly taking down the internet in the northeastern US. Fewer than twenty username and password combinations unlocked all of those cameras, because the developers made it too easy to skip changing the default credentials, or impossible to change them at all. Often this was due to prioritizing usability and instant gratification for users over security.

Another commonly-cited example is the massive Target data breach in 2014, in which attackers accessed millions of customers’ credit card information by way of a networked HVAC system. The hackers stole credentials from an HVAC contractor and were able to access the payment system because the operations network the HVAC system was on wasn’t properly segmented from the IT network.

In a final example, the 2010 Stuxnet breach involved malware that was loaded into process control systems by using a USB flash drive to bypass the network air gap. The worm then propagated across the internal process control network, scanning for Siemens S7 software on industrial PCs. When successful, the virus would send unexpected commands to PLCs controlling industrial processes while giving the operators a view of normal operation.

Viruses like Stuxnet that focus on compromising industrial systems are especially concerning because attacks can lead to immediate loss of production, or worse, loss of life. This is in contrast to breaches of IT systems, which typically play out over long periods of time, with compromises to privacy, financial data and IP.

With these examples in mind, what is unique about the proverbial edge that makes security such a challenge?

  • Part of the value of IoT and edge computing comes from having devices connected across the organization, providing a holistic view of operations. Over time we will see edge device deployments grow into the trillions, and traditional data center solutions for security and management are not designed for this kind of scale.
  • Distributed edge computing resources rarely have the defense of four physical walls or a traditional network perimeter. This requires a security approach that assumes that these resources can be physically tampered with and doesn’t depend upon an owned network for defense.
  • The edge is at the convergence of the physical and digital worlds. In addition to a highly heterogeneous landscape of technologies, we also have to account for diverse skill sets spanning IT and OT (e.g. network and security admins, DevOps, production, quality and maintenance engineers, data scientists).
  • As OT and IT converge at the edge, each organization’s often conflicting priorities must be considered. While OT typically cares about uptime and safety, IT prioritizes data security, privacy and governance. Security solutions must balance these priorities in order to be successful.
  • Many IoT devices are too resource-constrained to host security measures such as encryption, plus the broad install base of legacy systems in the field was never intended to be connected to broader networks, let alone the internet. Because of these limitations, these devices and systems need to rely on more capable compute nodes immediately upstream to serve as the first line of defense, providing functions such as root of trust and encryption.

The Verkada hack and its predecessors make it clear that edge computing requires a zero trust architecture that addresses the unique security requirements of the edge. Zero trust begins with a basic tenet: trust no one and verify everything.

At ZEDEDA, we have built an industry-leading zero trust security model into our orchestration solution for distributed edge computing. This robust security architecture is manifested in our easy-to-use, cloud-based UI and is rooted in the open source EVE-OS which is purpose-built for secure edge computing.

ZEDEDA contributed EVE-OS to form Project EVE in 2019 as a founding member of the Linux Foundation’s LF Edge organization, with the goal of delivering an open source, vendor-agnostic and standardized foundation for hosting distributed edge computing workloads. EVE-OS is a lightweight, secure, open, universal and Linux-based distributed edge operating system with open, vendor-neutral APIs for remote lifecycle management. The solution can run on any hardware (e.g., x86, Arm, GPU) and leverages different hypervisors and container runtimes to ensure policy-based isolation between applications, host hardware, and networks. The Project EVE community is now over 60 unique developers, with more than half not being affiliated with ZEDEDA.

EVE-OS Zero Trust Components

Let’s take a look at the individual components of the EVE-OS zero trust security framework.

  • EVE-OS leverages the cryptographic identity created in the factory or supply chain in the form of a private key generated in a hardware security module (e.g., TPM chip). This identity never leaves that chip and the root of trust is also used to store additional keys (e.g., for an application stack such as Azure IoT Edge). In turn, the public key is stored in the remote console (e.g., ZEDCloud, in the case of ZEDEDA).
  • An edge compute node running EVE-OS leverages its silicon-based trust anchor (e.g., TPM) for identity and communicates directly with the remote controller to verify itself. This eliminates having a username and password for each edge device in the field, instead all access is governed through role-based access control (RBAC) in a centralized console. Hackers with physical access to an edge computing node have no way of logging into the device locally.
  • EVE-OS has granular, software-defined networking controls built in, enabling admins to govern traffic between applications, compute resources, and other network resources based on policy. The distributed firewall can be used to govern communication between applications on an edge node and on-prem and cloud systems, and detect any abnormal patterns in network traffic. As a bare metal solution, EVE-OS also provides admins with the ability to remotely block unused I/O ports on edge devices such as USB, Ethernet and serial. Combined with there being no local login credentials, this physical port blocking provides an effective measure against insider attacks leveraging USB sticks.
  • All of these tools are implemented in a curated, layered fashion to establish defense in depth with considerations for people, process, and technology.
  • All features within EVE-OS are exposed through an open, vendor-neutral API that is accessed remotely through the user’s console of choice. Edge nodes block unsolicited inbound instruction, instead reaching out to their centralized management console at scheduled intervals and establishing a secure connection before implementing any updates.
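The pull-based update pattern in that last point can be sketched generically in Python. This is an illustration of the outbound-only polling idea, not the actual EVE-OS API; the fetch/apply callables and payload shape are hypothetical.

```python
import time

def poll_controller(fetch, apply, interval_s=60, max_cycles=3):
    """Outbound-only update loop: the node initiates every connection,
    so no inbound port is ever left open for an attacker to probe."""
    for _ in range(max_cycles):
        desired = fetch()          # node -> controller (e.g., over mutual TLS)
        if desired is not None:
            apply(desired)         # converge to the controller's desired state
        time.sleep(interval_s)

# Stubbed controller responses and node state, for illustration only
updates = [None, {"app": "camera-os", "version": "2.1"}, None]
applied = []
poll_controller(fetch=lambda: updates.pop(0) if updates else None,
                apply=applied.append,
                interval_s=0)
print(applied)  # the one scheduled update was pulled and applied
```

The design choice worth noting is the direction of the connection: because the node polls, a compromised network segment cannot push instructions to it.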

Returning to the above examples of security breaches, what would the impact of these attacks have looked like if the systems were running on top of EVE-OS? In short, there would have been multiple opportunities for the breaches to be prevented, or at least discovered and mitigated immediately.

  • In the Verkada and Mirai examples, the entry point would have had to be the camera operating system itself, running in isolation on top of EVE-OS. However, this would not have been possible because EVE-OS itself has no direct login capabilities. The same benefit would have applied in the Target example, and in the case of Stuxnet, admins could have locked down USB ports on local industrial PCs to prevent a physical insider attack.
  • In all of these example attacks, the distributed firewall within EVE-OS would have limited the communications of applications, intercepting any attempts of compromised devices to communicate with any systems not explicitly allowed. Further, edge computing nodes running EVE-OS deployed immediately upstream of the target devices would have provided additional segmentation and protection.
  • EVE-OS would have provided detailed logs of all of the hackers’ activities. It’s unlikely that the hackers would have realized that the operating system they breached was actually virtualized on top of EVE-OS.
  • Security policies established within the controller and enforced locally by EVE-OS would have detected unusual behavior by each of these devices at the source and immediately cordoned them off from the rest of the network, preventing hackers from inflicting further damage.
  • Centralized management from any dashboard means that updates to applications and their operating systems (e.g. a camera OS) could have been deployed to an entire fleet of edge hardware powered by EVE-OS with a single click. Meanwhile, any hacked application operating system running above EVE-OS would be preserved for subsequent forensic analysis by the developer.
  • The zero-trust approach, comprehensive security policies, and immediate notifications would have drastically limited the scope and damage of each of these breaches, preserving the company’s brand in addition to mitigating direct effects of the attack.

The diverse technologies and expertise required to deploy and maintain edge computing solutions can make security especially daunting. The shared technology investment of developing EVE-OS through vendor-neutral open source collaboration is important because it provides a high level of transparency and creates an open anchor point around which to build an ecosystem of hardware, software and services experts. The open, vendor neutral API within EVE-OS prevents lock-in and enables anyone to build their own controller. In this regard, you can think of EVE-OS as the “Android of the Edge”.

ZEDEDA’s open edge ecosystem is unified by EVE-OS and enables users with choice of hardware and applications for data management and analytics, in addition to partner applications for additional security tools such as SD-WAN, virtual firewalls and protocol-specific threat detection. These additional tools augment the zero trust security features within EVE-OS and are easily deployed on edge nodes through our built-in app marketplace based on need.

Finally, in the process of addressing all potential threat vectors, it’s important to not make security procedures so cumbersome that users try to bypass key protections, or refuse to use the connected solution at all. Security usability is especially critical in IoT and edge computing due to the highly diverse skill sets in the field. In one example, while developers of the many smart cameras that have been hacked in the field made it easy for users to bypass the password change for instant gratification, EVE-OS provides similar zero touch usability without security compromise by automating the creation of a silicon-based digital ID during onboarding.

Our solution is architected to streamline usability throughout the lifecycle of deploying and orchestrating distributed edge computing solutions so users have the confidence to connect and can focus on their business. While the attack surface for the massive 2020 SolarWinds hack was the centralized IT data center vs. the edge, it’s a clear example of the importance of having an open, transparent foundation that enables you to understand how a complex supply chain is accessing your network.

At ZEDEDA, we believe that security at the distributed edge begins with a zero-trust foundation, a high degree of usability, and open collaboration. We are working with the industry to take the guesswork out so customers can securely orchestrate their edge computing deployments with choice of hardware, applications, and clouds, with limited IT knowledge required. Our goal is to enable users to adopt distributed edge computing to drive new experiences and improve business outcomes, without taking on unnecessary risk.

To learn more about our comprehensive, zero-trust approach, download ZEDEDA’s security white paper. And to continue the conversation, join us on LinkedIn.

The Linux Foundation Supports Asian Communities

By Blog, LF Edge, Linux Foundation News

The Linux Foundation and its communities are deeply concerned about the rise in attacks against Asian Americans and condemn this violence. It is devastating to hear over and over again of the attacks and vitriol against Asian communities, which have increased substantially during the pandemic.

We stand in support with all those that have experienced this hate, and to the families of those who have been killed as a result. Racism, intolerance and inequality have no place in the world, our country, the tech industry or in open source communities.

We firmly believe that we are all at our best when we work together, treat each other with respect and equality and without hate or vitriol.

This blog originally ran on the Linux Foundation website.

Nominations are Open for the Annual EdgeX Community Awards!

By Blog, EdgeX Foundry

Written by Aaron Williams, LF Edge Developer Advocate

It’s our favorite time of the year, when we take a step back and reflect on what we have accomplished and honor the members of our community who have gone above and beyond to support EdgeX Foundry.  We are pleased to announce that nominations are open for the 4th Annual EdgeX Community Awards.

Milestones

Back in 2018 when we started the awards, we had about 1,000 Docker downloads and about 20 contributors.  Today, we are well over 7 million downloads and have 4 times the number of contributors.

In 2020, we expanded into Asia, especially China, as demonstrated by the amazing turnout at EdgeX Foundry China Day 2020.  We also kicked off the EdgeX Foundry Adopter Series, where the community learns more about how companies around the world use EdgeX.  In addition to two more successful releases, we also announced the creation of the Open Retail Reference Architecture (ORRA) – a project to deliver a base edge foundation to the retail industry – with a presentation from IBM, Intel, and HP.  There were many more events and milestones (like debuting our brand-new website) that we could dive into, but we want to focus on the opening of nominations for the EdgeX Awards!

How to Participate

If you are already a member of the EdgeX Foundry community, please take a moment and visit our EdgeX Awards page and nominate the community member that you feel best exemplified innovation and/or contributed leadership and helped EdgeX Foundry advance and grow over the last year.  Nominations are being accepted through April 7, 2021.

The Awards

Innovation Award:  The Innovation Award recognizes individuals who have contributed the most innovative solution to the project over the past year.

Previous Winners:

2018- Drasko Draskovic (Mainflux) and Tony Espy (Canonical)

2019- Trevor Conn (Dell) and Cloud Tsai (IOTech)

2020- Michael Estrin (Dell Technologies) and James Gregg (Intel)

Contribution Award:  The Contribution Award recognizes individuals who have contributed leadership and helped EdgeX Foundry advance and continue momentum in the past year. This includes individuals who have led work groups, special projects, and/or contributed large amounts of code, bug fixes, marketing material, documentation, or otherwise significantly advanced the efforts of EdgeX with their drive and leadership.

Previous Winners:

2018- Andy Foster (IOTech) and Drasko Draskovic (Mainflux)

2019- Michael Johanson (Intel) and Lenny Goodell (Intel)

2020- Bryon Nevis (Intel) and Lisa Ranjbar (Intel)

From top left to right: Keith Steele, Maemalynn Meanor, Bryon Nevis, Michael Estrin, Lisa Rashidi-Ranjbar, James Gregg, Aaron Williams, Jim White

To learn more about EdgeX Foundry, visit our website or join our Slack to ask questions and engage with community members. If you are not already a member of our community, it is really easy to join: simply visit our wiki page and/or check out our GitHub.  You can also follow us on Twitter and LinkedIn.

 

Open Horizon Begins Second Mentorship Term, Looks Back on the First

By Blog, Open Horizon

Written by Joe Pearson, Open Horizon Chair of the Technical Steering Committee and Edge Computing and Technology Strategist at IBM

This spring, the open-source Open Horizon project continued working with the LFX Mentorship Program.  As part of the LF Edge umbrella organization, Open Horizon had access to both the Mentorship Program and the COVID-related funding provided by the Linux Foundation and Intel, which paid stipends to participating mentees.

Similar to our previous experience last Fall, we had a wealth of potential applicants to choose from.  Thanks in large part to being featured on the LFX Mentoring home page, our applicant pool grew from 30 last term to 43 this term.  This time, the range of educational experience from candidates was even wider – from sophomores to doctoral candidates.  The wealth of experience was much greater this term, and the drive and determination was incredible.  Our methods of choosing the top candidates last time narrowed the pool from 30 to 12, but this time only took us from 43 to 30.  After conducting interviews and matching applicant experience to our mentorship opportunities, we selected the final four candidates.

Two mentors returned for the Spring term, and we added two new mentors.  Returning were:

  • David Booz (Dave) – Chief Architect and one of the project founders, Dave is also Chair of the Agent Working Group and a member over the last six years. Dave understands the complete breadth and depth of the project code.
  • Liang Wang (Los) – Senior Technical Staff Member (STSM), Architect, and Chair of the China Manufacturing Special Interest Group (SIG). Los is creating a community around manufacturing and Industry 4.0 use cases.

The two new mentors were:

Dave and Bruce decided to collaborate and provide a challenging mentorship opportunity to the mentees by providing a linked project to work on.  One mentee would collaborate with the Agent Working Group and one with the Management Hub Working Group.  Together they would add the ability to support shared secrets.  On the Agent side, Dave chose:

  • Debabrata Mandal (Deb) – Deb is a final year student at IIT Bombay in Mumbai. Deb described his current work, saying: “I am currently working on an issue … related to fetching the logging information from the specified service using the service url. While solving this issue I am getting a more detailed idea of the working of the agent component.”

Bruce wanted to work with:

  • Megha Varshney – Megha is a final year undergraduate at Indira Gandhi Delhi Technical University for Women. She is: “… currently working on [an] issue wherein I will be adding an API which would provide org specific status info.”

Los cooked up a challenge this year by adding support for the RISC-V microarchitecture to the Open Horizon project.  For that effort, he selected:

  • Quang Hiệp Mai (Mike) – Mike is in the second year of his master’s degree program at Soongsil University in South Korea. He said: “Since RISC-V is blooming in China, supporting RISC-V arch for the agent device would attract more users in China market. So first I will port some of the examples to RISC-V arch then I will help to port Anax to RISC-V”

And Ben wanted to work with a mentee to evaluate and choose an appropriate technology and subsequently to migrate pipelines to that choice.  He wanted to work with:

  • Mustafa Al-tekreeti – Mustafa is a Ph.D. educator transitioning to a career in software engineering. He explained: “I am working on developing a CI/CD pipeline for Anax, one of the Open-Horizon projects, using one of the state-of-art technologies. First, we evaluate three of the available options (Circle-CI, GitHub Actions, and Jenkins Job Builder hosted by LF Edge) and summarize their pros and cons. Then, we implement the pipeline using the chosen technology.”

We’re now three weeks into the Spring 2021 term and beginning to make plans for a potential Summer 2021 term, which would open for applicants beginning April 15th.  In the meantime, we recently recorded a webinar with the graduates from our Fall 2020 term.  It was a great opportunity to get everyone together on a call and discuss what we accomplished and learned and to provide helpful advice to future applicants.