Monthly Archives

April 2021

Akraino’s Connected Vehicle Blueprint Gets an Update

By Akraino, Blog

By Tao Wang

Tencent Future Network Lab, one of Tencent’s laboratories focused on 5G and intelligent transport research, standardization, and commercial rollout, has delivered a commercial 5G Cooperative Vehicle Infrastructure System (CVIS). The system relies on road-side units (RSUs), road sensing equipment (such as cameras), and 5G base stations to close the information loop among its connected vehicles. In addition, the system’s edge cloud is connected to a central cloud (a traditional data center) through public or dedicated network lines (such as 4G/5G, CPE, or carriers’ leased lines). This brings large-scale network coverage via 5G base stations, makes up for gaps in roadside network coverage, and enables vehicle-road-cloud-network integration by drawing on the computing power of the central cloud.

Edge cloud computing infrastructure can improve the data processing capacity of existing roadside infrastructure. With an AI processing workstation deployed, the edge cloud can analyze all objects and events, i.e. traffic participants such as vehicles and pedestrians, traffic accidents, and so on, in the video captured by road-side cameras and other sensor equipment.

The main functions are as follows:

1) Object recognition and detection: recognizes and detects road traffic participants and events, including vehicles, pedestrians, scattered objects, traffic accidents, etc.

2) Object tracking: tracks road objects accurately, and recognizes and excludes duplicate tracks when multiple cameras capture the same target in an overlapping area.

3) Location, velocity and direction: locates objects in real time and calculates their speed and direction.

4) GNSS high-precision positioning: based on the original GNSS position measured by the vehicle, the edge cloud can achieve high-precision positioning without a dedicated RTK terminal. The positioning result can also be delivered to connected vehicles and to other modules of the edge cloud to determine the scenes that trigger particular CVIS and service functions. Compared to the conventional approach of using GPS alone as the positioning reference, accuracy is greatly improved without increasing cost.

5) Abnormal scene recognition: recognizes scenes such as those shown in the following figures, including emergency braking, parking, abnormal lane changes, left turns, reverse driving, large trucks, vulnerable traffic participants, scattered objects, lane-level congestion, etc.

visual recognition of emergency braking

visual recognition of vehicle left turn

visual recognition of vulnerable traffic participants
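The location, velocity and direction function above reduces to simple spherical geometry once an object has two timestamped track points. The following is a minimal stdlib-Python sketch; the two-point differencing approach and the function names are illustrative, not Tencent’s actual implementation:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 points."""
    R = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def speed_and_heading(p_prev, p_curr):
    """Estimate speed (m/s) and heading (degrees clockwise from north)
    from two (timestamp_s, lat, lon) track points."""
    t1, lat1, lon1 = p_prev
    t2, lat2, lon2 = p_curr
    dist = haversine_m(lat1, lon1, lat2, lon2)
    speed = dist / (t2 - t1)
    # Initial bearing from the previous point to the current one
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(math.radians(lat2))
    x = (math.cos(math.radians(lat1)) * math.sin(math.radians(lat2))
         - math.sin(math.radians(lat1)) * math.cos(math.radians(lat2)) * math.cos(dl))
    heading = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return speed, heading
```

A production tracker would smooth over many points (for example with a Kalman filter) rather than difference a single pair, which is noisy with raw GNSS fixes.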

 

Tencent’s Cooperative Vehicle Infrastructure System provides an open platform for edge cloud and central cloud systems. On it, we built a vehicle-road cooperative edge computing platform with strong ecosystem openness, based on Tencent’s cloud-edge collaboration system, TMEC (Tencent MEC). TMEC complies with the latest 5G standards specified by 3GPP; since 2018, Tencent Future Network Lab has made more than 400 standards contributions on 5G V2X, MEC, and other technical areas such as cloud-based multimedia services.

Functionality includes:

1) Flexible integration of third-party systems: the platform is completely decoupled from, yet friendly to integration with, third-party systems. Raw data collected by roadside infrastructure can be transparently forwarded to third-party systems. In addition, the edge cloud and central cloud connect with third-party systems (such as a traffic monitoring center) through well-defined open data interfaces. The edge cloud can share structured V2X information with third-party systems, while the central cloud can expose a big data interface and provide big data processing and analysis functions for third-party applications.

2) Seamless connection to 5G/V2X networks: the platform lets connected-vehicle services deployed on the edge cloud connect seamlessly to a 5G/V2X network, and it can simultaneously adapt V2X messages into a standardized data format for transmission over 5G base stations (the Uu interface) and C-V2X RSUs (the PC5 interface). The platform can also support 5G connected-vehicle and cloud-gaming services in high-speed vehicle scenarios, using the latest 5G network slicing technology, which can be activated per application and device location. AI-based weak-network detection and MEC resource scheduling for internet services have also been adopted in the platform to achieve seamless 5G/V2X integration. The platform adopts Tencent’s microservice framework to provide service-oriented governance of edge cloud services for third parties. This microservice framework has been adopted by 3GPP as one of the four main solutions for 5G core network service architecture evolution (eSBA) (the other three come from traditional telecom equipment providers) and has been partially incorporated into 3GPP standards TS 23.501 and TS 23.502.
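To make the “adapt one message for both interfaces” idea concrete, here is a minimal sketch of an adapter that serializes a detected road event for delivery over either path. The RoadEvent fields and the JSON envelope are hypothetical illustrations, not an actual 3GPP or SAE message layout:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class RoadEvent:
    """Internal representation of a detected road event (hypothetical fields)."""
    event_type: str   # e.g. "emergency_braking"
    lat: float
    lon: float
    timestamp_ms: int

def adapt_for_interface(event: RoadEvent, interface: str) -> bytes:
    """Serialize the same event for either the 5G Uu interface or the
    C-V2X PC5 interface; only the envelope differs, the payload is shared."""
    if interface not in ("Uu", "PC5"):
        raise ValueError(f"unknown interface: {interface}")
    envelope = {"iface": interface, "payload": asdict(event)}
    return json.dumps(envelope, separators=(",", ":")).encode("utf-8")
```

Real deployments would encode standardized message sets (e.g., BSM/RSM-style messages) in a compact binary format rather than JSON; the point here is only that one internal event feeds both transmission paths.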

3) Low cost: thanks to the openness of the platform, it can make full use of existing infrastructure and flexibly aggregate information from third-party systems, reducing cost for customers. The current system integrates Intel’s low-cost computing hardware in the edge cloud, which cuts computing costs by 50% and reduces the edge cloud’s energy consumption by 60%. In addition, the cloud-based GNSS high-precision positioning approach reduces the cost of connected-vehicle terminals by about 90% compared to a traditional RTK solution. The positioning accuracy is slightly lower but still meets the requirements of lane-level positioning.

To learn more about Akraino’s Connected Vehicle blueprint, visit the wiki here: https://wiki.akraino.org/pages/viewpage.action?pageId=9601601 .

Charting the Path to a Successful IT Career

By Blog

Originally posted on the Linux Foundation Training Blog.

So, you’ve chosen to pursue a career in computer science and information technology – congratulations! Technology careers not only continue to be some of the fastest growing today, but also some of the most lucrative. Unlike many traditional careers, there are multiple paths to becoming a successful IT professional. 

What credentials do I need to start an IT career?

While certain technology careers, such as research and academia, require a computer science degree, most do not. Employers in the tech industry are typically more concerned with ensuring you have the required skills to carry out the responsibilities of a given role. 

What you need is a credential that demonstrates that you possess the practical skills to be successful; independently verifiable certifications are the best way to accomplish this. This is especially true when you are just starting out and do not have prior work experience. 

We recommend the Linux Foundation Certified IT Associate (LFCA) as a starting point. This respected certification demonstrates expertise and skills in fundamental information technology functions, especially in cloud computing, which is something that has not traditionally been included in entry-level certifications, but has become an essential skill regardless of what further specialization you may pursue.

How do I prepare for the LFCA?

The LFCA tests basic knowledge of fundamental IT concepts. It’s good to keep in mind which topics will be covered on the exam so you know how to prepare. The domains tested on the LFCA, and their scoring weight on the exam, are:

  • Linux Fundamentals – 20%
  • System Administration Fundamentals – 20%
  • Cloud Computing Fundamentals – 20%
  • Security Fundamentals – 16%
  • DevOps Fundamentals – 16%
  • Supporting Applications and Developers – 8%

Of course, if you are completely new to the industry, no one expects you to be able to pass this exam without spending some time preparing. Linux Foundation Training & Certification offers a range of free resources that can help. These include free online courses covering the topics on the exam, guides, the exam handbook and more. We recommend taking advantage of these and the countless tutorials, video lessons, how-to guides, forums and more available across the internet to build your entry-level IT knowledge. 

I’ve passed the LFCA exam, now what?

Generally, LFCA alone should be sufficient to qualify for many entry-level jobs in the technology industry, such as a junior system administrator, IT support engineer, junior DevOps engineer, and more. It’s not a bad idea to try to jump into the industry at this point and get some experience.

If you’ve already been working in IT for a while, or you want to aim for a higher level position right off the bat, you will want to consider more advanced certifications to help you move up the ladder. Our 2020 Open Source Jobs Report found the majority of hiring managers prioritize candidates with relevant certifications, and 74% are even paying for their own employees to take certification exams, up from 55% only two years earlier, showing how essential these credentials are. 

We’ve developed a roadmap that shows how coupling an LFCA with more advanced certifications can lead to some of the hottest jobs in technology today. Once you have determined your career goal (if you aren’t sure, take our career quiz for inspiration!), this roadmap shows which certifications from across various providers can help you achieve it. 

How many certifications do I really need?

This is a difficult question to answer and really varies depending on the specific job and its roles and responsibilities. No one needs every certification on this roadmap, but you may benefit from holding two or three depending on your goals. Look at job listings, talk to colleagues and others in the industry with more experience, read forums, etc. to learn as much as you can about what has worked for others and what specific jobs or companies may require. 

The most important thing is to set a goal, learn, gain experience, and find ways to demonstrate your abilities. Certifications are one piece of the puzzle and can have a positive impact on your career success when viewed as a component of overall learning and upskilling. 

Want to learn more? See our full certification catalog to dig into what is involved in each Linux Foundation certification, and suggested learning paths to get started!

Telco Cooperation Accelerates 5G and Edge Computing Adoption

By Blog

Re-posted with permission from section.io

Cooperation among telco providers has been ramping up over the last decade in order to catalyze innovation in the networking industry. A variety of working groups have formed and in doing so, have successfully delivered a host of platforms and solutions that leverage open source software, merchant silicon, and disaggregation, thereby offering a disruptive value proposition to service providers. The open source work being developed by these consortiums is fundamental in accelerating the development of 5G and edge computing.

Telco Working Groups

 

Working groups in the telco space include:

Open Source Networking

The Linux Foundation hosts over twenty open source networking projects in order to “support the momentum of this important sector”, helping service providers and enterprises “redefine how they create their networks and deliver a new generation of services”. Projects supported by Open Source Networking include Akraino (working towards the creation of an open source software stack that supports high-availability cloud services built for edge computing systems and applications), FRRouting (an IP routing protocol suite for Linux and Unix platforms, including protocol daemons for BGP, IS-IS, and OSPF) and the IO Visor Project (which aims to enable developers to create, innovate on, and share IO and networking functions). The ten largest networking vendors worldwide are active members of the Linux Foundation.

LF Networking (LFN)

LFN focuses on seven of the top Linux Foundation networking projects with the goal of increasing “harmonization across platforms, communities, and ecosystems”. The Foundation does so by helping facilitate collaboration and high-quality operational performance across open networking projects. The seven projects, all open to participation (pending acceptance by each group), are:

  • FD.io – The Fast Data Project, focused on data IO speed and efficiency;
  • ONAP – a platform for real-time policy-driven orchestration and automation of physical and virtual network functions (see more below);
  • Open Platform for NFV – OPNFV – a project and community that helps enable NFV;
  • PNDA – a big data analytics platform for services and networks;
  • Streaming Networks Analytics System – a framework that enables the collection, tracking and accessing of tens of millions of routing objects in real time;
  • OpenDaylight Project – an open, extensible and scalable platform built for SDN deployments;
  • Tungsten Fabric Project – a network virtualization solution focused on connectivity and security for virtual, bare-metal or containerized workloads.

ONAP

One of the most significant LFN projects is the Open Network Automation Platform (ONAP). Fifty of the world’s largest tech providers, network operators, and cloud operators make up ONAP, together representing more than 70% of worldwide mobile subscribers. The goal of the open source software platform is to help facilitate the orchestration and management of large-scale network workloads and services based on Software Defined Networking (SDN) and Network Functions Virtualization (NFV).

Open Networking Foundation (ONF)

The Open Networking Foundation is an independent operator-led consortium, which hosts a variety of projects with its central goal being “to catalyze a transformation of the networking industry” via the promotion of SDN and the standardization of the OpenFlow protocol and related technology. Its hosted projects focus on white box economics, network disaggregation, and open source software. ONF recently expanded its reach to include an ecosystem focus on the adoption and deployment of disruptive technologies. The ONF works closely with over 150 member companies; its Board includes members from AT&T, China Unicom, and Google. It merged with ON.Lab in 2016. The group has released open source components and integrated platforms built from those components, and is working with providers on field trials of these technologies and platforms. Guru Parulkar, Executive Director of ONF, gave a thorough overview of the entire ONF suite during his keynote at the Open Networking Summit.

Open Mobile Evolved Core (OMEC)

One of the most notable projects coming out of ONF is the Open Mobile Evolved Core (OMEC). OMEC is the first open source Evolved Packet Core (EPC) that has a full range of features, and is scalable and capable of high performance. The EPC connects mobile subscribers to the infrastructure of a specific carrier and the Internet.

Major service providers, such as AT&T, SK Telecom, and Verizon, have already shown support for CORD (Central Office Re-architected as a Datacenter) which combines NFV, SDN, and the elasticity of commodity clouds to bring datacenter economics and cloud agility to the Telco Central Office. This allows the operator to manage their Central Offices using declarative modeling languages for agile, real-time configuration of new customer services. OMEC has been specifically developed to cope with the “ensuing onslaught of devices coming online as part of the move to 5G and IoT”. It can be used either as a stand-alone EPC or via the COMAC platform.

A Shift Toward Container Network Functions

Network functions virtualization (NFV) has been an important factor in enabling the move towards a 5G future by helping to virtualize the network’s different appliances, including firewalls, load balancers, and routers. Specifically, it looks to enable network slicing, allowing multiple virtual networks to be created atop a shared physical infrastructure, empowering numerous use cases and new services. NFV has also been seen as important in the enabling of the distributed cloud, aiding in the creation of programmable, flexible networks for 5G.

Over the last couple of years, however, there has been a shift towards cloud-native network functions (CNF) and containerization as a lighter-weight alternative to NFV software. This follows a decision made by ONAP and Kubernetes working groups within The Linux Foundation to work together on a CNF architecture.

According to a report in Container Journal, “Arpit Joshipura, general manager for networking at The Linux Foundation, says that while telecommunications providers have been making extensive investments in NFV software based on virtual machines, it’s become apparent to many that containers offer a lighter-weight approach to virtualizing network services.”

Joshipura says CNFs offer greater portability and scalability and many organizations are beginning to load VNFs in Docker containers or deploy them using Kubernetes technologies like KubeVirt to make them more portable.

This has gone even further recently, according to ONF, with telecom operators increasingly adopting cloud-native solutions in their data centers and moving their operations away from traditional Central Office (CO). In a recent blog post, ONF says, “3GPP standardized 5G core will be cloud-native at start, which means telcos should be able to run and operate Containerized Network Functions (CNF) in their CO in the near future.” At a recent ONOS/CORD/K8s tech seminar in Seoul, eight telcos, vendors and research institutions demonstrated this in action, sharing stories about their containerization journeys.

Kubernetes Enabling 5G

News out of AT&T in February has led to industry speculation that Kubernetes is now set to become the de facto operating system for 5G. The carrier said it would be working in conjunction with open source infrastructure developer Mirantis on Kubernetes-controlled infrastructure to underlie its 5G network.

In an interview with Telecompetitor, Mirantis Co-Founder and Chief Marketing Officer Boris Renski said he believes that other carriers will adopt a similar approach as they deploy 5G, and described Kubernetes as the next logical step after cloud computing and virtual machines. Large companies like Google need to run applications such as search across many servers, and Kubernetes enables exactly that.

Renski said that Kubernetes is becoming the go-to standard for telco companies looking to implement mobile packet core and functionality like VNFs because it is open source. A further key appeal of Kubernetes and containerization is that it offers the potential for a flexible, affordable and streamlined backend, essential factors to ensuring scalable and deployable 5G economics.

There are still a number of areas to be worked out as the shift to Kubernetes happens, as pointed out by AvidThink’s Roy Chua in Fierce Wireless recently. Network functions will need to be rethought to fit the architecture of microservices, for instance, and vendors will have to explore complex questions, such as how to best deploy microservices with a Kubernetes scheduler to enable orchestration and auto-scaling of a range of services.

Democratizing the Network Edge

Nonetheless, the significant cooperation among telcos is indeed helping accelerate the move towards 5G and edge computing. The meeting point of cloud and access technologies at the access-edge offers plenty of opportunities for developers to innovate in the space. The softwarization and virtualization of the access network democratizes who can gain access – anyone from smart cities to manufacturing plants to rural communities. Simply by establishing an access-edge cloud and connecting it to the public Internet, the access-edge can open up new environments, and in doing so enable a whole new wave of innovation from cutting-edge developers.

There are several reasons to be hopeful about the movement towards this democratization: first, there is demand for 5G and edge computing – enterprises in the warehouse, factory, and automotive space, for instance, increasingly want to use private 5G networks for physical automation use cases; secondly, as we have looked at here, the open-sourcing of hardware and software for the access network offers the same value to anyone and is a key enabler of 5G and edge computing for the telcos, cable and cloud companies; finally, spectrum is becoming increasingly available, with countries worldwide starting to make unlicensed or lightly licensed spectrum available for 5G deployments. The U.S. and Germany are leading the way.

How Edge computing enables Artificial Intelligence

By Blog

 

By Utpal Mangla (VP & Senior Partner; Global Leader: IBM’s Telecom Media Entertainment Industry Center of Competence), 
Ashek Mahmood (IoT Edge AI – Partner & Practice Lead, IBM), and Luca Marchi (Associate Partner – Innovation Leader – IBM’s Telecom Media Entertainment Industry Center of Competence)

Edge computing as an enabler

Artificial Intelligence is the engine of the most recent changes in business. Enterprises are infusing AI into their processes to digitally transform the way they operate.

Operations-intensive industries are transforming with AI-powered IoT applications running at the edge of physical environments. For example, the number of connected, intelligent devices already runs into the billions, generating more data every day. These devices are, or soon will be, connected via 5G, enabling them to share rich data such as video and live streams.

These potential use cases raise concerns about latency, network bandwidth, privacy, and user experience. Edge computing, as an enabler of dynamic delivery of computing power, is an answer to these concerns.

Edge is faster, allowing near-instantaneous computation even in the absence of 5G. Low latency is fundamental in mission-critical use cases, such as self-driving cars, remote healthcare, and certain manufacturing applications. AI alone would not be sufficient without the ability to make a decision in a matter of milliseconds. Edge also improves the efficiency of some use cases: local computation reduces the load on network bandwidth, generating savings for the enterprise.

Edge also lets architects deploy computational power and AI where they make the most sense on the network: for example, directly on an oil rig to support a worker-safety use case.

Finally, edge computing enables data security: deploying computational capabilities to a specific location at the edge of the network makes it possible to leverage data that must not leave that location and cannot be processed on a public cloud.

Applying edge computing to mission critical situations

One good example of an edge application is the monitoring of public venues, for instance a train station. A train station is a complex and critical infrastructure: a large, densely crowded area where security is paramount. Edge computing allows the implementation of an innovative protection system through video analysis and data processing in real time, designed to recognize specific situations like unattended objects, people lying on the ground, or abnormal flows of people. This information is critical for triggering a prompt reaction by the authorities when an incident occurs or people need help.

The use case entails 24/7 video streams from several cameras spread across the station. Centralizing the AI models in the cloud would prove very inefficient: streaming and storing the videos would be very expensive. Edge computing, instead, enables the deployment of the visual recognition machine learning model on a network node in the station: this way the data never leaves the station, the need for storage and bandwidth is reduced, and the use case is financially viable. Also, keeping the AI models at the edge reduces latency and speeds up the alert and reaction processes.
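The bandwidth argument is easy to quantify. Here is a back-of-the-envelope sketch comparing raw streaming to the cloud against sending only detection metadata from the edge; the camera count and bitrates below are illustrative assumptions, not figures from the deployment described above:

```python
def edge_vs_cloud_gb(num_cameras, stream_mbps, metadata_kbps):
    """Compare per-day upstream volume (GB) for raw 24/7 streaming to the
    cloud versus sending only structured detection metadata from the edge."""
    seconds = 24 * 3600
    # megabits -> megabytes -> gigabytes (decimal units)
    cloud_gb = num_cameras * stream_mbps * seconds / 8 / 1000
    edge_gb = num_cameras * (metadata_kbps / 1000) * seconds / 8 / 1000
    return cloud_gb, edge_gb

# e.g. 50 cameras at 4 Mbps of raw video vs 10 kbps of metadata each
cloud, edge = edge_vs_cloud_gb(50, 4.0, 10.0)
```

With those illustrative numbers, raw streaming would upload about 2.16 TB per day, while metadata-only upload is about 5.4 GB, a factor of 400 reduction before any storage costs are counted.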

Continuing with the theme of the rail industry, safety inspection for trains is being automated with AI-powered vision systems running at the edge of the railway tracks. When a train goes through a portal of cameras, it can generate 10,000+ images capturing a 360-degree view of the cars, and there can be 200+ visual inspection use cases. Complex cases may require object detection, measurement, image stitching, and augmentation through deep learning neural networks. An intelligent edge solution for this case connects the AI systems in close proximity to the cameras and calls for a full-stack technology architecture to optimize data- and compute-intensive workloads.

The image below gives a high-level view of the architecture components that come into play here.

An integrated pipeline of edge AI applications is required to run autonomous aggregation and scheduling of these AI models. Hardware acceleration and GPU/CPU optimization are often required to achieve the desired performance and throughput. Different technology solutions provide distinct competitive advantages; this is as true in the world of AI at the edge as it is in high-precision computing and transaction processing. The future is one of heterogeneous compute architectures, orchestrated in an ecosystem based on open standards, that help unleash the potential of data and advanced computation to create immense economic value.
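The aggregation idea above can be sketched in a few lines: each captured frame is fanned out to every inspection model and the detections are collected per model. The model interface and aggregation scheme here are invented for illustration; a real pipeline would batch frames, schedule work across GPUs, and stream results asynchronously:

```python
from typing import Callable, Iterable, List, Dict

# A "model" here is just a callable from a frame (any object) to a list of
# detections; real deployments would wrap GPU-accelerated neural networks.
Model = Callable[[object], List]

def run_inspection(frames: Iterable[object], models: List[Model]) -> Dict[int, List]:
    """Fan each captured frame out to every inspection model and aggregate
    detections per model index."""
    results: Dict[int, List] = {i: [] for i in range(len(models))}
    for frame in frames:
        for i, model in enumerate(models):
            results[i].extend(model(frame))
    return results
```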

A step forward

A robust edge computing architecture will develop and support a hybrid AI Ops architecture to manage workloads running across heterogeneous edge, core, and cloud nodes. This will allow artificial intelligence to take a step forward towards adoption in mission-critical applications. These next-generation use cases will strengthen the partnership between humans and AI towards a sustainable future.

Authors

Utpal Mangla: VP & Senior Partner; Global Leader: IBM’s Telecom Media Entertainment Industry Center of Competence
Ashek Mahmood: IoT Edge AI – Partner & Practice Lead, IBM
Luca Marchi: Associate Partner – Innovation Leader – IBM’s Telecom Media Entertainment Industry Center of Competence

 

How Do We Prevent Breaches Like Verkada, Mirai, Stuxnet, and Beyond? It Starts with Zero Trust.

By Blog, Project EVE

By Jason Shepherd, LF Edge Governing Board Chair and VP of Ecosystem at ZEDEDA

This blog originally ran on the ZEDEDA Medium blog. Click here for more content like this.

News of the Verkada hack that breached 150,000 surveillance cameras sent shockwaves through the security world last week. For those of us in the edge computing industry, it simply underscored what we already knew: securing distributed devices is hard. As intelligent systems increasingly expand into the field, we’ll see more and more attacks of this sort if we continue to rely on the same security stance and tools that we’ve used for decades within perimetered locations like data centers and secure telecommunications facilities.

In this article, I will go through a few of the widely-cited edge breaches in the industry and highlight how a zero trust security model optimized for the unique challenges of the edge, such as employed by ZEDEDA, would have helped prevent them from happening.

In a hack reminiscent of Verkada, the 2016 Mirai virus infected millions of cameras and turned them into bots that together launched a massive DDoS attack on upstream networks, briefly taking down the internet in the northeastern US. Fewer than twenty username/password combinations got the malware into all of those cameras, because device makers had shipped default credentials that users rarely changed, and in some cases could not change. Often this was due to prioritizing usability and instant gratification for users over security.
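A defensive counterpart to the Mirai attack is trivial to sketch: audit each device’s configured login against a list of known factory defaults before it is allowed onto the network. The credential list below is a tiny illustrative subset, not the actual list Mirai carried:

```python
# Known factory-default credential pairs (illustrative subset only).
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("root", "12345"),
    ("admin", "password"),
}

def audit_device(username: str, password: str) -> list:
    """Return a list of findings for a camera's login configuration."""
    findings = []
    if (username, password) in DEFAULT_CREDENTIALS:
        findings.append("factory-default credentials still in use")
    if len(password) < 8:
        findings.append("password shorter than 8 characters")
    return findings
```

As the zero trust discussion below argues, though, the stronger fix is to remove per-device passwords entirely rather than merely auditing them.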

Another commonly-cited example is the massive Target data breach in 2014, in which attackers accessed millions of customers’ credit card information by way of a networked HVAC system. The hackers stole credentials from an HVAC contractor and were able to access the payment system because the operations network the HVAC was on wasn’t properly segmented from the IT network.

In a final example, the 2010 Stuxnet breach involved malware that was loaded into process control systems by using a USB flash drive to bypass the network air gap. The worm then propagated across the internal process control network, scanning for Siemens S7 software on industrial PCs. When successful, the virus would send unexpected commands to PLCs controlling industrial processes while giving the operators a view of normal operation.

Viruses like Stuxnet that focus on compromising industrial systems are especially concerning because attacks can lead to immediate loss of production or, worse, loss of life. Breaches of IT systems, by contrast, typically play out over long periods of time, compromising privacy, financial data, and IP.

With these examples in mind, what is unique about the proverbial edge that makes security such a challenge?

  • Scale: Part of the value of IoT and edge computing comes from having devices connected across the organization, providing a holistic view of operations. Over time we will see edge device deployments grow into the trillions, and traditional data center solutions for security and management are not designed for this kind of scale.
  • No physical perimeter: Distributed edge computing resources rarely have the defense of four physical walls or a traditional network perimeter. This requires a security approach that assumes these resources can be physically tampered with and that doesn’t depend on an owned network for defense.
  • Convergence of physical and digital: The edge is at the convergence of the physical and digital worlds. In addition to a highly heterogeneous landscape of technologies, we also have to account for diverse skill sets spanning IT and OT (e.g., network and security admins, DevOps, production, quality and maintenance engineers, data scientists).
  • Conflicting priorities: As OT and IT converge at the edge, each organization’s often conflicting priorities must be considered. While OT typically cares about uptime and safety, IT prioritizes data security, privacy, and governance. Security solutions must balance these priorities in order to be successful.
  • Constrained devices: Many IoT devices are too resource-constrained to host security measures such as encryption, and the broad installed base of legacy systems in the field was never intended to be connected to broader networks, let alone the internet. Because of these limitations, these devices and systems need to rely on more capable compute nodes immediately upstream to serve as the first line of defense, providing functions such as root of trust and encryption.

The Verkada hack and its predecessors make it clear that edge computing requires a zero trust architecture that addresses the unique security requirements of the edge. Zero trust begins with a basic tenet: trust no one and verify everything.

At ZEDEDA, we have built an industry-leading zero trust security model into our orchestration solution for distributed edge computing. This robust security architecture is manifested in our easy-to-use, cloud-based UI and is rooted in the open source EVE-OS which is purpose-built for secure edge computing.

ZEDEDA contributed EVE-OS to form Project EVE in 2019 as a founding member of the Linux Foundation’s LF Edge organization, with the goal of delivering an open source, vendor-agnostic and standardized foundation for hosting distributed edge computing workloads. EVE-OS is a lightweight, secure, open, universal and Linux-based distributed edge operating system with open, vendor-neutral APIs for remote lifecycle management. The solution can run on any hardware (e.g., x86, Arm, GPU) and leverages different hypervisors and container runtimes to ensure policy-based isolation between applications, host hardware, and networks. The Project EVE community is now over 60 unique developers, with more than half not being affiliated with ZEDEDA.

EVE-OS Zero Trust Components

Let’s take a look at the individual components of the EVE-OS zero trust security framework.

  • EVE-OS leverages the cryptographic identity created in the factory or supply chain in the form of a private key generated in a hardware security module (e.g., a TPM chip). This identity never leaves that chip, and the root of trust is also used to store additional keys (e.g., for an application stack such as Azure IoT Edge). In turn, the public key is stored in the remote console (e.g., ZEDCloud, in the case of ZEDEDA).
  • An edge compute node running EVE-OS leverages its silicon-based trust anchor (e.g., TPM) for identity and communicates directly with the remote controller to verify itself. This eliminates having a username and password for each edge device in the field; instead, all access is governed through role-based access control (RBAC) in a centralized console. Hackers with physical access to an edge computing node have no way of logging into the device locally.
  • EVE-OS has granular, software-defined networking controls built in, enabling admins to govern traffic between applications, compute resources, and other network resources based on policy. The distributed firewall can be used to govern communication between applications on an edge node and on-prem and cloud systems, and to detect any abnormal patterns in network traffic. As a bare metal solution, EVE-OS also provides admins with the ability to remotely block unused I/O ports on edge devices, such as USB, Ethernet and serial. Combined with the absence of local login credentials, this physical port blocking provides an effective countermeasure against insider attacks leveraging USB sticks.
  • All of these tools are implemented in a curated, layered fashion to establish defense in depth with considerations for people, process, and technology.
  • All features within EVE-OS are exposed through an open, vendor-neutral API that is accessed remotely through the user’s console of choice. Edge nodes block unsolicited inbound instructions; instead, they reach out to their centralized management console at scheduled intervals and establish a secure connection before implementing any updates.
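The device-initiated check-in pattern described above (no unsolicited inbound connections, hardware-rooted identity proved to the controller) can be sketched roughly as follows. This is an illustrative sketch, not EVE-OS code: the message format is invented, and an HMAC with an in-memory key stands in for the TPM-backed asymmetric signature, whose private key would never leave the hardware security module.

```python
import hashlib
import hmac
import json

# Stand-in for a TPM-held private key. In a real zero trust design the
# key is generated inside the hardware security module and never leaves it;
# here we simulate signing with a symmetric HMAC key for illustration only.
DEVICE_KEY = b"simulated-tpm-private-key"

def build_checkin(device_id: str, nonce: str) -> dict:
    """Device side: the node reaches OUT to the controller on a schedule;
    it never listens for unsolicited inbound instructions."""
    body = {"device_id": device_id, "nonce": nonce, "ts": 1700000000}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_checkin(msg: dict, key: bytes) -> bool:
    """Controller side: verify the signature before trusting anything
    in the message ("trust no one and verify everything")."""
    payload = json.dumps(msg["body"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["sig"])

msg = build_checkin("edge-node-01", nonce="a1b2c3")
print(verify_checkin(msg, DEVICE_KEY))   # True: signature checks out
msg["body"]["device_id"] = "rogue-node"  # tamper with the payload
print(verify_checkin(msg, DEVICE_KEY))   # False: tampering detected
```

Because the device always initiates the connection, there is no open inbound port or local login for an attacker to target; compromise of the message in transit is caught by the signature check.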

Returning to the above examples of security breaches, what would the impact of these attacks have looked like if the systems were running on top of EVE-OS? In short, there would have been multiple opportunities for the breaches to be prevented, or at least discovered and mitigated immediately.

  •  In the Verkada and Mirai examples, the entry point would have had to be the camera operating system itself, running in isolation on top of EVE-OS. However, this would not have been possible because EVE-OS itself has no direct login capabilities. The same benefit would have applied in the Target example, and in the case of Stuxnet, admins could have locked down USB ports on local industrial PCs to prevent a physical insider attack.
  • In all of these example attacks, the distributed firewall within EVE-OS would have limited the communications of applications, intercepting any attempts of compromised devices to communicate with any systems not explicitly allowed. Further, edge computing nodes running EVE-OS deployed immediately upstream of the target devices would have provided additional segmentation and protection.
  • EVE-OS would have provided detailed logs of all of the hackers’ activities. It’s unlikely that the hackers would have realized that the operating system they breached was actually virtualized on top of EVE-OS.
  • Security policies established within the controller and enforced locally by EVE-OS would have detected unusual behavior by each of these devices at the source and immediately cordoned them off from the rest of the network, preventing hackers from inflicting further damage.
  • Centralized management from any dashboard means that updates to applications and their operating systems (e.g. a camera OS) could have been deployed to an entire fleet of edge hardware powered by EVE-OS with a single click. Meanwhile, any hacked application operating system running above EVE-OS would be preserved for subsequent forensic analysis by the developer.
  • The zero-trust approach, comprehensive security policies, and immediate notifications would have drastically limited the scope and damage of each of these breaches, preserving each company’s brand in addition to mitigating the direct effects of the attack.
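The default-deny firewall behavior described in these scenarios, where compromised devices are blocked from reaching any system not explicitly allowed, can be sketched as below. The rule format and endpoint names are hypothetical, not the actual EVE-OS policy syntax.

```python
# Hypothetical allow-list for a camera application on an edge node.
# Traffic is denied by default; only explicitly allowed flows pass.
ALLOW_RULES = [
    {"app": "camera-os", "dest": "vms.example.com", "port": 443},
    {"app": "camera-os", "dest": "ntp.example.com", "port": 123},
]

def is_allowed(app: str, dest: str, port: int) -> bool:
    """Default-deny check: traffic passes only if an explicit rule matches."""
    return any(
        r["app"] == app and r["dest"] == dest and r["port"] == port
        for r in ALLOW_RULES
    )

print(is_allowed("camera-os", "vms.example.com", 443))  # True: allowed flow
print(is_allowed("camera-os", "evil.example.net", 80))  # False: blocked
```

Under such a policy, a hijacked camera OS attempting to join a botnet or scan the internal network would simply find every non-whitelisted destination unreachable, and the attempt would appear in the logs.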

The diverse technologies and expertise required to deploy and maintain edge computing solutions can make security especially daunting. The shared technology investment of developing EVE-OS through vendor-neutral open source collaboration is important because it provides a high level of transparency and creates an open anchor point around which to build an ecosystem of hardware, software and services experts. The open, vendor-neutral API within EVE-OS prevents lock-in and enables anyone to build their own controller. In this regard, you can think of EVE-OS as the “Android of the Edge”.

ZEDEDA’s open edge ecosystem is unified by EVE-OS and gives users a choice of hardware and applications for data management and analytics, in addition to partner applications for additional security tools such as SD-WAN, virtual firewalls, and protocol-specific threat detection. These additional tools augment the zero trust security features within EVE-OS and are easily deployed on edge nodes through our built-in app marketplace as needed.

Finally, in the process of addressing all potential threat vectors, it’s important not to make security procedures so cumbersome that users try to bypass key protections, or refuse to use the connected solution at all. Security usability is especially critical in IoT and edge computing due to the highly diverse skill sets in the field. For example, many of the smart cameras that have been hacked in the field made it easy for users to skip changing the default password for instant gratification; EVE-OS provides the same zero touch usability without compromising security by automating the creation of a silicon-based digital ID during onboarding.

Our solution is architected to streamline usability throughout the lifecycle of deploying and orchestrating distributed edge computing solutions, so users have the confidence to connect and can focus on their business. While the attack surface for the massive 2020 SolarWinds hack was the centralized IT data center rather than the edge, it’s a clear example of the importance of having an open, transparent foundation that enables you to understand how a complex supply chain is accessing your network.

At ZEDEDA, we believe that security at the distributed edge begins with a zero-trust foundation, a high degree of usability, and open collaboration. We are working with the industry to take the guesswork out so customers can securely orchestrate their edge computing deployments with choice of hardware, applications, and clouds, with limited IT knowledge required. Our goal is to enable users to adopt distributed edge computing to drive new experiences and improve business outcomes, without taking on unnecessary risk.

To learn more about our comprehensive, zero-trust approach, download ZEDEDA’s security white paper. And to continue the conversation, join us on LinkedIn.