Monthly Archives

December 2017

SDxCentral: Unraveling the MEC Standards Puzzle

By | EdgeX Foundry, In the News

Multi-access Edge Computing (MEC) is quickly gaining traction as a disruptive technology that promises to bring applications and content closer to the network edge. It is also expected to reduce latency in networks and make new services possible.

Analyst Iain Gillott of iGR Research says that he expects MEC to be as disruptive to the market as 5G and software-defined networking (SDN).  And several companies, including Huawei, have said that they are currently testing MEC.

Read more at SDxCentral.

EdgeX Foundry Member Spotlight: Two Bulls

By | Blog, EdgeX Foundry

The EdgeX Foundry community is comprised of a diverse set of member companies that represent the IoT ecosystem. The Member Spotlight blog series highlights these members and how they are contributing to and leveraging open source solutions. Today, we sat down with Noah Harlan, EdgeX Foundry Board Member and Founder of Two Bulls, to discuss the IoT market, connected devices and security.  

When we talk about IoT we mostly think of consumer IoT, but there is also managed consumer IoT (like security devices offered by ADT/Comcast/Verizon) and Industrial IoT. Can you talk a bit about the IoT market in general?

IoT is a marketing term for connected devices. Any edge device that connects to either a central system or makes itself available via the internet can plausibly be viewed as IoT.

Practically speaking I would bucket IoT into consumer and industrial but the edges quickly become fuzzy. A single apartment with a connected thermostat is clearly consumer IoT but if that apartment’s connected thermostat is part of a building-wide HVAC system is that now Industrial? To the company that installed the building-wide system, probably since they’re B2B2C, while the building management is in the middle (B2C), but to the apartment resident it’s a consumer product. Furthermore, when you expand outward and look at areas like smart cities which may involve a mixture of stakeholders it’s hard to differentiate. TL;DR: perspective matters. So given that, if you view it all as connected devices, then the underlying technologies need to address connectivity and devices, not an “Internet of Things.”

 

Can you tell us about your own solutions for the IoT world?

Two Bulls is a digital and connected product consultancy that works with some of the world’s biggest brands and most innovative startups to bring great products to market. We provide strategic, design, and development services and handle everything from embedded systems, to cloud infrastructure, to user interfaces and everything in between. We currently are working on products in the consumer IoT, smart cities, and industrial IoT spaces and our client list includes Verizon, Vodafone, Qualcomm, LIFX, and many others. 

IoT devices have earned the reputation of the Insecure Internet of Things. Why do we keep hearing about so many attacks on and vulnerabilities in IoT devices?

Poor planning and poor design have led to some very significant security breaches or lapses, whether in the form of hacks like the Mirai botnet or overaggressive data gathering (always-on listening TVs and toys). While guaranteeing perfect security is extremely hard, putting in place a set of basic best practices mitigates the risks. When devices are deployed that are connected but can’t be updated, or that rely on hard-to-change default security credentials, you are asking for a problem.

Furthermore, when a large company is hacked, it may include your credit card information but the breach is still relatively abstract to you as an individual whereas when you find out that a security camera in your house was breached you feel it very acutely and as more and more devices gain connectivity, the possibility of nefarious actors doing bad things grows. Hacking your email account is bad. Hacking the lock on your front door is potentially catastrophic.

There are two components of IoT devices – the Edge device and the backend server. Where do you see is the problem when it comes to security?

If you don’t worry about end to end, then you’re not thinking about security seriously. That said, we have good protocols for encryption of the upstream communications and we have a lot of experience with protecting backends. Where we have a harder problem is the device-to-device communication at the edge, particularly when there are multiple protocols involved. The protocol translation issue is a very hard security problem. If I am a router and I have a device on one side that speaks Protocol A and a device on the other that speaks Protocol B and I want to translate, I can’t generally rely on end-to-end encryption. I have to decrypt the messages from A, translate them, and then encrypt them when I send them off to B. The challenge becomes how do you determine trust for the translator so that all parties feel comfortable with a “man in the middle” being able to view everything and that the translator is both legitimate and uncompromised.

In most cases we have seen that IoT devices don’t get software updates and patches. Should IoT devices adopt an OS similar to Container OS where the systems are automatically updated? 

Any good edge operating system should have inbuilt systems for easy remote updating, or it will eventually be subject to a security risk. Period.

How much is standardized when it comes to IoT platform (the OS level) and applications?

Currently, nothing is “standardized.” That is why a platform like EdgeX Foundry matters: it acknowledges that the proliferation of protocols and transports likely won’t consolidate for a range of reasons, and it provides the translation, processing, and gateway security that are essential and valuable going forward.

Do you think government and regulations can play some role in forcing IoT vendors to take security seriously?

I have worked with members of Congress on their first steps looking at IoT, in particular Cory Booker, who is co-sponsoring the DIGIT Act now making its way through Congress. My best advice to them was to work to avoid a patchwork of security rules (one set for health, another for automotive, another for consumer, etc.), as that would lead to conflicting rules and stifle innovation. Instead, I encouraged them to establish a privacy and security spectrum that defines the requirements of each place on the spectrum (e.g., at one end, devices with very little security or privacy; at the other, devices with very high security or privacy) and to encourage or require IoT vendors to declare their device’s “tier” in the Security & Privacy Spectrum (e.g., a connected speaker with no microphone might be a level 3 device, while a connected blood pressure monitor is a level 7 device and a pacemaker is a level 10 device, because if it gets hacked someone could die). This would mean that regulators would stay out of the minutiae of defining *how* to comply and simply state what compliance means. This frees the industry to innovate and gives it a bar to measure against.

IoT Agenda: Six IoT trends we’ll see in 2018

By | EdgeX Foundry, In the News

While reflecting on this year, whether the IoT predictions we all made last year came true and looking into the future, a few things stand out to me. First and foremost, companies realizing early wins in the IoT market are still the ones that have clear focus on use cases with demonstrable business value. Use cases that drive efficiency are table stakes, and those that aid in maintaining regulatory compliance will be increasingly attractive, but facilitating entirely new business models and customer experiences is the Holy Grail.

Read more at IoT Agenda.

EdgeX Foundry Grows Ecosystem with New Members

By | Announcement, EdgeX Foundry

aicas GmbH, CATAI-UNESCO, Irish Manufacturing Research, Sixgill and Zephyr Project join the 60+ collaborating member companies that are committed to IoT interoperability

SAN FRANCISCO – December 19, 2017 – EdgeX Foundry, an open source project building a common interoperability framework to facilitate an ecosystem for Internet of Things (IoT) edge computing, today announces five new members: aicas GmbH, CATAI-UNESCO, Irish Manufacturing Research, Sixgill and the Zephyr Project. These organizations join the more than 60 members that broadly represent the IoT landscape, providing products and services supporting analytics, visualization, sensors, security and manageability, among others.

Hosted by The Linux Foundation, EdgeX Foundry is a collaborative effort dedicated to accelerating IoT deployments across industrial and enterprise use cases. With EdgeX, developers can quickly and easily build, deploy, run and scale Industrial Internet of Things (IIoT) solutions. It enables interoperability across tools and solutions and delivers flexibility and security.

“The EdgeX community members are geographically diverse but have continuously come together to collaborate to create an interoperable platform that can work with any hardware, operating system and application framework,” said Philip DesAutels, PhD, Senior Director of IoT at The Linux Foundation. “We are excited to welcome these new members and work closely with them to build out and support an ecosystem for IIoT solutions.”

EdgeX Foundry has made significant progress since its launch in April this year with more than 150 people from around the world joining face-to-face meetings to align on project goals, develop working groups and identify project maintainers and committers across key functional areas. The project also created resources designed to help onboard new developers to the project such as Tech Talks, templates and starter guides. Additional information about these resources, upcoming EdgeX Foundry meetings and how to participate is available at https://wiki.edgexfoundry.org.

New Member Quotes:

aicas GmbH

“aicas is excited to join EdgeX Foundry and support and integrate its microservices architecture with our framework, lightweight process model, ahead-of-time compiler, hard real-time garbage collector and runtime, and Java bytecode multilanguage support,” said David Beberman, Chief Marketing Officer for aicas GmbH.

CATAI-UNESCO

“Devoted since 1982 to the innovation and AI in medicine, CATAI-UNESCO is enforcing the urgent demand to widespread Health 4.0 for the humanization of healthcare,” said Prof. Dr. O. Ferrer Roca, UNESCO Chair of Telemedicine in the CATAI Association. “This will only be possible if we integrate IoT to promote universal preventive and participative medicine. The essential infrastructure based on the EdgeX framework will provide less complex and standardized small health data (SHD) exchange.”

Irish Manufacturing Research

“Irish Manufacturing Research is delighted to contribute to the EdgeX Foundry project and work on an open source full featured and secured IIoT platform for the manufacturing industry,” said John Delaney, Principal Investigator for Irish Manufacturing Research. “Ensuring interoperability and scalability at the edge while supporting legacy systems is a key enabler on the digitization roadmap for both small and large industrial enterprises.”

Sixgill

“Edge computing and edge decision services are cornerstones in our universal sensor data platform at Sixgill. We consider edge services critical to helping enterprises realize the full potential of IoT, particularly for real-time operations,” said Elizabeth Shonnard, VP Product at Sixgill, LLC. “We’re excited to join EdgeX Foundry, pursuing a shared vision of data and device-agnostic systems that are interoperable and easily adopted to accelerate IoT development and enable highly responsive, sensing applications.”

Zephyr Project

“In February 2018, the Zephyr Project will celebrate its two-year anniversary of being a collaborative open-source micro-controller operating system,” said Geoff Thorpe, Security Architect for NXP and Chair of Zephyr Governing Board. “EdgeX Foundry proposes a welcome step towards harmonizing and generalizing the fragmented set of technologies in today’s IoT, and does so in an open and platform-neutral manner. This could not be more consistent with the ideals of the Zephyr project, and we’re delighted to be part of this collaborative effort.”

EdgeX Roadmap

The EdgeX Technical Steering Committee has established a bi-annual release roadmap that demonstrates a long-term strategy to provide a product-quality open source foundation for interoperable commercial differentiation. The project is also working towards improved performance, lower start-up times and a reduction in the overall footprint via alternative Go- and C-based implementations of key EdgeX microservices. A preview of a Go implementation is targeted for early 2018, with initial testing promising an order-of-magnitude smaller footprint than the current Java baseline.

EdgeX Foundry’s next major release, “California,” is targeted for Spring 2018. California will be a key step in EdgeX Foundry’s commitment to evolve the framework to support the requirements for deployment in business-critical IIoT applications. In addition to general improvements, planned features for the California release include baseline APIs and reference implementations for security and manageability value-add. Additional information can be found at https://wiki.edgexfoundry.org/display/FA/Roadmap.

Additional resources:

About EdgeX Foundry

EdgeX Foundry is an open source project hosted by The Linux Foundation building a common open framework for IoT edge computing and an ecosystem of interoperable components that unifies the marketplace and accelerates the deployment of IoT solutions. Designed to run on any hardware or operating system and with any combination of application environments, EdgeX enables developers to quickly create flexible IoT edge solutions that can easily adapt to changing business needs. To learn more, visit: www.edgexfoundry.org.

About The Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

# # #


Porting EdgeX to Pivotal Cloud Foundry

By | Blog, EdgeX Foundry

Guest blog post by Trevor Conn, Principal Software Engineer, Dell Technologies

A short while ago I was introduced to the EdgeX Foundry platform when I attended a local IoT Meetup in Austin, TX. The presentation was given by fellow Dell Technologies colleague Jim White and I went up to introduce myself afterwards. We discussed the future possibility of making the EdgeX Foundry services cloud ready and I told him of my experience using Pivotal Cloud Foundry (PCF) in a production environment. Given the popularity of PCF, we thought it would make sense to see what it would take to port a couple EdgeX services into the platform.

The proof of concept I present below involves porting two of the foundational EdgeX services to PCF and then calling those services from the Virtual Device service running on my local workstation. Basically, I took the scenario Jim presents in Session 1 of the EdgeX Foundry Tech Talks as my scope of work for this proof of concept. If you’re going to follow along, then you will want to fork the GitHub repos for the following three services:

  • core-data
  • core-metadata
  • device-virtual

You will be making changes to the first two, deploying those to PCF and then running the last service on your local machine while pointing to your PCF deployments.

The changes to be made to core-data and core-metadata are essentially the same:

1. Add three new packages for Spring Cloud functionality with regard to configuration, as well as PCF integration. You accomplish this by adding the following to the pom.xml:

```xml
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-localconfig-connector</artifactId>
  <version>2.0.1.RELEASE</version>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-cloudfoundry-connector</artifactId>
  <version>2.0.1.RELEASE</version>
</dependency>
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-spring-service-connector</artifactId>
  <version>2.0.1.RELEASE</version>
</dependency>
```

2. Next, add another configuration class which will be annotated with @Profile in order to differentiate which configuration to use when deployed to a PCF cloud environment versus the default local environment. In my case, I called this new class “CloudConfig” and inherited from AbstractCloudConfig.

3. As you can see in the screenshot above, a new property is necessary to indicate the name of the data service we need to connect to. In our case, this is a Mongo database hosted outside of PCF. However, we still need to provide the cloud infrastructure with a name and a connection string whereby the persistence layer can function. I’ll go into more details about that below. For now, add the following entries to the application.properties file:

```properties
# ----------------- Mongo Cloud Config -----------------
spring.cloud.appId: edgex-core-data
spring.cloud.data-service: mongodb-coredata
```

The data-service value will be injected into the private dataServiceName member.

4. Since we are now distinguishing our configuration by profile, I needed to also add an @Profile annotation to the existing AppConfig class.

5. In our case, connectivity to Mongo was a little challenging. The PCF infrastructure at our disposal does not have MongoDB available within the cloud marketplace; that is to say, Mongo for PCF is not enabled. The only option available was to set up a Mongo instance on an external VM and then configure a user provided service within PCF consisting of the service’s name and a connection string. This limitation unfortunately meant I could not utilize the auto-configuration that Spring Cloud Service Connectors would otherwise provide.

Once I had the user provided service created, I then implemented my Mongo client within the above CloudConfig class like so:
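The original post shows this Mongo client as a screenshot that is not reproduced here. A sketch of roughly what that CloudConfig bean might look like, assuming Spring Cloud Connectors and Spring Data MongoDB (the bean and field names are illustrative, not the post’s exact code):

```java
import com.mongodb.MongoClientURI;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.config.java.AbstractCloudConfig;
import org.springframework.cloud.service.common.MongoServiceInfo;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.data.mongodb.MongoDbFactory;
import org.springframework.data.mongodb.core.SimpleMongoDbFactory;

@Configuration
@Profile("cloud")
public class CloudConfig extends AbstractCloudConfig {

    // Injected from the spring.cloud.data-service property added above
    @Value("${spring.cloud.data-service}")
    private String dataServiceName;

    @Bean
    public MongoDbFactory mongoDbFactory() throws Exception {
        // Look up the user provided service by name within PCF,
        // then cast to get at the Mongo connection URI
        MongoServiceInfo serviceInfo =
                (MongoServiceInfo) cloud().getServiceInfo(dataServiceName);
        return new SimpleMongoDbFactory(new MongoClientURI(serviceInfo.getUri()));
    }
}
```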

So as you can see, the dataServiceName is used to find the service within PCF. Since we know we’re trying to access a Mongo database, I then cast the ServiceInfo to a MongoServiceInfo, which gives me access to the URI I specified when creating my user provided service.

6. With those changes in place you’re ready to deploy. PCF will require a manifest file during deployment that provides parameters for the definition of the container your service will run on. Rather than including all of the content here, I will just link to the manifest file I created for the core-data service. The core-metadata manifest is the same except for the application name.
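The linked manifest is not reproduced in this post, but a minimal Cloud Foundry manifest along these lines conveys its shape (the name, memory and artifact path are assumptions; the services entry binds the user provided Mongo service described above):

```yaml
---
applications:
- name: edgex-core-data
  memory: 1G
  instances: 1
  path: target/core-data.jar
  services:
  - mongodb-coredata
```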

For a thorough explanation of what all of these settings mean, I refer you to the Cloud Foundry manifest documentation.

7. For the actual deployment, I found that it required use of both the Cloud Foundry command line as well as the Boot Dashboard provided within the Spring Tool Suite editor. There is certainly a way to streamline this step, but I have not had time to dig into it further. You would want to eliminate reliance on the Boot Dashboard for any kind of real deployment pipeline.

The issues I faced here were as follows:

  • The CF command line would provision the container appropriately, download the necessary buildpacks, create the necessary mappings for the user provided service to my application, but then it could never successfully build the application.
  • The Boot Dashboard was unable to do any of the above if I was deploying the service for the first time. At the point where it should have created the initial mapping of the services, it would just hang and then time out.

The approach I took for deployment of new services was to use the command line tool first, which would establish the necessary scaffolding in PCF, and then use the Boot Dashboard to deploy the code and get it built correctly.
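Concretely, the CLI half of that workflow looks something like the following sketch (the service name matches the properties above; the Mongo URI is a placeholder):

```shell
# One-time setup: register the external Mongo instance as a
# user provided service, supplying its connection string
cf create-user-provided-service mongodb-coredata \
    -p '{"uri":"mongodb://user:pass@my-mongo-vm:27017/coredata"}'

# Establish the scaffolding (container, buildpacks, service bindings);
# in my case the build step itself then had to be finished from the
# Boot Dashboard
cf push -f manifest.yml
```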

Again, a little more work needs to be done here to make this cleaner.

8. And with all of that done, you’re ready to test!

Again, the test scenario involves running the Virtual Device service locally and pointing it to the deployed PCF endpoints. There are two changes necessary to the properties of the device-virtual project:

  • In the src/main/resources/rest-endpoint.properties file, you will want to change all of the entries to point to the relevant PCF routes instead of localhost
  • In the src/main/resources/application.properties file, you will want to change the “service.host” value to either your IP address or your fully qualified host name

        ** NOTE: In our case we were running our tests on a VPN whereby our local machines were on the same network as the PCF cluster. If you are behind a firewall trying to hit an external PCF cluster, this reverse lookup will not work without some additional network configuration. **
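By way of illustration, the application.properties change might look like the following (the address is a placeholder for whatever your workstation actually uses; the rest-endpoint.properties entries similarly swap localhost for your apps’ PCF routes):

```properties
# src/main/resources/application.properties
# Reachable address of the machine running device-virtual, so the
# PCF-hosted services can call back to it
service.host: 192.168.1.50
```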

9. Start the device-virtual project as a Java application and you should start seeing the core-data service’s event counter increment.

The above was a fun, brief initiative undertaken in order to gauge the amount of work necessary to move the whole EdgeX Foundry platform to a cloud infrastructure. Due to the modularity of Spring along with the flexibility in the design of the EdgeX services themselves, the changes needed to accomplish this were not at all extensive. With this effort somewhat quantified, I look forward to hopefully working more on these capabilities in the coming year.

SDxCentral: 6 Internet of Things Terms You Should Know

By | EdgeX Foundry, In the News

Internet of Things (IoT) refers to the millions, or billions, of devices that will be connected to the Internet, enabling communication between machines, and between people and machines. Although large pieces of the IoT puzzle remain uncertain, the ecosystem is beginning to take shape, and we’re starting to see leaders emerge in terms of software platforms, data analytics systems, and even the underlying network technologies.

With that in mind, SDxCentral has compiled a list of six IoT terms that we think will help you understand this burgeoning area and stay on top of the latest developments.

Read more at SDxCentral.

Performance vs. Flexibility – EdgeX Microservices Could Be Combined

By | Blog, EdgeX Foundry

Written by Dell’s Jim White, EdgeX Foundry TSC Member and Chair of Core Services Working Group

I was posed a great question recently by one of our EdgeX contributors.  As he suggested, he was “wrapping his head around EdgeX” when he wondered whether it made sense to have the core microservices (those being core data, core metadata, and core command) all be different microservices.  As he commented: “I think the microservice design is a good option, allowing to separate some components outside of the core of EdgeX”, but as he further suggested, “it seems that a lot of time and CPU is spent with RPC between those microservices.”  Would it be better (both easier and faster), he mused, “to have only one core service implementing all those together?”

As I indicated to this developer – he can consider his head fully wrapped around the EdgeX architecture if he is coming to such questions.  Indeed, his question is a legitimate consideration.  Architecture is about tradeoffs.  EdgeX is about offering options to meet the potential tradeoff considerations.  In this case, performance versus flexibility.

We (the original creators of EdgeX) believe there are legitimate reasons why you may want to separate those concerns (core data, metadata and command).  As each is a different function, you may need to improve or significantly modify one of the functions.

For example, we envision a very different and more secure offering of command microservice in the future that checks authentication/authorization before actuation is allowed on a device.  The microservices may need/want to separate their repositories.  Both core data and metadata use MongoDB today, but we have seen situations at Dell (and actually done implementations) where for security, performance, or other reasons that the underlying persistent storage is different for each microservice.  The sensitivity of the incoming sensor data may be such that the core data microservice has to be constructed differently and completely isolated from metadata and command to meet legal or regulatory considerations.  There may also be cases where, due to availability of resources or need (or again to secure some data), that the various functions need to reside physically on different platforms (EdgeX on a more distributed layout).

Having said this, there are also other use cases where these core microservices (and others) may be combined – as indicated by the contributor, likely for optimal performance, to reduce footprint, and so on.  In fact, at Dell we conducted such an exercise to prove this out.  By combining core data, metadata and command, we were able to reduce the footprint and improve the performance of the Java core services significantly, and we envision that going forward there will potentially be an offering of a single “combined” core microservice in EdgeX.  This will come at the expense of flexibility, but for those looking to productize EdgeX, this may be a perfectly acceptable trade-off.

Productization of EdgeX and its use in real world edge/IoT use case scenarios are apt to uncover all sorts of interesting needs.  EdgeX is a collection of building blocks to create a solution.  Some refinements and adjustments to EdgeX may, and probably will, need to be made to accommodate a particular use case.  The flexibility of EdgeX is its strength.  And don’t forget, the flexibility allows those who adopt and embrace it to figure out ways to add value – and thereby generate potential revenue – from their adaptations.

So, the contributor that asked the question about potentially combining microservices was absolutely right on track with his thinking.  We will welcome combined core service (or other services) as alternatives as EdgeX marches forward – while at the same time maintaining separate core microservices to support other use cases.  The beauty of microservices is that they can be replaced and augmented in many ways – in some cases by collapsing them.