Virtualization, silicon, and open source are conspiring to secure the Internet of Things

My chat with Brandon Lewis, Technology Editor at IoT Design, highlighting prpl’s push around roots-of-trust, virtualization, open source, and interoperability to secure the Internet of Things (IoT).

Credits: Brandon Lewis, IoT Design, January 28, 2016 @TechieLew

http://iotdesign.embedded-computing.com/articles/how-virtualization-modern-silicon-and-open-source-are-conspiring-to-secure-the-internet-of-things/


The prpl Foundation is known for open source tools and frameworks like OpenWrt and QEMU, but has recently ventured into the security domain with a new Security prpl Engineering Group (PEG) and the “Security Guidance for Critical Areas of Embedded Computing” document, not to mention wooing you away from your role at security giant Trend Micro. What can you tell us about the drivers behind these moves?

Cesare: One way to look at it is a supply-and-demand schema. On the demand side, according to Gartner, the security market was worth $77 billion in 2015 and it’s going to keep growing fast. One strong demand-side driver is the need for stronger security, because industry is not doing a very good job of it – and when I say industry I mean everything from silicon to software to services – and all of that spending is not resulting in better information security.

Another very important aspect on the demand side is the hardware layer specifically. The current consensus among security professionals is that security is really about risk management, and no single solution can guarantee 100 percent coverage of anything. The most responsible and professional approach today is multiple tiers that each reduce risk. This is an evolution in the security industry, because it wasn’t this way just a few years ago. A few years ago vendors would sell solutions that would “stop” attacks, but the industry has realized this is simply not possible. A more credible approach is to say, “I can offer you the best you can do based on your specific criteria.” So from a practitioner or developer perspective, the more layers you put in place, the better off you are. What has been missing, though, is the hardware layer. We have solutions at the networking layer, at the web server layer, at the application layer, in authentication, you name it. All of those tiers are there; the hardware tier is still missing, and there’s real demand for hardware-level security.

Now what’s intriguing is the multi-tenant use case. Based on my conversations with actual OEMs, carriers, and so forth, the model where one single vendor deploys a box and then controls, administrates, and is ultimately responsible for it is a model of the past. The model that is evolving – and is already part of the design and architectural choices vendors are making today – is one I would roughly compare to an app store. So how do you secure something where multiple entities are peers in this [new] box? The old model of Trusted Execution Environments served well in the past, but if you are one of the applications that runs in the secure world, you have to trust all of the other secure tenants.

On the supply side, there’s what is called the march of silicon: silicon is so powerful these days that it can embed more and more of the features that in the past were realized in software layers. And, obviously, anything you move down the stack becomes much more resilient, because it becomes much more difficult to tamper with, to change, and so forth.

How specifically is modern silicon going to help improve current security paradigms?

Cesare: Something very important is that all modern processor cores support some kind of hardware-assisted virtualization, and security by separation – a very well understood concept in the security world – is now coming to the lower hardware levels. This idea of looking at things separately is well described in the “Security Guidance for Critical Areas of Embedded Computing” document: although the individual components might not be more secure on their own, as a system one bad apple doesn’t compromise the whole – or, in the jargon of the security community, separation helps prevent lateral movement. If you put virtualization and security by separation in place, it becomes much more difficult for an attacker to penetrate one weak entry point in the system and then move laterally to some more critical part.

An example is the Jeep hack, where the researchers got into the entertainment system and were able to re-flash a component that controls the CAN bus, giving them control of the brakes and the steering wheel. In a virtualized setup this would simply not be possible – the hardware is the same, but from a technology perspective it appears to applications as completely isolated systems.

Hardware-assisted virtualization makes all this possible because there’s no performance impact as there was in the past. At the same time, it makes software execution on multicore processors more efficient. If you have multiple guests on a multicore processor and can start mapping guests to defined cores, that’s when you start extracting all the power from these multicore platforms. A byproduct of security in this case is performance, which is exactly the opposite of tradition, because traditionally securing something lowered performance. But if you put security by separation in place, you’re actually much better off using processor virtualization.
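To make the separation-plus-performance point concrete, here is a minimal, purely illustrative sketch of pinning guests to disjoint cores; the guest names and configuration shape are assumptions for illustration, not an actual prpl or hypervisor API.

```python
# Purely illustrative: a hypothetical guest-to-core mapping for a
# virtualized multicore system. Guest names and structure are assumptions.

GUEST_CORE_MAP = {
    "infotainment": {0, 1},  # untrusted, feature-rich guest
    "can_gateway": {2},      # safety-critical guest: CAN bus access lives only here
    "telematics": {3},       # network-facing guest
}

def validate_affinity(core_map, total_cores):
    """Check that no core is shared between guests and none is out of range."""
    seen = set()
    for guest, cores in core_map.items():
        if cores & seen:
            raise ValueError(f"{guest} shares a core with another guest")
        if any(c >= total_cores for c in cores):
            raise ValueError(f"{guest} assigned a nonexistent core")
        seen |= cores

validate_affinity(GUEST_CORE_MAP, total_cores=4)
print("all guests pinned to disjoint cores")
```

Each guest gets exclusive cores, so a compromise of the infotainment guest cannot execute on the cores serving the CAN gateway, and the hypervisor never has to time-slice between mutually distrustful guests.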

But the [virtualized] architecture does not assume multicore. It does not assume anything. Multicore is a better architecture to get value out of, but it is not a requirement. The hypervisor, or virtualization manager, is itself a very low-level, thin layer of software – you could call it a microkernel – that provides security as APIs, services, or an agent, depending on the situation. If you have a Linux system, you speak in terms of services, or perhaps inter-process communication, and you never reach down to the operating system; if it’s a very small device, it’s probably some sort of API or library. The hypervisor provides all the security features. The moment the hypervisor is trusted – you trust its code and you’re sure the code that is booting has been signed – the hypervisor itself becomes the virtual root-of-trust for everything else, so you don’t need multiple keys or different Trusted Execution Environments or whatever else you might otherwise put in place to secure a virtualized situation like this. Once the hypervisor is trusted, the only thing each individual entity needs to trust is the hypervisor itself.

You can have a hardware root-of-trust to validate the hypervisor, after which the hypervisor becomes the root-of-trust – or you can have many different variations. There might be one trusted element or multiple trusted elements, or a one-time-programmable element; it can be built into the SoC itself or into the system at the board level, and so forth. There are many variations and it depends on the situation, but it’s very flexible. The key concept is that the hypervisor becomes the root of trust and provides all the security services through virtualization, meaning these services can be driven by policies that are specific to each guest.
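As a rough illustration of this chain of trust, here is a minimal sketch in which an HMAC stands in for the public-key signature a real hardware root-of-trust would verify; all keys, names, and image formats here are illustrative assumptions, not actual prpl mechanisms.

```python
# Illustrative sketch of a boot chain of trust. An HMAC stands in for
# the asymmetric signature real secure boot would use; keys and image
# contents are made-up placeholders.

import hashlib
import hmac

ROT_KEY = b"fused-into-silicon"      # hypothetical key in one-time-programmable storage
HYP_KEY = b"held-by-the-hypervisor"  # hypothetical key the hypervisor uses for guests

def sign(key, image):
    return hmac.new(key, image, hashlib.sha256).digest()

def verify(key, image, sig):
    return hmac.compare_digest(sign(key, image), sig)

# 1. The hardware root of trust verifies the hypervisor image before it boots.
hypervisor = b"thin microkernel providing security services"
hyp_sig = sign(ROT_KEY, hypervisor)
assert verify(ROT_KEY, hypervisor, hyp_sig)

# 2. The now-trusted hypervisor verifies each guest; each guest only needs
#    to trust the hypervisor, not the other tenants.
guests = {"linux": b"rich OS image", "rtos": b"CAN gateway image"}
for name, image in guests.items():
    sig = sign(HYP_KEY, image)
    assert verify(HYP_KEY, image, sig), f"{name} failed verification"

print("boot chain verified")
```

Swapping the HMAC for an asymmetric signature scheme doesn’t change the structure: one anchor verifies the hypervisor, and from then on the hypervisor vouches for everything above it.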

In fact, the more complicated the stack or the richer the operating system, the easier it is for the developer to achieve all of this, because these architectural concepts have been in the Linux stack for a long time – but always implemented in software. Now is the time for hardware to meet the software. So from a developer perspective, this really just looks like a driver, and the driver provides all the services, including inter-process communication, key management, and everything else. But again, since it’s virtualized, it can provide different services for different guests. It’s a whole new world.
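To give a feel for what such a driver-style interface might look like to a guest developer, here is a hypothetical sketch; every class and method name is an assumption for illustration, not an actual prpl API. The point is that one mechanism serves different guests under different policies.

```python
# Hypothetical sketch of a per-guest, policy-driven services interface.
# All names are illustrative assumptions, not a real prpl or driver API.

class HypervisorServices:
    """A guest's view of hypervisor-provided services, filtered by policy."""

    def __init__(self, guest_id, policy):
        self.guest_id = guest_id
        self.policy = policy  # capabilities granted to this guest

    def get_key(self, key_name):
        # Key material stays below the guest; only an opaque handle comes back.
        if not self.policy.get("key_management"):
            raise PermissionError("key services denied by policy")
        return f"handle:{self.guest_id}:{key_name}"

    def send(self, peer, message):
        # Inter-guest communication is mediated and policy-checked.
        if peer not in self.policy.get("ipc_peers", []):
            raise PermissionError(f"IPC to {peer} denied by policy")
        return ("delivered", peer, message)

# Two guests, same mechanism, different policies:
linux = HypervisorServices("linux", {"key_management": True, "ipc_peers": ["rtos"]})
rtos = HypervisorServices("rtos", {"key_management": True, "ipc_peers": []})

print(linux.get_key("tls-cert"))           # allowed by policy
print(linux.send("rtos", "sensor query"))  # allowed by policy
try:
    rtos.send("linux", "reply")            # denied: same API, different policy
except PermissionError as e:
    print("blocked:", e)
```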

Where does prpl figure into this whole equation, and how are you working to move this virtualized security architecture forward?

Cesare: It’s all about standards, interoperable protocols and APIs, and making sure this is not a vendor-specific solution, as has happened in the past. Vendor lock-in is not what large players are looking for. The supply side is telling us that proprietary systems, especially in a security context, don’t really cut it. The code needs to be open source: you need to be able to look at it, you need to be able to change it, and you need to be able to make sure no nation-state actors have tampered with the code or driven requirements that weaken security components. There are a whole host of concerns when you look at the global market on the supply side that lead to this sort of open-source security framework.

The Security PEG is actively working on this right now, and we have the APIs. The problem is that we have too many APIs. The exercise we are going through as we speak is to rationalize these APIs and provide two things. One is the API definition, regardless of implementation. That’s what’s expected of prpl. These are not standards yet, as we are still considering various certification options.

The second thing is reference implementations of these various APIs. In the first case these will be documents, and in the second case these will be actual pieces of software. A key aspect is that both will be provided as open source under the most permissive licensing possible, which means everyone is free to do anything they want with them, even build commercial implementations. That’s really the true nature of prpl. It’s not just open source, because open source comes in many different licensing schemes, with the General Public License (GPL) among the least permissive and the Berkeley Software Distribution (BSD) license considered one of the most permissive. prpl goes even one step further than BSD, which is what commercial entities are really asking for and what really makes sense for business.

There’s room in this model for an open-source community, but there’s also room for commercial providers to develop commercial-grade implementations of these APIs. So the definition will be common and standardized, and an open-source community version will be available to everyone, though perhaps not as production-ready as some major players require. For those cases there are vendors – already three entities within prpl, with many more coming soon – that will provide the commercial-grade implementations required.

As I said, all of this is real, but we have too many variations. We need to come to an agreement. That’s what an industry consortium is really about. It’s not about developing things; it’s about getting all the key players together to agree on what’s considered best on both the supply and demand sides. When it comes to security, it has to be cross-vendor and platform-agnostic. Heterogeneous environments are the reality these days, and vendor lock-in is something no one wants, especially on the security side.

* * *

Read the prpl Foundation’s “Security Guidance Document for Critical Areas of Embedded Computing” at prpl.works/security-guidance, or get involved at prplfoundation.org/participate.

The Journey to a Secure Internet of Things Starts Here


As the Internet of Things finds its way into ever more critical environments – from cars to airlines to hospitals – the potentially life-threatening cyber security implications must be addressed. Over the past few months, real-world examples have emerged showing how proprietary connected systems relying on outdated notions of ‘security-by-obscurity’ can in fact be reverse engineered and their chip firmware modified to give hackers complete remote control. The consequences could be deadly.

A new approach is needed to secure connected devices, which is exactly what the prpl Foundation is proposing in its new document: Security Guidance for Critical Areas of Embedded Computing. It lays out a vision for a new hardware-led approach based on open source and interoperable standards. At its core is a secure boot enabled by a “root of trust” anchored in the silicon, and hardware-based virtualization to restrict lateral movement.


How to Fix the Internet of Broken Things

The Internet of Things is already permeating every part of our lives – from healthcare to aviation, automobiles to telecoms. But its security is fundamentally broken. In my previous blog post I showed how vulnerabilities found by security researchers could have catastrophic consequences for end users. This isn’t just about data breaches and reputational damage anymore – lives are quite literally on the line. The challenges are many: most vendors operate under the misapprehension that security-by-obscurity will do – and lobby for laws preventing the disclosure of vulnerabilities; a lack of security subject-matter expertise creates major vulnerabilities; firmware can too easily be modified; and a lack of separation on the device opens up further avenues for attackers.

But there is something we as an industry can do about it – if we take a new hardware-led approach. This is all about creating an open security framework built on interoperable standards; one which will enable a “root of trust” thanks to secure boot capabilities, and restrict lateral movement with hardware-based virtualization.


The Security Challenges Threatening to Tear the Internet of Things Apart

The Internet of Things (IoT) has the power to transform our lives, making us more productive at work, and happier and safer at home. But it’s also developing at such a rate that it threatens to outstrip our ability to adequately secure it. A piece of software hasn’t been written yet that didn’t contain mistakes – after all, we’re only human. But with non-security experts designing and building connected systems, the risks grow ever greater. So what can be done?


Securing The Internet of (broken) Things: A Matter of Life and Death

If you’re like me you’ll probably be getting desensitized by now to the ever-lengthening list of data breach headlines that have saturated the news for the past 24 months or more. Targeted attacks, Advanced Persistent Threats and the like usually end up in the capture of sensitive IP, customer information or trade secrets. The result? Economic damage, board-level sackings and a heap of bad publicity for the breached organization. But that’s usually where it ends.


The Data Breach Pandemic: Information Security is Broken

Verizon Data Breach Report 2015

Have enterprises basically just given up on IT security? Global budgets fell by 4% in 2014 over the previous year, and as a percentage of total IT budget they’ve remained at 4% or less for the past five years. The picture is even starker for firms with revenues of less than $100m, which claim to have cut security budgets by 20% since 2013.

Yet the threats keep on escalating. When it comes to information security, there are really only two situations out there: companies that have been breached, and companies that still don’t know it.

If 2014 was the “Year of the Data Breach” then 2015 is proving to be at least its equal. This month alone we’ve seen TV stations shunted off air by pro-jihadi cyber terrorists; the discovery of major new state-backed attack groups; and another massive data breach at a US healthcare provider.

We talk today about managing risk, rather than providing 100% security – because there’s no such thing. The conclusion I have reached is that the traditional information security model is broken. But why? And how can we fix it?


Google Vault Makes Play for Mobile Security Hardware Space

Last week Google made a splash with its latest futuristic tech offering: Project Vault. In essence, this mini-computer on an SD card is designed to enable secure authentication, communications and data storage on your smartphone or laptop. So what exactly is going on here? After years of experimenting with Android, has one of the world’s biggest software companies finally admitted hardware-level security is the way forward? And if so, what are the implications for enterprises and consumers?

Cesare Garlati Joins prpl Foundation as Chief Security Strategist

SANTA CLARA, CA–(Marketwired – April 07, 2015) – Well-known information security expert Cesare Garlati today joins the prpl Foundation as Chief Security Strategist. Garlati will assist the Foundation with security strategy in the newly formed Security PEG (prpl Engineering Group), a working group dedicated to creating an open standard framework that addresses next-generation security requirements for connected devices.

“Cesare Garlati is an internationally renowned leader in the mobile security space,” said prpl Foundation president Art Swift. “We all look forward to his contributions in security strategy and his participation in the ground-breaking Security PEG.”


The GitHub attack – is the worst still to come?

What we can learn from the recent cyber attack on the popular website GitHub, and why we should worry about what is likely to come next.


TTL analysis performed by Netresec in Sweden

Over the last few days the popular website GitHub has been the target of a massive Distributed Denial of Service (DDoS) attack, apparently originating from China. As I write this note, the GitHub status page indicates “Everything operating normally” and “All systems reporting at 100%”. However, I am afraid the story is far from over and the worst may still be to come.

GitHub is the largest and most popular repository of open source projects and a key piece of infrastructure for the Internet. Among others, GitHub hosts the Linux project – arguably the world’s most widespread open source software. Various flavors of Linux power most of the Internet’s servers and an ever-increasing number of consumer devices across the globe.

Read more of this post
