How to Fix the Internet of Broken Things
November 20, 2015
The Internet of Things is already permeating every part of our lives – from healthcare to aviation, automobiles to telecoms. But its security is fundamentally broken. In my previous blog post I showed how vulnerabilities found by security researchers could have catastrophic consequences for end users. This isn't just about data breaches and reputational damage anymore – lives are quite literally on the line. The challenges are many: most vendors operate under the misapprehension that security by obscurity will do – and lobby for laws preventing the disclosure of vulnerabilities; a lack of security subject-matter expertise creates major vulnerabilities; firmware can too easily be modified; and a lack of separation on the device opens up further avenues for attackers.
But there is something we as an industry can do about it – if we take a new hardware-led approach. This is all about creating an open security framework built on interoperable standards; one which will enable a “root of trust” thanks to secure boot capabilities, and restrict lateral movement with hardware-based virtualization.
Microsoft Windows, Adobe Flash, Oracle Java – what do these software products have in common? They're all proprietary and closed source. And they're all among the most vulnerable and exploited on the planet. Java's security record is so bad that many mainstream browsers no longer run it; Flash is such a security concern that modern browsers offer the option to activate plugins on a per-page basis; and system administrators will be well aware that Windows receives numerous security updates every single month – the CVE database reports 120 Windows 7 vulnerabilities in 2015 alone, as of October 2015. The problem is that the security-by-obscurity mantra that these firms, and many IoT makers, hold so dear is simply not effective anymore. Security researchers, and those with more malicious intent, can quite easily extract binary code from devices via JTAG, or find it online in the form of updates, and reverse engineer it with one of the many tools readily available.
Tools like IDA and Binwalk, to name just two, have reached amazing levels of intelligence and sophistication. Security by obscurity simply doesn't exist anymore – if it ever did. Instead we need to look to open source and open security. With thousands of eyeballs on a piece of code rather than tens, we've got a much better chance of engineering something robust. Just think about the unnecessary complexity of mainstream proprietary software – where 'new' products are built on the foundations of legacy versions – versus the clean, clear Darwinism of open source. The open source community is focused squarely on quality and usability. There are no internal decisions on feature sets driven by commercial pressures, politics or other corporate dynamics – think Microsoft's famous "Bill says so" or Oracle's "if it compiles, ship it". In open source it's all about doing what's best for the software itself and the end-user community.
What's more, thanks to the strength, dedication and sheer size of the open source community, security flaws are routinely fixed within hours of discovery. It's not uncommon to have a rolling process producing and publishing near-real-time updates – the Debian security model, for example. This is certainly not the case with proprietary code – Google only recently announced its commitment to monthly security updates for Android.
We also need to think about the impact of running proprietary software in terms of nation-state actors, their vast military-industrial-intelligence complexes and their virtually unlimited, secretive budgets – for example the US intelligence budget of more than $50 billion partly exposed by the revelations of former NSA contractor Edward Snowden. There have been reports over the years of the US government colluding with high-tech companies to engineer deliberate backdoors into products. These came to a head a couple of years ago when RSA Security's relationship with the NSA was questioned in a Reuters report. Question marks must also hang over the number of former military and intelligence personnel now occupying key roles in leading high-tech companies. Is it because they genuinely have the best skill sets for these roles, or is there another reason? One thing's for sure: no such concerns have ever been raised about the open source community at large.
As a footnote, I'd argue that open standards are also a key requirement if we're to improve IoT security – particularly when it comes to the defining aspect of these devices: network connectivity. The TCP/IP protocol suite is one of the most complex and tricky to implement you'll ever come across, so when engineers unused to designing kit with a network component come to do just that, they're out of their depth. With global, interoperable open standards you reduce that complexity by encapsulating the intricacies of these network protocols, effectively outsourcing the trickiest work to the subject-matter experts. Those experts then create and maintain the most secure standards and frameworks possible for your hardware and firmware developers to follow.
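The same principle applies in everyday application code: rather than hand-rolling a protocol implementation, delegate to a vetted, standards-based one. As a minimal sketch (the helper name is mine), Python's stdlib `ssl` module wraps a mature TLS implementation whose maintainers choose hardened defaults so individual developers don't have to:

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    # create_default_context() applies the library's hardened defaults:
    # certificate validation on, hostname checking on, and obsolete
    # protocol versions disabled. The protocol expertise is outsourced
    # to the standard's implementers rather than reinvented per device.
    return ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

ctx = make_client_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # peer certificates are checked
assert ctx.check_hostname                    # hostnames are verified too
```

A firmware engineer who wraps sockets this way inherits years of protocol hardening for free, instead of shipping a bespoke, unreviewed stack.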
In silicon we trust
As we've discussed, the software in so many embedded devices contains a potentially fatal original sin – it's not signed. This means an attacker can reverse engineer the code, modify it, reflash the firmware and reboot to execute arbitrary code. That is exactly what the cybercriminals who recently hacked Cisco routers did, giving them highly persistent and privileged access to all the data flowing in and out of the devices. So what can be done? After all, software on the device needs to be updateable so that vendors can apply security patches.
The answer is to ensure that the system boots up only if the very first piece of software to execute is cryptographically signed by a trusted entity – i.e. the vendor. The signature must verify against a public key or certificate hard-coded into the device, so that it cannot be replaced. By anchoring this "root of trust" in the hardware, it becomes effectively impossible to tamper with. A determined attacker might still be able to extract the original firmware via JTAG, reverse engineer it and modify it, but the modified image won't verify against the public key burned into the hardware, so the first stage of boot will fail and the system will simply refuse to come to life.
Once the root of trust has been established, that initial piece of software makes identity and integrity checks on the next piece in the boot chain, and so on, until the system is fully and securely operational. Integrity checks can even continue at runtime, to make sure no modifications are applied after boot.
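The chain-of-trust idea can be sketched in a few lines. This is a deliberately simplified model: a real device verifies asymmetric signatures (RSA or ECDSA, say) against a key burned into the silicon, whereas here plain SHA-256 digests stand in for that check, and all stage names and images are illustrative:

```python
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

def secure_boot(stages, rom_digest):
    """Run the boot chain; halt at any stage whose image does not match
    the digest anchored in the previous, already-trusted stage."""
    expected = rom_digest
    for name, image, next_expected in stages:
        if digest(image) != expected:
            return f"halt: {name} failed verification"
        expected = next_expected  # trust is extended one link at a time
    return "booted"

bootloader, kernel, rootfs = b"bootloader v1", b"kernel v1", b"rootfs v1"

# The first stage's digest is "burned into" hardware; each verified
# stage then carries the expected digest of the next stage.
ROM_DIGEST = digest(bootloader)
chain = [
    ("bootloader", bootloader, digest(kernel)),
    ("kernel",     kernel,     digest(rootfs)),
    ("rootfs",     rootfs,     None),
]

print(secure_boot(chain, ROM_DIGEST))  # booted

# An attacker reflashes the kernel: the check fails and boot stops.
tampered = [chain[0],
            ("kernel", b"kernel v1 + backdoor", digest(rootfs)),
            chain[2]]
print(secure_boot(tampered, ROM_DIGEST))  # halt: kernel failed verification
```

The key property is that nothing downstream of a failed check ever runs – exactly the behaviour that would have stopped the reflashed Cisco router firmware from executing.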
Security by separation
Too many embedded systems allow for lateral movement within the hardware, allowing researchers – or, more worryingly, attackers – to jump around inside until they find a way to exploit what they're really after. It's understandable that manufacturers are trying to rationalize, collapsing as many functions as possible into a single piece of hardware – one board or SoC. But from a software perspective there's no reason why these separate functional domains should be visible to each other. There's no way a researcher should be able to access an airplane's flight control system via its on-board entertainment platform. Similarly, white hats like Miller and Valasek should not have been able to move from the Jeep's head unit to its CAN bus to take control of the vehicle.
The answer lies in hardware-assisted virtualization to containerize each software entity, keeping critical components safe, secure and isolated from the rest. It requires a secure hypervisor – a lightweight, compact piece of software with relatively few lines of code – to provide a virtual environment for each software element to run in, in parallel. From a risk-management perspective no software is 100% safe from exploitation, so this secure separation means that if one piece is compromised, at least the attackers will not be able to use it as a stepping stone into other areas of the system. Secure separation like this would have meant Miller and Valasek could still have interfered with the Jeep's in-car entertainment system, but crucially could not have moved on to the vital systems controlling steering and brakes.
Of course, these individual services may need to speak to each other. Inside a vehicle, for example, the entertainment system and the engine management system may need to communicate so that the volume of the radio turns up automatically as the car accelerates and outside noise increases. The answer here is secure inter-process communication, which allows instructions to travel across this secure separation in a strictly controlled manner. Incidentally, this architecture is a perfect fit for companies whose very business models are built on copyrighted content, such as Netflix. If there's no secure separation between a video stream and, say, a rogue Android app on a modern smart TV, that content could leak, with disastrous financial implications.
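The controlled-communication idea boils down to a default-deny policy enforced by the hypervisor or a trusted broker. A toy sketch, with made-up domain names and message types, shows the shape of such a policy: the engine domain may push speed readings to the entertainment domain, but nothing flows the other way:

```python
# Whitelist of permitted flows: (source, destination) -> message types.
# Anything not listed is denied by default.
ALLOWED = {
    ("engine", "entertainment"): {"vehicle_speed"},  # for volume compensation
}

def send(source: str, dest: str, msg_type: str, payload):
    """Forward a message only if this exact (source, dest, type) flow is
    explicitly permitted by the separation policy."""
    if msg_type in ALLOWED.get((source, dest), set()):
        return ("delivered", payload)
    return ("denied", None)

print(send("engine", "entertainment", "vehicle_speed", 88))  # delivered
print(send("entertainment", "engine", "brake_command", 1))   # denied
```

In a real system the broker would live in the hypervisor and the check would gate shared-memory or virtual-bus access, but the policy logic – enumerate the few legitimate flows, deny everything else – is the same.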
The security journey
The model I've described above is obviously an ideal – the "promised land" of hardware security. In practical terms, not everyone is going to get there straight away. Chip support for the hardware-assisted virtualization mentioned above is not yet widespread, but a good intermediate step is to use Linux containers, which enable multiple isolated applications to run on a single kernel. Likewise, secure elements and a hardware root of trust might not be available in most hardware today, but this should not prevent security-conscious manufacturers from encrypting and signing their firmware, and from making security patches available in a timely fashion.
Let’s be in no doubt though – it’s a journey we must take as an industry if we’re going to manage the potentially fatal security issues which have broken the Internet of Things.
* * *