Trust Platform provides security from concept to deployment
South-East European INDUSTRIAL Market, issue 4/2021, 03.11.2021
Nicolas Demoulin, Microchip Technology
Security is now a key requirement for embedded systems. Connecting devices to the internet to make them easier to control and to pull live data from their sensors brings with it a high risk of hacking, and such attacks put at risk not just individual devices but entire networks.
The direction is clear: vendors can no longer go to market with an IoT product that is not secure by design. The issue for device manufacturers as they look to harness the power of the IoT for their systems is the complexity of implementing effective and relevant security mechanisms. It is easy to see the fundamental need for authentication and encryption in these systems, but implementation has been much harder to achieve.
There are multiple components, both software and hardware, that are needed to create a secure foundation for an embedded system. A weakness in any one of them can easily lead to the hardware being compromised and loaded with malware that is used to attack an operator’s network or to leak sensitive data to cybercriminals. At the same time, many design teams are confronting for the first time the development difficulties presented by security concerns.
One of the core requirements for effective security is that each deployed device should have its own unique identity. A common flaw exploited by hackers is the use of a shared password or login across devices for service and maintenance engineers. The details of this login are often easy to guess and, even if they are not, they are usually easily obtained by a hacker. With this login, it is possible to gain access not just to one device but to the entire fleet. Cybercriminals were able to create botnets – armies of identical computers used in denial-of-service attacks – through simple automated scripts that identified and logged into every internet-connected device of a certain type.
With a unique identity, it is possible to give each system its own security credentials and greatly reduce the chance of giving hackers an easy way to construct botnets. Only if an authorized user has the right credentials for a particular device should they be allowed access. However, this increased level of protection has ramifications for the design, development and service management processes.
Implementing effective security in a way that facilitates rather than impedes development involves careful choices. The first choice concerns the hardware foundation employed to protect the integrity of the target device. This foundation must not only prevent unauthorized access to the device's core firmware but also ensure that its functions cannot be subverted and the device used to attack the network. For example, if a hacker has obtained access credentials for one device, it must be impossible to convert another device to accept those same credentials and so build a botnet. As a result, identity and integrity are intimately tied.
The public-key infrastructure (PKI) provides a means of establishing and proving a unique trusted identity, not only within the device itself but across a network. PKI relies on the concept of asymmetric cryptography, a technique that links two numeric keys together mathematically. One is a public key, which is typically used to verify messages. As its name suggests, this key can be distributed widely without compromising security, and it provides an easy way for anyone to send secure messages to a device, as long as they know which public key to use. The device itself holds the private key, which it uses to sign the messages it sends; those signatures can then be verified with the corresponding public key.
From these basic PKI operations, it is possible to construct more structured authentication models, such as digital certificates that attest to the identity of a device. To prove that identity, the device signs a message or challenge with its private key, creating a signature; the recipient uses the corresponding public key to determine whether that signature is valid.
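As a concrete illustration, the sketch below uses Microchip's open-source CryptoAuthLib (its atcab_* functions) to walk through that relationship with a secure element: the private key, assumed here to live in slot 0, produces a signature over a 32-byte digest, and anyone holding the matching public key can check it. The slot number and buffer handling are illustrative assumptions, not a fixed recipe.

```c
#include <stdbool.h>
#include <stdint.h>
#include "cryptoauthlib.h"    /* Microchip CryptoAuthLib: atcab_* API */

#define DEVICE_KEY_SLOT 0     /* assumption: slot 0 holds the device's private key */

/* Sign a 32-byte digest with the private key inside the secure element,
 * then check the signature using the corresponding public key.
 * Assumes atcab_init() has already attached the library to the device. */
ATCA_STATUS sign_then_verify(const uint8_t digest[32])
{
    uint8_t signature[64];    /* ECDSA P-256 signature (r || s)   */
    uint8_t public_key[64];   /* uncompressed public key (X || Y) */
    bool is_verified = false;
    ATCA_STATUS status;

    /* The private key never leaves the device; only the signature comes back. */
    if ((status = atcab_sign(DEVICE_KEY_SLOT, digest, signature)) != ATCA_SUCCESS)
        return status;

    /* The public key can be shared freely... */
    if ((status = atcab_get_pubkey(DEVICE_KEY_SLOT, public_key)) != ATCA_SUCCESS)
        return status;

    /* ...and is all a recipient needs to check the signature. */
    status = atcab_verify_extern(digest, signature, public_key, &is_verified);
    return (status == ATCA_SUCCESS && is_verified) ? ATCA_SUCCESS : ATCA_GEN_FAIL;
}
```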
The private key clearly needs strong protection. It is not enough simply to program the key into non-volatile memory on a device before it is deployed, as that memory is easily accessible, and the private key must never be disclosed. If it is, hackers can build their own clone devices, which can then impersonate and spoof the authentic device and so compromise the security of networked applications that depend on the data it sends.
A problem for a conventional microcontroller-based design is that any cryptographic software running on the processor core needs access to the private key in order to perform the necessary calculations, assuming the key is held in the controller. The core hardware requirement is therefore a secure element: a standalone piece of protected hardware that combines those cryptographic operations with secure storage for the private keys. Because the key and the cryptographic functions sit together inside the same physical security boundary, there is no need to send sensitive data over the system's internal bus.
Figure 1. A secure element is a vault that protects secrets; it is a companion device to the microcontroller
Instead, when the system needs to communicate securely or prove its identity, it calls upon the secure element to respond to a random challenge. The response to this challenge is a code derived mathematically from the random part of the challenge and the relevant private key stored inside the secure element. In other words, the random challenge is signed by the private key. In this way, the secure element can demonstrate that it holds the appropriate secret without ever disclosing the sensitive private key itself.
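A minimal sketch of that exchange, again using CryptoAuthLib's atcab_* calls under stated assumptions (identity key in slot 0, default I2C interface settings), is shown below. For clarity it collapses both sides into one function; in a real deployment the verifier runs on the remote host and already holds the device's public key, typically taken from the device certificate during provisioning.

```c
#include <stdbool.h>
#include <stdint.h>
#include "cryptoauthlib.h"

#define IDENTITY_KEY_SLOT 0    /* assumption: slot 0 holds the device identity key */

/* Challenge-response identity check: the verifier issues a fresh random
 * challenge, the secure element signs it with its hidden private key, and
 * the verifier checks the response against a public key it already trusts. */
bool prove_device_identity(const uint8_t trusted_public_key[64])
{
    uint8_t challenge[32];     /* random value used once (nonce) */
    uint8_t response[64];      /* ECDSA signature over the challenge */
    bool is_verified = false;

    /* Attach to the secure element over I2C using the library defaults. */
    if (atcab_init(&cfg_ateccx08a_i2c_default) != ATCA_SUCCESS)
        return false;

    /* 1. Verifier generates an unpredictable 32-byte challenge. */
    if (atcab_random(challenge) != ATCA_SUCCESS)
        return false;

    /* 2. Secure element signs the challenge; the private key stays inside. */
    if (atcab_sign(IDENTITY_KEY_SLOT, challenge, response) != ATCA_SUCCESS)
        return false;

    /* 3. Verifier checks the response using only the public key. */
    if (atcab_verify_extern(challenge, response, trusted_public_key,
                            &is_verified) != ATCA_SUCCESS)
        return false;

    return is_verified;
}
```

Because the challenge is random and used only once, a recorded response cannot simply be replayed by an attacker.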
The secure element can also protect the device from counterfeit code that an attacker might attempt to run in order to compromise the system. The protection mechanism needed to prevent this is code verification, sometimes known as secure boot or runtime code verification. In this case, the challenge sent to the secure element is a signature obtained from the signed boot image stored on the device. Any updates to the code have to be signed by the OEM using its private key. Through secure boot and runtime verification procedures, the system can support over-the-air updates provided by the manufacturer without the risk of running updates supplied by a third party through a man-in-the-middle attack or a similar approach.
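The sketch below shows the shape of such a check at boot time. It is a simplified example under assumptions made for illustration: the image layout and symbol names are hypothetical, the whole image is hashed with the secure element's SHA-256 engine in a single call (a production bootloader would stream larger images, and the ATECC608A also offers dedicated secure-boot support), and error handling is reduced to a pass/fail result.

```c
#include <stdbool.h>
#include <stdint.h>
#include "cryptoauthlib.h"

/* Hypothetical image layout: the application binary is stored alongside a
 * 64-byte ECDSA signature produced by the OEM with its private key. */
extern const uint8_t  app_image[];
extern const uint16_t app_image_length;    /* small demo image (< 64 KB) */
extern const uint8_t  app_signature[64];

/* The OEM public key is the immutable anchor: in a real design it sits in a
 * locked slot of the secure element or in write-protected memory. */
extern const uint8_t oem_public_key[64];

/* Verify the stored application before transferring control to it. */
bool secure_boot_check(void)
{
    uint8_t digest[32];
    bool is_verified = false;

    /* Hash the application image. */
    if (atcab_sha(app_image_length, app_image, digest) != ATCA_SUCCESS)
        return false;

    /* Check the OEM's signature over that digest. */
    if (atcab_verify_extern(digest, app_signature, oem_public_key,
                            &is_verified) != ATCA_SUCCESS)
        return false;

    return is_verified;    /* the bootloader jumps to the application only if true */
}
```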
The key used to verify the code signature is itself a sensitive credential and should live in a protected and immutable memory zone. If the key were erased or corrupted, the system would simply stop working; if it could be replaced with another key pair, the code could just as easily be tampered with.
An example of effective protection can be found in the Microchip Technology ATECC608A. This is a secure element that can be used with any microcontroller-based system thanks to its use of a standard I2C or single-wire communication link. The device combines non-volatile memory with several cryptographic accelerators, supporting elliptic-curve algorithms among others, built on secure silicon. The device never reveals private keys over the communication link and includes a number of anti-tampering hardware features that make it practically infeasible to discover its contents.
Although a secure element coupled to a microcontroller provides an effective foundation for building connected embedded devices that can guarantee high security, this combination is only part of the overall solution. There are many use-cases that involve constructing complex protocols in embedded software out of the core functions a secure element provides. For example, in addition to secure boot, an IoT device will need to be able to communicate with remote hosts using encrypted protocols such as TLS and generate certificates on demand that show that the device has not been compromised when it wants to connect to a new service. When the manufacturer or service operator wants to send a code update, that firmware’s signature will need to be verified before the flash memory is updated and the system rebooted.
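For the update path, the same hash-and-verify pattern is applied to the downloaded image before anything is written to flash. The outline below is again only a sketch under stated assumptions: flash_erase_app, flash_write_app and system_reset are hypothetical platform hooks standing in for the target's flash driver and reset mechanism, and oem_public_key is the same trust anchor used for secure boot.

```c
#include <stdbool.h>
#include <stdint.h>
#include "cryptoauthlib.h"

/* Hypothetical platform hooks: replace with the target's flash driver and
 * reset mechanism. */
extern bool flash_erase_app(void);
extern bool flash_write_app(const uint8_t *data, uint16_t length);
extern void system_reset(void);

extern const uint8_t oem_public_key[64];   /* same anchor as for secure boot */

/* Accept a downloaded firmware image only if its OEM signature checks out;
 * the flash is never touched before verification succeeds. */
bool apply_firmware_update(const uint8_t *image, uint16_t length,
                           const uint8_t signature[64])
{
    uint8_t digest[32];
    bool is_verified = false;

    if (atcab_sha(length, image, digest) != ATCA_SUCCESS)
        return false;

    if (atcab_verify_extern(digest, signature, oem_public_key,
                            &is_verified) != ATCA_SUCCESS || !is_verified)
        return false;      /* reject unsigned or tampered images */

    if (!flash_erase_app() || !flash_write_app(image, length))
        return false;

    system_reset();        /* reboot into the freshly verified image */
    return true;           /* not reached on most targets */
}
```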
A further requirement may be the ability to detect system accessories or consumable cartridges and determine whether they are authentic. This function can be performed using protocols that are similar to those used to build the code verification example, but with some key differences. For example, each peripheral may have its own secure element that is used to check that the host system into which it is plugged is itself authentic.
Although the principles behind each of the protocols that implement these functions are reasonably straightforward, implementation can be difficult because the ability to debug problems is constrained by the need for the system to obey those same security protocols.
A common assumption during development is that pressing the reset button or wiping the contents of memory will let engineers regain access to an unresponsive device. Debug modes generally give the developer privileged access to the system. But when the higher levels of security required for internet-connected systems are introduced, some of these assumptions no longer apply. Failure to implement software in the right way can leave a prototype device unreachable. The most troublesome parts of secure-system development lie in debugging the core protocols. For example, it is easy to introduce bugs into the code used to process passcodes or security certificates that leave the device unable to respond to valid requests. If it were possible to reset the device to regain access, that facility would provide hackers with an easily exploitable backdoor into the system. As a result, security-focused development introduces hurdles into the development process that are difficult to deal with if the team does not have experience of the techniques required.
Figure 2. The development flow with Microchip’s Trust Platform
However, one advantage of systems built on PKI is that applications can be constructed on top of core protocols and use-cases, such as the verification of signed executables and certificate creation, that can be reused across many projects. This insight helped lead to the creation of Microchip's Trust Platform. The platform provides a suite of configurations, source code, hardware and software tools designed to make it easy for customers to implement a wide variety of use-cases, in a workflow that guides the user from concept to implementation based on hardware that includes a secure element such as the ATECC608A.
The Trust Platform divides into three main offerings. The simplest is Trust&GO, which provides a fixed set of functions, such as giving a device access to cloud services hosted on AWS, Google Cloud, Microsoft Azure or a private cloud. Another configuration supported by Trust&GO is a complete secure authentication solution for devices that need to connect to a LoRaWAN wireless network.
TrustFLEX provides an additional level of customization with support for a wide range of operations from secure boot to certificate generation. The third option, TrustCUSTOM, provides customers with the ability to tune the creation and integration of secure elements into their desired security model.
Figure 3. The ordering/delivery flow of Microchip’s Trust Platform
An important element of the Trust Platform that eases access to security compared with other offerings is the way in which the secure key provisioning service can be deployed for low-volume applications. With competing secure-element supply chains, the minimum order quantity can be 100,000 units because of the overhead involved in setting up the initial certificates and keys that need to be programmed into the hardware on the supplier's secure manufacturing line. With Trust&GO, customers can buy secure elements starting at 10 units per order and have all the support of the Trust Platform infrastructure, including provisioning. For TrustFLEX, the minimum order quantity is as low as 2000 units, also including provisioning, while still giving the user the greater level of control over certificates, keys and applications that would be expected from customized secure supply-chain solutions.
Under the Microchip Trust Platform, customers have access to highly customizable security mechanisms with much lower development and deployment risk than existing solutions. The combination of tools, source code and supply infrastructure provides a path for embedded systems developers to obtain a complete, securely provisioned system that works from concept to deployment, cutting the development process from months to days.