
Chapter 5

Hardware security in embedded systems

5.1. Introduction

As demonstrated in the first chapter of this book, embedded systems are becoming more and more ubiquitous in many applications. Usually in the form of wireless and communicative systems, they result from complex design flows which juggle the tightly bound constraints of integration and functionality (size, power consumption, speed, etc.). Since the early 2000s, a new critical constraint has emerged: security.

From an application-oriented point of view, the development of e-commerce is held back by users’ fears of sending bank details, or other private data, through an insecure system and communication channel. However, this does not appear to be curbing the emergence of payment methods built into new generations of mobile phones. Such phones must provide security services in order to reassure the customer; for example, Toshiba’s smartphones G500 and G900 [TOS 07] contain a fingerprint reader. In this case, however, we are dealing only with a user recognition system, which does not guarantee the complete security of the system.

From a system-oriented perspective, it is often necessary to protect internally stored data, either completely or partially, and to guarantee the user permanent control of the system. This is no simple task, since most embedded systems are hardware communications systems which support embedded computers, and are highly at risk from software and hardware attacks.

Chapter compiled by Lilian BOSSUET and Guy GOGNIAT.

Moreover, embedded systems are at the heart of a very dynamic market economy with many stakeholders and are subject to severe constraints (for example a very short time to market). Competition between the various parties in the supply chain can be tough. So protecting designs and intellectual property rights is currently a very important issue in the industry.

From many angles, then, the security of embedded systems, in terms of both their design and the data stored in and exchanged between them, is currently a major problem, and one that cannot be addressed by software alone. More than other hardware systems, embedded systems tend to handle information that must remain hidden.

Furthermore, hardware security has become a new dimension in the design space of electronic systems, whether embedded or not, alongside size, energy and power consumption [KOC 04]. This chapter will explore the security issues faced by embedded systems and present some original and interesting hardware solutions.

5.2. Embedded systems and their security issues

Since security is a new constraint to be taken into account during the early stages of developing an embedded system, it must be considered simultaneously with the existing constraints (consumption, size, etc.). Design choices can only be made as compromises between certain aspects of performance and the level of security required by the application. Security always comes at a cost, so a precise evaluation is needed before choosing the best compromise between security and performance.

5.2.1. Design constraints in embedded systems

The term embedded system covers a range of clearly distinct systems, both in terms of application and of required performance, so it is difficult to define universal constraints. It is, however, possible to list a number of constraints that must generally be respected when designing embedded systems, among which we find:

– cost. In the majority of cases this is the most important factor to consider during development. It is a complex constraint to assess, because of the number of parameters that may be involved in its evaluation. This constraint is particularly strong in the case of applications for the general public (large production volumes);

– the limitation in processing resources is related to limitations in the area available for the system and the components it is made of. Memory resources are also limited. These two constraints strongly restrict the complexity of the algorithms that can be effectively implemented in these systems;

– the data input and output flow is also limited. However, the current trend is for applications to move towards greater requirements in terms of speed and quantity of data, and thus towards a strong increase in throughput. This is a design bottleneck and a constraint that is difficult to respect, given the constraints mentioned above;

– the number of connections between the system and the outside is limited both physically (number of inputs/outputs per component) and in terms of performance (as we have seen with the throughput limitations);

– the limitation in terms of power and energy consumption is often important, as an embedded system is only a part of a host system which may itself have a limited energy supply. This is clear in the case of any battery-powered system: the size and weight of a battery are linked to its energy capacity, while its charge and discharge speeds are related to its power;

– software and hardware flexibility requirements are often considerable. On the one hand, flexibility allows for better integration of the system into a wide range of applications. On the other hand, it allows the system to evolve over time through software and/or hardware updates thanks to the use of programmable memory and reconfigurable hardware circuits.

We could continue to enumerate the many constraints encountered; even this first list already reduces the design space considerably. The solutions to these constraints are well known and in constant evolution. The heterogeneity of the software and hardware resources available for these systems, and the use of joint design flows (e.g. codesign), are examples of solutions currently in use.

5.2.2. Security issues in embedded systems

There are many intermediaries involved in the development, manufacture, implementation and use of an embedded system. The number of parties involved depends on the complexity of the application. The various entities’ requirements in terms of security (or protection) are not identical.


For example, consider a mobile system for remote communications and multimedia applications (music, video on demand, etc.). As a first approach we can consider six separate groups participating in the life of the system (from development to use), each with their own security and protection requirements:

– manufacturers of software (e.g. operating systems) or hardware (virtual digital components, integrated circuits for baseband processing, discrete radio frequency components, etc.) components need to protect their intellectual property. In a highly competitive market, industrial espionage brings a risk of considerable financial losses. Companies involved at this level of design need to protect themselves against attackers intending to illegally copy a component in order to sell it as their own product, or to study a component in order to quickly offer the market an improved competitor part (this is known as reverse engineering);

– embedded systems manufacturers (integrators of software and hardware components) face the same security problems. They also have to be on their guard against industrial espionage;

– service providers must be wary of attempts to make fraudulent use of their services. They must secure access to services and set up systems to ensure that anybody using the service has permission to do so;

– suppliers of applications (such as remote payment services) must provide users with the necessary safeguards to protect their data. They must also ensure user authentication. Finally, they may also have to face intellectual property protection issues for their applications;

– content providers (music, video, etc.) need to consider digital rights management (DRM). This is to ensure that any use of content is permitted and to prevent unauthorised copying. The emergence of peer-to-peer file sharing techniques has highlighted the problem of protecting rights of use for digital content;

– the end user can store more and more personal information in his or her system, including sensitive data (consider the case of professional mobile phone use). This information must be protected against theft. Systems such as mobile phones will soon be able to embed electronic payment applications (with RFID1 chip integration, for example). For such applications, the system must be able to guarantee the user that he or she is the only one able to make a payment.

As we have seen, there are many parties concerned with security in embedded systems. The example that we propose is academic but can easily be transposed to other examples with a greater or smaller number of stakeholders.

1. RFID (Radio Frequency Identification) technology allows communications at very small distances (around fifteen centimetres), and is often used for contactless payment applications, such as in public transport (e.g. the Bordeaux city tram system).


It should be noted that, for one system, different parties may well be victims or attackers. Attacks may involve, for example, destroying the system, altering its normal operation (taking control, denial of service) or extracting sensitive information from it. To these ends the attacker may use either software or hardware techniques.

5.2.3. The main security threats

A secure system cannot be developed without a threat model. This model allows us to determine which attacks the system must be able to respond to, as well as their costs. It is possible, for example, to establish that the cost of implementing a given threat may be prohibitive in relation to the value of the target. In this case, the attack should not necessarily be seen as a threat (unless the attacker has considerable means and does not care about the cost of the attack).

The worst case scenario, i.e. the system most exposed to threats, is a communicating mobile system running an embedded operating system, as shown in Figure 5.1. Indeed, such systems are sensitive to attacks on the communication channel, and must provide confidentiality, authentication and non-repudiation services for the data transmitted over it. In addition, they are also sensitive to software and hardware attacks.

Figure 5.1. Threats to embedded systems


The communications channel can be used to send a malicious program, leading to what is called a software attack. When the system downloads the program, hostile code is executed in the form of worms, viruses, Trojan horses or logic bombs. The interested reader can consult [FIL 06] for a classification and description of these attacks. Embedded systems, like desktop and laptop computers, are potential targets for software attacks, since they incorporate an operating system [DAG 04].

These systems, like smart cards, are hardware systems that can store sensitive information and use security primitives to provide the services necessary to protect data exchanges (confidentiality, integrity, non-repudiation). These primitives include asymmetric encryption algorithms (using a public/private key pair), symmetric encryption algorithms (where a single secret key is used for both encryption and decryption), hash functions that produce a digest used to ensure data integrity, and combinations of these to offer, for example, non-repudiation services [MEN 96]. Once implemented in hardware, these primitives are susceptible to both invasive and non-invasive physical attacks.
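To make the role of these primitives concrete, the short sketch below (Python, standard library only) shows how a keyed hash, here HMAC-SHA-256, can produce the kind of integrity and authenticity mark mentioned above. The key handling and message format are invented for the illustration; they are not taken from any system discussed in this chapter.

```python
import hmac, hashlib, os

# Illustrative sketch: protecting a message exchanged by an embedded system
# with a keyed hash (HMAC-SHA-256). The shared secret key would normally be
# provisioned and stored inside the device's secure zone.
key = os.urandom(32)                      # 256-bit shared secret (example only)
message = b"sensor_id=42;temp=21.5C"

tag = hmac.new(key, message, hashlib.sha256).digest()   # integrity/authenticity mark

# Receiver side: recompute the tag and compare it in constant time.
received_tag = tag
if hmac.compare_digest(received_tag, hmac.new(key, message, hashlib.sha256).digest()):
    print("message accepted: integrity and origin verified")
else:
    print("message rejected: it was modified or forged")
```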

Invasive hardware attacks generally affect the integrity of the component which no longer operates correctly (or at all) after the attack has been carried out. To attack an integrated circuit, the attacker extracts the chip from its casing. Once it is uncovered, the chip can be dissected so that the attacker can study its internal circuitry [AND 01]. This is a reverse engineering process. This technique, currently used for industrial espionage, is costly and requires large amounts of hardware, as well as extensive technical skill.

Some hardware attacks are generally referred to as ‘invasive’ even though they do not involve destroying the circuit (such attacks can be called ‘semi-invasive’). An example is an attack by error injection. These errors can take various forms, such as clock or power supply voltage glitches (short parasitic transients) or fault injection with an intense light beam (such as a laser) [SKO 02]. Errors can be used to modify the functioning of the circuit [CHO 05] or to propagate a fault along the data path (‘Differential Fault Analysis’, or DFA) [GIR 03]. In both cases, the circuit is made to give up information that it would not reveal under normal operation. This technique allows sensitive data, such as an encryption key, to be extracted. Attacks by error injection also require expensive equipment and extensive technical skill. Obtaining a precise characterisation of the error injection, such as the physical location of the injection point for optical attacks, can be very time consuming and sometimes requires an invasive ‘pre-attack’ on the hardware with reverse engineering.


Non-invasive attacks are much simpler to implement and require much less equipment and skill. They are often called side channel attacks, as their principle is to analyse the behaviour of the circuit while it is in operation. The analysis can be carried out, for example, on its power consumption. If a correlation can be established between the measured power consumption and the key used by an encryption algorithm, then the key can be deduced by analysing the measured values. This is the case for a symmetric cryptography algorithm (the same secret key for encryption and decryption) such as AES [NBS 01] embedded in a hardware circuit. Indeed, the MOS (Metal Oxide Semiconductor) transistor technology used in such circuits has a dynamic power consumption which depends on the switching of the transistors and therefore on the internal signals. This property is exploited very effectively in the DPA (Differential Power Analysis) attack, introduced in 1999 [KOC 99]. This attack allows the 128-bit key used by the AES algorithm to be recovered within a few minutes using minimal equipment (an oscilloscope and a computer), even though the key itself is mathematically unbreakable using current computational methods. Attacks by differential electromagnetic analysis (DEMA) [QUI 01] or by timing analysis of output signals [KOC 96] also allow sensitive information to be extracted.
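The correlation principle behind DPA can be illustrated with a toy simulation. The sketch below assumes a deliberately simplified leakage model (the Hamming weight of a plaintext byte XORed with a key byte, plus noise) and synthetic traces; a real attack targets the non-linear S-box output of AES and works on measured traces, but the statistical idea, trying every key guess and keeping the one whose predicted leakage correlates best with the measurements, is the same.

```python
import random

# Toy illustration of the DPA/CPA principle on synthetic "power traces".
random.seed(0)
SECRET_KEY_BYTE = 0x3A

def hamming_weight(v):
    return bin(v).count("1")

plaintexts = [random.randrange(256) for _ in range(2000)]
# simulated measurements: leakage model plus Gaussian noise
traces = [hamming_weight(p ^ SECRET_KEY_BYTE) + random.gauss(0, 1.0) for p in plaintexts]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# For every key guess, predict the leakage and correlate it with the traces;
# the correct guess stands out (the S-box omitted here sharpens this further).
scores = {k: correlation([hamming_weight(p ^ k) for p in plaintexts], traces)
          for k in range(256)}
best_guess = max(scores, key=scores.get)
print(f"recovered key byte: {best_guess:#04x} (secret was {SECRET_KEY_BYTE:#04x})")
```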

Hardware attacks are numerous and varied. Due to their effectiveness and cost, they present an extensive range of attacks to which embedded systems can be vulnerable. Various countermeasures to these attacks are currently in development. Since research into attacks advances quickly, the protective measures have to be constantly updated. The important idea is that, once a system has been implemented into hardware, it becomes vulnerable to hardware attacks which can be surprisingly effective.

A combined software and hardware attack on the system and its communication channel can have serious repercussions for the security of the system and its data. This is therefore a complicated problem, for which solutions must be sought in a large software and hardware design space that we will explore later in the chapter.

5.3. Security of the system and its data

An embedded system is neither a desktop computer nor a microchip card. This is why it is necessary to propose security solutions which take account of the specifics of these systems, such as the design constraints presented in Section 5.2.1. Security in dedicated software systems (e.g. desktop computers) and secured hardware systems (e.g. chip cards) is currently an active field of study. It therefore makes sense to take inspiration from these systems when thinking about security in embedded systems.

5.3.1. The principle of deep security (ICTER project)

The ICTER [ICTER 06] project of the French National Research Agency (Agence Nationale de la Recherche) aims to analyse the security potential of reconfigurable hardware platforms for embedded systems. Reconfigurable hardware platforms (such as FPGA2 circuits) offer a compromise between the flexibility of microprocessor-based software platforms (thanks to updates of the structure by reconfiguration) and the performance of application specific integrated circuits (ASICs) (thanks, for example, to parallel implementation of algorithms) [BOS 04a].

To begin this study, the participants in the ICTER project suggest looking at security from a software point of view and from a hardware point of view. From the hardware perspective, they suggest looking for solutions to various attacks from the system level to the physical (technological) level. Figure 5.2 shows a schematic pyramid diagram illustrating the concept of in-depth security.

Figure 5.2. Security pyramid: towards a deep layered defence

2. FPGAs (Field Programmable Gate Arrays) are digital integrated circuits. They are made up of a large number of elements with configurable functions (logical, arithmetic, memory, and input-output elements) often arranged in a matrix. These elements are linked by a very dense network of configurable interconnections. The configuration is saved to SRAM or Flash memory, or with fixed antifuse elements [BOS 06].


This deep approach to security can be contrasted with a ‘surface’ approach. The vision proposed by the ICTER project is to consider the notion of security at every level, software and hardware, of the design process. Indeed, if the designer develops a security solution at a particular level but neglects the level above or below, he can leave a gap in security which an attacker will be quick to exploit using a variety of software and/or hardware attacks.

We can draw an analogy with the design of a classical low-power system (an area well known to embedded systems designers): optimisations introduced at a higher level can be undone by a less well optimised technology elsewhere, and vice versa [HAV 00].

On the left side of Figure 5.2, a few well known attacks are listed; they allow us to identify the characteristics that an embedded system should have in order to be secure (in terms of both system and embedded data security). We describe these attacks in detail in this chapter.

5.3.2. Properties of a secured embedded hardware system

A secured embedded system should have certain properties. Not all of these attributes need to be present in a single system; which ones are required depends on the threats that we wish to address. In order to reinforce system security at the hardware level, and to prevent attacks or make them more difficult, the following points should be considered [GOG 06]:

– the system should be continually aware of its state and notably of its own weaknesses in order to be able to react if necessary. The system should be security aware.

– the system must be able to analyse its own state and that of its environment in order to detect any abnormal activity. It should incorporate embedded sensors and monitors in order to assess its activity. The system should be activity aware.

– the system should be capable of reacting rapidly to any attack, and of anticipating attacks. The system should be agile.

– the system should be able to update its own software and hardware security mechanisms depending on how attacks evolve. The system should be able to evolve.

– the system must not let information escape (data leakage), as this could lead to passive attacks. The system should be symptom free.

– the system should be able to withstand physical attacks. The system should be tamper resistant.


In parallel, the system should also offer the high performance needed to comply with its usage specifications. Throughput, latency, size, power and energy are all parameters which should be addressed simultaneously in order to effectively support current and future applications. It is therefore necessary to develop design flows that treat security as an additional constraint while still respecting the other constraints [SCH 03].

5.3.3. Hardware security solutions

An embedded system is a complex and often highly heterogeneous system. It typically contains programmable parts (microprocessors), internal communication systems (bus, network on chip), memory (instructions, data, configurations), control units, inputs and outputs, dedicated hardware (reconfigurable or not) and various peripherals (e.g. for external communication). All these parts can be cleverly diverted from their intended function during an attack, but they can also be used for system and data protection. The goal is therefore to provide the system with the characteristics listed above.

5.3.3.1. System level hardware security solutions

At this level we consider the system as a whole. It is at this level that it is possible to permanently analyse the internal and external activity of the system in order to detect any irregular operations. The external activity is measured by sensors (temperature, input power voltage, etc.) and the analysis compares measurements taken in current operation with the corresponding properties expected in normal operation [WOL 06]. The system should include a controller to automatically detect any suspicious external activity. Internal activity is inspected by monitors placed on the internal communications network(s). Data exchange between the various components of the internal architecture of the system is monitored and compared with the activity expected during normal operation [ARO 05].

Whether we are thinking of internal or external monitoring, we must have precise knowledge of the system’s behaviour during normal operation. This is not always simple, since normal operation can encompass a broad spectrum of behaviour. If the spectrum is too broad, there is a risk of the system’s operation being altered by undetected attacks. If it is too narrow, there would be a risk of false alerts due to normal fluctuations in the environment (such as an increase in temperature). In both cases, the system should behave deterministically (i.e. certain calculations should not proceed randomly).
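As an illustration of this trade-off, the following sketch implements a deliberately simple external-activity monitor: a ‘normal’ profile is first characterised from calibration measurements, and live samples are then accepted or rejected according to an acceptance window whose width is the tuning parameter discussed above. The sensor values and threshold are invented for the example.

```python
import statistics

# Illustrative sketch of an activity monitor: "normal" behaviour is first
# characterised (here, by the mean and standard deviation of a temperature
# sensor), then live measurements are compared against that profile. The width
# of the acceptance window embodies the trade-off discussed above: too wide and
# attacks slip through, too narrow and normal fluctuations raise false alerts.
calibration = [24.8, 25.1, 25.0, 24.9, 25.3, 25.2, 24.7, 25.0]   # normal operation
mean = statistics.mean(calibration)
sigma = statistics.stdev(calibration)
K = 4.0   # window width in standard deviations (tuning parameter)

def check_sample(value):
    """Return True if the measurement is consistent with normal operation."""
    return abs(value - mean) <= K * sigma

for sample in (25.1, 26.0, 48.5):        # the last value mimics a heating/laser attack
    status = "normal" if check_sample(sample) else "ALERT: abnormal activity"
    print(f"temperature {sample:5.1f} C -> {status}")
```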


When abnormal activity is detected, the system must react. It must therefore be agile. For example, it might have to go from its normal operational configuration to a secured mode. In this mode, some data might be destroyed (since it is sometimes preferable to lose data than to reveal it), some functions might be blocked (such as communications with the outside), and the user’s expertise might be needed. To be able to introduce these services, the system must be reconfigurable. This can be implemented by means of a programmable system or with reconfigurable hardware. The latter solution, although potentially complex to implement, is effective since it allows the system to remain highly capable in normal operation [GOG 06].
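A minimal sketch of such an ‘agile’ reaction is given below: on an alert, the controller erases its sensitive material, blocks external communications and switches to a secured mode. The class and attribute names are hypothetical and merely illustrate the policy described above; in a reconfigurable platform the mode change would typically be implemented by loading a trusted alternative configuration.

```python
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    SECURED = auto()

# Minimal sketch of an "agile" reaction policy: on detection of abnormal
# activity the system switches to a secured configuration in which sensitive
# data is erased and external communications are blocked.
class SecureController:
    def __init__(self):
        self.mode = Mode.NORMAL
        self.session_keys = {"link": b"\x13" * 16}   # sensitive material (example)
        self.io_enabled = True

    def on_alert(self, source):
        print(f"alert from monitor '{source}': entering secured mode")
        self.session_keys = {}        # better to lose the data than to reveal it
        self.io_enabled = False       # block communications with the outside
        self.mode = Mode.SECURED
        # on a reconfigurable platform, a trusted configuration implementing
        # the degraded mode could be loaded here

controller = SecureController()
controller.on_alert("bus_monitor")
print(controller.mode, controller.io_enabled, controller.session_keys)
```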

In all cases it is necessary to rely on a secure internal communications network. For example, it may be effective to encrypt the data exchanged on the bus. However, this solution may strongly degrade the speed of the system (operating frequency, input-output bitrate). This is why it is very worthwhile to consider the security of the exchanged data from the design stage of the communications network onwards. For example, [EVA 05] proposes an interesting solution for networks on chip which integrates a data security system.

5.3.3.2. Hardware security solutions at architecture level

At this level a single module is considered (microprocessor, hardware accelerator, memory, etc.). The architecture of these modules should be flexible, efficient and reliable (fault tolerant), without leaking information over side channels.

Several studies have been carried out on the efficient implementation of the algorithms providing data confidentiality, integrity and non-repudiation services (asymmetric and symmetric encryption and hash algorithms). Indeed, there can be a large gap between the mathematical expression of an encryption algorithm and its software and/or hardware realisation. For example, during the development of the AES symmetric encryption standard [NBS 01], an international competition was held in which the efficiency of software and hardware implementations was a key selection criterion. The winners were the Belgian researchers J. Daemen and V. Rijmen, thanks in part to the efficiency of their algorithm in hardware implementations; their algorithm, Rijndael, is now the basis of the AES standard [DAE 02].

One class of target is particularly prized for the hardware implementation of data security algorithms: FPGA reconfigurable hardware circuits. Indeed, the architecture of these circuits is built from a very large number (several tens of thousands) of fine grained logic elements (typically able to implement any four-input, one-bit logic function). This granularity is very well adapted to the calculations used in several encryption algorithms [WOL 04]. Furthermore, when these components are made using SRAM or FLASH technology (around 90% of the market), they are reconfigurable. This means that the algorithm embedded in the circuit can be modified by hardware reconfiguration, enabling the system to evolve over time through hardware updates. In this way, an algorithm can be replaced with a newer version, or a new architecture integrating countermeasures against attacks can be deployed.

As we have already seen, a hardware implementation of an encryption algorithm can be attacked by error injection (in order to find the encryption or decryption key). Error injection techniques largely originate from work on the testing and reliability assessment of integrated components, work which also led to techniques enabling a component to resist such faults. This is called fault tolerance. The same techniques can be used to secure a hardware system against attacks by error injection. For example, redundant architectures can execute the same calculation in parallel, so that if one of them is attacked the result can be validated by a majority vote among the others [KAR 02]. Of course, this type of solution leads to considerable overheads in terms of silicon area and power consumption. Error detection and correction techniques (also used in telecommunications to counter errors produced by a noisy channel) can be used at a lower additional cost [BER 03].
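The majority-vote idea can be sketched in a few lines. The example below replicates a stand-in computation three times and injects a fault into one replica; the vote masks the fault, and the absence of a majority would signal an uncorrectable attack. This only illustrates the principle; real designs replicate hardware blocks, not function calls.

```python
from collections import Counter

# Sketch of redundancy with majority vote: the same computation runs on three
# replicas; a fault injected into one replica is outvoted by the other two.
def protected_round(x):                # stand-in for the protected computation
    return (x * 31 + 7) & 0xFF

def faulty_round(x):                   # replica disturbed by a glitch/laser fault
    return protected_round(x) ^ 0x40   # one flipped bit in the result

def majority(results):
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: uncorrectable fault detected")
    return value

data = 0x5A
replicas = [protected_round(data), faulty_round(data), protected_round(data)]
print(f"voted result: {majority(replicas):#04x}  (replica outputs: {replicas})")
```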

At the architectural level, it is possible to reduce or remove some of the side channel information exploited in non-invasive attacks (see Section 5.2.3). For example, power consumption is used in DPA attacks to retrieve the key of a symmetric encryption algorithm. An internal device can scramble this information (by adding consumption noise) or smooth it, as in [MES 05]. The DPA attack, as we have explained, exploits the correlation between the encryption key used when generating the ciphertext and the power consumption of the circuit during this operation. This correlation can be eliminated by adding a mask (a random number) into the computation flow [MES 01]. This addition has no effect on the outcome or the complexity of the calculations.
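A minimal sketch of Boolean masking is shown below for a purely linear operation (an XOR with a key byte): a fresh random mask is applied before the computation and removed afterwards, so the manipulated intermediate value is statistically independent of the secret. Masking the non-linear parts of a cipher (such as the AES S-box) requires recomputed masked tables, which are omitted here.

```python
import os

# Sketch of Boolean masking: a fresh random mask is XORed into the sensitive
# value, the (linear) computation is carried out on the masked value, and the
# mask's contribution is removed at the end. The handled intermediate value
# (data XOR mask) is statistically independent of the secret.
def masked_xor_with_key(data_byte, key_byte):
    mask = os.urandom(1)[0]                 # fresh random mask for this execution
    masked_data = data_byte ^ mask          # the device only ever handles this value
    masked_result = masked_data ^ key_byte  # linear operation commutes with the mask
    return masked_result ^ mask             # unmask: same result as data ^ key

assert masked_xor_with_key(0x3C, 0xA7) == 0x3C ^ 0xA7
print("masked computation gives the expected result")
```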

True random number generators are circuits that are very widely used in contemporary secured or security systems. They are used to generate masks for countermeasures, but most often they are used to generate the keys used by encryption algorithms. The difficulty with these generators is proving, probabilistically and categorically, that they really do produce a random bit stream [NEC 01]. Pseudo-random generation always carries a considerable risk of somebody reconstructing the keys. In order to satisfy the criteria for randomness, these generators most often exploit random physical phenomena which are considered as nuisance noise by circuit designers, but which can be turned to our advantage in this case (we should note that security does not always pull in the same direction as classical design). An example of such a physical phenomenon in digital circuits is clock jitter. This is a phase noise caused by the accumulation of several sources of noise in the semi-conductor3. Simple set-ups implementing frequency control by phase locking (PLL, Phase-Locked Loop) allow efficient random number generators to be produced [FIS 04].

3. Three noises cause the appearance of jitter. Scattering noise, caused by interactions between electrons in the circuit and the crystal, is due to the random movements of charge carriers. Excess noise (such as flicker noise) varies as 1/f and is due, among other things, to variation in the conductivity of materials. Junction noise (such as shot noise) is due to charge carriers crossing a potential barrier.
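Raw bits from a jitter-based source are rarely unbiased, so such generators normally include post-processing and online health tests. The sketch below, which simulates a biased raw stream, shows a classic von Neumann corrector and a simple bias measurement; it is only an illustration of this kind of post-processing, not the design used in [FIS 04] or a certified test suite.

```python
import random

# Illustrative post-processing for a raw TRNG bit stream. A jitter-based source
# typically delivers biased bits; the von Neumann corrector removes bias at the
# cost of throughput, and a simple bias measurement gives a (very partial)
# sanity check of the output. This is a sketch, not a certified TRNG design.
random.seed(1)
raw_bits = [1 if random.random() < 0.6 else 0 for _ in range(20000)]   # biased source

def von_neumann(bits):
    """Keep one bit per unequal pair: (0,1) -> 0, (1,0) -> 1, discard (0,0)/(1,1)."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

corrected = von_neumann(raw_bits)
bias_raw = sum(raw_bits) / len(raw_bits)
bias_out = sum(corrected) / len(corrected)
print(f"raw bias: {bias_raw:.3f}, corrected bias: {bias_out:.3f}, "
      f"throughput kept: {len(corrected)}/{len(raw_bits)} bits")
```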

5.3.3.3. Hardware security solutions at the logical level

At this level, we consider logic gates (AND, OR, XOR, NOT, etc.). For security, the essential idea is to construct gates which let no information at all escape through side channels, making it much more difficult to mount a hardware attack by side channel analysis.

Several techniques have been developed for this purpose in recent years. One major approach is Dual-Rail Pre-charge Logic (DPL), which duplicates the logic into two complementary rails [TIR 04]. With this technique, whenever a transistor TA switches from the blocked state to the saturated state4, a dual transistor TA_DUAL switches from the saturated state to the blocked state, and vice versa. The combined dynamic power consumption of the pair (TA, TA_DUAL) therefore does not depend on the data: for each switch of TA there is always one transistor switching from blocked to saturated and one switching from saturated to blocked. However, it is easy to see that the dynamic power consumption is doubled. Furthermore, the increase in the number of transistors in the circuit inevitably increases leakage, and therefore the static power consumption of the circuit.

4. A blocked transistor can be modelled approximately by an open switch, a saturated transistor by a closed switch.

This technique has been improved. For example, in [POP 06] the authors propose making the power consumption random rather than constant. Their technique (MDPL, Masked DPL) combines DPL with a random masking technique.
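The data-independence argument behind DPL can be illustrated behaviourally: each logical signal is carried by a pair of complementary rails that are both pre-charged to 0, and exactly one rail of each pair rises during evaluation whatever the data. The sketch below counts those transitions for a dual-rail AND gate; it is a behavioural illustration, not a gate-level netlist, and it ignores the masking refinement of MDPL.

```python
# Sketch of the dual-rail pre-charge idea: every signal a is carried by the
# pair (a, not a). Whatever the data, exactly one wire of each pair rises
# between the pre-charge phase (both rails low) and the evaluation phase, so
# the number of transitions, used here as a crude consumption model, is constant.
def dual_rail_and(a, b):
    """Dual-rail AND: returns the (true, false) rail pair of a AND b."""
    return (a & b, 1 - (a & b))

def rising_transitions(rails):
    """During pre-charge both rails are 0; count the rails that rise in evaluation."""
    return sum(rails)

for a in (0, 1):
    for b in (0, 1):
        rails = dual_rail_and(a, b)
        print(f"a={a} b={b} -> rails={rails}, rising transitions="
              f"{rising_transitions(rails)}")   # always 1, independent of the data
```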

Technologies based on asynchronous logic have intrinsic protection against attacks by power consumption analysis (DPA) and against attacks by error injection (DFA) [MON 06]. However, there are currently few easily usable circuits on the market that would allow them to be integrated into complete systems. The design of asynchronous systems is complex, and automated design flows still have severe shortcomings which make them difficult to use industrially. Nevertheless, some research teams in France are investigating advances in this field, such as the SAFE project, whose team is developing an FPGA circuit in asynchronous logic [SAFE].

5.3.3.4. Hardware security solutions at the physical level

Transistors and the physical manufacturing processes are considered at this level, in order to physically protect the component and its design. Hardware techniques must be implemented to improve resistance to attacks [AND 01]. It is also essential to devise sensors that can analyse a component’s state in order to prevent and detect attacks [CRA 02].

Electronic circuit manufacturing processes are complex and become more and more delicate with increasing miniaturisation (today the most advanced technologies are 65 nm and 45 nm). Small variations inevitably arise during manufacture. These are enough to clearly distinguish one circuit from another, even if the two circuits were neighbours on the same silicon wafer. The differences are easy to measure on the lines connecting the transistors. This property of integrated circuits is used to build circuit identification and authentication mechanisms that are useful in security systems. Functions known as physical unclonable functions (PUFs, also called physical random functions) are used for this [GAS 03].
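The following behavioural sketch models the idea of a PUF: per-device manufacturing randomness (simulated here by random offsets drawn once per device) yields a response that is stable across reads of the same device but different between devices built from the same design. Real PUFs additionally need error-correcting ‘helper data’ to cope with read noise; everything below is a simulation, not a description of the PUFs in [GAS 03].

```python
import random, hashlib

# Behavioural sketch of a PUF: uncontrollable manufacturing variations (modelled
# by per-device random offsets) yield a response pattern that is stable for one
# device but different from every other device implementing the same design.
class SimulatedPUF:
    def __init__(self, seed):
        rng = random.Random(seed)                     # stands in for process variation
        self.offsets = [rng.gauss(0, 1) for _ in range(128)]

    def read(self, noise=0.05):
        rng = random.Random()                         # measurement noise changes per read
        return [1 if o + rng.gauss(0, noise) > 0 else 0 for o in self.offsets]

device_a, device_b = SimulatedPUF(seed=101), SimulatedPUF(seed=202)
ra1, ra2, rb = device_a.read(), device_a.read(), device_b.read()
same_device = sum(x != y for x, y in zip(ra1, ra2))
other_device = sum(x != y for x, y in zip(ra1, rb))
print(f"intra-device differing bits: {same_device}/128, inter-device: {other_device}/128")
print("device A identifier:", hashlib.sha256(bytes(ra1)).hexdigest()[:16])
```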

The following section will present the architectures of secure systems which implement some of the solutions which we have just mentioned.

5.4. Secured hardware architectures for embedded systems

To illustrate the use of the hardware solutions described above, this section presents a few architectures which implement them, organised around three main protection objectives. This objective-oriented classification is not clear-cut, since some solutions can address several security problems at once; however, the security mechanisms and techniques implemented in each of the architectures presented respond principally to one set of objectives, which is why we have chosen this classification. The three types of objectives proposed are:

– protecting the operating system and embedded software against software attacks (viruses, Trojan horses, etc.), and protecting the embedded data,

– protecting intellectual property (DRM, design protection),

– protecting communications and security applications.


This chapter does not intend to give an exhaustive treatment of all secure architectures, industrial and academic, available today. It presents some interesting systems which will allow the reader to have an overview of current solutions.

5.4.1. Software and embedded data protection architectures

The users of programmable systems would like to benefit from an open system that is flexible and generic, in order to adapt swiftly to a large spectrum of applications. However, the same users also want to benefit, at the same time, from protection mechanisms which restrict access to sensitive data and from authentication mechanisms which ensure the integrity of that data. As we described earlier, once a system has been embedded it can come under threat from software attacks as well as hardware attacks.

Today, a certain number of secure programmable systems have been proposed in academic and industrial environments. This subsection studies some solutions which deal with the issue of data security. There are two sorts of data: user-linked data and application-linked data. Sensitive data for a user would be, for example, confidential personal data, passwords or encryption keys. With regard to the application, access to program code needs to be protected in order to limit the potential for software attacks. In both cases, the secured system should be able to generate and protect secret data, and also to share it with the outside world.

We will briefly review the principles of current software attacks before presenting some hardware protection solutions proposed in the literature. For this purpose we assume that the operating system is not secure and could be affected by malicious code. Once the attacker has control of the operating system, he or she may have complete memory access privileges (and perhaps also access to the battery), allowing the attacker to observe and modify the system. Furthermore, the attacker may have control of interrupts, giving access to the registers. The attacker could, for example, place a random value on the bus to cause a malfunction and observe the system’s behaviour (a spoofing attack). The attacker could modify the contents of the instruction memory by address permutation, for example to change the return address of a subroutine or interrupt routine (a splicing attack). Finally, the attacker can rewrite the data memory with a previous value of some data, so that the data is used again with the same erroneous value (a replay attack).

Therefore, the attacker must not be able to change instructions and data in memory without the processor noticing, and the instructions and data must be incomprehensible from outside the processor. The solutions proposed today rely on an inviolable hardware security zone, called a trust zone, trust area or secure area, which contains the processor, the cache memory or memories and the memory access controller. This zone implements hardware protection at the physical, logical and architectural levels in order to withstand known hardware attacks. Data leaving and entering the trust zone is encrypted in order to ensure protection at the system level. The trust zone contains hardware primitives allowing encryption and authentication of everything exchanged between the processor and the various memories (instructions, addresses, data).
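The sketch below illustrates the general memory-authentication principle used by such trust zones, without following any particular one of the architectures discussed next: each block written to external memory is tagged with a MAC computed over the data, its address and an on-chip write counter, so spoofed, spliced or replayed blocks fail verification on read. Key handling and data layout are invented for the example, and confidentiality (encryption of the block) is left out for brevity.

```python
import hmac, hashlib, os

# Sketch of memory authentication inside a trust zone: every block written to
# external memory is tagged with a MAC over (address, write counter, data). A
# forged value (spoofing), a block moved to another address (splicing) or an
# old value replayed at the same address (replay) then fails verification.
KEY = os.urandom(32)                                   # lives only inside the trust zone

def tag(addr, counter, data):
    msg = addr.to_bytes(4, "big") + counter.to_bytes(8, "big") + data
    return hmac.new(KEY, msg, hashlib.sha256).digest()

external_memory = {}                                   # addr -> (data, tag), off-chip
counters = {}                                          # addr -> write counter, kept on-chip

def secure_write(addr, data):
    counters[addr] = counters.get(addr, 0) + 1
    external_memory[addr] = (data, tag(addr, counters[addr], data))

def secure_read(addr):
    data, stored_tag = external_memory[addr]
    if not hmac.compare_digest(stored_tag, tag(addr, counters[addr], data)):
        raise RuntimeError("memory authentication failed: possible attack")
    return data

secure_write(0x1000, b"jump_table_entry")
old_block = external_memory[0x1000]
secure_write(0x1000, b"patched_entry")
external_memory[0x1000] = old_block                    # attacker replays the old block
try:
    secure_read(0x1000)
except RuntimeError as e:
    print(e)
```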

The CryptoPage-2 architecture proposed by the Ecole Nationale Supérieure de Télécommunications (ENST) in Brittany, France [LAU 03], and the architecture of the AEGIS secured processor developed by researchers at MIT in the USA [SUH 03], are very close to the software protection architecture that we have just described. The AEGIS architecture is shown schematically in Figure 5.3, with the secured and unsecured zones clearly separated. Furthermore, a second version of this processor uses physical unclonable functions (PUFs), which exploit the small differences in the lengths of microelectronic connections due to manufacturing variations to provide an identifier unique to each circuit. These identifiers are used to generate unique encryption keys [SUH 05].

Figure 5.3. The AEGIS secured processor architecture

These architectures are close to the XOM (‘Execute-Only Memory’) architecture proposed by Stanford University [LIE 00]. The main difference is that each block of instructions, corresponding to a task to be executed on the processor, is encrypted with a symmetric key, and each symmetric key is itself encrypted with an asymmetric encryption algorithm. To read a block of instructions, the XOM processor must first decrypt the corresponding symmetric key and then decrypt the instructions. This method, although more secure, is slower and consumes more silicon resources than the use of a single symmetric encryption with the key stored in the secure zone. Indeed, the secure zone must incorporate two different encryption/decryption units and have enough memory to store the results of the successive decryptions.

Key management in the secure zone is a sub-system that is often complex and needs to be developed carefully since all the security of the encryption algorithm relies on protecting the keys used. This is especially important in the architecture of the TSM (‘Trusted Software Module’) secured processor proposed by the University of Princeton in the USA [LEE 05], shown schematically in Figure 5.4. The various keys (there can be very many of them) are managed hierarchically within the architecture. Each key used is calculated from a parent key. The highest level key, the User Master Key, has no parent. This key needs to be particularly secure since all the other keys are bound to it hierarchically. The master key is associated with the user and not with the circuit as in the case of AEGIS. The key-user coupling can be made with biometric information. The other keys can be used by the peripherals of the TSM processor, but the master key can only be read and used by the user.
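The hierarchical key organisation described above can be sketched with a one-way derivation function: each key is computed from its parent and a label, so the whole tree can be rebuilt from the master key while a compromised child reveals nothing about its parent. The labels and the choice of HMAC-SHA-256 below are illustrative assumptions, not the TSM specification.

```python
import hmac, hashlib

# Sketch of hierarchical key derivation in the spirit of the description above:
# every key is derived from its parent with a one-way function, so compromising
# a child key reveals nothing about the parent, and the whole hierarchy can be
# rebuilt from the single master key.
def derive(parent_key, label):
    return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

user_master_key = hashlib.sha256(b"biometric template || user secret").digest()  # example
storage_key = derive(user_master_key, "storage")
comm_key = derive(user_master_key, "communications")
session_key = derive(comm_key, "session-2024-01")

for name, k in [("storage", storage_key), ("comm", comm_key), ("session", session_key)]:
    print(f"{name:8s} key: {k.hex()[:16]}...")
```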

The PE-ICE project developed at the University of Montpellier, France [ELB 06], proposes reinforcing security in systems such as AEGIS by adding an authentication block. This is done by adding some extra bits to the data before it is encrypted. The bits are calculated from a random number and the storage address of the data in memory (in order to avoid spatial reallocations of data in memory). The main drawback to this method is the increase in the amount of data that needs to be stored in memory.

Figure 5.4. Structure of the TSM (Trusted Software Module) architecture

The solutions presented above are effective at fulfilling security objectives, but they also have an adverse effect on performance. The time taken to encrypt and decrypt instructions and data is not negligible compared with the time normally needed to read and write data in memory. To address this issue, the OTP-CRC (‘One-Time Pad and Cyclic Redundancy Code’) project at the University of South Brittany [VAS 07] proposes encrypting the data by simply adding (XORing) a one-time key stream generated by a symmetric encryption algorithm (AES). An error detection code classically used for bus communication (CRC) is then used to verify the integrity of the decrypted data. To reduce the amount of memory needed (for data or instructions), the same research team proposes combining a dictionary-based compression algorithm with the encryption phase [WAN 07].

The processor architectures proposed above are secure but relatively complex to implement. For industrial embedded systems with strong integration constraints, a simpler, higher performance secured processor is needed. This is what ARM, a company whose processors are extensively used in the embedded systems world, proposes with its TrustZone extension [ARM 07]. This processor control system adds between 15,000 and 20,000 logic gates, which represents only about 5% of additional silicon for an ARM11 processor core. When a software attack is detected, using privilege controls (access control for reading and writing to memory), the TrustZone system switches the processor from its normal configuration to a secured configuration.

These different secure processors can be used in many applications requiring different levels of security, but applications for intellectual property protection may need additional security systems, as we will see later on in the chapter.

5.4.2. Architectures for protection of intellectual property

Intellectual property protection is a large field which includes security issues. The expression ‘intellectual property protection’ covers some very different applications: for example, digital rights management (DRM) for multimedia content, or design protection against industrial espionage. In the latter case, the design of the system must be protected from copying and reverse engineering.

In the design of industrial systems there is a further issue of design security and confidentiality. Because of their ever-increasing complexity, current electronic systems require joint development between several parties (subcontractors, suppliers, customers). Each party may wish the part of the system that it develops to be seen as a black box by the others. Although it is advisable to first establish a legal framework to prevent such problems, such as a Non-Disclosure Agreement (NDA), technical solutions can help reinforce it. This is the purpose of the CodeGuard solution proposed by the company Microchip for some of its microcontrollers [MIC 07]. This hardware solution allows the read and write privileges for the instruction and data memory to be enforced by controlling the microcontroller’s internal registers. Each party involved in the design (in this case there can be up to three) can be given a memory space and a security level. There are three security levels, the most secure being strongly read and write restricted and the least secure being without restriction. This solution, although very limited, enables the joint development of an application for a programmable system while respecting the design information of the various parties.

The Security-Enhanced Communication Architecture (SECA) is a system-on-chip (SoC) architecture for mobile telephony applications, centred around a controller of addresses, data and transactions on an AMBA bus5. This architecture was proposed by the NEC laboratory in the USA [COB 05] for DRM applications. It offers three types of protection via specific control units bound to a global controller, the SEM (Security Enforcement Module). This module is directly connected to the AMBA bus and controls data exchanges between the processor(s), the various memories (instructions, data) and the peripherals. The module controls the access privileges of each component towards an addressable space in memory or towards a peripheral (address-based protection). In addition, it monitors the data entering certain memory areas or certain peripheral registers (data-based protection). Finally, sequences of transactions between the different components of the architecture are monitored in order to verify the behaviour of the system (sequence-based protection).

Figure 5.5. An example of the SECA architecture with two processors

The SEM security controller, as we see from the schematic of the SECA architecture in Figure 5.5, is the central component for communications control.

5. AMBA is a free on-bus communications standard for systems-on-chip developed by ARM, available at: http://www.arm.com/products/solutions/AMBAHomePage.html.


In order to control addresses, data and access sequences while keeping complexity limited, the controller uses three separate units: a unit for controlling addresses (the address-based protection unit), based on a table of read-write access privileges; a unit for controlling the data in use (the data-based protection unit), based on a look-up table (LUT) which checks each component’s level of access to the data; and a transaction control unit (the sequence-based protection unit), based on a finite state machine constructed from a study of normal operational behaviour.
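As an illustration of the first of these units, the sketch below implements an address-based protection table in the spirit of SECA: each bus master is granted read/write rights over address ranges, and every transaction is checked before being forwarded. The masters, address ranges and rights are invented for the example.

```python
# Sketch of an address-based protection unit: each bus master is given
# read/write rights over address ranges, and every transaction is checked
# against this table before being forwarded by the security enforcement module.
ACCESS_TABLE = {
    "cpu": [((0x0000_0000, 0x0FFF_FFFF), {"r", "w"})],
    "dma": [((0x2000_0000, 0x200F_FFFF), {"r"})],
    "dsp": [((0x1000_0000, 0x1FFF_FFFF), {"r", "w"})],
}

def check_transaction(master, addr, access):
    for (lo, hi), rights in ACCESS_TABLE.get(master, []):
        if lo <= addr <= hi and access in rights:
            return True
    return False     # the transfer is blocked

for master, addr, access in [("cpu", 0x0000_1000, "w"),
                             ("dma", 0x2000_0010, "w"),     # DMA tries to write: denied
                             ("dsp", 0x0000_1000, "r")]:    # DSP reads CPU space: denied
    verdict = "allowed" if check_transaction(master, addr, access) else "BLOCKED"
    print(f"{master:3s} {access} @ {addr:#010x} -> {verdict}")
```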

This architecture is flexible, the security controller can be limited for example to controlling the data used, such as in the application in the SOC NEC MP211 for mobile telephony [COB 05]. This architecture can be efficiently implemented for protection of digital rights in multimedia applications.

The solutions brought by the CodeGuard and SECA systems address the problems of programmable systems centred around one or more microprocessors. However, for the sake of performance, hardware systems are increasingly used, centred around specialised circuits such as ASICs or around reconfigurable circuits such as FPGAs. In the latter case, the design of the system is held in a configuration file called a bitstream. If a competitor can easily extract this file, they can copy it, or even come to understand it through reverse engineering. FPGAs based on SRAM technology have a critical security weakness: the configuration is held on-chip in volatile SRAM, so in order not to lose it whenever power is removed, the bitstream must be stored in external FLASH or ROM memory. Thus, every time the system is turned on, it loads the bitstream from the external non-volatile memory into the internal configuration memory of the FPGA, and it is easy for an attacker to read the bitstream during this transfer. To overcome this flaw, manufacturers of SRAM FPGAs such as Xilinx and Altera propose storing an encrypted bitstream in the external memory and decrypting it inside the FPGA (with a built-in decryption circuit). This solution, although simple to implement, is very rigid and leaves little choice to the developer. Studies have shown, however, that it is possible to provide more complete and flexible bitstream protection services [BOS 04b].

The design of hardware systems relies more and more on the use of virtual IP (intellectual property) components for the sake of efficiency (reduced time to market). IP trading is now an important market which is also subject to security problems, and IP protection is an important issue for its development. Its role is to enable sellers to protect their IP against unauthorised use or fraudulent resale, while ensuring the traceability of IP for legal purposes. Marking techniques (such as watermarking or fingerprinting) can be used to meet these requirements. Among the many ways of marking hardware IP, we can cite changing the dimensions of routing lines to add undetectable physical information, using free silicon resources (such as unused LUTs in a configured FPGA), or modifying the parameters of the algorithm. This is a vast subject which would merit a chapter in its own right, so the authors direct the interested reader towards [ABD 04, VSI 06] and [YUA 06].

5.4.3. Crypto-architecture for protecting communications and security applications

The architectures presented above integrate security systems and primitives (encryption algorithms, for example) which are used internally to protect sensitive information (keys, instructions, data). In crypto-processors (or crypto-coprocessors), the security primitives are used in the framework of security applications such as secured chip cards, secured telecommunications and protocols (IPsec, for example) and VPNs (virtual private networks). Most often taking the form of a coprocessor, a crypto-processor generally incorporates, depending on the target application, encryption primitives (symmetric or otherwise), hash functions for authentication, key generators based on random number generators, and a key storage and management system. The inside of the crypto-processor is physically secured against hardware attacks. In certain systems, sensors are embedded to monitor the internal and external environment of the circuit and detect attacks.

The first security processors developed according to this principle were intended mainly for network security applications. IBM is one of the main designers of crypto-processors for these applications (such as the 4764 PCI-X processor6). These processors are principally installed in servers, whose development and performance constraints are far removed from those of desktop and laptop computers. This is why a consortium was formed to develop a standard for crypto-processors with these goals in mind: the Trusted Computing Group (TCG) brings together companies from the IT sector in order to define open standards that meet the needs of security applications [TCG 07].

Among these various works, the TCG proposes a crypto-coprocessor architecture called TPM (Trusted Platform Module) [TCG 07]. A schematic block diagram of this architecture is shown in Figure 5.6. Here we see on the right the security primitives used, depending on the instructions to be executed. We can model these primitives using a specialised super-ALU which is used to perform the operations required for security applications. Of course, the use of such architectures requires the development of an operating system and software capable of using them effectively.

6. http://www-03.ibm.com/security/cryptocards/pcixcc/overview.shtml.


The TCG has recently initiated work to develop a TPM standard for mobile and wireless embedded systems (mainly mobile telephones, PDAs and ultra-portable computers). However, crypto-processors for embedded systems have already been developed and commercialised. Texas Instruments proposes a secured coprocessor for the third generation of its OMAP processor (TI OMAP 34307) for embedded mobile applications. It is based on the ARM TrustZone technology [ARM 07]. This processor implements symmetric encryption functions (AES, DES and triple-DES), hash functions (SHA-1, MD5), a random number generator and a key management system. In the same vein, the company Discretix has developed the CryptoCell8 processor for mobile applications. This coprocessor embeds asymmetric encryption functions (RSA, ECC and DH), symmetric encryption functions (AES, DES, triple-DES, RC4), hash functions (SHA-1, SHA-256/384/512, MD5, HMAC), a random number generator and a key management system.

Figure 5.6. Architecture of a TPM (Trusted Platform Module)

So the potential exists for the development of embedded systems capable of supporting security applications. However, the solutions proposed today only meet current requirements. In a more long term perspective, it is indispensable to propose new architectures capable of evolving in time (updating algorithms) while guaranteeing high performance. The SANES architecture presented in the following section is one of these new evolutionary architectures.

7. http://www.ti.com/omap. 8. http://discretix.com/CryptoCell/.


5.4.4. Case study: SANES, a reconfigurable secured hardware architecture

The SANES (‘Security Architecture for Embedded Systems’) reconfigurable architecture was developed jointly by the University of South Brittany in France and the University of Massachusetts in the USA [GOG 06]. It puts the conceptual principles mentioned in Section 5.3.3 into an architectural form. The architecture uses monitors to detect abnormal behaviour in the system, and hardware defence mechanisms can be deployed to counter attacks. The security mechanisms can be updated if necessary (dynamically), which ensures the durability of the protection system.

Figure 5.7 gives an overview of the architecture. As we can see, several monitors are used to watch various sources of information in the system. The number and complexity of the monitors are obviously important parameters, because they directly affect the additional cost of the security architecture as well as the level of security provided. The role of these monitors is to detect attacks on the system. For this purpose, the normal activity of the modules under surveillance is characterised and continuously compared with the actual activity of the system.

The notions of autonomy and adaptability of the monitors are important if we are to build an effective surveillance network. The monitors are autonomous so that the surveillance network is fault tolerant: if one monitor is attacked, the others must be able to continue guaranteeing the security of the system. The monitors are distributed at different locations within the system, so that the weak points of the architecture (such as the battery, the bus, the security primitives and the communication channels) can all be watched.


Figure 5.7. The SANES reconfigurable secure architecture

Different levels of reaction can be considered depending on the type of attack which the system must face. Reflex reactions are taken directly by a monitor without consulting the other security units; in this case the reaction is very fast. Global reactions are implemented when an attack requires a significant modification of the system; in this case the monitors exchange information in order to define a new configuration. Such a scenario allows more complex attacks to be detected, but also implies a longer reaction time. The monitors are connected through a secure on-chip network. This network is also connected to a global control unit called the SEP (Security Executive Processor), whose role is to ensure a secured link between the outside environment and the system. The SEP controller includes a software layer which allows new monitors to be instantiated remotely and the security policies of the existing monitors to be updated. In case of abnormal behaviour, the SEP controller can take control of the system at the hardware level: for example, it can override battery level management or disconnect the inputs and outputs in order to thwart an attack.
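The sketch below summarises these two reaction levels. The SEP services it calls are assumptions made for the example rather than the actual SANES interface; they merely show that a reflex reaction stays local and fast, while a global reaction is escalated to the SEP.

/* Sketch of the two reaction levels (all names are ours): a reflex
 * reaction is handled locally by the monitor, while a global reaction
 * is escalated to the SEP, which may reconfigure the system or
 * disconnect the I/O as a last resort. */
#include <stdbool.h>

typedef enum { SEVERITY_LOCAL, SEVERITY_GLOBAL } severity_t;

/* Local countermeasure applied immediately by the monitor itself. */
static void reflex_reaction(void)        { /* e.g. drop the suspicious transaction */ }

/* Hypothetical SEP services. */
static void sep_broadcast_alert(void)    { /* monitors exchange information        */ }
static void sep_reconfigure_system(void) { /* load a new secure configuration      */ }
static void sep_disconnect_io(void)      { /* last-resort hardware reaction        */ }

void handle_alarm(severity_t severity, bool io_compromised)
{
    if (severity == SEVERITY_LOCAL) {
        reflex_reaction();          /* fast, no consultation needed */
        return;
    }
    /* Global reaction: slower, but able to counter complex attacks. */
    sep_broadcast_alert();
    sep_reconfigure_system();
    if (io_compromised)
        sep_disconnect_io();
}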


The reconfigurable part (FPGA) of the system allows hardware implementation of the security primitives, providing an adaptive hardware accelerator for security-related algorithms (encryption, hashing, key management). In contrast with the crypto-processors mentioned above, the list of supported algorithms is not fixed: the user configures the system with the security primitives he or she wishes to implement, and these can be updated by reconfiguration during the lifetime of the system. The SANES architecture thus provides the performance (hardware implementation) and flexibility (reconfigurability) necessary for future secured embedded systems.
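The sketch below suggests the kind of software interface such a reconfigurable region could expose: the active primitive is replaced by loading a partial bitstream during the lifetime of the system. The loader function and the bitstream names are assumptions made for the purposes of illustration.

/* Hypothetical interface to the reconfigurable region: the security
 * primitive in use is not fixed at design time but chosen, and later
 * replaced, by loading a partial bitstream. The loader function and
 * the bitstream names are assumptions for this sketch. */
#include <stdio.h>

typedef enum { PRIM_AES, PRIM_SHA256, PRIM_KEY_MGMT } primitive_t;

/* Stub standing in for the partial-reconfiguration driver. */
static int load_partial_bitstream(const char *path)
{
    printf("reconfiguring the FPGA region with %s\n", path);
    return 0;
}

int select_primitive(primitive_t p)
{
    switch (p) {
    case PRIM_AES:      return load_partial_bitstream("aes_core.partial.bit");
    case PRIM_SHA256:   return load_partial_bitstream("sha256_core.partial.bit");
    case PRIM_KEY_MGMT: return load_partial_bitstream("key_mgmt.partial.bit");
    default:            return -1;
    }
}

int main(void)
{
    select_primitive(PRIM_AES);     /* initial configuration           */
    select_primitive(PRIM_SHA256);  /* updated later during the system's lifetime */
    return 0;
}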

5.5. Conclusion

Embedded systems are at the heart of a large economic market which is an important driver of the technology sector. However, as these systems become more complex, mobile and communicating, they also become increasingly vulnerable to security problems, whether in terms of data, system or design security.

The development of embedded systems is highly constrained, which prevents the direct use of the security solutions (software and hardware) available today, since these were developed for other purposes (chip cards, desktop and laptop computers, and servers). It is therefore essential to develop solutions tailored to embedded systems, in line with their specific characteristics and the constraints of their development.

Many academic and industrial solutions have been proposed to meet this challenge, but much research effort is still required. In particular, platforms need to be made more flexible while maintaining performance, and the global protection of the system should be improved while remaining within a reasonable cost budget and the limits of current technology. On the tools side, it is necessary to develop automatic design flows that integrate security constraints from the first stages of specification. These flows should rely on new secured design methods which remain to be developed.

One last point remains to be considered: engineers trained for development and research in the field of embedded systems should also be trained to take security issues into account. While security has long been part of traditional computer science and networking courses, this is not yet the case for most electronics curricula. Fortunately, some initiatives in France and other countries indicate that this is changing.


5.6. Bibliography

[ABD 04] ABDEL-HAMID A.T., TAHAR S., ABOULHAMID M., “A Survey on IP Watermarking Techniques”, Design Automation for Embedded Systems, p. 211-227, 2004.

[AND 01] ANDERSON R., Security Engineering, A Guide to Building Dependable Distributed Systems, Wiley Computer Publishing, 2001.

[ARM 07] http://www.arm.com/products/esd/trustzone_home.html.

[ARO 05] ARORA D., RAVI S., RAGHUNATHAN A., JHA N.K., “Secure Embedded Processing through Hardware-assisted Run-time Monitoring”, Proceedings of Design, Automation & Test in Europe Conference (DATE 2005), Munich, Germany, Mar 2005.

[BER 03] BERTONI G., BREVEGLIERI L., KOREN I., MAISTRI P., PIURI V., “Error Analysis and Detection Procedures for a Hardware Implementation of the Advanced Encryption Standard”, IEEE Transactions on Computers, vol. 52, No. 4, p. 492-505, Apr 2003.

[BOS 04a] BOSSUET L., Exploration de l’espace de conception des architectures reconfigurables, PhD thesis, University of South Brittany, Lorient, Sep 2004, freely available at http://www.lilianbossuet.com/fr/Doc/publications/These_Lilian_Bossuet.pdf.

[BOS 04b] BOSSUET L., GOGNIAT G., BURLESON W., “Dynamically Configurable Security for SRAM FPGA Bitstreams”, Proceedings of the 11th Reconfigurable Architectures Workshop (RAW 2004), Santa Fé, New Mexico, USA, Apr 2004.

[BOS 06] BOSSUET L., Architecture Conception et Utilisation des FPGA, Cours de l’ENSEIRB 2006, freely available at: http://www.lilianbossuet.com/fr/Doc/documents_pedagogiques/Bossuet_cours_FPGA_ENSEIRB.pdf.

[BUR 05] BURLESON W., WOLF T., TESSIER R., GONG W., GOGNIAT G., “Embedded System Security: A Configurable Approach”, Department of Homeland Security Conference, Boston, Massachusetts, USA, Apr 2005.

[CHO 05] CHOUKRI H., TUNSTALL M., “Round Reduction Using Faults”, in Breveglieri L. and Koren I. (Eds.), Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC 2005), p. 13-24, Edinburgh, UK, 2005.

[COB 05] COBURN J., RAVI S., RAGHUNATHAN A., CHAKRADHAR S., “SECA: Security-Enhanced Communication Architecture”, Proceedings of the International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES’05), San Francisco, USA, Sep 2005.

[CRA 02] CRAVOTTA N., “Prying eyes”, EDN, Sep 2002, http://www.edn.com/toc-archive/2002/20020926.html.

[DAE 02] DAEMEN J., RIJMEN V., The Design of Rijndael: AES - The Advanced Encryption Standard, Springer-Verlag, 2002.

[DAG 04] DAGON D., MARTIN T., STARNER T., “Mobile Phones as Computing Devices: The Viruses are Coming!”, IEEE Pervasive Computing, Oct-Dec 2004.


[ELB 06] ELBAZ R., Mécanismes Matériels pour des transferts Processeur Mémoire Sécurisés dans les Systèmes Embarqués, PhD thesis, University of Montpellier, France, Dec 2006.

[EVA 05] EVAIN S., DIGUET J.P., “From NoC Security Analysis To Design Solutions”, IEEE 2005 Workshop on Signal Processing Systems (SIPS 2005), Athens, Greece, Nov 2005.

[FIL 06] FILIOL E., “Virus et Ver informatiques”, in Mé L. and Deswarte Y. (Eds.), Sécurité des systèmes d’informations, chap. 6 of IC2 treatise, p. 187-219, Hermès, Paris, France, May 2006.

[FIS 04] FISCHER V., DRUTAROVSKÝ M., ŠIMKA M., BOCHARD N., “High Performance True Random Number Generator in Altera Stratix FPLDs”, in Becker J., Platzner M., and Vernalde S. (Eds.), Field-Programmable Logic and Applications (FPL 2004), vol. 3203 of Lecture Notes in Computer Science, p. 555-564, Springer-Verlag, Antwerp, Belgium, Aug 2004.

[GAS 03] GASSEND B., CLARKE D., van DIJK M., DEVADAS S., “Delay-Based Circuit Authentication and Applications”, Proc. of the 18th Annual ACM Symposium on Applied Computing, Melbourne, USA, Mar 2003.

[GIR 03] GIRAUD C., DFA on AES, Technical Report 2003/008, IACR ePrint archive, 2003, http://eprint.iacr.org/2003/008.ps.

[GOG 06] GOGNIAT G., WOLF T., BURLESON W., “Reconfigurable Security Support for Embedded Systems”, Proc. of the 39th IEEE Hawaii International Conference on System Sciences (HICSS-39), Poipu, HI, USA, Jan 2006.

[GUI 04] GUILLEY S., PACALET R., “SoC Security: a War Against Side-Channels”, Annals of Telecommunications, Système sur puce électronique pour les télécommunications, vol. 59, No. 7-8, Jul-Aug 2004.

[HAV 00] HAVINGA P.J.M., SMIT G.J.M., “Design techniques for low power systems”, Journal of Systems Architecture, vol. 46, issue 1, 2000.

[ICTER 06] http://www.lirmm.fr/~w3mic/ANR/index.htm

[KAR 02] KARRI R., WU K., MISHRA P., KIM Y., “Concurrent Error Detection Schemes for Fault-Based Side-Channel Cryptanalysis of Symmetric Block Ciphers”, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 21, No. 12, Dec 2002.

[KOC 96] KOCHER P., “Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems”, Advances in Cryptology, Proceedings of the Annual International Cryptology Conference (CRYPTO '96), p. 104-113, Springer-Verlag, Santa Barbara, USA, Aug 1996.

[KOC 99] KOCHER P.C., JAFFE J., JUN B., “Differential Power Analysis”, in Wiener M. (Ed.), Proceedings of the 19th Annual International Cryptology Conference (CRYPTO'99), vol. 1666 of Lecture Notes in Computer Science, p. 388-397, Springer, Santa Barbara, USA, Aug 1999.


[KOC 04] KOCHER P., LEE R., MCGRAW G., RAGHUNATHAN A., RAVI S., “Security as a New Dimension in Embedded System Design”, ACM/IEEE Design Automation Conference, San Diego, USA, Jun 2004.

[LAU 03] LAURADOUX C., KERYELL R., “CryptoPage-2: un processeur sécurisé contre le rejeu”, RenPar’15/CFSE’3/SympAAA’2003, Oct 2003.

[LEE 05] LEE R.B., KWAN P.C.S., MCGREGOR J.P., DWOSKIN J., WANG Z., “Architecture for Protecting Critical Secrets in Microprocessors”, Proceedings of the 32nd International Symposium on Computer Architecture (ISCA 2005), p. 2-13, Jun 2005.

[LIE 00] LIE D., THEKKATH C., MITCHELL M., LINCOLN P., BONEH D., MITCHELL J., HOROWITZ M., “Architectural Support for Copy and Tamper Resistant Software”, Proceedings of the 9th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS IX), p. 168-177, Cambridge, MA, USA, Nov 2000.

[MEN 96] MENEZES A.J., VAN OORSCHOT P.C., VANSTONE S.A., Handbook of Applied Cryptography, CRC Press, Oct 1996.

[MIC 07] http://www.microchip.com/codeguard/.

[MON 06] MONNET Y., RENAUDIN M., LEVEUGLE R., “Designing Resistant Circuits against Malicious Faults Injection Using Asynchronous Logic”, IEEE Transactions on Computers, vol. 55, No. 9, p. 1104-1115, 2006.

[MES 01] MESSERGES T., “Securing the AES Finalists Against Power Analysis Attacks”, Fast Software Encryption Workshop (FSE 2000), LNCS 1978, p. 150-164, Springer-Verlag, 2001.

[MES 05] MESQUITA D., TECHER J.D., TORRES L., CAMBON G., SASSATELLI G., MORAES F.G., “Current Mask Generation: An Analogical Circuit to Thwart DPA Attacks”, International Conference on Very Large Scale Integration (VLSI-SOC’05), Perth, Australia, 2005.

[NBS 01] NATIONAL BUREAU OF STANDARDS, FIPS 197, Advanced Encryption Standard. Federal Information Processing Standard, NIST, U.S. Dept. of Commerce, 2001.

[NEC 01] NECHVATAL J., SMID M., BANKS D.L., RUKHIN A., SOTO J., “A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications”, NIST Special Publication 800-22, 2001.

[POP 06] POPP T., MANGARD S., “Implementation Aspects of the DPA-Resistant Logic Style MDPL”, Proceedings of the 2006 IEEE International Symposium on Circuits and Systems (ISCAS 2006), Island of Kos, Greece, May 2006.

[QUI 01] QUISQUATER J.J., SAMYDE D., “ElectroMagnetic Analysis (EMA): Measures and Counter-measures for Smart Cards”, in Attali I. and Jensen T.P. (Eds.), Proceedings of E-smart, vol. 2140 of Lecture Notes in Computer Science, p. 200-210, Springer-Verlag, 2001.

[SAFE] Secured Asynchronous FPGA for Embedded Systems: http://www.comelec.enst.fr/recherche/safe/.


[SCH 03] SCHAUMONT P., VERBAUWHEDE I., “Domain-Specific Codesign for Embedded Security”, IEEE Computer, vol. 36, No. 4, p. 68-74, Apr 2003.

[SKO 02] SKOROBOGATOV S., ANDERSON R., “Optical Fault Induction Attacks”, Proceedings of Cryptographic Hardware and Embedded Systems Workshop (CHES 2002), Lecture Notes in Computer Science, No. 2532, p. 2-12, 2002.

[SUH 03] SUH G.E., CLARKE D., GASSEND B., VAN DIJK M., DEVADAS S., AEGIS: Architecture for Tamper-Evident and Tamper-Resistant Processing, MIT, Memo-461, Feb 2003.

[SUH 05] SUH G.E., O’DONNELL C.W., SACHDEV I., DEVADAS S., “Design and Implementation of the AEGIS Single-Chip Secure Processor Using Physical Random Functions”, Proceedings of the 32nd Annual International Symposium on Computer Architecture (ISCA 2005), p. 25-36, 2005.

[TCG 07] TRUSTED COMPUTING GROUP, www.trustedcomputinggroup.org.

[TIR 04] TIRI K., VERBAUWHEDE I., “A Logic Level Design Methodology for a Secure DPA Resistant ASIC or FPGA Implementation”, Proc. of Design Automation and Test in Europe Conference (DATE 2004), p. 246-251, Feb 2004.

[TOS 07] http://www.toshiba-europe.com/mobile/.

[VAS 07] VASLIN R., GOGNIAT G., DIGUET J.P., WANDERLEY E., TESSIER R., BURLESON W., “Low Latency Solution for Confidentiality and Integrity Checking in Embedded Systems with Off-Chip Memory”, Reconfigurable Communication-centric SoCs (ReCoSoc’07), Montpellier, France, Jun 2007.

[VSI 01] VIRTUAL SOCKET INTERFACE ALLIANCE, Intellectual Property Protection Development Working Group, Intellectual Property Protection: Schemes, alternatives and discussion, White Paper, Jan 2001.

[WAN 07] WANDERLEY E., ELBAZ R., TORRES L., SASSATELLI G., VASLIN R., GOGNIAT G., DIGUET J.P., “IBC-EI: An Instruction Based Compression Method with Encryption and Integrity Checking”, Reconfigurable Communication-Centric SoCs (ReCoSoc’07), Montpellier, France, Jun 2007.

[WOL 04] WOLLINGER T., GUAJARDO J., PAAR C., “Security on FPGAs: State of the Art Implementation and Attacks”, ACM Transactions on Embedded Computing Systems, vol. 3, No. 3, p. 534-574, Aug 2004.

[WOL 06] WOLF T., MAO S., KUMAR D., DATTA B., BURLESON W., GOGNIAT G., “Collaborative Monitors for Embedded System Security”, Proceedings of the First International Workshop on Embedded Systems Security, Seoul, Korea, Oct 2006.

[YUA 06] YUAN L., QU G., GHOUTI L., BOURIDANE A., “VLSI Design IP Protection: Solutions, New Challenges, and Opportunities”, Proceedings of the First NASA/ESA Conference on Adaptive Hardware and Systems (AHS’06), Istanbul, Turkey, Jun 2006.