Hi
Trusted Firmware M recently introduced protection against glitching at key decision points: https://github.com/mcu-tools/mcuboot/pull/776
To me this is a key mitigation element for companies that target PSA Level 3 compliance, which implies resilience against hardware attacks.
I believe similar techniques need to be used in the various projects involved in Linux secure booting (TF-A, OP-TEE, U-Boot, the Linux kernel).
Are there any efforts planned around this?
Is it feasible to have a "library" that could be integrated in different projects?
Cheers
FF
On 26.03.21 15:12, François Ozog wrote:
Hi
Trusted Firmware M recently introduced protection against glitching at key decision points: https://github.com/mcu-tools/mcuboot/pull/776
To me this is a key mitigation element for companies that target PSA Level 3 compliance, which implies resilience against hardware attacks.
I believe similar techniques need to be used in the various projects involved in Linux secure booting (TF-A, OP-TEE, U-Boot, the Linux kernel).
Power glitches can induce changes in data, in code, and in CPU state. Signing selected variables cannot be a sufficient counter-measure.
If you want to protect against power glitches, it seems more promising to do so on the hardware side, e.g.
* provide circuitry that resets the board if a power glitch or electromagnetic interference is detected
* use ECC RAM
Best regards
Heinrich
Are there any efforts planned around this?
Is it feasible to have a "library" that could be integrated in different projects?
Cheers
FF
Hi,
On Fri, Mar 26, 2021 at 07:36:34PM +0100, Heinrich Schuchardt wrote:
On 26.03.21 15:12, François Ozog wrote:
Hi
Trusted Firmware M recently introduced protection against glitching at key decision points: https://github.com/mcu-tools/mcuboot/pull/776
To me this is a key mitigation element for companies that target PSA Level 3 compliance, which implies resilience against hardware attacks.
I believe similar techniques need to be used in the various projects involved in Linux secure booting (TF-A, OP-TEE, U-Boot, the Linux kernel).
Power glitches can induce changes in data, in code, and in CPU state. Signing selected variables cannot be a sufficient counter-measure.
If you want to protect against power glitches, it seems more promising to do so on the hardware side, e.g.
- provide circuitry that resets the board if a power glitch or electromagnetic interference is detected
- use ECC RAM
I agree and disagree. Hardware mitigations are probably better, but by applying software mitigations you can make it significantly harder to perform a successful SCA. Most glitching attacks use some kind of time offset from a known occurrence/event and then a single glitch. With software mitigations, you can introduce randomness and also write the code so that a successful attack would require multiple glitches.
The greatest challenge with this is to convince and educate maintainers that writing code that might look very weird actually makes sense in certain areas of the code base. SCA software mitigations are no silver bullet, but if you care deeply about security, then I think they are just yet another mitigation pattern that you should apply, like everything else we do when trying to secure our products (signatures, ASLR, measured boot, canaries, constant-time functions, etc.).
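To make this concrete, here's a minimal sketch in C of the kind of pattern I'm thinking of. All names and constants are made up for illustration; this is not the actual TF-M or mcuboot API:

#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>

/* Success is an "unusual" non-zero constant: a glitched register is far
 * more likely to end up as 0 or all-ones than as this exact pattern. */
#define CHECK_SUCCESS 0x5AA5C33CU

static void random_delay(void)
{
    /* Placeholder entropy source; real firmware would use a hardware RNG.
     * The unpredictable delay breaks the fixed time offset that
     * single-glitch attacks rely on. */
    volatile uint32_t n = (uint32_t)rand() % 64;
    while (n--)
        ;
}

static uint32_t verify_signature(const uint8_t *img, size_t len)
{
    (void)img; (void)len;
    /* Real check elided; must return CHECK_SUCCESS only on success. */
    return CHECK_SUCCESS;
}

bool secure_boot_decision(const uint8_t *img, size_t len)
{
    if (verify_signature(img, len) != CHECK_SUCCESS)
        return false;
    random_delay();
    /* Deliberate redundant re-check: a single, precisely timed glitch is
     * no longer enough; the attacker must hit two checks separated by a
     * random interval. */
    if (verify_signature(img, len) != CHECK_SUCCESS)
        return false;
    return true;
}

The point is not any single trick but stacking them, so the cheapest attack moves from one glitch at a known offset to a multi-glitch campaign.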
Regards, Joakim
On 29.03.21 09:40, Joakim Bech wrote:
Hi,
On Fri, Mar 26, 2021 at 07:36:34PM +0100, Heinrich Schuchardt wrote:
On 26.03.21 15:12, François Ozog wrote:
Hi
Trusted Firmware M recently introduced protection against glitching at key decision points: https://github.com/mcu-tools/mcuboot/pull/776
To me this is a key mitigation element for companies that target PSA Level 3 compliance, which implies resilience against hardware attacks.
I believe similar techniques need to be used in the various projects involved in Linux secure booting (TF-A, OP-TEE, U-Boot, the Linux kernel).
Power glitches can induce changes in data, in code, and in CPU state. Signing selected variables cannot be a sufficient counter-measure.
If you want to protect against power glitches, it seems more promising to do so on the hardware side, e.g.
- provide circuitry that resets the board if a power glitch or electromagnetic interference is detected
- use ECC RAM
I agree and disagree. Hardware mitigations are probably better, but by applying software mitigations you can make it significantly harder to perform a successful SCA. Most glitching attacks use some kind of time offset from a known occurrence/event and then a single glitch. With software mitigations, you can introduce randomness and also write the code so that a successful attack would require multiple glitches.
The greatest challenge with this is to convince and educate maintainers that writing code that might look very weird actually makes sense in certain areas of the code base. SCA software mitigations are no silver bullet, but if you care deeply about security, then I think they are just yet another mitigation pattern that you should apply, like everything else we do when trying to secure our products (signatures, ASLR, measured boot, canaries, constant-time functions, etc.).
On devices that run the software completely from flash, variables are a main attack surface for power glitches.
U-Boot and Linux run from memory. There the major attack surface is not critical variables but the code itself.
So while the approach taken in TF-M may fend off a significant percentage of power glitch attacks on the devices using it, this might not hold true for U-Boot and Linux.
Do any measurements analyzing the proportions of code, data, and CPU involvement in successful power glitch attacks exist?
Best regards
Heinrich
On Mon, 29 Mar 2021 at 14:46, Heinrich Schuchardt xypron.glpk@gmx.de wrote:
On 29.03.21 09:40, Joakim Bech wrote:
Hi,
On Fri, Mar 26, 2021 at 07:36:34PM +0100, Heinrich Schuchardt wrote:
On 26.03.21 15:12, François Ozog wrote:
Hi
Trusted Firmware M recently introduced protection against glitching at key decision points: https://github.com/mcu-tools/mcuboot/pull/776
To me this is a key mitigation element for companies that target PSA Level 3 compliance, which implies resilience against hardware attacks.
I believe similar techniques need to be used in the various projects involved in Linux secure booting (TF-A, OP-TEE, U-Boot, the Linux kernel).
Power glitches can induce changes in data, in code, and in CPU state. Signing selected variables cannot be a sufficient counter-measure.
If you want to protect against power glitches, it seems more promising to do so on the hardware side, e.g.
- provide circuitry that resets the board if a power glitch or electromagnetic interference is detected
- use ECC RAM
I agree and disagree. Hardware mitigations are probably better, but by applying software mitigations you can make it significantly harder to perform a successful SCA. Most glitching attacks use some kind of time offset from a known occurrence/event and then a single glitch. With software mitigations, you can introduce randomness and also write the code so that a successful attack would require multiple glitches.
The greatest challenge with this is to convince and educate maintainers that writing code that might look very weird actually makes sense in certain areas of the code base. SCA software mitigations are no silver bullet, but if you care deeply about security, then I think they are just yet another mitigation pattern that you should apply, like everything else we do when trying to secure our products (signatures, ASLR, measured boot, canaries, constant-time functions, etc.).
On devices that run the software completely from flash, variables are a main attack surface for power glitches.
U-Boot and Linux run from memory. There the major attack surface is not critical variables but the code itself.
So while the approach taken in TF-M may fend off a significant percentage of power glitch attacks on the devices using it, this might not hold true for U-Boot and Linux.
Do any measurements analyzing the proportions of code, data, and CPU involvement in successful power glitch attacks exist?
Yes, it was demoed by Riscure at a past Connect; I can't remember which one. Joakim may certainly be more specific. This demo triggered the concern for me, and since then I have been monitoring advances in the topic. You may want to check https://www.riscure.com/uploads/2019/06/Riscure_Secure-Boot-Under-Attack-Simulation.pdf
Best regards
Heinrich
On 29.03.21 15:02, François Ozog wrote:
On Mon, 29 Mar 2021 at 14:46, Heinrich Schuchardt <xypron.glpk@gmx.de> wrote:
On 29.03.21 09:40, Joakim Bech wrote:
Hi,
On Fri, Mar 26, 2021 at 07:36:34PM +0100, Heinrich Schuchardt wrote:
On 26.03.21 15:12, François Ozog wrote:
Hi
Trusted Firmware M recently introduced protection against glitching at key decision points: https://github.com/mcu-tools/mcuboot/pull/776
To me this is a key mitigation element for companies that target PSA Level 3 compliance, which implies resilience against hardware attacks.
I believe similar techniques need to be used in the various projects involved in Linux secure booting (TF-A, OP-TEE, U-Boot, the Linux kernel).
Power glitches can induce changes in data, in code, and in CPU state. Signing selected variables cannot be a sufficient counter-measure.
If you want to protect against power glitches, it seems more promising to do so on the hardware side, e.g.
* provide circuitry that resets the board if a power glitch or electromagnetic interference is detected
* use ECC RAM
I agree and disagree. Hardware mitigations are probably better, but by applying software mitigations you can make it significantly harder to perform a successful SCA. Most glitching attacks use some kind of time offset from a known occurrence/event and then a single glitch. With software mitigations, you can introduce randomness and also write the code so that a successful attack would require multiple glitches.
The greatest challenge with this is to convince and educate maintainers that writing code that might look very weird actually makes sense in certain areas of the code base. SCA software mitigations are no silver bullet, but if you care deeply about security, then I think they are just yet another mitigation pattern that you should apply, like everything else we do when trying to secure our products (signatures, ASLR, measured boot, canaries, constant-time functions, etc.).
On devices that run the software completely from flash, variables are a main attack surface for power glitches.
U-Boot and Linux run from memory. There the major attack surface is not critical variables but the code itself.
So while the approach taken in TF-M may fend off a significant percentage of power glitch attacks on the devices using it, this might not hold true for U-Boot and Linux.
Do any measurements analyzing the proportions of code, data, and CPU involvement in successful power glitch attacks exist?
Yes, it was demoed by Riscure at a past Connect; I can't remember which one. Joakim may certainly be more specific. This demo triggered the concern for me, and since then I have been monitoring advances in the topic. You may want to check https://www.riscure.com/uploads/2019/06/Riscure_Secure-Boot-Under-Attack-Simulation.pdf
Thanks for the reference.
The slides (75, 121) indicate that the major attack surface is changed code, not changed variables.
The TF-M patches mentioned above focus on validating variables and introducing random delays when accessing them.
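As a side note, the variable-hardening idea itself can be sketched in a few lines of C. The following is my own simplification for illustration, not the actual TF-M code: a critical value is stored together with its bitwise complement, and every read cross-checks the two.

#include <stdint.h>

/* A critical value stored together with its bitwise complement. */
typedef struct {
    uint32_t val;
    uint32_t inv; /* invariant: inv == ~val */
} hardened_u32;

static void hardened_set(hardened_u32 *h, uint32_t v)
{
    h->val = v;
    h->inv = ~v;
}

static void fatal(void)
{
    for (;;)
        ; /* fail closed: hang rather than continue in a corrupt state */
}

static uint32_t hardened_get(const hardened_u32 *h)
{
    /* A glitch that flips bits in one copy leaves the pair inconsistent,
     * so corruption is detected unless both words are hit coherently. */
    if (h->val != ~h->inv)
        fatal();
    return h->val;
}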
Riscure's emphasis on simulation suggests that it is not trivial to design code that is resilient to power glitches. Some design principles are provided in https://www.riscure.com/uploads/2018/11/201708_Riscure_Whitepaper_Side_Chann....
Best regards
Heinrich
On Mon, Mar 29, 2021 at 03:02:21PM +0200, François Ozog wrote:
On Mon, 29 Mar 2021 at 14:46, Heinrich Schuchardt xypron.glpk@gmx.de wrote:
On 29.03.21 09:40, Joakim Bech wrote:
Hi,
On Fri, Mar 26, 2021 at 07:36:34PM +0100, Heinrich Schuchardt wrote:
On 26.03.21 15:12, François Ozog wrote:
Hi
Trusted Firmware M recently introduced protection against glitching at key decision points: https://github.com/mcu-tools/mcuboot/pull/776
To me this is a key mitigation element for companies that target PSA Level 3 compliance, which implies resilience against hardware attacks.
I believe similar techniques need to be used in the various projects involved in Linux secure booting (TF-A, OP-TEE, U-Boot, the Linux kernel).
Power glitches can induce changes in data, in code, and in CPU state. Signing selected variables cannot be a sufficient counter-measure.
If you want to protect against power glitches, it seems more promising to do so on the hardware side, e.g.
- provide circuitry that resets the board if a power glitch or electromagnetic interference is detected
- use ECC RAM
I agree and disagree. Hardware mitigations are probably better, but by applying software mitigations you can make it significantly harder to perform a successful SCA. Most glitching attacks use some kind of time offset from a known occurrence/event and then a single glitch. With software mitigations, you can introduce randomness and also write the code so that a successful attack would require multiple glitches.
The greatest challenge with this is to convince and educate maintainers that writing code that might look very weird actually makes sense in certain areas of the code base. SCA software mitigations are no silver bullet, but if you care deeply about security, then I think they are just yet another mitigation pattern that you should apply, like everything else we do when trying to secure our products (signatures, ASLR, measured boot, canaries, constant-time functions, etc.).
On devices that run the software completely from flash, variables are a main attack surface for power glitches.
U-Boot and Linux run from memory. There the major attack surface is not critical variables but the code itself.
So while the approach taken in TF-M may fend off a significant percentage of power glitch attacks on the devices using it, this might not hold true for U-Boot and Linux.
Do any measurements analyzing the proportions of code, data, and CPU involvement in successful power glitch attacks exist?
Yes, it was demoed by Riscure at a past Connect; I can't remember which one. Joakim may certainly be more specific. This demo triggered the concern for me, and since then I have been monitoring advances in the topic. You may want to check https://www.riscure.com/uploads/2019/06/Riscure_Secure-Boot-Under-Attack-Simulation.pdf
To give a real example, one of the first security issues reported [1] to the OP-TEE project was the Bellcore attack [2] put into practice. Our implementation wasn't using blinding [3], so the people reporting the issue were able to recover the private key from OP-TEE running on the HiKey device using SCA. They first scanned the device with different EM probe antennas, and when they found a location on the chip giving the best signal they performed an EM fault injection attack, making the RSA-CRT computation produce faulty signatures (with up to 50% repeatability). With a faulty signature, a good signature, and some maths, you can recover the private key. The mitigation? A software mitigation. This was running on an Armv8-A system where everything was up and running (including Linux), i.e., certainly not a tiny M-class device running things from flash.
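For those who haven't seen the maths behind the Bellcore attack [2], here is a self-contained toy in C. The parameters are deliberately tiny and all the names are mine, purely for illustration; the real attack works the same way at full key sizes. A signature that is faulty modulo one prime but still correct modulo the other leaks a factor of the modulus through a single gcd:

#include <stdint.h>
#include <stdio.h>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m)
{ return (a * b) % m; } /* safe here: operands stay below 2^21 */

static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m)
{
    uint64_t r = 1; b %= m;
    while (e) { if (e & 1) r = mulmod(r, b, m); b = mulmod(b, b, m); e >>= 1; }
    return r;
}

static uint64_t gcd(uint64_t a, uint64_t b)
{ while (b) { uint64_t t = a % b; a = b; b = t; } return a; }

/* modular inverse via extended Euclid */
static int64_t invmod(int64_t a, int64_t m)
{
    int64_t t = 0, newt = 1, r = m, newr = a % m;
    while (newr) {
        int64_t q = r / newr, tmp;
        tmp = t - q * newt; t = newt; newt = tmp;
        tmp = r - q * newr; r = newr; newr = tmp;
    }
    return t < 0 ? t + m : t;
}

int main(void)
{
    const uint64_t p = 1009, q = 1013, n = p * q, e = 65537;
    uint64_t phi = (p - 1) * (q - 1);
    uint64_t d = (uint64_t)invmod((int64_t)(e % phi), (int64_t)phi);
    uint64_t m = 424242; /* message representative, gcd(m, n) == 1 */

    /* RSA-CRT signing: compute the two half-signatures */
    uint64_t sp = powmod(m % p, d % (p - 1), p);
    uint64_t sq = powmod(m % q, d % (q - 1), q);

    sp ^= 1; /* injected fault: flip one bit of the mod-p half */

    /* Garner recombination: s = sq + q * (qinv * (sp - sq) mod p) */
    uint64_t qinv = (uint64_t)invmod((int64_t)(q % p), (int64_t)p);
    uint64_t s = sq + q * mulmod(qinv, (sp + p - sq % p) % p, p);

    /* Bellcore: s^e - m is divisible by q but not by p, so the gcd
     * with n reveals the secret factor q. */
    uint64_t diff = (powmod(s, e, n) + n - m % n) % n;
    printf("recovered factor: %llu (q = %llu)\n",
           (unsigned long long)gcd(diff, n), (unsigned long long)q);
    return 0;
}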
Related to Riscure, as François mentioned: they held workshop days a couple of months ago and the majority of the talks were about glitching, fault injection, etc. I can't remember the details, but in one talk they demoed a fault injection attack that succeeded in some percentage of attempts. Then they introduced a couple of software mitigation patterns one by one, and at the end IIRC the success rate was almost zero (if not zero).
I fully agree with Ard. Even though you can abstract certain things into libraries and helper functions, it's in my opinion not a "create a lib, enable it, and you're secure" operation. I think software mitigation patterns should be carefully applied around sensitive operations. But as said, it needs some education and motivation as to why you should do it.
From a logical view of the code it doesn't make sense to do identical "if" checks twice in the same function, but from an SCA mitigation point of view it can actually make sense.
[1] https://www.op-tee.org/security-advisories/
[2] https://eprint.iacr.org/2012/553.pdf
[3] https://en.wikipedia.org/wiki/Blinding_(cryptography)
// Regards Joakim
On Fri, 26 Mar 2021 at 19:36, Heinrich Schuchardt xypron.glpk@gmx.de wrote:
On 26.03.21 15:12, François Ozog wrote:
Hi
Trusted Firmware M recently introduced protection against glitching at key decision points: https://github.com/mcu-tools/mcuboot/pull/776
To me this is a key mitigation element for companies that target PSA Level 3 compliance, which implies resilience against hardware attacks.
I believe similar techniques need to be used in the various projects involved in Linux secure booting (TF-A, OP-TEE, U-Boot, the Linux kernel).
Power glitches can induce changes in data, in code, and in CPU state. Signing selected variables cannot be a sufficient counter-measure.
If you want to protect against power glitches, it seems more promising to do so on the hardware side, e.g.
- provide circuitry that resets the board if a power glitch or electromagnetic interference is detected
- use ECC RAM
Of course it is more promising to do this on the hardware side; nobody is debating that. Whether it makes sense to harden the software as well is completely orthogonal to that: on hardened hardware, it adds another layer of protection, and on hardware that cannot be hardened for some reason (cost, or more often, the fact that it has already shipped), it adds a layer of protection where no protection whatsoever was available beforehand.
However, I do share some of your skepticism: incorporating these techniques is not trivial, and commoditizing them by dropping them into some library that you can link your software against is *not* what I would prefer to see. This only encourages a check-the-box mentality, where people either assume that having the library makes everything magically secure, or ($DEITY forbid) it gets sucked into the MISRAs or FIPSes of this world, becomes a certification requirement, and using the library is mandatory, regardless of whether there is a point to it, or whether it is being used in the right way.
So I'd be more interested in seeing how the underlying methods can be applied more widely, and not how it gets shrinkwrapped and shipped with a .h file without any need on the part of the developer to understand what really goes on under the hood.
On Mon, 29 Mar 2021 at 15:23, Ard Biesheuvel ardb@kernel.org wrote:
On Fri, 26 Mar 2021 at 19:36, Heinrich Schuchardt xypron.glpk@gmx.de wrote:
On 26.03.21 15:12, François Ozog wrote:
Hi
Trusted Firmware M recently introduced protection against glitching at key decision points: https://github.com/mcu-tools/mcuboot/pull/776
To me this is a key mitigation element for companies that target PSA Level 3 compliance, which implies resilience against hardware attacks.
I believe similar techniques need to be used in the various projects involved in Linux secure booting (TF-A, OP-TEE, U-Boot, the Linux kernel).
Power glitches can induce changes in data, in code, and in CPU state. Signing selected variables cannot be a sufficient counter-measure.
If you want to protect against power glitches, it seems more promising to do so on the hardware side, e.g.
- provide circuitry that resets the board if a power glitch or electromagnetic interference is detected
- use ECC RAM
Of course it is more promising to do this on the hardware side; nobody is debating that. Whether it makes sense to harden the software as well is completely orthogonal to that: on hardened hardware, it adds another layer of protection, and on hardware that cannot be hardened for some reason (cost, or more often, the fact that it has already shipped), it adds a layer of protection where no protection whatsoever was available beforehand.
However, I do share some of your skepticism: incorporating these techniques is not trivial, and commoditizing them by dropping them into some library that you can link your software against is *not* what I would prefer to see. This only encourages a check-the-box mentality, where people either assume that having the library makes everything magically secure, or ($DEITY forbid) it gets sucked into the MISRAs or FIPSes of this world, becomes a certification requirement, and using the library is mandatory, regardless of whether there is a point to it, or whether it is being used in the right way.
So I'd be more interested in seeing how the underlying methods can be applied more widely, and not how it gets shrinkwrapped and shipped with a .h file without any need on the part of the developer to understand what really goes on under the hood.
I also think this is first about guidelines, backed by a set of helpers.
For instance, let's consider:

if (checkstuff(input) == 0) {
    dostuff();
}

It can be transformed into something like the following (still insufficient to be really protected, but it gives the idea; more can be read in the TF-M code and its readme.md):

/* SUCCESS is an arbitrary value != 0, as 0 is a typical value of a
 * register when glitched */
if (checkstuff(input, &result) == SUCCESS) {
    letwaitarandomdelay();
    if (doublecheck_successful_result(input, result) == SUCCESS) {
        /* this code is not protected against PC attacks */
        dostuff();
    }
}

The non-zero SUCCESS constant, the random delay, and the second check each force the attacker to do more than skip or corrupt a single instruction at a fixed offset.