Hi Sumit,
I completely agree with you! All I'm asking is that what you are proposing be ratified clearly in the spec, without any ambiguities, and that we don't implement what we *think* is correct, but implement the spec. If the spec specifies encryption, we should implement encryption, not authenticated encryption. One is not a direct substitute for the other, and switching between them requires careful thinking. Similarly, the order of signing, encryption, decryption and authentication must be specified and explained clearly, specifically to avoid these kinds of discussions.
Thanks -Raghu
On January 27, 2020 at 10:49 PM, Sumit Garg sumit.garg@linaro.org wrote:
On Mon, 27 Jan 2020 at 22:54, Raghupathy Krishnamurthy raghu.ncstate@icloud.com wrote:
Sumit,
Great point. This perhaps needs to be added to the list of things that need clarification in the PSA-TBFU (Sandrine, can you help with this too?). I believe the answer to your concern lies in section 3.5 of the PSA-TBFU, where it talks about optimizing the trusted boot process. To overcome the problem you're talking about, you would: 1) Verify the asymmetric signature. 2) Decrypt the firmware using the SSK on successful signature verification. 3) Rekey the firmware using the BSSK (or, as the PSA specifies, a key derived from the HUK using a KDF). You verify the asymmetric signature only on a firmware update, and use the rekeyed firmware (encrypted and MACed with a device-specific key) on normal boot.
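For illustration, here is a rough C sketch of that split between the update path and the normal boot path. All helper names (rsa_signature_valid, aead_decrypt_with_ssk, etc.) are hypothetical stand-ins, not TF-A APIs; the sketch only shows the order of operations I'm describing:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical platform/crypto helpers, for illustration only. */
bool rsa_signature_valid(const uint8_t *img, size_t len);   /* asymmetric check       */
int  aead_decrypt_with_ssk(uint8_t *img, size_t len);       /* class-wide update key  */
int  aead_encrypt_with_bssk(uint8_t *img, size_t len);      /* HUK-derived device key */
int  aead_decrypt_with_bssk(uint8_t *img, size_t len);

/* Firmware update path: pay for the asymmetric check once, then rekey. */
int install_firmware_update(uint8_t *img, size_t len)
{
	if (!rsa_signature_valid(img, len))
		return -1;                         /* reject a tampered update           */
	if (aead_decrypt_with_ssk(img, len) != 0)
		return -1;                         /* strip the distribution (SSK) layer */
	return aead_encrypt_with_bssk(img, len);   /* bind the image to this device      */
}

/* Normal boot path: only the device-bound authenticated decryption runs. */
int load_firmware_on_normal_boot(uint8_t *img, size_t len)
{
	return aead_decrypt_with_bssk(img, len);
}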
Following is the quote from PSA-TBFU spec:
"An implementation **can** optimize the trusted boot process at the expense of **simplicity**"
It doesn't seem to be a recommended practice per the spec. And especially for DRM use-cases, this approach is NOT recommended due to the following concerns raised by DRM vendors (the original concerns were with respect to TAs but apply equally to firmware):
- it allows the device to self-sign code authorized to run on the device;
- it increases the attack surface by having two different ways to load firmware;
- it allows a break-once-break-forever situation: if you defeat the RSA 'install' once, no matter how hard it is, your firmware is now nicely transformed into a secure firmware and can be reused.
Whereas, on the other hand, "sign-then-encrypt-then-authenticate" provides two independent crypto layers (a signature layer and an authenticated encryption layer), which in turn provides implementation flexibility as follows:
Firmware update:
- Requires both layers.

Normal boot:
- DRM use-case: requires both layers.
- Boot-time optimization required: can use only authenticated encryption.
- Platform provides secure on-chip NVM and boot-time optimization is required: can use only signature verification (or simply a hash stored in secure on-chip NVM).
-Sumit
Thanks Raghu
On January 26, 2020 at 10:34 PM, Sumit Garg sumit.garg@linaro.org wrote:
On Fri, 24 Jan 2020 at 16:36, Sumit Garg sumit.garg@linaro.org wrote:
On Fri, 24 Jan 2020 at 04:02, Raghupathy Krishnamurthy
raghu.ncstate@icloud.com wrote:
I also just realized that both the TBBR and Arm PSA specs only talk about encryption of the image, not authenticated encryption. The guarantees provided by the two are completely different. Your review (https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/2495/) talks about the requirement R060_TBBR_FUNCTION being implemented, which is technically not true (and potentially misleading). We must make a note of this difference and use the appropriate terminology, without mixing the two, in the documentation, commit messages, source code comments and error prints. The tool is also called 'encrypt_fw', but it should perhaps be named appropriately to indicate that it is doing authenticated encryption.
I wouldn't call it misleading, since the firmware encryption feature
essentially provides confidentiality protection, and authenticated
encryption is simply the type of crypto algorithm we have used to
implement it.
BTW, the Arm PSA (file:///home/raghu/repos/fvp/DEN0072-PSA_TBFU_1-0-REL.pdf) expects the image manifest (X509 certificate) to contain the hash of the ENCRYPTED image (Table 2, and as described in my answer to your question "How would this ensure integrity of ciphertext"). The TBBR spec completely misses this fact, and it is a crucial detail if we only implement encryption (as opposed to authenticated encryption). Build_macros.mk, in your change, passes the un-encrypted image to cert_tool. You can get away with this in your implementation because you are using authenticated encryption, but not if you were only implementing firmware encryption.
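To make that ordering concrete, here is a minimal C sketch of what the PSA-TBFU expectation implies: the manifest's hash is computed over the ciphertext and checked before any decryption takes place. All helper functions here are hypothetical placeholders, not TF-A or PSA APIs:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define HASH_LEN 32

int  sha256(const uint8_t *buf, size_t len, uint8_t out[HASH_LEN]);  /* assumed */
int  manifest_signature_valid(const void *manifest);                 /* assumed */
const uint8_t *manifest_image_hash(const void *manifest);            /* assumed */
int  decrypt_image(uint8_t *img, size_t len);                        /* assumed */

int load_encrypted_image(const void *manifest, uint8_t *img, size_t len)
{
	uint8_t digest[HASH_LEN];

	if (!manifest_signature_valid(manifest))
		return -1;                       /* authenticate the manifest first     */
	if (sha256(img, len, digest) != 0)
		return -1;
	if (memcmp(digest, manifest_image_hash(manifest), HASH_LEN) != 0)
		return -1;                       /* hash is taken over the CIPHERTEXT   */
	return decrypt_image(img, len);          /* decrypt only after integrity check  */
}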
I have already highlighted the issue with signing the ciphertext in my
previous reply: it deviates from the security properties provided by
signature verification of the plain firmware. So I think we need to
revisit the Arm PSA TBFU spec.
In addition to this, there are implementation-specific issues with "signing the ciphertext" too. It simply makes the ciphertext immutable for the device and prevents meeting the following firmware re-encryption requirement from the TBBR spec:
R070_TBBR_PROTECTION. The Trusted boot firmware may do the binding of software image updates at run-time by decrypting the updated SoC certificates and software images using the OTP/Fuse Secret Symmetric Key (SSK), followed by the re-encrypting these SoC certificates and software images using a reproducible secret unique per device symmetric key (BSSK), and then updating the ToC correspondingly.
Also, externally signing every firmware image encrypted with the BSSK doesn't seem scalable either. It also hampers the case where the encryption key is never exposed outside the device, e.g. where the key is only accessible to a hardware crypto engine.
-Sumit
Is it possible for somebody from Arm to have the TBBR spec updated to reflect this? Also, perhaps talk to the spec writers about incorporating authenticated encryption into TBBR and PSA? This patch set is somewhat trailblazing in this regard.
-Raghu
On January 23, 2020 at 12:08 PM, Raghupathy Krishnamurthy via TF-A tf-a@lists.trustedfirmware.org wrote:
Hi Sumit,
Thanks for your response.
So firstly I would suggest you to revisit TBBR spec [1],
[RK] I'm very familiar with the TBBR spec and its requirements. Note that not all SoCs adhere perfectly to the TBBR spec, since it does not apply to devices in all market segments. However, these devices do use Arm Trusted Firmware and the TBBR CoT in a slightly modified form, which is still perfectly valid. Also, the TBBR spec can be changed if required :)
Why would one use authenticated decryption only to establish TBBR
Chain of Trust providing device the capability to self sign its firmware?
[RK] Fair point. However, you may have devices that don't have the processing power, hardware budget or cost headroom (e.g. paying for HSMs to store private asymmetric keys) to implement asymmetric verification, in which case using authenticated decryption to verify firmware authenticity and integrity is perfectly valid. The attacks on devices that use symmetric keys to verify firmware authenticity and integrity are usually related to exploiting firmware flaws that leak the key, or to insiders leaking keys, but that is a different problem and requires different solutions. Fundamentally, there is nothing wrong with using symmetric keys for this purpose, so long as the key is well protected. Also note that security requirements and guarantees differ between systems. The risk is taken by the system designer and should not be imposed by framework code. I don't advocate doing this, but it is an option that your implementation does not provide (and perhaps rightly so).
How would this ensure integrity of ciphertext?
[RK] You sign the ciphertext. In your design, you pass bl31_enc.bin to cert_tool to sign. You don't decrypt the encrypted ciphertext until you have verified the asymmetric signature (which provides integrity). As far as signature verification is concerned, whether you sign the plaintext or the ciphertext is immaterial, since you are simply verifying that the exact bits you loaded have not been modified (assuming you use a secure signature scheme).
Have a look at some defective sign and encrypt techniques here [2]
[RK] Again, I'm very familiar with [2]. In the S/MIME case, you have multiple parties. With secure boot, you have one party, effectively verifying its own messages across time, and there is only one key used to verify signatures, so sections 1.1 and 1.2 do not apply. Also, you are encrypting and signing with completely different keys and algorithms. Section 1.2 applies when you use RSA/ElGamal encryption; here you use symmetric encryption and asymmetric signing.
Why would one not use TBBR CoT here?
[RK] see above. Not all systems are designed equal.
and why would one like to hardcode in a device during
provisioning to boot only either an encrypted or a plain firmware
image?
[RK] Why would you not? You typically want the same security policy for a class of devices, and you don't want it to be modifiable by an attacker. It isn't common for the same class of devices to use encrypted firmware sometimes and un-encrypted firmware at other times. If it is common, there is no problem with setting the bit in the FIP header, as long as verified boot is mandatory. The only concern (as my original email said) is the coupling of the FIP layer and the crypto module in the implementation. I still don't like the fact that the bit saying the file is encrypted is not signed, and this may require talking to the TBBR spec writers. Page 22 of the TBBR spec calls out the ToC as a "Trusted Table of Contents". The FIP header cannot be "trusted" if it is not in ROM or its integrity is not verified by some means! R010_TBBR_TOC should perhaps be mandatory then. Also see R080_TBBR_TOC, which says the TOC MUST be ROM'ed or tied by hardware in readable registers. This requirement seems contradictory to R010_TBBR_TOC, given that the FIP header (TOC) is copied from mutable NVM by ROM or some boot stage and then ROM'd or loaded into registers. I may be misunderstanding R080_TBBR_TOC, but I'd interpret it as requiring the TOC (the FIP header in the ATF implementation of TBBR) to be in ROM or integrity-verified.
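As a thought experiment only (not something the current patches do), one way to satisfy my reading of R080_TBBR_TOC would be to anchor the ToC to an immutable reference, for example a hash provisioned in OTP or ROM. A hypothetical C sketch, with placeholder helpers that are not TF-A APIs:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define HASH_LEN 32

int  sha256(const uint8_t *buf, size_t len, uint8_t out[HASH_LEN]);  /* assumed */
void read_otp_toc_hash(uint8_t out[HASH_LEN]);                       /* assumed */

/* Returns 1 if the loaded FIP header matches the reference anchored in OTP/ROM. */
int toc_is_trusted(const uint8_t *fip_hdr, size_t hdr_len)
{
	uint8_t digest[HASH_LEN];
	uint8_t reference[HASH_LEN];

	if (sha256(fip_hdr, hdr_len, digest) != 0)
		return 0;
	read_otp_toc_hash(reference);   /* reference value provisioned at manufacturing */
	return memcmp(digest, reference, HASH_LEN) == 0;
}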
How would one handle a case where BL31 is in plain format and BL32 is in encrypted format?
[RK] The TBBR CoT is equipped to handle this: the CoT table is defined on a per-image basis.
If you are really paranoid about authentication of FIP header...
[RK] I don't mean to pontificate, but there are real-world customers buying real hardware running ATF code who care about such details and ask about such things routinely. It is not just me being paranoid, and it is definitely not a minor matter to think about such details. We should discuss this further and consider the implications of R080_TBBR_TOC and R010_TBBR_TOC, perhaps on a different thread, without blocking your code review. Can somebody from Arm clarify these requirements with the spec writers?
Thanks
-Raghu
On January 23, 2020 at 12:38 AM, Sumit Garg sumit.garg@linaro.org wrote:
Hi Raghu,
I guess you have completely misunderstood this feature. This is an
optional feature which allows loading encrypted FIP payloads using
authenticated decryption, and it MUST be used along with signature
verification (or the TBBR CoT).
So firstly I would suggest you revisit the TBBR spec [1], especially
the requirements R040_TBBR_TOC, R060_TBBR_FUNCTION, etc.
On Thu, 23 Jan 2020 at 00:14, Raghupathy Krishnamurthy
raghu.ncstate@icloud.com wrote:
Hello,
The patch stack looks good. The only comment I have is that the FIP layer has now become security-aware and supports authenticated decryption (only). This is a deviation from the secure/signed/verified boot design, where we use the TBBR CoT to dictate the security operations on the file. That design is nice because file I/O is decoupled from the security policy. This may be a big deviation (I apologize if this was considered and shot down for some other reason), but it may be worthwhile to consider making authenticated decryption a part of the authentication framework as opposed to coupling it with the FIP layer.
It looks like you have mixed up the TBBR CoT and this authenticated
decryption feature. They are completely different and rather
complement each other: the TBBR CoT establishes
secure/signed/verified boot, while this authenticated decryption feature
provides confidentiality protection for FIP payloads.
At a high level, this would mean adding a new authentication method (perhaps AUTH_METHOD_AUTHENTICATED_DECRYPTION), and having the platform specify in the TBBR CoT that the image uses authenticated encryption.
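To show the shape of what I mean (purely illustrative: these are simplified stand-ins, not the real TF-A auth framework structures, and AUTH_METHOD_AUTHENTICATED_DECRYPTION does not exist upstream):

typedef enum {
	AUTH_METHOD_NONE,
	AUTH_METHOD_HASH,                      /* exists in TF-A today      */
	AUTH_METHOD_SIG,                       /* exists in TF-A today      */
	AUTH_METHOD_AUTHENTICATED_DECRYPTION,  /* the hypothetical addition */
} auth_method_type;

typedef struct {
	unsigned int img_id;
	auth_method_type methods[4];           /* applied in the listed order */
} img_policy;

/*
 * Example per-image policy: verify the signature, then perform authenticated
 * decryption, all driven by the build-time CoT description rather than by a
 * flag in the FIP header.
 */
static const img_policy bl32_policy = {
	.img_id  = 5,                          /* e.g. BL32_IMAGE_ID */
	.methods = { AUTH_METHOD_SIG, AUTH_METHOD_AUTHENTICATED_DECRYPTION },
};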
Why would one use authenticated decryption only to establish the TBBR
Chain of Trust, giving the device the capability to self-sign its
firmware? We must use signature verification for the TBBR CoT (see
section 2.1, "Authentication of Code Images by Certificate", in the TBBR
spec [1]).
The authentication framework is already well designed and well equipped to handle these types of extensions.
- This would make the change simpler, since you would not require changes to the FIP tool and the FIP layer.
- This would also allow for future cases where a platform may want to only encrypt the file and use public-key authentication on the encrypted file (for example, the SoC does not have a crypto accelerator for AES-GCM but only for AES and public-key verification, for whatever reason).
How would this ensure integrity of the ciphertext? This approach may be
vulnerable to chosen-ciphertext attacks (CCAs). The authentication tag that
is part of AES-GCM provides integrity protection for the ciphertext.
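For reference, a minimal sketch of what that looks like with the mbedTLS 2.x GCM API (which the TF-A crypto module builds on); the wrapper and its parameters are my own, but mbedtls_gcm_auth_decrypt() fails with MBEDTLS_ERR_GCM_AUTH_FAILED if either the ciphertext or the tag has been altered:

#include <stddef.h>
#include <mbedtls/gcm.h>

int gcm_auth_decrypt_fw(const unsigned char *key, unsigned int key_bits,
			const unsigned char *iv, size_t iv_len,
			const unsigned char *tag, size_t tag_len,
			const unsigned char *ciphertext, size_t ct_len,
			unsigned char *plaintext)
{
	mbedtls_gcm_context ctx;
	int rc;

	mbedtls_gcm_init(&ctx);
	rc = mbedtls_gcm_setkey(&ctx, MBEDTLS_CIPHER_ID_AES, key, key_bits);
	if (rc == 0) {
		/* The tag check inside this call is what gives ciphertext integrity. */
		rc = mbedtls_gcm_auth_decrypt(&ctx, ct_len, iv, iv_len,
					      NULL, 0,          /* no additional data */
					      tag, tag_len,
					      ciphertext, plaintext);
	}
	mbedtls_gcm_free(&ctx);
	return rc;   /* MBEDTLS_ERR_GCM_AUTH_FAILED on tampering */
}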
- This would let you choose the order in which you do the authenticated decryption (or just decryption) and the signature verification, if you use both, or just one or the other.
Have a look at some defective sign-and-encrypt techniques here [2].
The order can't be arbitrary; we need to be careful about
this.
One other thing I'm not entirely comfortable with is that the flag indicating whether there are encrypted files in the FIP is in the *unsigned* portion of the FIP header. An attacker could simply flip bits that dictate security policy in the header and avoid detection (in this case, the indication that the file needs authenticated decryption). If a platform only uses authenticated encryption, but not verified boot, an attacker could flip the bit in the FIP header and have any image loaded on the platform.
Why would one not use TBBR CoT here?
If authenticated encryption cannot be used without verified boot (which requires build-time flags), having a flag in the FIP header to indicate that there are encrypted files is moot, since this information can come at build time through the TBBR CoT. In any case, the security policy that firmware images need to be decrypted, or authenticated with authenticated decryption, seems like a firmware build-time or manufacturing-time decision (perhaps a bit set in the e-fuses).
Again you are confusing the TBBR CoT with the authenticated decryption
feature. And why would one want to hardcode in a device during
provisioning that it can boot only either an encrypted or a plain firmware
image?
There seems to be no benefit to having a flag in the FIP header.
How would one handle a case where BL31 is in plain format and BL32 is
in encrypted format?
Otherwise, I can't think of any attacks due to this and it may be completely okay, but generally, consuming data that dictates security policy/operations before verifying its integrity seems like a recipe for disaster.
If you are really paranoid about authentication of the FIP header, then you
should look at implementing the optional requirement R010_TBBR_TOC from the
TBBR spec [1].
[1] https://developer.arm.com/docs/den0006/latest/trusted-board-boot-requirement...
[2] http://world.std.com/~dtd/sign_encrypt/sign_encrypt7.html
-Sumit
-Raghu
On January 22, 2020 at 3:51 AM, Sumit Garg via TF-A tf-a@lists.trustedfirmware.org wrote:
Hi Sandrine,
On Wed, 22 Jan 2020 at 15:43, Sandrine Bailleux
Sandrine.Bailleux@arm.com wrote:
Hello Sumit,
Thank you for reworking the patches and addressing all of my review
comments. I am happy with the latest version of these and consider them
ready to go. I plan to leave them in Gerrit for another week to give
extra time for other potential reviewers to have a look and comment.
Thanks for your review.
To everyone on the list: Please raise any concerns you may have about
these patches in the coming week. If I don't hear anything by 29th
January 2020, I will merge these patches.
@Sumit: One of the next actions for this patch stack would be to have
some level of testing in the CI system to detect any potential
regressions. We (at Arm) can quite easily add a few build tests but then
testing the software stack on QEMU is a bit more involved for various
reasons (first instance of QEMU testing, dependencies on OPTEE, UEFI,
...) so this might have to wait for some time.
Okay, will wait for CI testing.
-Sumit
Regards,
Sandrine
--
TF-A mailing list
TF-A@lists.trustedfirmware.org