Sumit,
Great point. This perhaps needs to be added to the list of things that need clarification in the PSA-TBFU (Sandrine, can you help with this too?). I believe the answer to your concern lies in section 3.5 of the PSA-TBFU, where it talks about optimizing the trusted boot process. To overcome the problem you're describing, you would:
1) Verify the asymmetric signature.
2) Decrypt the firmware using the SSK on successful signature verification.
3) Re-encrypt (rekey) the firmware using the BSSK (or, as the PSA specifies, a key derived from the HUK using a KDF).
You then verify the asymmetric signature only on a firmware update, and use the rekeyed firmware (encrypted and MAC'd with a device-specific key) on every normal boot. A rough sketch of this flow follows below.
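For illustration, here is a minimal sketch of that update-time flow in C. All helper names (asym_verify_signature, aead_decrypt_with_ssk, derive_bssk_from_huk, aead_encrypt_with_key) are hypothetical placeholders, not existing TF-A APIs, and the buffer handling is simplified:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical helpers -- illustrative names only, not existing TF-A APIs. */
extern int asym_verify_signature(const uint8_t *img, size_t len,
                                 const uint8_t *sig, size_t sig_len);
extern int aead_decrypt_with_ssk(const uint8_t *in, size_t in_len,
                                 uint8_t *out, size_t *out_len);
extern int derive_bssk_from_huk(uint8_t *bssk, size_t bssk_len);
extern int aead_encrypt_with_key(const uint8_t *key, size_t key_len,
                                 const uint8_t *in, size_t in_len,
                                 uint8_t *out, size_t *out_len);

/*
 * Update path: run once per firmware update. 'scratch' is a work buffer
 * for the decrypted image (scratch_len holds its capacity on entry);
 * 'rekeyed' receives the device-bound copy used on every normal boot.
 */
static int rekey_firmware_on_update(const uint8_t *update_img, size_t update_len,
                                    const uint8_t *sig, size_t sig_len,
                                    uint8_t *scratch, size_t scratch_len,
                                    uint8_t *rekeyed, size_t *rekeyed_len)
{
        uint8_t bssk[32];
        int rc;

        /* 1) Verify the asymmetric signature over the update image. */
        rc = asym_verify_signature(update_img, update_len, sig, sig_len);
        if (rc != 0)
                return rc;

        /* 2) Only then decrypt with the class-wide SSK. */
        rc = aead_decrypt_with_ssk(update_img, update_len, scratch, &scratch_len);
        if (rc != 0)
                return rc;

        /* 3) Re-encrypt (and MAC) with the device-unique BSSK, e.g. a key
         *    derived from the HUK through a KDF. */
        rc = derive_bssk_from_huk(bssk, sizeof(bssk));
        if (rc != 0)
                return rc;

        return aead_encrypt_with_key(bssk, sizeof(bssk), scratch, scratch_len,
                                     rekeyed, rekeyed_len);
}

On a normal boot, only the BSSK-protected copy is authenticated and decrypted; the asymmetric signature check is not repeated.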
Thanks
Raghu
On January 26, 2020 at 10:34 PM, Sumit Garg <sumit.garg(a)linaro.org> wrote:
On Fri, 24 Jan 2020 at 16:36, Sumit Garg <sumit.garg(a)linaro.org> wrote:
On Fri, 24 Jan 2020 at 04:02, Raghupathy Krishnamurthy
<raghu.ncstate(a)icloud.com> wrote:
>
> I also just realized that both the TBBR and the ARM PSA only talk about encryption of the image, not authenticated encryption. The guarantees provided by the two are completely different. Your review (https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/2495/) talks about the requirement R060_TBBR_FUNCTION being implemented, which is technically not true (and potentially misleading). We must make a note of this difference and use the appropriate terminology, without mixing the two, in the documentation, commit messages, source code comments and error prints. The tool is also called 'encrypt_fw' but should perhaps be renamed to indicate it is doing authenticated encryption.
I wouldn't call it misleading. The firmware encryption feature
essentially provides confidentiality protection, and authenticated
encryption is the type of crypto algorithm we have used to
implement it.
>
> BTW, the ARM PSA (file:///home/raghu/repos/fvp/DEN0072-PSA_TBFU_1-0-REL.pdf) expects the image manifest (X509 certificate) to contain the hash of the ENCRYPTED image (Table 2, and as described in my answer to your question "How would this ensure integrity of ciphertext?"). The TBBR spec completely misses this fact, and it is a crucial detail if we only implement encryption (as opposed to authenticated encryption). Build_macros.mk, in your change, passes the un-encrypted image to cert-tool. You can get away with it in your implementation, since you are using authenticated encryption, but not if you were only implementing firmware encryption.
I have already highlighted the issue with signing the ciphertext in my
previous reply: it deviates from the security properties provided by
signature verification of the plain firmware. So I think we need to
revisit the ARM PSA TBFU spec.
In addition to this, there are implementation-specific issues with
"signing the ciphertext" too. It simply makes the ciphertext immutable
for the device and prevents meeting the following firmware re-encryption
requirement from the TBBR spec:

R070_TBBR_PROTECTION: The Trusted boot firmware may do the binding of
software image updates at run-time by decrypting the updated SoC
certificates and software images using the OTP/Fuse Secret Symmetric
Key (SSK), followed by re-encrypting these SoC certificates and software
images using a reproducible secret unique per device symmetric key
(BSSK), and then updating the ToC correspondingly.
Also, externally signing every firmware image encrypted with the BSSK
doesn't seem scalable either. And it rules out the case where the
encryption key is never exposed outside the device, e.g. where the key
is only accessible to a hardware crypto engine.
-Sumit
>
> Is it possible for somebody from ARM to have the TBBR spec updated to reflect this? Also perhaps talk to the spec writers about incorporating authenticated encryption into TBBR and PSA? This patch set is somewhat trailblazing in this regard.
>
> -Raghu
>
>
> On January 23, 2020 at 12:08 PM, Raghupathy Krishnamurthy via TF-A <tf-a(a)lists.trustedfirmware.org> wrote:
>
> Hi Sumit,
>
>
> Thanks for your response.
>
>
> So firstly I would suggest you to revisit TBBR spec [1],
>
>
>
> [RK] I'm very familiar with the TBBR spec and the requirements. Note that not all SoCs adhere perfectly to the TBBR spec, since it does not apply to devices in all market segments. However, these devices do use Arm Trusted Firmware and the TBBR CoT in a slightly modified form, which is still perfectly valid. Also, the TBBR spec can be changed if required :)
>
>
> Why would one use authenticated decryption only to establish TBBR
>
> Chain of Trust providing device the capability to self sign its firmware?
>
> [RK] Fair point. However, you may have devices that don't have the processing power, the hardware budget, or the cost tolerance (paying for HSMs to store private asymmetric keys) to implement asymmetric verification, in which case using authenticated decryption to verify firmware authenticity and integrity is perfectly valid. The attacks on devices that use symmetric keys to verify firmware authenticity and integrity are usually related to exploiting firmware flaws that leak the key, or to insiders leaking keys, but that is a different problem and requires different solutions. Fundamentally, there is nothing wrong with using symmetric keys for this purpose, so long as the key is well protected. Also note that security requirements and guarantees are different for different systems. The risk is taken by the system designer and should not be imposed by framework code. I don't advocate doing this, but it is an option that your implementation does not provide (and perhaps rightly so).
>
>
> How would this ensure integrity of ciphertext?
>
>
>
> [RK] You sign the ciphertext. In your design, you pass bl31_enc.bin to cert_tool to sign. You don't decrypt the encrypted ciphertext until you have verified the asymmetric signature (which provides integrity). As far as signature verification is concerned, whether you sign the plaintext or the ciphertext is immaterial, since you are simply verifying that the absolute bits you loaded have not been modified (assuming you use a secure signature scheme).
>
>
> Have a look at some defective sign and encrypt techniques here [2]
>
>
>
> [RK] Again, very familiar with [2]. In the S/MIME case, you have multiple parties. With secure boot, you have one party, effectively verifying its own messages across time. There is only one key used to verify signatures, so sections 1.1 and 1.2 do not apply. Also, you are encrypting and signing with completely different keys and algorithms. Section 1.2 applies when you use RSA/El-Gamal encryption; here you use symmetric encryption and asymmetric signing.
>
>
> Why would one not use TBBR CoT here?
>
>
>
> [RK] see above. Not all systems are designed equal.
>
>
>
> and why would one like to hardcode in a device during
>
> provisioning to boot only either an encrypted or a plain firmware
> image?
>
>
> [RK] Why would you not? You typically want the same security policy for a class of devices, and it should not be modifiable by an attacker. It isn't common for the same class of devices to use encrypted firmware some of the time and un-encrypted firmware at other times. If it is common, there is no problem with setting the bit in the FIP header, as long as verified boot is mandatory. The only concern (as my original email said) is the coupling of the FIP layer and the crypto module in the implementation. I still don't like the fact that the bit saying the file is encrypted is not signed, and this may require talking to the TBBR spec writers. Page 22 of the TBBR spec calls out the ToC as a "Trusted Table of Contents". The FIP header cannot be "trusted" if it is not in ROM or its integrity is not verified by some means! R010_TBBR_TOC should perhaps be mandatory then. Also see R080_TBBR_TOC, which says the TOC MUST be ROM'ed or tied by hardware in readable registers. This requirement seems contradictory to R010_TBBR_TOC, given that the FIP header (TOC) is copied from mutable NVM by ROM or some boot stage and then ROM'd or loaded into registers. I may be misunderstanding R080_TBBR_TOC, but I'd interpret it as requiring the TOC (the FIP header in the ATF implementation of TBBR) to be in ROM or integrity verified.
>
>
> How would one handle a case where BL31 is in plain format and BL32 is in encrypted format?
>
>
>
> [RK]TBBR CoT is equipped to do this. The table is defined on a per image basis.
>
>
> If you are really paranoid about authentication of FIP header...
>
> [RK] I don't mean to pontificate, but there are real-world customers buying real hardware, running ATF code, who care about such details and ask about such things routinely. It is not just me being paranoid, and it is definitely not a minor matter to think through such details. We should discuss this further and consider the implications of R080_TBBR_TOC and R010_TBBR_TOC, perhaps on a different thread, without blocking your code review. Can somebody from ARM clarify these requirements with the spec writers?
>
>
> Thanks
> -Raghu
>
>
> On January 23, 2020 at 12:38 AM, Sumit Garg <sumit.garg(a)linaro.org> wrote:
>
>
> Hi Raghu,
>
> I guess you have completely misunderstood this feature. This is an
> optional feature which allows to load encrypted FIP payloads using
> authenticated decryption which MUST be used along with signature
> verification (or TBBR CoT).
>
> So firstly I would suggest you to revisit TBBR spec [1], especially
> requirements: R040_TBBR_TOC, R060_TBBR_FUNCTION etc.
>
> On Thu, 23 Jan 2020 at 00:14, Raghupathy Krishnamurthy
> <raghu.ncstate(a)icloud.com> wrote:
>
>
>
> Hello,
>
>
> The patch stack looks good. The only comment I have is that the FIP layer has now become security aware and supports authenticated decryption (only). This is a deviation from the secure/signed/verified boot design, where we use the TBBR CoT to dictate the security operations on the file. That design is nice because file IO is decoupled from the security policy. This may be a big deviation (I apologize if this was considered and shot down for some other reason), but it may be worthwhile to consider making authenticated decryption a part of the authentication framework as opposed to coupling it with the FIP layer.
>
> It looks like you have mixed both TBBR CoT and this authenticated
> decryption feature. They both are completely different and rather
> complement each other where TBBR CoT establishes
> secure/signed/verified boot and this authenticated decryption feature
> provides confidentiality protection for FIP payloads.
>
>
> At a high level, this would mean adding a new authentication method (perhaps AUTH_METHOD_AUTHENTICATED_DECRYPTION), and having the platform specify in the TBBR CoT that the image uses authenticated encryption.
>
> Why would one use authenticated decryption alone to establish the TBBR
> Chain of Trust, giving the device the capability to self-sign its
> firmware? We must use signature verification for the TBBR CoT (see
> section 2.1, Authentication of Code Images by Certificate, in the TBBR
> spec [1]).
>
>
> The authentication framework is already well designed and well equipped to handle these types of extensions.
> 1) This would make the change simpler, since you would not require changes to the FIP tool and the FIP layer.
> 2) This would also allow for future cases where a platform may want to only encrypt the file and use public key authentication on the encrypted file (for example, if the SoC has no crypto accelerator for AES-GCM but does have one for AES and public key verification, for whatever reason).
>
> How would this ensure integrity of the ciphertext? This approach may be
> vulnerable to Chosen Ciphertext Attacks (CCAs). The authentication tag
> that is part of AES-GCM provides integrity protection for the ciphertext.
>
>
> 3) This would let you choose the order in which you want to do the authenticated decryption (or just decryption) and signature verification, if you use both, or just one or the other.
>
>
>
> Have a look at some defective sign-and-encrypt techniques here [2].
> The order can't be arbitrary; we need to be careful about this.
>
>
> One other thing I'm not entirely comfortable with is that the flag indicating whether there are encrypted files in the FIP is in the *unsigned* portion of the FIP header. An attacker could simply flip bits that dictate security policy in the header and avoid detection (in this case, the indication that the file needs authenticated decryption). If a platform only uses authenticated encryption, but not verified boot, an attacker could flip the bit in the FIP header and have any image loaded on the platform.
>
> Why would one not use TBBR CoT here?
>
>
> If authenticated encryption cannot be used without verified boot (which requires build-time flags), having a flag in the FIP header to indicate that there are encrypted files is moot, since this can come at build time through the TBBR CoT. In any case, the policy that firmware images need to be decrypted, or authenticated with authenticated decryption, seems like a firmware build-time or manufacturing-time decision (perhaps a bit set in the e-fuses).
>
> Again, you are confusing the TBBR CoT with the authenticated decryption
> feature. And why would one want to hardcode, at provisioning time, that a
> device can boot only either an encrypted or a plain firmware image?
>
>
> There seems to be no benefit to having a flag in the FIP header.
>
> How would one handle a case where BL31 is in plain format and BL32 is
> in encrypted format?
>
>
> Otherwise, I can't think of any attacks due to this and it may be completely okay, but generally, consuming data that dictates security policy/operations before verifying its integrity seems like a recipe for disaster.
>
>
>
> If you are really paranoid about authentication of the FIP header then you
> should look at implementing the optional requirement R010_TBBR_TOC as per
> the TBBR spec [1].
>
> [1] https://developer.arm.com/docs/den0006/latest/trusted-board-boot-requiremen…
> [2] http://world.std.com/~dtd/sign_encrypt/sign_encrypt7.html
>
> -Sumit
>
>
> -Raghu
>
>
> On January 22, 2020 at 3:51 AM, Sumit Garg via TF-A <tf-a(a)lists.trustedfirmware.org> wrote:
>
>
> Hi Sandrine,
>
>
> On Wed, 22 Jan 2020 at 15:43, Sandrine Bailleux
> <Sandrine.Bailleux(a)arm.com> wrote:
>
> Hello Sumit,
>
> Thank you for reworking the patches and addressing all of my review
> comments. I am happy with the latest version of these and consider them
> ready to go. I plan to leave them in Gerrit for another week to give
> extra time for other potential reviewers to have a look and comment.
>
> Thanks for your review.
>
> To everyone on the list: Please raise any concerns you may have about
> these patches in the coming week. If I don't hear anything by 29th
> January 2020, I will merge these patches.
>
> @Sumit: One of the next actions for this patch stack would be to have
> some level of testing in the CI system to detect any potential
> regressions. We (at Arm) can quite easily add a few build tests but then
> testing the software stack on QEMU is a bit more involved for various
> reasons (first instance of QEMU testing, dependencies on OPTEE, UEFI,
> ...) so this might have to wait for some time.
>
> Okay, will wait for CI testing.
>
> -Sumit
>
> Regards,
> Sandrine
> --
> TF-A mailing list
> TF-A(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/tf-a
Thanks Soby, Sumit, Sandrine and Joakim!
Agree with Joakim/Soby that encrypt-then-sign and sign-then-encrypt are both okay when authenticated encryption is used. I'd like to point out again that the link provided by Sumit talks about encrypt-then-sign and sign-then-encrypt using asymmetric encryption and asymmetric signing only, and, as Joakim rightly pointed out (and as I did in my earlier email), it does not necessarily apply here. It is not talking about symmetric encryption with asymmetric signing, which is what the PSA-TBFU and TBBR are talking about.
I usually don't like signing plaintext and then encrypting the plaintext (not authenticated encryption, just plain encryption like AES-CBC) because of https://moxie.org/blog/the-cryptographic-doom-principle/ (written by the creator of the WhatsApp end-to-end encryption protocol; it is framed around symmetric encryption and symmetric signing/MACs, but it applies to any ciphertext that is decrypted without verifying its integrity). If we were to sign the plaintext and then encrypt the firmware, the size of the encrypted file needs to be integrity protected as well, not just the bit that indicates that the file is encrypted. If not, when we decrypt the firmware image, some creative attacks on the symmetric encryption become possible, as described in the link, and careful implementation of error handling/reporting on decryption errors is required.
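For what it's worth, that "never act on ciphertext before its integrity is verified" property is exactly what an AEAD gives you. Below is a minimal authenticated-decryption sketch using the mbedTLS AES-GCM API (the library TF-A's crypto module is built on); the 256-bit key, 12-byte IV and 16-byte tag sizes are just example choices for the sketch:

#include <string.h>

#include <mbedtls/gcm.h>

/*
 * Returns 0 and fills 'plain' only if the GCM tag verifies. On a tag
 * mismatch mbedTLS returns MBEDTLS_ERR_GCM_AUTH_FAILED, and we wipe the
 * output so no unauthenticated plaintext is ever consumed.
 */
static int auth_decrypt_image(const unsigned char key[32],
                              const unsigned char iv[12],
                              const unsigned char tag[16],
                              const unsigned char *cipher, size_t len,
                              unsigned char *plain)
{
        mbedtls_gcm_context ctx;
        int rc;

        mbedtls_gcm_init(&ctx);
        rc = mbedtls_gcm_setkey(&ctx, MBEDTLS_CIPHER_ID_AES, key, 256);
        if (rc == 0)
                rc = mbedtls_gcm_auth_decrypt(&ctx, len,
                                              iv, 12,    /* IV */
                                              NULL, 0,   /* no additional data */
                                              tag, 16,   /* expected tag */
                                              cipher, plain);
        mbedtls_gcm_free(&ctx);

        if (rc != 0)
                memset(plain, 0, len);

        return rc;
}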
I also like Soby's approach of having an EP_ENCRYPTED attribute and relying on platform_pre_image_load() and platform_post_image_load() to do the decryption; that addresses my non-spec-related concerns. It allows for both decrypt-then-authenticate (which I wouldn't use) and authenticate-then-decrypt. It also puts the attribute that indicates firmware encryption in the image descriptor table, which is integrity protected by virtue of being in ROM or being signed, and it leaves the FIP layer unaltered.
Thanks
-Raghu
On January 27, 2020 at 6:42 AM, Soby Mathew <Soby.Mathew(a)arm.com> wrote:
On 27/01/2020 12:34, Joakim Bech via TF-A wrote:
> on raw binaries are there so we can be sure that
> we're loading unmodified firmware coming from the one owning a private key
> corresponding to the public key hardcoded into the device (or via a hash of
> the public key). I think we all can agree
OK, I have finally managed to catch up on this thread. Apologies for the
delayed response.
As Joakim mentions, I think both of the mentioned cases, i.e. encrypt
plain text -> sign and sign -> encrypt, are valid, and it depends on the
threat model and security requirements.
I have had a brief look at the patch stack, and coupling the feature to
the FIP does not seem like a good idea to me (plus the added complexity of
protecting the ToC). Currently, metadata of the firmware images is
passed OOB via the `bl_mem_params_node_t` descriptor to the BL images,
whereas this patch breaks that convention. It is better to follow the
set convention and avoid a dependency on the FIP format (btw, platforms
need not use the FIP format and can use other packaging formats).
The IV data needed to decrypt seems to be prepended to the encrypted file
in the FIP, which requires custom file pointer manipulations and raises
some red flags:
https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/2495/8/drive…
IMO, this should not be visible to the FIP driver. I am not sure of the
best practice for passing IV data, but if it has to be prepended, it
should be FIP-agnostic.
The TBBR assumes a single owner for all images, whereas the TBFU
supersedes this spec by allowing multiple owners for different images.
We are in the process of enhancing the CoT to support different Roots of
Trust for different images, and there needs to be the capability to
encrypt different images with different keys based on ownership. The
current implementation has some limitations, like introducing a platform
API invocation within the driver layer to get the single key, which is
not ideal IMO.
I haven't fully fleshed out my ideas, but my thoughts are based on the
enhancements done by Masahiro for adding decompression support for BL
images. See https://github.com/ARM-software/arm-trusted-firmware/pull/1224 .
Basically, that patch series allows any filter to be set up before/after
the images are loaded. It relies on pre-load and post-load hooks, which
are platform specific, to perform the filter operation. So, if a new
image attribute `EP_ENCRYPTED` is added to ep_info_exp.h, then BL2 needs
to do the following in bl2_plat_handle_pre_image_load():
{
	if EP_ENCRYPTED is set:
		load the image to SRAM and decrypt it using the crypto module
}
This will cater for the `decrypt-then-authenticate` flow.
The fip_tool no longer needs to be aware of encryption, and the build
process just needs to pipe the encrypted binaries to the fip_tool.
Similarly, if `authenticate-then-decrypt` needs to be supported, all
that the platform needs to do is implement the decryption in
bl2_plat_handle_post_image_load().
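To make that concrete, here is a minimal sketch of the authenticate-then-decrypt variant of this idea. EP_ENCRYPTED and plat_auth_decrypt_image() are illustrative placeholders (neither exists in TF-A today, and the attribute value is made up); bl2_plat_handle_post_image_load() and the bl_mem_params_node_t accessors are the existing interfaces:

#include <stdint.h>

#include <common/desc_image_load.h>

/* Illustrative definitions only -- not existing TF-A interfaces. The
 * helper performs authenticated decryption of a loaded image in place,
 * with a key the platform selects per image_id. */
#define EP_ENCRYPTED		(1U << 6)
int plat_auth_decrypt_image(unsigned int image_id, uintptr_t base,
			    uint32_t size);

/* authenticate-then-decrypt: this hook runs after the authentication
 * framework has verified the (still encrypted) image. */
int bl2_plat_handle_post_image_load(unsigned int image_id)
{
	bl_mem_params_node_t *node = get_bl_mem_params_node(image_id);

	if ((node->ep_info.h.attr & EP_ENCRYPTED) != 0U) {
		return plat_auth_decrypt_image(image_id,
					       node->image_info.image_base,
					       node->image_info.image_size);
	}

	return 0;
}

For the decrypt-then-authenticate flow, the same check would instead live in bl2_plat_handle_pre_image_load(), with the platform loading the image into SRAM and decrypting it itself, as in the pseudocode above. Either way the fip_tool stays encryption-agnostic and the key selection stays entirely in platform code.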
The platform can now use different keys for different BL images
if it needs to do so.
Some use cases require the firmware images to be re-encrypted using the
HUK after a firmware update (aka firmware binding), and following this
design will allow that to be done as well.
Best Regards
Soby Mathew
On 24/01/2020 20:20, Raghupathy Krishnamurthy via TF-A wrote:
> It appears that the BL1 FWU SMCs are written under the assumption that only one core can call the SMCs at any given time, but I don't see anything enforcing it. What prevents this?
>
>
> -Raghu
>
Hi Raghu
BL1 itself is uni-processor [except the early assembly code, which
differentiates the primary core from the secondaries]. Hence it makes no
attempt to protect the SMCs from concurrent calls by multiple cores.
Best Regards
Soby Mathew
Thanks Sumit. We are beginning to go off topic and I don't think you and I agree entirely. I'll lay out my concerns to see if others on the list share them, so I hope you can let them respond. If nobody else shares my concerns or responds, the patch will be merged soon anyway, so there is no point dragging this out :) I apologize for the long email. I don't know how to keep this topic both short and detailed. :)
Here are my concerns:
1) The patch set claims to implement R060_TBBR_FUNCTION from the TBBR spec. I don't agree with this. The patch uses authenticated encryption (which does provide confidentiality), but that is different from implementing firmware encryption as described in the TBBR (and PSA), in my view. If we are implementing R060_TBBR_FUNCTION, the firmware should first be encrypted (symmetric), and the encrypted firmware should then be asymmetrically signed (as per PSA). Correspondingly, the order for verification should be asymmetric signature verification followed by decryption (symmetric). In this patch, the plaintext firmware is asymmetrically signed, and the plaintext firmware is encrypted and authenticated with a symmetric key using authenticated encryption. The order for verification in this patch is: authenticate and decrypt using the symmetric key, and THEN asymmetric signature verification on the decrypted (and now plaintext) firmware. The ordering of verification and decryption is subtly different from what the spec (TBBR and PSA) expects; see the sketch after this list. Security does not *seem* to be broken as far as I can tell, but the patch is not strictly an implementation of the spec, as it claims. This can throw off people reading the spec and trying to match code to spec, which is not good when someone wants to audit and gain confidence in the security of the implementation. At a minimum, we need someone writing the spec to address this.
2) The TBBR spec does not talk about authenticated encryption of firmware; it only talks about encryption of firmware. It also does not specify whether the encrypted firmware or the plaintext needs to be signed. This is a critical detail. Nor does it specify the order in which encryption and signing, or decryption and verification, must be done. These are important details and need to be addressed explicitly (perhaps in both TBBR and PSA). The use of authenticated encryption needs to be explicitly discussed as well.
3) TBBR specifies that the TOC (table of contents) *may* be authenticated. The TOC contains bits that dictate security policy, in this case whether the firmware is encrypted or not. I don't think it is good to consume security policy from unauthenticated data. In my view, this information must be in the signed image manifest (certificate in ATF), since an attacker can at least cause simple DoS attacks by flipping bits in the FIP header, undetected. If the bit is flipped, a firmware image going out to a million devices may not boot, because an attacker decided to flip a bit and signature verification fails, since the boot loader decided not to decrypt the firmware based on an unsigned bit of information. The PSA appears to have a better approach and makes no provision for such data to be outside the image manifest, which is always signed.
4) This last one is a matter of opinion and I can live with the current design. The FIP layer is now aware of the crypto module. FIP has also become coupled with authenticated encryption (if we do this, why not have the FIP layer call auth_mod_verify_img(), and why stop at crypto_mod_auth_decrypt()?). FIP is security aware. I'd like to keep things the way they are today, where IO and security are separate modules glued together by another layer (load_auth_image(), for example). This is cleaner in my view, since the security policy (should I decrypt? should I verify signatures? should I apply any security at all?) is separate from IO.
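To make the ordering difference in point 1 concrete, the sketch below contrasts the two boot-time flows. The helper names are illustrative placeholders, not APIs from the patch or from TF-A:

#include <stddef.h>
#include <stdint.h>

/* Illustrative helpers only. */
extern int asym_verify(const uint8_t *data, size_t len);   /* RSA/ECDSA signature check */
extern int sym_decrypt(uint8_t *data, size_t *len);        /* plain symmetric decryption */
extern int aead_auth_decrypt(uint8_t *data, size_t *len);  /* AES-GCM decrypt + tag check */

/* Order expected by the PSA-TBFU: the signature covers the ENCRYPTED
 * image, so verify first, then decrypt. */
static int boot_psa_order(uint8_t *img, size_t *len)
{
        if (asym_verify(img, *len) != 0)
                return -1;
        return sym_decrypt(img, len);
}

/* Order implemented by the patch: the signature covers the PLAIN image,
 * so authenticated decryption runs first, then the signature check. */
static int boot_patch_order(uint8_t *img, size_t *len)
{
        if (aead_auth_decrypt(img, len) != 0)
                return -1;
        return asym_verify(img, *len);
}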
Thanks
Raghu
On January 24, 2020 at 3:06 AM, Sumit Garg <sumit.garg(a)linaro.org> wrote:
On Fri, 24 Jan 2020 at 04:02, Raghupathy Krishnamurthy
<raghu.ncstate(a)icloud.com> wrote:
I also just realized that both the TBBR and ARM PSA only talk about encryption of the image, and not authenticated encryption. The guarantees provided by both are completely different. Your review(https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/2495/) talks about the requirement R060_TBBR_FUNCTION being implemented, which is technically not true(and potentially misleading). We must make a note of this difference and use the appropriate terminology, without mixing the two, in the documentation, commit messages, source code comments and error prints. The tool is also called 'encrypt_fw ' but should maybe be named appropriately to indicate it is doing authenticated encryption.
I wouldn't call it misleading. Since firmware encryption feature
essentially provides confidentiality protection and authenticated
encryption is the type of crypto algorithm which we have used to
implement it.
BTW, ARM PSA(file:///home/raghu/repos/fvp/DEN0072-PSA_TBFU_1-0-REL.pdf) expects that the image manifest(X509 certificate) contain the hash of the ENCRYPTED image(Table 2 and as described in my answer to your question "How would this ensure integrity of ciphertext"). The TBBR spec completely misses this fact, and is a crucial detail if we only implement encryption(as opposed to authenticated encryption).Build_macros.mk, in your change, passes the un-encrypted image to cert-tool. You can get away with it in your implementation, since you are using authenticated encryption, not if you were only implementing firmware encryption.
I have already highlighted the issue with signing the ciphertext in my
previous reply which deviates from security properties provided by
signature verification of plain firmware. So I think we need to
revisit ARM PSA TBFU spec.
-Sumit
On Mon, 2020-01-27 at 12:04 +0000, Sandrine Bailleux via TF-A wrote:
> I share this concern. I was actually surprised to see that the TBBR
> specification advocates putting this security policy bit in the
> unencrypted part of the FIP, I do not know the rationale for that.
Sorry, I meant: in the *unsigned* part of the FIP. This bit itself
cannot be encrypted, as it indicates how to decrypt data!
Hi,
On Fri, 2020-01-24 at 18:39 +0000, Raghupathy Krishnamurthy via TF-A
wrote:
> Here are my concerns:
> 1) The patch set claims to implement R060_TBBR_FUNCTION from the TBBR
> spec. I don't agree with this. The patch is using authenticated
> encryption(which does provide confidentiality) but is different from
> implementing firmware encryption as described in the TBBR(and PSA) in
> my view. If we are implementing R060_TBBR_FUNCTION, in my view, this
> should be implemented such that the firmware is first
> encrypted(symmetric), and the encrypted firmware should be
> asymmetrically signed(as per PSA). Also, the order for verification
> should be asymmetric signature verification and then
> decryption(symmetric). In this patch, plain text firmware is
> asymmetrically signed, plain text firmware is encrypted and
> authenticated with a symmetric key using authenticated encryption.
> The order for verification in this patch is authenticate and decrypt
> using the symmetric key and THEN asymmetric signature verification on
> the decryted(and now plain text) firmware. The ordering of
> verification and decryption are subtly different from what the
> spec(TBBR and PSA) expects. Security does not *seem* to be broken as
> far as i can tell, but the patch is not strictly an implementation of
> the spec, as it claims. This can throw off people reading the spec
> and trying to match code to spec. This is not good when someone wants
> to audit and gain confidence in the security of the implementation.
> At a minimum, we need someone writing the spec to address this.
I do not have a strong opinion on this one. As I mentioned at the start
of this email thread, I am quite new to the concept of authenticated
decryption and naively thought this would not differ too much from
using decryption + signature verification separately. However, if these
are fundamentally different things like you say, I get your point and I
too am worried about the confusion this could create.
> 2) The TBBR spec does not talk about authenticated encryption of
> firmware and only talks about encryption of firmware. It also does
> not specify if the encrypted firmware needs to be signed or the plain
> text need to be signed. This is critical detail. It also does not
> specify the order in which encryption and signing, decryption and
> verification must be done. These are important details and need to be
> addressed explicitly(perhaps in both TBBR and PSA). Use of
> authenticated encryption needs to be explicitly discussed as well.
TBBR is quite an old specification and it is unlikely that it will ever
be updated. PSA-TBFU is meant to supersede it so I think any
update/clarification/addition would only apply to TBFU.
I agree that it would be good for PSA-TBFU to provide some guidance
around encryption and signature together, in which order they should be
done and what security properties they bring together. I can raise this
to the PSA architects.
> 3) TBBR specifies that the TOC(table of contents) *may* be
> authenticated. The TOC contains bits that dictate security policy, in
> this case, if the firmware is encrypted or not. I don't think it is
> good to consume security policy from unauthenticated data. In my
> view, this information must be in the signed image
> manifest(certificate in ATF) since an attacker can at-least cause
> simple DoS attacks by flipping bits in the FIP header, undetected. If
> the bit is flipped, a firmware image going out to a million devices
> may not boot, since an attacker decided to flip a bit, and signature
> verification fails, since the boot loader decided not decrypt the
> firmware based in an unsigned bit of information. PSA appears to have
> a better approach and makes no provision such data to be outside the
> image manifest, which is always signed.
I share this concern. I was actually surprised to see that the TBBR
specification advocates putting this security policy bit in the
unencrypted part of the FIP; I do not know the rationale for that.
Given that TBBR will be deprecated in the future, I wonder whether it
would be better to adopt PSA's approach here and move this security
policy bit into the signed part of the FIP. This will make this part of
the implementation non-TBBR compliant but again, I don't see that as a
big concern, as we should look at PSA-TBFU as the future and not worry
too much about TBBR if we have good reasons to diverge.
> 4) This last one is a matter of opinion and can live with the current
> design. The FIP layer is now aware of the crypto module. FIP has also
> become coupled with authenticated encryption(If we do this, why not
> have the FIP layer call auth_mod_verify_img() and why stop at
> cyrpto_mod_auth_decrypt()) . FIP is security aware. I'd like to keep
> things the way they are today, where IO and security are separate
> modules and glued together by another layer(load_auth_image() for
> example). This is cleaner in my view since security policy(should i
> decrypt? should i verify signatures? should i apply any security at
> all?) is separate from IO.
I agree with you. I too found it more elegant and cleaner to have the
FIP layer and the crypto layer as two independent modules, and I too feel
slightly uncomfortable about introducing such a dependency. But I
could not think of a way to avoid it while keeping the security policy
bit in the FIP ToC header. That said, if the latter is something we're
considering redesigning, then we might be able to remove this dependency
and keep the design as it is.
Regards,
Sandrine
On Fri, 24 Jan 2020 at 04:02, Raghupathy Krishnamurthy
<raghu.ncstate(a)icloud.com> wrote:
>
> I also just realized that both the TBBR and ARM PSA only talk about encryption of the image, and not authenticated encryption. The guarantees provided by both are completely different. Your review(https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/2495/) talks about the requirement R060_TBBR_FUNCTION being implemented, which is technically not true(and potentially misleading). We must make a note of this difference and use the appropriate terminology, without mixing the two, in the documentation, commit messages, source code comments and error prints. The tool is also called 'encrypt_fw ' but should maybe be named appropriately to indicate it is doing authenticated encryption.
I wouldn't call it misleading. Since firmware encryption feature
essentially provides confidentiality protection and authenticated
encryption is the type of crypto algorithm which we have used to
implement it.
>
> BTW, ARM PSA(file:///home/raghu/repos/fvp/DEN0072-PSA_TBFU_1-0-REL.pdf) expects that the image manifest(X509 certificate) contain the hash of the ENCRYPTED image(Table 2 and as described in my answer to your question "How would this ensure integrity of ciphertext"). The TBBR spec completely misses this fact, and is a crucial detail if we only implement encryption(as opposed to authenticated encryption).Build_macros.mk, in your change, passes the un-encrypted image to cert-tool. You can get away with it in your implementation, since you are using authenticated encryption, not if you were only implementing firmware encryption.
I have already highlighted the issue with signing the ciphertext in my
previous reply which deviates from security properties provided by
signature verification of plain firmware. So I think we need to
revisit ARM PSA TBFU spec.
-Sumit
>
> Is it possible for somebody from ARM to have the TBBR spec updated to reflect this? Also perhaps talk to the spec writers about incorporating authenticated encryption into TBBR and PSA? This patch set is somewhat trailblazing in this regard.
>
> -Raghu
>
>
> On January 23, 2020 at 12:08 PM, Raghupathy Krishnamurthy via TF-A <tf-a(a)lists.trustedfirmware.org> wrote:
>
> Hi Sumit,
>
>
> Thanks for your response.
>
>
> So firstly I would suggest you to revisit TBBR spec [1],
>
>
>
> [RK] I'm very familiar with the TBBR spec and the requirements. Note that not all SoC's adhere perfectly to the TBBR spec, since it does not apply to devices in all market segments. However, these devices do use arm trusted firmware and TBBR CoT in a slightly modified form, which is still perfectly valid. Also, the TBBR spec can be changed if required :)
>
>
> Why would one use authenticated decryption only to establish TBBR
>
> Chain of Trust providing device the capability to self sign its firmware?
>
> [RK] Fair point. However, you may have devices that don't have the processing power or hardware budget or cost factors(paying for HSM's to store private asymmetric keys), to implement asymmetric verification, in which case using authenticated decryption to verify firmware authenticity and integrity is perfectly valid. The attacks on devices that use symmetric keys to verify firmware authenticity and integrity are usually related to exploiting firmware flaws that leak the key or insiders leaking keys, but that is a different problem and requires different solutions. Fundamentally, there is nothing wrong with using symmetric keys for this purpose, so long as the key is well protected. Also note, security requirements and guarantees are different for different systems. The risk is taken by the system designer and should not be imposed by framework code. I don't advocate doing this but it is an option that your implementation does not provide(and perhaps rightly so).
>
>
> How would this ensure integrity of ciphertext?
>
>
>
> [RK] You sign the ciphertext. In your design, you pass bl31_enc.bin to cert_tool to sign. You don't decrypt the encrypted cipher text until you have verified the asymmetric signature(which provides integrity). As far as signature verification is concerned, whether you sign the plain text or ciphertext is immaterial, since you are simply verifying that the absolute bits you loaded have not been modified(assuming you use a secure signature scheme).
>
>
> Have a look at some defective sign and encrypt techniques here [2]
>
>
>
> [RK] Again, very familiar with [2]. In the S/MIME case, you have multiple parties. With secure boot, you have one party, effectively verifying its own messages across time. There is only one key used to verify signatures. 1.1 and 1.2 does not apply. Also you are encrypting and signing with completely different keys and algorithms. Section 1.2 applies when you use RSA/El-gamal encryption. Here you use symmetric encryption and asymmetric signing.
>
>
> Why would one not use TBBR CoT here?
>
>
>
> [RK] see above. Not all systems are designed equal.
>
>
>
> and why would one like to hardcode in a device during
>
> provisioning to boot only either an encrypted or a plain firmware
> image?
>
>
> [RK] Why would you not? You typically want to have the same security policy for a class of devices and not be modifiable by an attacker. It isn't common for the same class of devices to use encrypted firmware some times, and un-encrypted firmware other times. If it is common, there is no problem with setting the bit in the FIP header, as long as verified boot is mandatory. The only concern(as my original email said) is the coupling of the FIP layer and the crypto module, in the implementation. I still don't like that fact that the bit saying the file is encrypted is not signed and this may require talking to the TBBR spec writers. Page 22 of the TBBR spec calls out ToC as "Trusted Table of Contents". The FIP header cannot be "trusted", if it is not in ROM or its integrity is verified by some means! R010_TBBR_TOC should perhaps be mandatory then. Also see R080_TBBR_TOC that says the TOC MUST be ROM'ed or tied by hardware in readable registers. This requirement seems contradictory to R010_TBBR_TOC, given that the FIP header(TOC) is copied from mutable NVM by ROM or some boot stage and then ROM'd or loaded into registers. I may be misunderstanding R080_TBBR_TOC, but i'd interpret it as the TOC(FIP header in ATF implementation of TBBR) as being in ROM or integrity verified.
>
>
> How would one handle a case where BL31 is in plain format and BL32 is in encrypted format?
>
>
>
> [RK]TBBR CoT is equipped to do this. The table is defined on a per image basis.
>
>
> If you are really paranoid about authentication of FIP header...
>
> [RK] I don't mean to pontificate but there are real world customers buying real hardware, running ATF code, who care about such details and ask about such things routinely. It is not just me being paranoid and is definitely not a minor matter to think of such details. We should discuss more and consider the implications of R080_TBBR_TOC and R010_TBBR_TOC, perhaps on a different thread, without blocking your code review. Can somebody from ARM clarify these requirements with the spec writers?
>
>
> Thanks
> -Raghu
Never mind. No PSCI SMCs will be available.
On January 24, 2020 at 12:20 PM, Raghupathy Krishnamurthy via TF-A <tf-a(a)lists.trustedfirmware.org> wrote:
It appears that the BL1 FWU SMCs are written under the assumption that only one core can call the SMCs at any given time, but I don't see anything enforcing it. What prevents this?
-Raghu
It appears that the BL1 FWU SMCs are written under the assumption that only one core can call the SMCs at any given time, but I don't see anything enforcing it. What prevents this?
-Raghu
On 22/01/2020 12:29, Scott Branden via TF-A wrote:
> Please revert the removal of RSA PKCS#1 v1.5 support from cert_tool:
>
> https://github.com/ARM-software/arm-trusted-firmware/commit/6a415a508ea6ace…
>
> We have products shipping with such support. I think this problem came
> up before when somebody tried removing such support.
> They still need to run with the latest yocto codebase.
>
> Regards,
> Scott
>
Hi Scott,
It is untenable for us as maintainers to keep supporting deprecated
features in the tree. We need to be able to move the codebase forward.
As the commit message says, the RSA PKCS#1.5 support was removed from
BL1/BL2 images before this patch, and it no longer made sense to keep
the support for just the cert_tool.
Since you are not using the latest TF-A code for your platform (PKCS#1.5 is
no longer supported there), it does not make sense to pull the latest
master just for the tool. So my suggestion would be to pin your Yocto
scripts to a TF-A release that still had support for PKCS#1.5.
Best Regards
Soby Mathew
Hi Sumit,
Thanks for your response.
>>So firstly I would suggest you to revisit TBBR spec [1],
[RK] I'm very familiar with the TBBR spec and its requirements. Note that not all SoCs adhere perfectly to the TBBR spec, since it does not apply to devices in all market segments. However, these devices do use ARM Trusted Firmware and the TBBR CoT in a slightly modified form, which is still perfectly valid. Also, the TBBR spec can be changed if required :)
>>Why would one use authenticated decryption only to establish TBBR
Chain of Trust providing device the capability to self sign its firmware?
[RK] Fair point. However, you may have devices that don't have the processing power, hardware budget, or cost headroom (e.g. paying for HSMs to store private asymmetric keys) to implement asymmetric verification, in which case using authenticated decryption to verify firmware authenticity and integrity is perfectly valid. The attacks on devices that use symmetric keys to verify firmware authenticity and integrity usually come down to exploiting firmware flaws that leak the key, or insiders leaking keys, but that is a different problem and requires different solutions. Fundamentally, there is nothing wrong with using symmetric keys for this purpose, so long as the key is well protected. Also note that security requirements and guarantees differ between systems. The risk is taken by the system designer and should not be imposed by framework code. I don't advocate doing this, but it is an option that your implementation does not provide (and perhaps rightly so).
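For what it's worth, the symmetric-only check I have in mind is just an AES-GCM authenticated decryption of the image. A minimal sketch, assuming mbed TLS (which TF-A already links for TBBR) and a hypothetical key/IV/tag layout (not the encrypt_fw format):

    #include <stddef.h>
    #include <mbedtls/gcm.h>

    /* Illustrative only: authenticate and decrypt an image with AES-GCM.
     * The key here could be a device-unique key such as the BSSK. */
    static int sym_auth_decrypt(const unsigned char key[32],
                                const unsigned char iv[12],
                                const unsigned char tag[16],
                                const unsigned char *img_enc, size_t len,
                                unsigned char *img_out)
    {
        mbedtls_gcm_context ctx;
        int rc;

        mbedtls_gcm_init(&ctx);
        rc = mbedtls_gcm_setkey(&ctx, MBEDTLS_CIPHER_ID_AES, key, 256);
        if (rc == 0) {
            /* Fails with MBEDTLS_ERR_GCM_AUTH_FAILED if the image or tag
             * was tampered with, so this is also the integrity check. */
            rc = mbedtls_gcm_auth_decrypt(&ctx, len, iv, 12, NULL, 0,
                                          tag, 16, img_enc, img_out);
        }
        mbedtls_gcm_free(&ctx); /* clear the expanded key from the context */
        return rc;
    }

A failure here must be treated exactly like a signature verification failure: the image is rejected before anything in it is parsed or executed.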
>>How would this ensure integrity of ciphertext?
[RK] You sign the ciphertext. In your design, you pass bl31_enc.bin to cert_tool to sign. You don't decrypt the encrypted ciphertext until you have verified the asymmetric signature (which provides integrity). As far as signature verification is concerned, whether you sign the plaintext or the ciphertext is immaterial, since you are simply verifying that the absolute bits you loaded have not been modified (assuming you use a secure signature scheme).
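In code, the order I am describing is simply the following (illustrative sketch only; plat_verify_sig() and plat_decrypt() are hypothetical stand-ins for the platform crypto driver, not TF-A APIs):

    #include <stddef.h>

    int plat_verify_sig(const void *data, size_t len,
                        const void *sig, size_t sig_len);    /* hypothetical */
    int plat_decrypt(const void *in, size_t len, void *out); /* hypothetical */

    static int load_signed_encrypted_image(const void *img_enc, size_t len,
                                           const void *sig, size_t sig_len,
                                           void *img_out)
    {
        /* 1. Integrity/authenticity: verify the public-key signature over
         *    the encrypted bits exactly as they were loaded from NVM. */
        if (plat_verify_sig(img_enc, len, sig, sig_len) != 0)
            return -1; /* reject before touching the payload */

        /* 2. Confidentiality: only now decrypt with the symmetric key
         *    (SSK/BSSK). Plain, unauthenticated encryption suffices here
         *    because the signature already covered these exact bits. */
        return plat_decrypt(img_enc, len, img_out);
    }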
>>Have a look at some defective sign and encrypt techniques here [2]
[RK] Again, very familiar with [2]. In the S/MIME case, you have multiple parties. With secure boot, you have one party, effectively verifying its own messages across time. There is only one key used to verify signatures, so sections 1.1 and 1.2 do not apply. Also, you are encrypting and signing with completely different keys and algorithms. Section 1.2 applies when you use RSA/El Gamal encryption; here you use symmetric encryption and asymmetric signing.
>>Why would one not use TBBR CoT here?
[RK] see above. Not all systems are designed equal.
>>and why would one like to hardcode in a device during
provisioning to boot only either an encrypted or a plain firmware
image?
[RK] Why would you not? You typically want the same security policy for a class of devices, and you don't want it to be modifiable by an attacker. It isn't common for the same class of devices to use encrypted firmware some of the time and un-encrypted firmware at other times. If it is common, there is no problem with setting the bit in the FIP header, as long as verified boot is mandatory. The only concern (as my original email said) is the coupling of the FIP layer and the crypto module in the implementation. I still don't like the fact that the bit saying the file is encrypted is not signed, and this may require talking to the TBBR spec writers. Page 22 of the TBBR spec calls out the ToC as the "Trusted Table of Contents". The FIP header cannot be "trusted" if it is not in ROM or its integrity is verified by some means! R010_TBBR_TOC should perhaps be mandatory then. Also see R080_TBBR_TOC, which says the TOC MUST be ROM'ed or tied by hardware in readable registers. This requirement seems contradictory to R010_TBBR_TOC, given that the FIP header (TOC) is copied from mutable NVM by ROM or some boot stage and then ROM'd or loaded into registers. I may be misunderstanding R080_TBBR_TOC, but I'd interpret it as requiring the TOC (the FIP header in the ATF implementation of TBBR) to be in ROM or integrity-verified.
>>How would one handle a case where BL31 is in plain format and BL32 is in encrypted format?
[RK]TBBR CoT is equipped to do this. The table is defined on a per image basis.
>>If you are really paranoid about authentication of FIP header...
[RK] I don't mean to pontificate, but there are real-world customers buying real hardware and running ATF code who care about such details and ask about such things routinely. It is not just me being paranoid, and it is definitely not a minor matter to think about such details. We should discuss more and consider the implications of R080_TBBR_TOC and R010_TBBR_TOC, perhaps on a different thread, without blocking your code review. Can somebody from ARM clarify these requirements with the spec writers?
Thanks
-Raghu
+tf a list
On January 23, 2020 at 2:32 PM, Raghupathy Krishnamurthy <raghu.ncstate(a)icloud.com> wrote:
I also just realized that both the TBBR and ARM PSA only talk about encryption of the image, and not authenticated encryption. The guarantees provided by both are completely different. Your review(https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/2495/) talks about the requirement R060_TBBR_FUNCTION being implemented, which is technically not true(and potentially misleading). We must make a note of this difference and use the appropriate terminology, without mixing the two, in the documentation, commit messages, source code comments and error prints. The tool is also called 'encrypt_fw ' but should maybe be named appropriately to indicate it is doing authenticated encryption.
BTW, ARM PSA(file:///home/raghu/repos/fvp/DEN0072-PSA_TBFU_1-0-REL.pdf) expects that the image manifest(X509 certificate) contain the hash of the ENCRYPTED image(Table 2 and as described in my answer to your question "How would this ensure integrity of ciphertext"). The TBBR spec completely misses this fact, and is a crucial detail if we only implement encryption(as opposed to authenticated encryption).Build_macros.mk, in your change, passes the un-encrypted image to cert-tool. You can get away with it in your implementation, since you are using authenticated encryption, not if you were only implementing firmware encryption.
Is it possible for somebody from ARM to have the TBBR spec updated to reflect this? Also perhaps talk to the spec writers about incorporating authenticated encryption into TBBR and PSA? This patch set is somewhat trailblazing in this regard.
-Raghu
Hi Raghu,
I guess you have completely misunderstood this feature. It is an
optional feature which allows loading encrypted FIP payloads using
authenticated decryption, and it MUST be used along with signature
verification (or the TBBR CoT).
So firstly I would suggest you revisit the TBBR spec [1], especially
requirements R040_TBBR_TOC, R060_TBBR_FUNCTION, etc.
On Thu, 23 Jan 2020 at 00:14, Raghupathy Krishnamurthy
<raghu.ncstate(a)icloud.com> wrote:
>
> Hello,
>
> The patch stack looks good. The only comment i have is that the FIP layer has now become security aware and supports authenticated decryption(only). This is a deviation from the secure/signed/verified boot design, where we use the TBBR COT to dictate the security operations on the file. This is nice, because file IO is decoupled from the security policy. This may be a big deviation(i apologize if this was considered and shot down for some other reason), but it may be worthwhile to consider making authenticated decryption a part of the authentication framework as opposed to coupling it with the FIP layer.
It looks like you have mixed up the TBBR CoT and this authenticated
decryption feature. They are completely different and rather
complement each other: the TBBR CoT establishes
secure/signed/verified boot, while this authenticated decryption feature
provides confidentiality protection for FIP payloads.
> At a high level, this would mean adding a new authentication method(perhaps AUTH_METHOD_AUTHENTICATED_DECRYPTION), and having the platform specify that the image is using authenticated encryption in the TBBR COT.
Why would one use authenticated decryption alone to establish the TBBR
Chain of Trust, giving the device the capability to self-sign its
firmware? We must use signature verification for the TBBR CoT (see
section 2.1, Authentication of Code Images by Certificate, in the TBBR spec
[1]).
> The authentication framework is already well designed and well equipped to handle these types of extensions.
> 1) This would make the change simpler, since you would not require changes to the FIP tool and the FIP layer.
> 2) This would also allow for future cases where a platform may want to only encrypt the file and use public key authentication on the encrypted file(for ex. the soc does not have a crypto accelerator for aes-gcm but only for AES and public key verification, for whatever reason).
How would this ensure integrity of ciphertext? This approach may be
vulnerable to Chosen Ciphertext Attacks (CCAs). Authentication tag as
part of AES-GCM provides integrity protection for ciphertext.
> 3) This would let you choose the order in which you want to do the authenticated decryption(or just decryption) and signature verification, if you use both, one or the other.
>
Have a look at some defective sign-and-encrypt techniques here [2].
The order can't be arbitrary; we need to be careful about
this.
> One other thing i'm not entirely comfortable with is that the flag indicating if there are encrypted files or not in the FIP, is in the *unsigned* portion of the FIP header. An attacker could simply flip bits that dictate security policy in the header and avoid detection(in this case, the indication that the file needs authenticated decryption). If a platform only uses authenticated encryption, but not verified boot, an attacker could flip the bit in the FIP header and have any image loaded on the platform.
Why would one not use TBBR CoT here?
> If authenticated encryption cannot be used without verified boot(which requires build time flags), having a flag to indicate that there are encrypted files in the FIP header is moot, since this can come at build time through the TBBR COT. In any case, it seems like the security policy that firmware images need to be decrypted or authenticated with authenticated decryption, seems like a firmware build time or manufacturing time decision(perhaps a bit set in the e-fuses).
Again, you are confusing the TBBR CoT with the authenticated decryption
feature. And why would one want to hard-code a device during
provisioning to boot only either an encrypted or a plain firmware
image?
> There seems to be no benefit to having a flag in the FIP header.
How would one handle a case where BL31 is in plain format and BL32 is
in encrypted format?
> Otherwise, I cant think of any attacks due to this and it may be completely okay, but generally, consuming data that dictates security policy/operations before verifying its integrity seems like a recipe for disaster.
>
If you are really paranoid about authentication of the FIP header, then you
should look at implementing the optional requirement R010_TBBR_TOC as per the
TBBR spec [1].
[1] https://developer.arm.com/docs/den0006/latest/trusted-board-boot-requiremen…
[2] http://world.std.com/~dtd/sign_encrypt/sign_encrypt7.html
-Sumit
> -Raghu
Forgot to add the TF-A list (why is it not automatically included when you hit reply-all?)
On January 22, 2020 at 10:44 AM, Raghupathy Krishnamurthy <raghu.ncstate(a)icloud.com> wrote:
Hello,
The patch stack looks good. The only comment I have is that the FIP layer has now become security-aware and supports authenticated decryption (only). This is a deviation from the secure/signed/verified boot design, where we use the TBBR CoT to dictate the security operations on the file. That design is nice because file IO is decoupled from the security policy. This may be a big deviation (I apologize if this was considered and shot down for some other reason), but it may be worthwhile to consider making authenticated decryption a part of the authentication framework as opposed to coupling it with the FIP layer. At a high level, this would mean adding a new authentication method (perhaps AUTH_METHOD_AUTHENTICATED_DECRYPTION) and having the platform specify, in the TBBR CoT, that the image uses authenticated encryption; a rough sketch of the idea follows after the list below. The authentication framework is already well designed and well equipped to handle these types of extensions.
1) This would make the change simpler, since you would not require changes to the FIP tool and the FIP layer.
2) This would also allow for future cases where a platform may want to only encrypt the file and use public-key authentication on the encrypted file (for example, the SoC does not have a crypto accelerator for AES-GCM but only for AES and public-key verification, for whatever reason).
3) This would let you choose the order in which you want to do the authenticated decryption (or just decryption) and signature verification, if you use both, one or the other.
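To make the idea concrete, here is a purely schematic sketch; the types below are simplified stand-ins, not the real TF-A authentication framework structures:

    /* Hypothetical per-image policy showing where a new method type could
     * slot into a CoT-style description. */
    enum auth_method_type {
        AUTH_METHOD_HASH,
        AUTH_METHOD_SIG,
        AUTH_METHOD_AUTHENTICATED_DECRYPTION, /* proposed addition */
    };

    struct img_auth_policy {
        unsigned int img_id;
        enum auth_method_type methods[2]; /* applied in order */
        unsigned int num_methods;
    };

    /* Example: BL31 is signed only; BL32 is signed and also stored
     * AES-GCM encrypted, so it additionally uses the new method. */
    static const struct img_auth_policy cot_sketch[] = {
        { .img_id = 31U, .methods = { AUTH_METHOD_SIG },
          .num_methods = 1U },
        { .img_id = 32U, .methods = { AUTH_METHOD_SIG,
                                      AUTH_METHOD_AUTHENTICATED_DECRYPTION },
          .num_methods = 2U },
    };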
One other thing I'm not entirely comfortable with is that the flag indicating whether there are encrypted files in the FIP is in the *unsigned* portion of the FIP header. An attacker could simply flip bits that dictate security policy in the header and avoid detection (in this case, the indication that the file needs authenticated decryption). If a platform only uses authenticated encryption, but not verified boot, an attacker could flip the bit in the FIP header and have any image loaded on the platform. If authenticated encryption cannot be used without verified boot (which requires build-time flags), having a flag in the FIP header to indicate that there are encrypted files is moot, since this can come at build time through the TBBR CoT. In any case, the security policy that firmware images need to be decrypted, or authenticated with authenticated decryption, seems like a firmware build-time or manufacturing-time decision (perhaps a bit set in the e-fuses). There seems to be no benefit to having a flag in the FIP header. Otherwise, I can't think of any attacks due to this and it may be completely okay, but generally, consuming data that dictates security policy/operations before verifying its integrity seems like a recipe for disaster.
-Raghu
On January 22, 2020 at 3:51 AM, Sumit Garg via TF-A <tf-a(a)lists.trustedfirmware.org> wrote:
Hi Sandrine,
On Wed, 22 Jan 2020 at 15:43, Sandrine Bailleux
<Sandrine.Bailleux(a)arm.com> wrote:
Hello Sumit,
Thank you for reworking the patches and addressing all of my review
comments. I am happy with the latest version of these and consider them
ready to go. I plan to leave them in Gerrit for another week to give
extra time for other potential reviewers to have a look and comment.
Thanks for your review.
To everyone on the list: Please raise any concerns you may have about
these patches in the coming week. If I don't hear anything by 29th
January 2020, I will merge these patches.
@Sumit: One of the next actions for this patch stack would be to have
some level of testing in the CI system to detect any potential
regressions. We (at Arm) can quite easily add a few build tests but then
testing the software stack on QEMU is a bit more involved for various
reasons (first instance of QEMU testing, dependencies on OPTEE, UEFI,
...) so this might have to wait for some time.
Okay, will wait for CI testing.
-Sumit
Regards,
Sandrine
Hello Sumit,
Thank you for reworking the patches and addressing all of my review
comments. I am happy with the latest version of these and consider them
ready to go. I plan to leave them in Gerrit for another week to give
extra time for other potential reviewers to have a look and comment.
To everyone on the list: Please raise any concerns you may have about
these patches in the coming week. If I don't hear anything by 29th
January 2020, I will merge these patches.
@Sumit: One of the next actions for this patch stack would be to have
some level of testing in the CI system to detect any potential
regressions. We (at Arm) can quite easily add a few build tests but then
testing the software stack on QEMU is a bit more involved for various
reasons (first instance of QEMU testing, dependencies on OPTEE, UEFI,
...) so this might have to wait for some time.
Regards,
Sandrine
Hi Iñigo,
We need a little more information to know how best to answer your question. Is this just an academic question, or is there a real use case you are trying to decide between?
It's not so much a question of which architecture is easier; they are simply different, with Cortex-M powering the most energy-efficient embedded devices, whereas Cortex-A offers the highest performance at optimal power. Each architecture comes with different hardware constraints, TrustZone is implemented to suit each, and an ecosystem of solutions surrounds each.
If this is an academic question I would refer you to Arm's pages on TrustZone as a starting point:
https://www.arm.com/why-arm/technologies/trustzone-for-cortex-m
https://www.arm.com/why-arm/technologies/trustzone-for-cortex-a
If you need guidance on a specific solution, or there are difficulties you are trying to overcome while building one, please give us some more details.
Thanks
Joanna
On 22/01/2020, 08:07, "TF-A on behalf of Iñigo Vicente Waliño via TF-A" <tf-a-bounces(a)lists.trustedfirmware.org on behalf of tf-a(a)lists.trustedfirmware.org> wrote:
Hi,
Can someone explain to me why it is easier to implement a secure TrustZone
environment with secure boot and storage and PSA certificate in cortex-M
than in cortex-A?
Who better implements a safe environment: cortex-M or cortex-A? Why?
Thanks.
Hi,
Can someone explain to me why it is easier to implement a secure TrustZone
environment, with secure boot, secure storage and PSA certification, on Cortex-M
than on Cortex-A?
Which implements a more secure environment: Cortex-M or Cortex-A? Why?
Thanks.
Please revert the removal of RSA PKCS#1 v1.5 support from cert_tool:
https://github.com/ARM-software/arm-trusted-firmware/commit/6a415a508ea6ace…
We have products shipping with such support. I think this problem came
up before when somebody tried removing such support.
They still need to run with the latest yocto codebase.
Regards,
Scott
Hi Varun,
It can be a challenge to keep on top of the reviews. We are aware of the backlog increase after the end-of-year break and are trying different approaches to get reviews completed.
Any suggestions on process improvements from the developer community at large are welcome.
Cheers
Joanna
On 16/01/2020, 17:34, "TF-A on behalf of Varun Wadekar via TF-A" <tf-a-bounces(a)lists.trustedfirmware.org on behalf of tf-a(a)lists.trustedfirmware.org> wrote:
Hello,
We (NVIDIA) frequently post changes on the TF-A gerrit dashboard and wait for the maintainers to gradually get to them. This is very time consuming and non-deterministic.
Curious to know if other developers face the same problems or are we missing something when asking for reviews? If most of us face the same problems, then we should try to formulate a process to reduce the latency.
Thoughts?
-Varun
-----------------------------------------------------------------------------------
This email message is for the sole use of the intended recipient(s) and may contain
confidential information. Any unauthorized review, use, disclosure or distribution
is prohibited. If you are not the intended recipient, please contact the sender by
reply email and destroy all copies of the original message.
-----------------------------------------------------------------------------------
Hello,
We (NVIDIA) frequently post changes on the TF-A gerrit dashboard and wait for the maintainers to gradually get to them. This is very time consuming and non-deterministic.
Curious to know if other developers face the same problems or are we missing something when asking for reviews? If most of us face the same problems, then we should try to formulate a process to reduce the latency.
Thoughts?
-Varun
-----------------------------------------------------------------------------------
This email message is for the sole use of the intended recipient(s) and may contain
confidential information. Any unauthorized review, use, disclosure or distribution
is prohibited. If you are not the intended recipient, please contact the sender by
reply email and destroy all copies of the original message.
-----------------------------------------------------------------------------------
On 13/12/2019 22:04, Julius Werner via TF-A wrote:
> On Fri, Dec 13, 2019 at 6:20 AM Joanna Farley <Joanna.Farley(a)arm.com> wrote:
>> On the subject of DebugFS's purpose: it was envisaged, and is today, as Sandrine describes, a debug-build-only capability. That said, there have been some early thoughts that it could evolve into a Secure Debug feature, where this type of capability (or something like it) is always on and requires debug certificates for authenticated access. This is very much a possible future evolution and is not in the patches available today. We would welcome any thoughts on such an evolution in this space.
>
> I guess this gets into a bit of a philosophy discussion and becomes a
> matter of opinion, so there's probably no one right answer.
> Personally, adding authentication on top of this doesn't really
> resolve my concerns and adds yet more on top. I'm a strong proponent
> of the concept of a minimal Trusted Computing Base, i.e. keeping the
> amount of code executing at the highest privilege level as small and
> low-complexity as possible. Any code can have bugs, so the idea is
> that the more complicated the code you run in EL3 is (and the more
> complicated APIs it exposes), the more likely it becomes that you
> accidentally have an exploitable vulnerability in there. Like a p9
> filesystem driver, a certificate-based authentication system
> (especially if it's based on x509/ASN.1 which are notoriously hard to
> implement safely) is a pretty complex piece of code with a pretty
> large attack surface that I'd rather not have in my EL3 firmware if I
> can avoid it. I understand that for certain use cases you may need
> something like this (if you really want a very extensive and
> extensible debugging API that must be restricted to a few
> authenticated actors), but in my use case I really just need the
> ability to trigger one small debugging feature and that feature itself
> doesn't need to be restricted, so a minimal SMC interface would work
> much better for that case.
Hi Julius,
Just trying to understand: if TF-A were to expose a crash-inducing
SMC, would this still be restricted to special builds for your test runs?
It would not make it into production for Chromebooks, right?
I agree a 9p filesystem is not desirable in EL3 runtime firmware. We
could enhance it to use a tighter data structure if there is a
desire in that direction.
If that is the case, leaving aside the 9p filesystem issues, can
DebugFS serve this requirement (we can remove the limitation that it is
restricted to Debug builds only)?
The intention is that DebugFS can prove useful at least in the
verification/testing space, and if there is more we can do to get there,
it would be good to know.
>
>> On 13/12/2019, 13:01, "TF-A on behalf of Sandrine Bailleux via TF-A" <tf-a-bounces(a)lists.trustedfirmware.org on behalf of tf-a(a)lists.trustedfirmware.org> wrote:
>> Going back to the SMC-based solution then, I am not quite convinced
>> SYSTEM_RESET2 is the right interface for intentionally triggering a
>> panic in TF-A. I think the semantics do not quite match. If anything, a
>> firmware crash seems more like a shutdown operation to me rather than a
>> reset (we don't recover from a firmware crash). I am not even sure we
>> should look into the PSCI SMC range, as it's not a power-management
>> operation.
>
> Crash recovery behavior is platform dependent (via
> plat_panic_handler()). On all the platforms we use in Chrome OS we
> have that implemented as a system reboot. I think for most systems
> (whether it's a Chromebook, a server or some embedded device) that's
> probably what you want for random runtime crashes (and least in a
> production environment), but I agree that TF doesn't enforce any
> standard behavior so it's hard to clearly match it to one or the other
> SMC.
>
>> So it sounds like it's not the first time that you hit this issue, is
>> it? Do you have any other example of Normal World OS feature you would
>> have liked to expose through a generic SMC interface? I am wondering
>> whether this could help choosing the right SMC range, if we can identify
>> some common criteria among a set of such features.
>
> No, it's the first time I've really run into this. But I think we
> might quickly come up with more uses for a "non-secure OS" SMC range
> if we had one. We often see roughly the same SMC again on different
> platforms, because fundamentally they usually need to do the same
> kinds of things -- for example, most platforms have some kind of DDR
> frequency scaling which always needs part of it implemented in EL3, so
> they all need some kind of SMC to switch to a new DDR frequency. Many
> also need some kind of "write value to secure register" SMC that just
> allows the non-secure OS to write a few whitelisted registers that are
> only accessible in EL3 for some reason. If we could standardize these
> interfaces in a non-vendor-specific SMC range, we might be able to
> reduce some code duplication both on the TF and the Linux side.
>
> I guess none of these things are really Linux-specific, now that I
> think of it. So really, I guess the problem is that it would be great
> to have a range of "generic" SMC IDs that can be easily and
> unbureaucratically allocated to try out new features, without having
> to ask Arm to write a big specification document about it every time.
> It's sort of a development velocity issue.
>
We have utilized the ARM SiP range for some "generic" purposes in the
past (see PMF and the execution state switch SMCs). This could be a
direction for some of the use cases. But if the SMCs are meant to be
truly generic and to be relied on by generic normal world
software components, they would need to be properly specified, I would think.
For dynamically modifying some EL3 registers, it would be good to get
these requirements out. Perhaps there is scope for architecting some of
them as an ARM specification. If not, we could revert to a TF-A standard
if there is enough pull for them (perhaps utilizing the ARM SiP range).
Best Regards
Soby Mathew
Hi
Just to add to this: Given that:
* Your BL31 is nearly 64KB on its own, and
* DDR attacks are more feasible on runtime resident code (e.g. BL31)
Perhaps another more secure solution would be to do DDR init in BL1 or a new transient BL1.5 stage in SRAM. That way you could run BL31 in SRAM, and BL2 and other boot stages in DDR. You would still need to cut down the BL1_RW size though for this to work.
One way to cut down the memory used for page tables is to map all the required memory/devices in larger/fewer address ranges, and align the ranges on block boundaries (i.e. 2MB or 1GB for 4KB translation granules). Although this means mapping in unused address space, which might be less secure, it uses less memory for page tables.
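As an illustration (the base addresses and sizes below are made up, and this assumes the xlat_tables_v2 interface), mapping a few large, 2MB-aligned regions needs far fewer table entries than mapping every buffer and peripheral individually:

    #include <lib/xlat_tables/xlat_tables_v2.h>

    /* Illustration only: each region is 2MB-aligned and a multiple of 2MB,
     * so the library can use level-2 block descriptors instead of having
     * to allocate a level-3 (4KB page) table for it. */
    static const mmap_region_t plat_sketch_mmap[] = {
        /* One 2MB block covering all boot SRAM, not many 4KB pages. */
        MAP_REGION_FLAT(0x04000000UL, 0x00200000UL,
                        MT_MEMORY | MT_RW | MT_SECURE),
        /* One large window over the whole peripheral space, rather than a
         * separate entry (and potentially a table) per device. */
        MAP_REGION_FLAT(0x1C000000UL, 0x04000000UL,
                        MT_DEVICE | MT_RW | MT_SECURE),
        {0}
    };

The trade-off, as noted above, is that unused address space ends up mapped as well.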
Regards
Dan.
> -----Original Message-----
> From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of Soby Mathew
> via TF-A
> Sent: 13 January 2020 15:13
> To: Raghupathy Krishnamurthy <raghu.ncstate(a)icloud.com>; "严鸿亮(鸿先)"
> <hongxian.yhl(a)alibaba-inc.com>; tf-a(a)lists.trustedfirmware.org
> Cc: nd <nd(a)arm.com>
> Subject: Re: [TF-A] Re: Re: MMU Table in ARM Trust-Firmware-A
>
> [Adding back TF-A list]
> Hi Raghu
> Since the same DRAM is partitioned to run both non-secure and secure software,
> it is susceptible to Row Hammer attacks. Also depending on how accessible the
> DRAM interface is, the DDR PHY can be probed for transactions. If the DRAM is
> removable, then it can be deep frozen to retain values and plugged into
> another system to extract contents. Hence the less `secure` comment.
>
> Best Regards
> Soby Mathew
>
> From: Raghupathy Krishnamurthy <raghu.ncstate(a)icloud.com>
> Sent: 13 January 2020 14:36
> To: "严鸿亮(鸿先)" <hongxian.yhl(a)alibaba-inc.com>
> Cc: Soby Mathew <Soby.Mathew(a)arm.com>; Xie, Shaolin <shaolin.xie@alibaba-
> inc.com>; nd <nd(a)arm.com>
> Subject: Re: [TF-A] Re: Re: MMU Table in ARM Trust-Firmware-A
>
> Hi Soby,
>
> >>It is possible to create a secure carve-out in DDR and execute BL31
> >>from there. It is less `secure` than running from SRAM but it is
> >>acceptable for some market segments depending on the threat model.
>
>
> Can you elaborate on why you say it is less "secure" to run out of DDR?
>
>
> -Raghu
>
> On January 13, 2020 at 4:36 AM, "严鸿亮(鸿先) via TF-A"
> <tf-a(a)lists.trustedfirmware.org> wrote:
> Hi Soby,
>
> Many thanks for your advice. I will try to run BL2 as the Boot ROM and try to
> reduce the BL31 size. Hope BL2_NOPROGBITS+BL31 can be restricted below 64KB.
> Wish me good luck :).
>
> Best Regards
> Yan Hongliang
>
>
>
>
>
>
>
> ------------------------------------------------------------------
> From: Soby Mathew <Soby.Mathew(a)arm.com>
> Sent: Monday, 13 January 2020, 19:37
> To: 严鸿亮(鸿先) <hongxian.yhl(a)alibaba-inc.com>; "Xie, Shaolin"
> <shaolin.xie(a)alibaba-inc.com>; tf-a(a)lists.trustedfirmware.org
> Cc: nd <nd(a)arm.com>; Ash Wilding <Ash.Wilding(a)arm.com>
> Subject: RE: Re: [TF-A] MMU Table in ARM Trust-Firmware-A
>
> Hi Yan,
> OK, sounds good. From your description of the platform, having BL2 as the
> Boot ROM is worth exploring. This way, you avoid the BL1_RW and BL2_PROGBITS
> memory overhead. BL2_NOPROGBITS will need to be in SRAM. See the BL2_AT_EL3
> and BL2_IN_XIP_MEM build flags for details. This depends on BL31 being able
> to fit in the free space available after this is done. One option is using
> the RECLAIM_INIT_CODE option, which reclaims BL31 init-only code for runtime
> data, but it needs platform-specific linker script support. There are other
> ways to reduce the BL31 size, but they trade off security (e.g.
> SEPARATE_CODE_AND_RODATA=0) or performance (e.g. USE_COHERENT_MEM=0).
> Hope that helps.
> Best Regards
> Soby Mathew
> From: 严鸿亮(鸿先) <hongxian.yhl(a)alibaba-inc.com>
> Sent: 11 January 2020 06:04
> To: Soby Mathew <Soby.Mathew(a)arm.com>; Xie, Shaolin <shaolin.xie(a)alibaba-inc.com>; tf-a(a)lists.trustedfirmware.org
> Cc: nd <nd(a)arm.com>; Ash Wilding <Ash.Wilding(a)arm.com>
> Subject: Re: [TF-A] MMU Table in ARM Trust-Firmware-A
>
> Hi Soby,
> Thanks for your replies.
> Currently, our BL2 size is 60KB, BL1_RW is 28KB, BL31_PROGBITS is 48KB and
> BL31_NOPROGBITS is 24KB. For our test chip we don't have very strict security
> requirements, so putting BL31 into DDR may be a way to save SRAM, and then we
> don't need to worry about the PROGBITS limit between BL31 and BL2.
> For BL1_RW and BL2, 40KB is used for the MMU tables. After I disable the
> MMU-related code in arm_bl1_plat_arch_setup()/arm_bl2_plat_arch_setup() and
> also disable the build of xlat_table_lib, this 40KB is saved. If the only
> consequences are reduced performance and loss of MMU protection, without any
> other functional problem, then we might try this.
>
> Best Regards
> Yan Hongliang
>
> ------------------------------------------------------------------
> From: Soby Mathew <Soby.Mathew(a)arm.com>
> Sent: Friday, 10 January 2020 18:27
> To: "Xie, Shaolin" <shaolin.xie(a)alibaba-inc.com>; tf-a(a)lists.trustedfirmware.org
> Cc: nd <nd(a)arm.com>; 严鸿亮(鸿先) <hongxian.yhl(a)alibaba-inc.com>; Ash Wilding <Ash.Wilding(a)arm.com>
> Subject: Re: [TF-A] MMU Table in ARM Trust-Firmware-A
>
> On 10/01/2020 09:05, shaolin.xie via TF-A wrote:
> > Hey all:
> >
> > We are trying to port ARM Trusted Firmware-A to our design; however, we are
> > facing a serious SRAM size problem.
> >
> > Our target SRAM is 64KB, but the compiled BL2 has already taken 60KB:
> >
> > After some evaluation, we found that the MMU tables take most of the space.
> > Our questions are:
> Hi Shaolin,
> There are some options to reduce the memory used for page-tables like using
> non-identity virtual to physical mappings. But given your SRAM size, I
> suspect you may not save enough to fit BL31.
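> As a rough sketch only (the addresses below are made up), the MAP_REGION()
> macro in the v2 xlat library lets the virtual address differ from the
> physical one, so scattered physical regions can be packed into one small
> virtual range and share translation tables:
>
> /* Two scattered 64KB physical regions mapped at adjacent virtual
>  * addresses so that they fall under a single level-3 table. */
> static const mmap_region_t example_packed_mmap[] = {
> 	MAP_REGION(0x1C090000ULL, 0x00010000ULL, 0x10000,
> 		   MT_DEVICE | MT_RW | MT_SECURE),
> 	MAP_REGION(0x2A430000ULL, 0x00020000ULL, 0x10000,
> 		   MT_DEVICE | MT_RW | MT_SECURE),
> 	{0}
> };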
>
> >
> > 1. Can we disable the MMU so we can fit BL2 into the 64KB space?
> > (However, in ARM's porting guide, enabling the MMU and icache/dcache is
> > mandatory. Refer to chapter 7.3 in
> > https://trustedfirmware-a.readthedocs.io/en/latest/getting_started/porting-guide.html#introduction .)
>
> This is a possibility, although it is not recommended for performance reasons
> and you lose the protection of the MMU during BL2 execution. This is not a
> configuration we support, and I suspect some changes to the BL2 code may be
> required for this.
>
> >
> > 2. In the current design, BL31 runs in SRAM. If we enable DDR in BL2 and
> > load the BL31 image into DDR, is there any potential issue?
> >
> > Thanks in advance,
> >
> >
>
> It is possible to create a secure carve-out in DDR and execute BL31 from
> there. It is less `secure` than running from SRAM but it is acceptable for
> some market segments depending on the threat model.
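> If the DDR sits behind an Arm TZC-400, for example, the carve-out could be
> programmed roughly as below (a sketch only: the controller base address,
> region number, window addresses and NSAID are all assumptions for the
> example, not recommendations):
>
> #include <drivers/arm/tzc400.h>
>
> #define EXAMPLE_TZC_BASE	0x2A4A0000UL	/* assumed TZC-400 base */
> #define EXAMPLE_BL31_DRAM_BASE	0xFF000000ULL	/* assumed secure window */
> #define EXAMPLE_BL31_DRAM_END	0xFFFFFFFFULL
>
> static void example_secure_ddr_carveout(void)
> {
> 	tzc400_init(EXAMPLE_TZC_BASE);
> 	tzc400_set_action(TZC_ACTION_ERR);
> 	/* Region 0: leave the rest of DDR accessible to the normal world. */
> 	tzc400_configure_region0(TZC_REGION_S_NONE, TZC_REGION_ACCESS_RDWR(0U));
> 	/* Region 1: secure-only window that will hold BL31. */
> 	tzc400_configure_region(TZC_400_REGION_ATTR_FILTER_BIT_ALL, 1U,
> 				EXAMPLE_BL31_DRAM_BASE, EXAMPLE_BL31_DRAM_END,
> 				TZC_REGION_S_RDWR, 0U);
> 	tzc400_enable_filters();
> }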
>
> Your target SRAM of 64KB is on the lower side, and fitting both BL2 and
> BL31 in that space could be a challenge.
>
> Could you let us know your BL2 and BL31 sizes, and could you break the sizes
> down in terms of PROGBITS and NOPROGBITS? The BL31 NOPROGBITS are usually
> overlaid on top of BL2 memory, and this will let us know whether fitting both
> in SRAM is even possible. If both cannot fit together in SRAM, there are
> other options you can explore, such as loading BL31 directly from the BootROM
> or using BL2 as an XIP BootROM image.
>
> Best Regards
> Soby Mathew
>
> --
> TF-A mailing list
> TF-A(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/tf-a