This is interesting. It appears that there is no way, on entry to EL3, to
guarantee that the out-of-context (EL2 and EL1) translation regimes are
in a consistent state, so on every entry into EL3 we have to
conservatively assume that they are in an inconsistent state. This is
because of the situation Andrew mentioned (interrupts to EL3 can occur
at any time).
If this is the case, on EL3 entry:
1) For EL1, we will need to save SCTLR_EL1 and TCR_EL1, then set
SCTLR_EL1.M = 1 and TCR_EL1.EPDx = 1
2) Set whatever bits we need to for EL2 and S2 translations to not
succeed (what are these?)
3) DSB, to ensure no speculative AT can be issued until completion of
the DSB, so any AT that occurs will not fill the TLB with bad translations.
On exit (right before ERET), we need to restore the registers saved on
entry, and have the ERET followed by a DSB so that there can be no
speculative execution of AT instructions.
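To make step 1 concrete, here is a sketch in the style of the TF-A snippets later in this thread (illustrative only; the context offsets and bit names are assumed to come from the TF-A context headers):

```asm
	/* 1) Save SCTLR_EL1/TCR_EL1, then set SCTLR_EL1.M = 1 and
	 *    TCR_EL1.EPD0/1 = 1 so no stage-1 walk can succeed. */
	mrs	x9, sctlr_el1
	str	x9, [x0, #CTX_SCTLR_EL1]
	orr	x9, x9, #SCTLR_M_BIT
	msr	sctlr_el1, x9
	mrs	x9, tcr_el1
	str	x9, [x0, #CTX_TCR_EL1]
	orr	x9, x9, #TCR_EPD0_BIT
	orr	x9, x9, #TCR_EPD1_BIT
	msr	tcr_el1, x9
	isb
	/* 3) Barrier so no speculative AT fills the TLB before the
	 *    system register updates take effect. */
	dsb	sy
```

Step 2 (quiescing the EL2/S2 regime) is deliberately left out, since which bits achieve that is exactly the open question above.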
Thanks
Raghu
On 7/1/20 5:54 AM, Marc Zyngier via TF-A wrote:
> Hi Manish,
>
> On Wed, 1 Jul 2020 at 13:14, Manish Badarkhe <Manish.Badarkhe(a)arm.com> wrote:
>> Hi Andrew,
>>
>> As per the current implementation, the “el1_sysregs_context_restore” routine does the below:
>>
>> 1. TCR_EL1.EPD0 = 1
>> 2. TCR_EL1.EPD1 = 1
>> 3. SCTLR_EL1.M = 0
>> 4. isb
>> Code snippet:
>> mrs x9, tcr_el1
>> orr x9, x9, #TCR_EPD0_BIT
>> orr x9, x9, #TCR_EPD1_BIT
>> msr tcr_el1, x9
>> mrs x9, sctlr_el1
>> bic x9, x9, #SCTLR_M_BIT
>> msr sctlr_el1, x9
>> isb
>> This is to avoid a PTW occurring while updating system registers at step 5
> Unfortunately, this doesn't prevent anything.
>
> If SCTLR_EL1.M is clear, TCR_EL1.EPDx don't mean much (S1 MMU is
> disabled, no S1 page table walk), and you can still have S2 PTWs
> (using an idmap for S1) creating TLB corruption if these entries
> alias with any S1 mapping that exists at EL1.
>
> Which is why KVM does *set* SCTLR_EL1.M, which prevents the use of a
> 1:1 mapping at S1, and at which point the TCR_EL1.EPDx bits are
> actually useful in preventing a PTW.
>
>> 5. Restore all system registers for EL1 except SCTLR_EL1 and TCR_EL1
>> 6. isb()
>> 7. restore SCTLR_EL1 and TCR_EL1
>> Code Snippet:
>> ldr x9, [x0, #CTX_SCTLR_EL1] -> saved value from "el1_sysregs_context_save"
>> msr sctlr_el1, x9
>> ldr x9, [x0, #CTX_TCR_EL1]
>> msr tcr_el1, x9
>>
>> As per the above steps, SCTLR_EL1 gets restored with its actual settings at step 7.
>> Similar flow is present for “el2_sysregs_context_restore” to restore SCTLR_EL1 register.
>>
>> In conclusion, this routine temporarily clears the M bit of SCTLR_EL1 to avoid speculation but restores it
>> to its original setting before returning to its caller. Please let us know whether this aligns with the KVM
>> workaround for the speculative AT erratum.
> It doesn't, unfortunately. I believe this code actively creates
> problems on a system that is affected by speculative AT execution.
>
> I don't understand your rationale for touching SCTLR_EL2.M either if
> you are not context-switching the EL2 S1 state: as far as I understand
> no affected cores have S-EL2, so no switch should happen at this
> stage.
>
> Thanks,
>
> M.
+Updated comment for SCTLR_EL2 register restoration.
On 01/07/2020, 17:45, "TF-A on behalf of Manish Badarkhe via TF-A" <tf-a-bounces(a)lists.trustedfirmware.org on behalf of tf-a(a)lists.trustedfirmware.org> wrote:
Hi Andrew,
As per the current implementation, the “el1_sysregs_context_restore” routine does the below:
1. TCR_EL1.EPD0 = 1
2. TCR_EL1.EPD1 = 1
3. SCTLR_EL1.M = 0
4. isb
Code snippet:
mrs x9, tcr_el1
orr x9, x9, #TCR_EPD0_BIT
orr x9, x9, #TCR_EPD1_BIT
msr tcr_el1, x9
mrs x9, sctlr_el1
bic x9, x9, #SCTLR_M_BIT
msr sctlr_el1, x9
isb
This is to avoid a PTW occurring while updating system registers at step 5
5. Restore all system registers for EL1 except SCTLR_EL1 and TCR_EL1
6. isb()
7. restore SCTLR_EL1 and TCR_EL1
Code Snippet:
ldr x9, [x0, #CTX_SCTLR_EL1] -> saved value from "el1_sysregs_context_save"
msr sctlr_el1, x9
ldr x9, [x0, #CTX_TCR_EL1]
msr tcr_el1, x9
As per the above steps, SCTLR_EL1 gets restored with its actual settings at step 7.
Similar flow is present for “el2_sysregs_context_restore” to restore SCTLR_EL2 register.
In conclusion, this routine temporarily clears the M bit of SCTLR_EL1 to avoid speculation but restores it
to its original setting before returning to its caller. Please let us know whether this aligns with the KVM
workaround for the speculative AT erratum.
Thanks
Manish Badarkhe
Thanks Andrew for the notification. We will look into it and get back to this thread.
Thanks
Joanna
On 01/07/2020, 10:16, "TF-A on behalf of Andrew Scull via TF-A" <tf-a-bounces(a)lists.trustedfirmware.org on behalf of tf-a(a)lists.trustedfirmware.org> wrote:
Whilst looking at the speculative AT workaround in KVM, I compared it
against the workaround in TF-A and noticed an inconsistency whereby TF-A
**breaks** KVM's workaround.
In `el1_sysregs_context_restore`, the M bit of SCTLR_EL1 is cleared;
however, Linux requires this bit to be set for its workaround to be correct.
If an exception is taken to EL3 partway through a VM context switch,
e.g. a secure interrupt, causing a switch to the secure world, TF-A will
reintroduce the possibility of TLB corruption.
The above explains how it is broken for Linux's chosen workaround;
however, TF-A will also have to be compatible with whatever workaround
the EL2 software is using.
Starting this thread with the issue identified and we can add more
details as needed.
Thanks
Andrew
Hi Sandrine,
Sounds like good changes in general! I'm curious about the ACLs for
Code-Owner-Review: are they tied to docs/about/maintainers.rst or just
based on the honor system? (I notice that I seem to be able to give a
+1 for code I'm not owning, but maybe that's because I am a
maintainer?) Also, are code owners allowed to +1 themselves (I think
we said we didn't want maintainers to do that, but for code owners I
could see how we might want to allow it since there are usually not
that many)? What do we do when someone uploads the first patch for a
new platform, do they just COR+1 themselves (since there is no code
owner yet)?
I think it might still be useful to retain the existing Code-Review as
a +1/-1 label next to the two new ones, just to allow other interested
parties to record their opinion (without it having any enforcing
effect on the merge). In other Gerrit instances I have used, people
often like to give CR+1 as an "I'm not the one who needs to approve
this but I have looked at it and think it's fine" or a CR-1 as an "I
can't stop you from doing this but I still want to be clear that I
don't think it's a good idea". It allows people outside the official
approval process a clearer way to participate and can aid the official
approvers in their decisions (e.g. when I am reviewing a patch as a
maintainer that already has a CR-1 from someone else I know to pay
extra attention to their concerns, and it's more visible than just
some random comment further up in the list). What do you think?
Best regards,
Julius
Hi Sandrine,
If my understanding is right, Code-Owner-Review is the equivalent of the old Code-Review+1 and therefore anyone can add a vote there, owner or not. If that is so, I think the current name might be misleading. Wouldn't it be better to just call it "Code-Review", for instance?
As an example, I am not a code owner for Measured Boot, but sometimes I am requested to review the patches for it, so in that case it might lead to confusion if I give a Code-Owner-Review score.
What do you think?
Best regards,
Javier
-----Original Message-----
From: Sandrine Bailleux via TF-A <tf-a(a)lists.trustedfirmware.org>
Reply-To: Sandrine Bailleux <sandrine.bailleux(a)arm.com>
To: tf-a <tf-a(a)lists.trustedfirmware.org>
Subject: [TF-A] Announcing some changes around the code review label in Gerrit
Date: Tue, 30 Jun 2020 13:29:00 +0000
Hello all,
If you recall, the tf.org Project Maintenance Process [1] advocates a
code review model where both code owners and maintainers need to review
and approve a patch before it gets merged.
Until now, the way we would record this in Gerrit was a bit cumbersome
and ambiguous.
* Code-Review+1 was for code owners to approve a patch.
* Code-Review+2 was for maintainers to approve a patch.
The submission rules in Gerrit were such that a patch needed a
Code-Review+2 to be merged. This meant that if a maintainer was first to
take a look at a patch and approved it (Code-Review+2), Gerrit would
allow the patch to be merged without any code owners review.
To align better with the Project Maintenance Process, I've made some
changes to our Gerrit configuration. You will notice that the
Code-Review label is gone and has been replaced by two new labels:
* Code-Owner-Review
* Maintainer-Review
Stating the obvious, but code owners are expected to vote on the
Code-Owner-Review label and maintainers on the Maintainer-Review label.
Note that maintainers might also be code owners for some modules. If
they are doing a "code owner" type of review on a patch, they would vote
on the Code-Owner-Review label in this case.
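As an illustration of how such a scheme is typically enforced, a Gerrit project.config along the following lines would require a max vote on both labels (and no veto) before submission. The exact label functions and value descriptions used on tf.org are assumptions here:

```ini
[label "Code-Owner-Review"]
    function = MaxWithBlock
    value = -1 I would prefer this is not merged as is
    value = 0 No score
    value = +1 Looks good to me
[label "Maintainer-Review"]
    function = MaxWithBlock
    value = -1 Do not submit
    value = 0 No score
    value = +1 Approved
```

With MaxWithBlock on both labels, a change needs a +1 on each, with no -1 on either, before Gerrit will allow submission.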
If in any doubt, please ask. I hope I didn't break anything but if you notice
any permissions problems (things you used to be able to do and cannot do
anymore, or vice versa!) please let me know.
One annoying consequence of this change is that if you've got some
patches in review right now and got some votes on the former
'Code-Review' label, these have been partially lost. The history of
review comments in Gerrit will still show that somebody had voted
Code-Review -1/+1 on your patch but this will no longer appear in the
labels summary frame and won't count towards the approvals required to
get the patch merged.
I could not think of a way to avoid this issue... This should only be
temporary, while we transition all in-flight patches to the new code
review model.
Roughly:
* Existing "Code-Review -1/+1" votes will have to be replayed as
"Code-Owner-Review -1/+1".
* Existing "Code-Review -2/+2" votes will have to be replayed as
"Maintainer-Review -1/+1".
Sorry for the inconvenience. I hope, once we've gone through the
transition period, these changes will make the code review process
clearer to everybody.
I will update the TF-A documentation to explain all of this in the
near future.
Best regards,
Sandrine
[1]
https://developer.trustedfirmware.org/w/collaboration/project-maintenance-p…
Hi all
The new TrustedFirmware.org security incident process is now live. This process is described here:
https://developer.trustedfirmware.org/w/collaboration/security_center/repor…
Initially the process will be used for the following projects: TF-A, TF-M, OP-TEE and Mbed TLS. The security documentation for each project will be updated soon to reflect this change.
If you are part of an organization that believes it should receive security vulnerability information before it is made public then please ask your relevant colleagues to register as Trusted Stakeholders as described here:
https://developer.trustedfirmware.org/w/collaboration/security_center/trust…
Note we prefer individuals in each organization to coordinate their registration requests with each other and to provide us with an email alias managed by their organization, instead of us managing a long list of individual addresses.
Best regards
Dan.
(on behalf of the TrustedFirmware.org security team)
Hi,
Please find the latest report on new defect(s) introduced to ARM-software/arm-trusted-firmware found with Coverity Scan.
1 new defect(s) introduced to ARM-software/arm-trusted-firmware found with Coverity Scan.
New defect(s) Reported-by: Coverity Scan
Showing 1 of 1 defect(s)
** CID 360213: Null pointer dereferences (NULL_RETURNS)
/plat/arm/common/arm_dyn_cfg.c: 96 in arm_bl1_set_mbedtls_heap()
________________________________________________________________________________________________________
*** CID 360213: Null pointer dereferences (NULL_RETURNS)
/plat/arm/common/arm_dyn_cfg.c: 96 in arm_bl1_set_mbedtls_heap()
90 * In the latter case, if we still wanted to write in the DTB the heap
91 * information, we would need to call plat_get_mbedtls_heap to retrieve
92 * the default heap's address and size.
93 */
94
95 tb_fw_config_info = FCONF_GET_PROPERTY(dyn_cfg, dtb, TB_FW_CONFIG_ID);
>>> CID 360213: Null pointer dereferences (NULL_RETURNS)
>>> Dereferencing "tb_fw_config_info", which is known to be "NULL".
96 tb_fw_cfg_dtb = tb_fw_config_info->config_addr;
97
98 if ((tb_fw_cfg_dtb != 0UL) && (mbedtls_heap_addr != NULL)) {
99 /* As libfdt use void *, we can't avoid this cast */
100 void *dtb = (void *)tb_fw_cfg_dtb;
101
________________________________________________________________________________________________________
To view the defects in Coverity Scan, visit https://u2389337.ct.sendgrid.net/ls/click?upn=nJaKvJSIH-2FPAfmty-2BK5tYpPkl…
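One way to resolve the defect is to guard the dereference. The sketch below uses hypothetical stand-ins for the TF-A types and the FCONF accessor (the real fconf machinery is not reproduced here); only the NULL-check pattern is the point:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the dyn_cfg DTB info structure. */
struct dyn_cfg_dtb_info_t {
	uintptr_t config_addr;
};

/* Models FCONF_GET_PROPERTY(dyn_cfg, dtb, TB_FW_CONFIG_ID), which
 * Coverity assumes can return NULL. */
static struct dyn_cfg_dtb_info_t *get_tb_fw_config(int present)
{
	static struct dyn_cfg_dtb_info_t info = { .config_addr = 0x82000000UL };
	return present ? &info : NULL;
}

/* Guarded version of the dereference flagged as CID 360213: fall back
 * to a zero address when the property is absent. */
static uintptr_t lookup_tb_fw_cfg_dtb(int present)
{
	struct dyn_cfg_dtb_info_t *tb_fw_config_info = get_tb_fw_config(present);
	uintptr_t tb_fw_cfg_dtb = 0UL;

	if (tb_fw_config_info != NULL) {
		tb_fw_cfg_dtb = tb_fw_config_info->config_addr;
	}
	return tb_fw_cfg_dtb;
}
```

Whether falling back to 0 or failing hard is the right recovery is a design decision for the maintainers; the existing `tb_fw_cfg_dtb != 0UL` check downstream suggests 0 is already treated as "no DTB".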
Hi All,
The next TF-A Tech Forum is scheduled for Thu 2nd July 2020 16:00 – 17:00 (BST). A recurring meeting invite has been sent out to the subscribers of this TF-A mailing list. If you don’t have this please let me know.
Agenda:
* PSA Fuzz Testing - Presented by Gary Morrison
* Originally developed for the TF-M PSA APIs, this is a different approach to fuzz testing from the one previously presented in the TF-A SMC fuzz testing session a few weeks back, and provides a testing approach for the different levels of PSA API testing coming to TF-A.
* Optional TF-A Mailing List Topic Discussions
If TF-A contributors have anything they wish to present at any future TF-A tech forum please contact me to have that scheduled.
Previous sessions, both recordings and presentation material, can be found on the trustedfirmware.org TF-A Technical meeting webpage: https://www.trustedfirmware.org/meetings/tf-a-technical-forum/
A scheduling tracking page is also available to help track sessions suggested and being prepared: https://developer.trustedfirmware.org/w/tf_a/tf-a-tech-forum-scheduling/ Final decisions on what will be presented will be made a few days before the next meeting and shared on the TF-A mailing list.
Thanks
Joanna
My take on this is, perhaps not unexpectedly, fairly aligned with Sandrine's.
We as a project have over 30 platforms now upstreamed and the expectation is that these are managed by the identified platform owners. We look to them to decide which of any new capabilities offered are supported in their platforms. We have a strategy not to break upstreamed platforms except where we have clearly announced deprecation; this is all documented here https://trustedfirmware-a.readthedocs.io/en/latest/process/platform-compati… and, where interfaces have changed and the work is trivial, the submitter is expected to make a good effort to migrate platforms.
To support this, the current CI system hosted by Arm can only verify the builds of most platforms, which is one of the Gerrit approvals needed to merge a patch. Of course that does not mean the executables produced will necessarily run correctly, which is where platform owners would need to assist with their own approvals and their own testing if possible. With the Arm Juno platform and the various Arm FVP models we do have the capability to test, but that misses out all the other upstreamed platforms.
With the OpenCI coming, on which we owe a Tech Forum session BTW, the Juno and Arm FVP models will also be made available, and platform owners will be able to add their platforms to be tested if they want to invest the effort. Sadly, migrating to the OpenCI will take time and will have to be delivered in phases, but the hope is this will be useful for platform owners as well as for core changes, with the visibility of the CI results. The various Arm platforms are generally owned and managed by different teams and go through the same processes as other, non-Arm platforms for getting patches upstreamed.
So when new core features are added, for new Arm hardware IP support or new software features that change behaviour, as previously we still look to the platform owners to decide if they want support for these in their platforms. For some trivial changes that may be in the form of reviewing changes prepared for them; for others that may be deciding if they want to invest the effort to implement the support themselves.
For core changes, as well as making patches available via Gerrit for review, we have been making more use of the TF-A mailing list to announce that the patches are available. Indeed, we are trying to use the mailing list while work is still in design to openly discuss decisions that need to be made, which may or may not affect platforms. The Tech Forum is also being used to present and discuss ideas and ongoing work. This is all being done to be more open about upcoming changes.
We would welcome discussions on the mailing list and tech-forum sessions from platform owners on any subject. This could be around coordinated support across platforms for new capabilities vs each platform following its own path. That of course holds challenges in coordinating the effort, and whose effort it is.
Hopefully the direction of travel here for the project helps with some of the suggestions, but I'm happy to discuss more here on the mailing list or in a tech-forum meeting.
Thanks
Joanna
On 25/06/2020, 03:16, "TF-A on behalf of Varun Wadekar via TF-A" <tf-a-bounces(a)lists.trustedfirmware.org on behalf of tf-a(a)lists.trustedfirmware.org> wrote:
Hi,
>> As you said, even a small change might break a platform and unfortunately we're not in a position to detect that with the CI yet.
I agree that this is a limitation today. If you remember, I have been advocating for announcing on the email alias when in doubt. That way we minimize surprises for the platform owners.
>> IMO we cannot reasonably announce all such changes and I would think that contributors are responsible for keeping an eye on patches and testing them if they think they might be affected.
I understand. We should announce on the mailing list, when in doubt. I strongly disagree with the second part of the statement though. Asking platform owners to keep an eye on every change defeats the purpose of upstreaming. The burden must lie on the implementer of a change to try and upgrade consumers.
>> But I think that the same change might be considered as an improvement by some, and a useless/undesirable change by others.
Possible. And then we take a call as a community. If it makes sense for some platforms to move ahead, then that should be OK too.
>> We might be able to reduce the number of build options in some cases but there will always be a need to retain most of them IMO
Thinking out loud - maybe moving to runtime checks makes sense in some cases instead of makefile variables, e.g. for CPU errata. We should look at each case individually.
I guess the main problem is the low level of testing, so an implementer won't be confident making a change that affects other platforms. That's when sending an email to the community makes sense. We tackle the situation as a team, instead of one person doing all the work.
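The runtime-check idea could look something like the sketch below (all names are hypothetical): gate a workaround on the MIDR part number read at runtime instead of on a build-time ERRATA_* make variable.

```c
#include <stdint.h>

/* MIDR_EL1 part-number field: bits [15:4]. */
#define MIDR_PARTNUM_SHIFT	4
#define MIDR_PARTNUM_MASK	0xfffULL

static inline uint16_t midr_partnum(uint64_t midr)
{
	return (uint16_t)((midr >> MIDR_PARTNUM_SHIFT) & MIDR_PARTNUM_MASK);
}

/* Would read MIDR_EL1 on real hardware; stubbed here so the sketch is
 * self-contained (pretend we run on part number 0xD0C). */
static uint64_t read_midr(void)
{
	return (uint64_t)0xD0C << MIDR_PARTNUM_SHIFT;
}

/* Decide at runtime whether a given erratum workaround applies,
 * instead of compiling it in or out with a make variable. */
static int errata_applies(uint16_t affected_part)
{
	return midr_partnum(read_midr()) == affected_part;
}
```

The trade-off is a slightly larger image (all workaround code is always built in) in exchange for fewer build configurations to test.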
-Varun
-----Original Message-----
From: Sandrine Bailleux <sandrine.bailleux(a)arm.com>
Sent: Wednesday, June 24, 2020 4:54 AM
To: Varun Wadekar <vwadekar(a)nvidia.com>
Cc: tf-a(a)lists.trustedfirmware.org
Subject: Re: [TF-A] How should we manage patches affecting multiple users?
External email: Use caution opening links or attachments
Hi Varun,
Thanks for sharing your opinion.
On 6/20/20 1:55 AM, Varun Wadekar wrote:
> Hello @Sandrine Bailleux,
>
> Thanks for starting the email chain. My response to the issues you flagged in the email below.
>
> 1. This seems to be a bit tricky. As a consumer of an API, one would expect that it works as advertised. If the behavior changes, thus affecting the expectations or assumptions surrounding the API, then they should be announced. With so many platforms in the tree, we should always assume that even a small change will break a platform.
I agree. Whenever there is clear intention to change the expectations or assumptions of an API, this should be announced and discussed.
However, there are cases where we might want to rework the implementation of an API (to clean the code for example), make some improvements or extend its functionality. All of that might be done with no intent to step outside of the API original scope but still subtly break some code. As you said, even a small change might break a platform and unfortunately we're not in a position to detect that with the CI yet. IMO we cannot reasonably announce all such changes and I would think that contributors are responsible for keeping an eye on patches and testing them if they think they might be affected.
> 2. For improvements, we should strive to upgrade all consumers of the API/makefile/features. This will ensure that all platforms remain current and we reduce maintenance cost in the long run. I assume, that such improvements will give birth to additional configs/makefile settings/platform variables etc. We would be signing up for a lot of maintenance by allowing some platforms to lag.
>
> For straight forward changes (e.g. the change under discussion) I assume that the platforms will respond and we would get the changes merged relatively fast. For controversial features, this will spark a healthy discussion in the community.
I agree with you in principle. But I think that the same change might be considered as an improvement by some, and a useless/undesirable change by others.
For example, the change under discussion is an improvement from our point of view. It will lower the maintenance cost, as any new file added to the GIC driver in the future will be automatically pulled in by platforms that use the newly introduced GIC makefile, without the need to update all platform makefiles. But this might not be what everyone wants; some platform owners might prefer to continue cherry-picking specific GIC source files to retain control over what they include in their firmware. In this case, is it the right thing to do to "upgrade"
them? I think this is debatable.
Maybe a better alternative is to simply make people aware of the change, provide a link to how the migration was done for Arm platforms in this case, so that they've got an example to guide them if they wish to embrace the change for their own platform ports.
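For context, the intermediate-makefile approach under discussion has roughly this shape (the path and file list here are assumed for illustration; the actual patch [1] is the authoritative version):

```make
# drivers/arm/gic/v2/gicv2.mk (illustrative)
GICV2_SOURCES	+=	drivers/arm/gic/common/gic_common.c	\
			drivers/arm/gic/v2/gicv2_main.c		\
			drivers/arm/gic/v2/gicv2_helpers.c
```

A platform makefile then does `include drivers/arm/gic/v2/gicv2.mk` and adds `${GICV2_SOURCES}` to its source list, rather than naming each driver file itself.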
> 3. For a case where the change breaks platforms, it makes sense to just upgrade all of them.
>
>>> Now the question we were pondering in Gerrit is, where does the responsibility of migrating all platforms/users lie for all these types of changes? Is the patch author responsible to do all the work?
>
> [VW] IMO, the author of such a patch is the best person to upgrade all affected platforms/users. The main reason being that he/she is already an expert for the patch.
Agree that the patch author is an expert for his own patch but that does not mean he's also an expert on how his patch should be applied for another platform port he's not familiar with.
> But, this might not be true in some cases where the author will need help from the community. For such cases, we should ask for help on the mailing list.
Yes, I think such cases require collaboration indeed.
> I also want to highlight the dangers of introducing make variables as a solution. The current situation is already unmanageable with so many build variables and weak handlers. We should try and avoid adding more.
Not sure how this ties with the original discussions but I agree with you, we've got far too many build options in this project. This makes testing a lot harder, as it means we have hundreds of valid combinations of build options to verify. It also makes the code harder to read and understand because it is sprinkled with #ifdefs.
Hopefully, switching to a KConfig-like configuration system in the future will help alleviate this issue, as at least it will allow us to express the valid combinations of build options in a more robust manner (today the build system tries to forbid invalid combinations but I am sure it's far from exhaustive in these types of checks) and also visualize what's been enabled/disabled more easily for a specific build.
Even though I acknowledge this problem, I am not sure we can completely solve it. Firmware is not general-purpose software and it requires a lot more control over the features you include or not because of stricter memory and CPU constraints. I think people need a way to turn on/off features at a fine granularity. We might be able to reduce the number of build options in some cases but there will always be a need to retain most of them IMO.
> Finally, I agree that there's no silver bullet and we have to deal with each situation differently. Communication is key, as you rightly said. As long as we involve the community, we should be OK.
>
> -Varun
>
> -----Original Message-----
> From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of
> Sandrine Bailleux via TF-A
> Sent: Friday, June 19, 2020 2:57 AM
> To: tf-a <tf-a(a)lists.trustedfirmware.org>
> Subject: [TF-A] How should we manage patches affecting multiple users?
>
> External email: Use caution opening links or attachments
>
>
> Hello everyone,
>
> I would like to start a discussion around the way we want to handle changes that affect all platforms/users. This subject came up in this code review [1].
>
> I'll present my views on the subject and would like to know what others think.
>
> First of all, what do we mean by "changes that affect all platforms"? It would be any change that is not self-contained within a platform port.
> IOW, a change in some file outside of the plat/ directory. It could be in a driver, in a library, in the build system, in the platform interfaces called by the generic code... the list goes on.
>
> 1. Some are just implementation changes, they leave the interfaces unchanged. These do not break other platforms builds since the call sites don't need to be updated. However, they potentially change the responsibilities of the interface, in which case this might introduce behavioral changes that all users of the interface need to be aware of and adapt their code to.
>
> 2. Some other changes might introduce optional improvements. They introduce a new way of doing things, perhaps a cleaner or more efficient one. Users can safely stay with the old way of doing things without fear of breakage but they would benefit from migrating.
>
> This is the case of Alexei's patch [1], which introduces an intermediate
> GICv2 makefile that Arm platforms now include instead of pulling in each individual file themselves. At this point, it is just an improvement that introduces an indirection level in the build system. This is an improvement in terms of maintainability because if we add a new essential file to this driver in the future, we'll just need to add it in this new GICv2 makefile without having to update all platform makefiles. However, platform makefiles can continue to pull in individual files if they wish to.
>
> 3. Some other changes might break backwards compatibility. This is something we should avoid as much as possible and we should always strive to provide a migration path. But there are cases where there might be no other way than to break things. In this case, all users have to be migrated.
>
>
> Now the question we were pondering in Gerrit is, where does the responsibility of migrating all platforms/users lie for all these types of changes? Is the patch author responsible to do all the work?
>
> I think it's hard to come up with a one-size-fits-all answer. It really depends on the nature of changes, their consequences, the amount of work needed to do the migration, the ability for the patch author to test these changes, to name a few.
>
> I believe the patch author should migrate all users on a best-effort basis. If it's manageable then they should do the work and propose a patch to all affected users for them to review and test on their platforms.
>
> On the other hand, if it's too much work or impractical, then I think the best approach would be for the patch author to announce and discuss the changes on this mailing list. In some scenarios, the changes might be controversial and not all platform owners might agree that the patch brings enough benefit compared to the migration effort, in which case the changes might have to be withdrawn or re-thought. In any case, I believe communication is key here and no one should take any unilateral decision on their own if it affects other users.
>
> I realize this is a vast subject and that we probably won't come up with an answer/reach an agreement on all the aspects of the question. But I'd be interested in hearing other contributors' views and whether they've got any experience managing these kinds of issues, perhaps in other open source projects.
>
> Best regards,
> Sandrine
>
> [1]
> https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/4538
> --
> TF-A mailing list
> TF-A(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/tf-a
>
--
TF-A mailing list
TF-A(a)lists.trustedfirmware.org
https://lists.trustedfirmware.org/mailman/listinfo/tf-a
Hi,
On 6/25/20 3:24 AM, Raghu K via TF-A wrote:
> One option you can look at, if you haven't already, is bl1_fwu.c: the
> bl1_fwu_image_auth() function uses auth_mod_verify_img, which
> is given an image ID, address and size. You can repeatedly call this
> function on the different image IDs that you have in your FIP. You will
> likely need to calculate the address to pass to this function based on
> reading the FIP header (maybe you can use the IO framework for this by
> opening different image IDs).
> I don't think there is a straightforward way, but what you are trying to
> do should be achievable using existing code, rearranged to fit your needs.
Yes, Raghu's approach sounds right to me. As he pointed out, this is not
something that's supported out of the box because the firmware update
feature is normally invoked through an SMC into BL1 as part of the
firmware update flow early during the boot. Things are different in your
case as you're looking to provide this feature much later as a runtime
service so this will surely require some amount of tweaking but sounds
feasible to me.
Best regards,
Sandrine
>
> Thanks
> Raghu
>
> On 6/24/20 12:42 PM, arm in-cloud via TF-A wrote:
>> Hi Sandrine,
>>
>> Waiting for your suggestions on this query.
>>
>> Thanks
>>
>> On Mon, Jun 22, 2020 at 3:40 PM arm in-cloud via TF-A
>> <tf-a(a)lists.trustedfirmware.org
>> <mailto:tf-a@lists.trustedfirmware.org>> wrote:
>>
>> <Posting the question to tf-a mailing list from Maniphest and
>> copied all previous conversation>
>>
>> Hi,
>>
>> On our boards I have an external security chip which communicates
>> with the AP (application processor) over an encrypted communication
>> channel. I have a communication driver/PTA running in a Secure
>> Payload (OPTEE) running in S-EL1. My firmware update workflow
>> (FIP.BIN) is as below.
>>
>> In our design, we have QSPI from where the AP boots up (regular TF-A
>> workflow); this QSPI is also accessible from the external security chip
>> (which is responsible for the AP/SoC's firmware update).
>> On the software side, I have a non-secure application which receives
>> the prebuilt binary (FIP.BIN), chops it into fixed-size
>> buffers and sends them to an OPTEE PTA, which internally sends the
>> binary to the security chip (which then validates it and writes it to QSPI).
>>
>> Now in this firmware update flow, I want to do following things.
>>
>> 1. I want to validate/authenticate the FIP image before sending
>> it to the security chip. Only if authentication is successful is
>> the image sent to the security chip.
>>
>> This is what I am planning:-
>>
>> 1. Receive the whole FIP.BIN from NS-APP and store it in RAM (in
>> OPTEE's RAM)
>> 2. I am planning to implement a SIP service in TF-A which will
>> receive the address and size of FIP.BIN in RAM.
>> 3. Call into the auth_mod driver to authenticate the image.
>>
>> I don't want to update the images immediately; I just need to
>> authenticate the images within FIP.BIN.
>>
>> To design this feature, I started looking into the BL1 & BL2 code but
>> I'm not sure how much of the code I can reuse.
>> I also looked into the fwu test apps to see if I could mimic something,
>> but the fwu implementation looks quite different.
>>
>> My question is: how do I use just the authentication functionality
>> available in TF-A for my purpose?
>>
>> Appreciate your help!
>>
>>
>>
>>
>> sandrine-bailleux-arm
>> <https://developer.trustedfirmware.org/p/sandrine-bailleux-arm/> added
>> a subscriber: sandrine-bailleux-arm
>> <https://developer.trustedfirmware.org/p/sandrine-bailleux-arm/>.Tue,
>> Jun 16, 6:50 AM <https://developer.trustedfirmware.org/T763#9015>
>>
>> <https://developer.trustedfirmware.org/T763#>
>>
>> Hi,
>>
>> Apologies for the delay, I had missed this ticket... Generally
>> speaking, it's better to post questions on the TF-A mailing list
>> [1], it's got far better visibility than Maniphest (which few
>> people monitor I believe) and you are more likely to get a quick
>> answer there.
>>
>> First of all, I've got a few questions that would help me
>> understand your design.
>>
>> 1. You say that OPTEE-PTA would store the FIP image in its RAM. I
>> am assuming this is secure RAM, that only secure software can
>> access, right? If not, this would not seem very secure to me,
>> as the normal world software could easily change the FIP image
>> after it has been authenticated and replace it with some
>> malicious firmware.
>>
>> 2. You'd like to implement a SiP service in TF-A to authenticate
>> the FIP image, given its base address and size. Now I am
>> guessing this would be an address in OPTEE's RAM, right?
>>
>> 3. What cryptographic key would sign the FIP image? Would TF-A
>> have access to this key to authenticate it?
>>
>> 4. What would need authentication? The FIP image as a monolithic
>> blob, or each individual image within this FIP? And in the
>> latter case, would all images be authenticated using the same
>> key or would different images be signed with different keys?
>>
>> Now, coming back to where to look into TF-A source code. You're
>> looking in the right place, all Trusted Boot code is indeed built
>> into BL1 and BL2 in TF-A. The following two documents detail how
>> the authentication framework and firmware update flow work in TF-A
>> and are worth reading if you haven't done so already:
>>
>> https://trustedfirmware-a.readthedocs.io/en/latest/design/auth-framework.ht…
>> https://trustedfirmware-a.readthedocs.io/en/latest/components/firmware-upda…
>>
>> I reckon you'd want to reuse the cryptographic module in your
>> case. It provides a callback to verify the signature of some
>> payload, see [2] and include/drivers/auth/crypto_mod.h. However, I
>> expect it won't be straightforward to reuse this code outside of
>> its context, as it expects DER-encoded data structures following a
>> specific ASN.1 format. Typically when used in the TF-A trusted
>> boot flow, these are coming from X.509 certificates, which already
>> provide the data structures in the right format.
>>
>> Best regards,
>> Sandrine Bailleux
>>
>> [1] https://lists.trustedfirmware.org/mailman/listinfo/tf-a
>> [2] https://trustedfirmware-a.readthedocs.io/en/latest/design/auth-framework.ht…
>>
>>
>> arm-in-cloud
>> <https://developer.trustedfirmware.org/p/arm-in-cloud/> added a
>> comment.Edited · Wed, Jun 17, 11:42 AM
>> <https://developer.trustedfirmware.org/T763#9018>
>>
>> <https://developer.trustedfirmware.org/T763#>
>>
>> Thank you Sandrine for your feedback.
>>
>> Following are the answers to your questions:
>>
>> 1. Yes, the image will be stored into OPTEE's secure memory. I
>> am guessing TF-A gets access to this memory.
>> 2. Yes, the OPTEE secure memory address and size will be passed
>> to the SiP service running in TF-A.
>> 3. These are RSA keys; in my case only TF-A has access to these
>> keys. On a regular boot these are the same keys used to
>> authenticate the images (BL31, BL32 & BL33) in the FIP stored
>> in the QSPI.
>> 4. Yes, all images will be authenticated using the same key.
>>
>> Thanks for the documentation links. From the firmware-update doc,
>> I am mainly trying to use the IMAGE_AUTH part. In my case COPY is
>> already done; I just need the AUTH part, and once AUTH is successful,
>> OPTEE will pass that payload to the security chip for the update.
>> After rebooting, my device will boot with the new firmware. (In my
>> case we always reboot after firmware updates.)
>>
>> Do you want me to post this query again on the TF-A mailing list?
>>
>> Thank you!
>>
>>
>>
>> sandrine-bailleux-arm
>> <https://developer.trustedfirmware.org/p/sandrine-bailleux-arm/> added
>> a comment.Fri, Jun 19, 3:09 AM
>> <https://developer.trustedfirmware.org/T763#9032>
>>
>> <https://developer.trustedfirmware.org/T763#>
>>
>> Hi,
>>
>> OK I understand a bit better, thanks for the details.
>>
>> Do you want me to post this query again on the TF-A mailing list?
>>
>> If it's not too much hassle for you then yes, I think it'd be
>> preferable to continue the discussion on the TF-A mailing list. I
>> will withhold my comments until then.
>>
>>
>>
>> --
>> TF-A mailing list
>> TF-A(a)lists.trustedfirmware.org <mailto:TF-A@lists.trustedfirmware.org>
>> https://lists.trustedfirmware.org/mailman/listinfo/tf-a
>>
>>
>
>
Hello Lauren,
Thanks for answering the open items from the tech forum.
Just curious, do you know if CppUTest can also consider a set of C files as a unit instead of a single function? This is the policy we have adopted internally, but with an automated tool to generate the unit tests.
Cheers,
Varun
From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of Lauren Wehrmeister via TF-A
Sent: Thursday, June 25, 2020 10:52 AM
To: tf-a(a)lists.trustedfirmware.org
Subject: [TF-A] Unit Test tech forum follow-up
External email: Use caution opening links or attachments
Following up about some questions from my presentation at the tech forum on the TF-A unit testing framework.
Regarding the other tool that was investigated alongside CppUTest, it was Google Test<https://github.com/google/googletest>. The main reason for choosing CppUTest was that it is easily portable to new platforms and has a small footprint despite its feature set. This could be very useful if we want to use it on hardware (not on the host machine) in the future, even if the target only has a small amount of memory. Google Test is very good for testing user-space applications, but most of its features require pthreads, a file system, and other operating system services which we usually don't have in an embedded environment. So we chose CppUTest because it fits both host- and target-based testing, so we won't end up using different frameworks in the same project. Something to note is that CppUTest has an option<https://cpputest.github.io/manual.html#gtest> for running Google Test based test cases too, but the intention is to keep the framework clean by using only CppUTest.
Also c-picker can support parsing large data structures since the tool uses PyYAML for parsing the config files and clang for processing the source files.
Thanks,
Lauren
Hello @Sandrine Bailleux,
Thanks for starting the email chain. My response to the issues you flagged in the email below.
1. This seems to be a bit tricky. As a consumer of an API, one would expect that it works as advertised. If the behavior changes, thus affecting the expectations or assumptions surrounding the API, then they should be announced. With so many platforms in the tree, we should always assume that even a small change will break a platform.
2. For improvements, we should strive to upgrade all consumers of the API/makefile/feature. This will ensure that all platforms remain current and we reduce maintenance cost in the long run. I assume that such improvements will give birth to additional configs/makefile settings/platform variables etc. We would be signing up for a lot of maintenance by allowing some platforms to lag.
For straightforward changes (e.g. the change under discussion), I assume that the platforms will respond and we will get the changes merged relatively quickly. For controversial features, this will spark a healthy discussion in the community.
3. For a case where the change breaks platforms, it makes sense to just upgrade all of them.
>> Now the question we were pondering in Gerrit is, where does the responsibility of migrating all platforms/users lie for all these types of changes? Is the patch author responsible to do all the work?
[VW] IMO, the author of such a patch is the best person to upgrade all affected platforms/users. The main reason being that he/she is already an expert for the patch. But, this might not be true in some cases where the author will need help from the community. For such cases, we should ask for help on the mailing list.
I also want to highlight the dangers of introducing make variables as a solution. The current situation is already unmanageable with so many build variables and weak handlers. We should try and avoid adding more.
Finally, I agree that there's no silver bullet and we have to deal with each situation differently. Communication is key, as you rightly said. As long as we involve the community, we should be OK.
-Varun
-----Original Message-----
From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of Sandrine Bailleux via TF-A
Sent: Friday, June 19, 2020 2:57 AM
To: tf-a <tf-a(a)lists.trustedfirmware.org>
Subject: [TF-A] How should we manage patches affecting multiple users?
External email: Use caution opening links or attachments
Hello everyone,
I would like to start a discussion around the way we want to handle changes that affect all platforms/users. This subject came up in this code review [1].
I'll present my views on the subject and would like to know what others think.
First of all, what do we mean by "changes that affect all platforms"? It would be any change that is not self-contained within a platform port.
IOW, a change in some file outside of the plat/ directory. It could be in a driver, in a library, in the build system, in the platform interfaces called by the generic code... the list goes on.
1. Some are just implementation changes, they leave the interfaces unchanged. These do not break other platforms builds since the call sites don't need to be updated. However, they potentially change the responsibilities of the interface, in which case this might introduce behavioral changes that all users of the interface need to be aware of and adapt their code to.
2. Some other changes might introduce optional improvements. They introduce a new way of doing things, perhaps a cleaner or more efficient one. Users can safely stay with the old way of doing things without fear of breakage but they would benefit from migrating.
This is the case of Alexei's patch [1], which introduces an intermediate
GICv2 makefile that Arm platforms now include instead of pulling in each individual file themselves. At this point, it is just an improvement that introduces an indirection level in the build system. This is an improvement in terms of maintainability because if we add a new essential file to this driver in the future, we'll just need to add it in this new GICv2 makefile without having to update all platform makefiles. However, platform makefiles can continue to pull in individual files if they wish to.
3. Some other changes might break backwards compatibility. This is something we should avoid as much as possible and we should always strive to provide a migration path. But there are cases where there might be no other way than to break things. In this case, all users have to be migrated.
Now the question we were pondering in Gerrit is, where does the responsibility of migrating all platforms/users lie for all these types of changes? Is the patch author responsible to do all the work?
I think it's hard to come up with a one-size-fits-all answer. It really depends on the nature of changes, their consequences, the amount of work needed to do the migration, the ability for the patch author to test these changes, to name a few.
I believe the patch author should migrate all users on a best-effort basis. If it's manageable then they should do the work and propose a patch to all affected users for them to review and test on their platforms.
On the other hand, if it's too much work or impractical, then I think the best approach would be for the patch author to announce and discuss the changes on this mailing list. In some scenarios, the changes might be controversial and not all platform owners might agree that the patch brings enough benefit compared to the migration effort, in which case the changes might have to be withdrawn or re-thought. In any case, I believe communication is key here and no one should take any unilateral decision on their own if it affects other users.
I realize this is a vast subject and that we probably won't come up with an answer/reach an agreement on all the aspects of the question. But I'd be interested in hearing other contributors' views and whether they've got any experience managing these kinds of issues, perhaps in other open source projects.
Best regards,
Sandrine
[1] https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/4538
--
TF-A mailing list
TF-A(a)lists.trustedfirmware.org
https://lists.trustedfirmware.org/mailman/listinfo/tf-a
Hi Sandrine,
Waiting for your suggestions on this query.
Thanks
On Mon, Jun 22, 2020 at 3:40 PM arm in-cloud via TF-A <
tf-a(a)lists.trustedfirmware.org> wrote:
> <Posting the question to tf-a mailing list from Maniphest and copied all
> previous conversation>
>
> Hi,
>
> On our boards I have an external security chip which communicates with the AP
> (application processor) over an encrypted communication channel. I have a
> communication driver/PTA running in a Secure Payload (OPTEE) running in
> S-EL1. My firmware update workflow (FIP.BIN) is as below.
>
> In our design, we have QSPI from where the AP boots up (regular TF-A workflow);
> this QSPI is also accessible from the external security chip (which is
> responsible for the AP/SoC's firmware update).
> On the software side, I have a non-secure application which receives the
> prebuilt binary (FIP.BIN), chops it into fixed-size buffers and
> sends them to an OPTEE PTA, which internally sends the binary to the security
> chip (which then validates it and writes it to QSPI).
>
> Now in this firmware update flow, I want to do following things.
>
> 1. I want to validate/authenticate the FIP image before sending it to the
> security chip. Only if authentication is successful will the image be sent
> to the security chip.
>
> This is what I am planning:-
>
> 1. Receive the whole FIP.BIN from NS-APP and store it in RAM (in
> OPTEE's RAM)
> 2. I am planning to implement a SIP service in TF-A which will receive
> the address and size of FIP.BIN in RAM.
> 3. Call into the auth_mod driver to authenticate the image.
>
> I don't want to update the images immediately; I just need to authenticate
> the images within FIP.BIN.
>
> To design this feature, I started looking into the BL1 & BL2 code but I'm not
> sure how much of the code I can reuse.
> I also looked into the fwu test apps to see if I could mimic something, but
> the fwu implementation looks quite different.
>
> My question is: how do I use just the authentication functionality
> available in TF-A for my purpose?
>
> Appreciate your help!
>
>
>
>
> sandrine-bailleux-arm
> <https://developer.trustedfirmware.org/p/sandrine-bailleux-arm/> added a
> subscriber: sandrine-bailleux-arm
> <https://developer.trustedfirmware.org/p/sandrine-bailleux-arm/>.Tue, Jun
> 16, 6:50 AM <https://developer.trustedfirmware.org/T763#9015>
>
> <https://developer.trustedfirmware.org/T763#>
>
> Hi,
>
> Apologies for the delay, I had missed this ticket... Generally speaking,
> it's better to post questions on the TF-A mailing list [1], it's got far
> better visibility than Maniphest (which few people monitor I believe) and
> you are more likely to get a quick answer there.
>
> First of all, I've got a few questions that would help me understand your
> design.
>
> 1. You say that OPTEE-PTA would store the FIP image in its RAM. I am
> assuming this is secure RAM, that only secure software can access, right?
> If not, this would not seem very secure to me, as the normal world software
> could easily change the FIP image after it has been authenticated and
> replace it with some malicious firmware.
>
>
> 1. You'd like to implement a SiP service in TF-A to authenticate the
> FIP image, given its base address and size. Now I am guessing this would be
> an address in OPTEE's RAM, right?
>
>
> 1. What cryptographic key would sign the FIP image? Would TF-A have
> access to this key to authenticate it?
>
>
> 1. What would need authentication? The FIP image as a monolithic blob,
> or each individual image within this FIP? And in the latter case, would all
> images be authenticated using the same key or would different images be
> signed with different keys?
>
> Now, coming back to where to look into TF-A source code. You're looking in
> the right place, all Trusted Boot code is indeed built into BL1 and BL2 in
> TF-A. The following two documents detail how the authentication framework
> and firmware update flow work in TF-A and are worth reading if you haven't
> done so already:
>
>
> https://trustedfirmware-a.readthedocs.io/en/latest/design/auth-framework.ht…
>
> https://trustedfirmware-a.readthedocs.io/en/latest/components/firmware-upda…
>
> I reckon you'd want to reuse the cryptographic module in your case. It
> provides a callback to verify the signature of some payload, see [2] and
> include/drivers/auth/crypto_mod.h. However, I expect it won't be
> straightforward to reuse this code outside of its context, as it expects
> DER-encoded data structures following a specific ASN.1 format. Typically
> when used in the TF-A trusted boot flow, these are coming from X.509
> certificates, which already provide the data structures in the right format.
>
> Best regards,
> Sandrine Bailleux
>
> [1] https://lists.trustedfirmware.org/mailman/listinfo/tf-a
> [2]
> https://trustedfirmware-a.readthedocs.io/en/latest/design/auth-framework.ht…
>
>
> arm-in-cloud <https://developer.trustedfirmware.org/p/arm-in-cloud/> added
> a comment.Edited · Wed, Jun 17, 11:42 AM
> <https://developer.trustedfirmware.org/T763#9018>
>
> <https://developer.trustedfirmware.org/T763#>
>
> Thank you Sandrine for your feedback.
>
> Following are the answers to your questions:
>
> 1. Yes, the image will be stored into OPTEE's secure memory. I am
> guessing TF-A gets access to this memory.
> 2. Yes, the OPTEE secure memory address and size will be passed to
> the SiP service running in TF-A.
> 3. These are RSA keys; in my case only TF-A has access to these keys.
> On a regular boot these are the same keys used to authenticate the images
> (BL31, BL32 & BL33) in the FIP stored in the QSPI.
> 4. Yes, all images will be authenticated using the same key.
>
> Thanks for the documentation links. From the firmware-update doc, I am
> mainly trying to use the IMAGE_AUTH part. In my case COPY is already done; I
> just need the AUTH part, and once AUTH is successful, OPTEE will pass that
> payload to the security chip for the update. After rebooting, my device will
> boot with the new firmware. (In my case we always reboot after firmware updates.)
>
> Do you want me to post this query again on the TF-A mailing list?
>
> Thank you!
>
>
>
> sandrine-bailleux-arm
> <https://developer.trustedfirmware.org/p/sandrine-bailleux-arm/> added a
> comment.Fri, Jun 19, 3:09 AM
> <https://developer.trustedfirmware.org/T763#9032>
>
> <https://developer.trustedfirmware.org/T763#>
>
> Hi,
>
> OK I understand a bit better, thanks for the details.
>
> Do you want me to post this query again on the TF-A mailing list?
>
> If it's not too much hassle for you then yes, I think it'd be preferable
> to continue the discussion on the TF-A mailing list. I will withhold my
> comments until then.
>
>
>
> --
> TF-A mailing list
> TF-A(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/tf-a
>
Sorry for the delay.
I didn't have the bandwidth for it.
I asked my teammate Marcin Salnik to test the fix https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/3493 on our Veloce emulator.
I can now confirm that the fix works: ACC BL31 booted.
From: Varun Wadekar <vwadekar(a)nvidia.com>
Sent: Monday, June 01, 2020 8:57 PM
To: Witkowski, Andrzej <andrzej.witkowski(a)intel.com>
Cc: Lessnau, Adam <adam.lessnau(a)intel.com>; tf-a(a)lists.trustedfirmware.org
Subject: RE: Unhandled Exception at EL3 in BL31 for Neoverse N1 with direct connect of DSU
Hi,
I believe the fix for this issue is part of https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/3493. Can you please try that change?
-Varun
From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org<mailto:tf-a-bounces@lists.trustedfirmware.org>> On Behalf Of Witkowski, Andrzej via TF-A
Sent: Monday, June 1, 2020 7:08 AM
To: tf-a(a)lists.trustedfirmware.org<mailto:tf-a@lists.trustedfirmware.org>
Cc: Witkowski, Andrzej <andrzej.witkowski(a)intel.com<mailto:andrzej.witkowski@intel.com>>; Lessnau, Adam <adam.lessnau(a)intel.com<mailto:adam.lessnau@intel.com>>
Subject: [TF-A] Unhandled Exception at EL3 in BL31 for Neoverse N1 with direct connect of DSU
External email: Use caution opening links or attachments
Hi,
I work on project which uses ARM CPU Neoverse N1 with direct connect of DSU.
I noticed the following error "Unhandled Exception at EL3" in BL31 bootloader after switching from ATF 2.1 to 2.2.
The root cause is that after invoking the neoverse_n1_errata_report function in lib/cpus/aarch64/neoverse_n1.S and reaching line 525, the function check_errata_dsu_936184 in lib/cpus/aarch64/dsu_helpers.S is always invoked, regardless of whether ERRATA_DSU_936184 is set to 0 or to 1.
In our case, ERRATA_DSU_936184 := 0.
Neoverse N1 with the direct-connect version of the DSU doesn't contain the SCU/L3 cache or the CLUSTERCFR_EL1 register.
The error "Unhandled Exception at EL3" appears in the BL31 bootloader during an attempt to read the CLUSTERCFR_EL1 register, which doesn't exist in our RTL HW.
Andrzej Witkowski
Intel Technology Poland
---------------------------------------------------------------------
Intel Technology Poland sp. z o.o.
ul. Słowackiego 173 | 80-298 Gdańsk | Sąd Rejonowy Gdańsk Północ | VII Wydział Gospodarczy Krajowego Rejestru Sądowego - KRS 101882 | NIP 957-07-52-316 | Kapitał zakładowy 200.000 PLN.
Ta wiadomość wraz z załącznikami jest przeznaczona dla określonego adresata i może zawierać informacje poufne. W razie przypadkowego otrzymania tej wiadomości, prosimy o powiadomienie nadawcy oraz trwałe jej usunięcie; jakiekolwiek przeglądanie lub rozpowszechnianie jest zabronione.
This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). If you are not the intended recipient, please contact the sender and delete all copies; any review or distribution by others is strictly prohibited.
---------------------------------------------------------------------
<Posting the question to tf-a mailing list from Maniphest and copied all
previous conversation>
Hi,
On our boards I have an external security chip which communicates with the AP
(application processor) over an encrypted channel. I have a communication
driver/PTA in a Secure Payload (OPTEE) running at S-EL1. My firmware update
workflow (FIP.BIN) is as below.
In our design, we have QSPI flash from which the AP boots (regular TF-A
workflow); this QSPI is also accessible from the external security chip
(which is responsible for the AP/SoC's firmware update).
On the software side, I have a non-secure application which receives the
prebuilt binary (FIP.BIN), chops it into fixed-size buffers and sends them
to the OPTEE PTA, which internally sends the binary to the security chip
(which then validates it and writes it to QSPI).
Now in this firmware update flow, I want to do the following:
1. Validate/authenticate the FIP image before sending it to the security
chip. Only if authentication is successful should the image be sent on.
This is what I am planning:
1. Receive the whole FIP.BIN from the NS app and store it in RAM (in
OPTEE's RAM).
2. Implement a SiP service in TF-A which will receive the address and size
of FIP.BIN in RAM.
3. Call into the auth_mod driver to authenticate the image.
I don't want to update the images immediately; I just need to authenticate
the images within FIP.BIN.
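The chunked hand-off from the NS app to the PTA described above could look roughly like this (a minimal sketch; `pta_send_chunk()` and the 4 KB chunk size are made-up placeholders, not the actual OP-TEE PTA interface, which would go through the TEE Client API):

```c
#include <stddef.h>
#include <stdio.h>

#define CHUNK_SIZE 4096u  /* hypothetical fixed buffer size */

/* Placeholder for the call into the OP-TEE PTA; a real implementation
 * would invoke a TEE command instead of printing. */
static int pta_send_chunk(const unsigned char *buf, size_t len, size_t offset)
{
    (void)buf;
    printf("sent %zu bytes at offset %zu\n", len, offset);
    return 0;
}

/* Chop FIP.BIN into fixed-size buffers and push each one to the PTA. */
static int send_fip(const unsigned char *fip, size_t total)
{
    size_t off = 0;
    while (off < total) {
        size_t len = total - off;
        if (len > CHUNK_SIZE)
            len = CHUNK_SIZE;
        int rc = pta_send_chunk(fip + off, len, off);
        if (rc != 0)
            return rc;  /* abort the transfer on the first failure */
        off += len;
    }
    return 0;
}
```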
To design this feature, I started looking into the BL1 & BL2 code, but I'm
not sure how much of that code I can reuse.
I also looked into the FWU test apps to see if I could mimic something, but
the FWU implementation looks completely different.
My question is: how do I use just the authentication functionality
available in TF-A for my purpose?
Appreciate your help!
sandrine-bailleux-arm added a subscriber and commented · Tue, Jun 16, 6:50 AM
Hi,
Apologies for the delay, I had missed this ticket... Generally speaking,
it's better to post questions on the TF-A mailing list [1]; it's got far
better visibility than Maniphest (which few people monitor, I believe) and
you are more likely to get a quick answer there.
First of all, I've got a few questions that would help me understand your
design.
1. You say that OPTEE-PTA would store the FIP image in its RAM. I am
assuming this is secure RAM, that only secure software can access, right?
If not, this would not seem very secure to me, as the normal world software
could easily change the FIP image after it has been authenticated and
replace it with some malicious firmware.
2. You'd like to implement a SiP service in TF-A to authenticate the FIP
image, given its base address and size. Now I am guessing this would be an
address in OPTEE's RAM, right?
3. What cryptographic key would sign the FIP image? Would TF-A have
access to this key to authenticate it?
4. What would need authentication? The FIP image as a monolithic blob,
or each individual image within the FIP? And in the latter case, would all
images be authenticated using the same key, or would different images be
signed with different keys?
Now, coming back to where to look into TF-A source code. You're looking in
the right place, all Trusted Boot code is indeed built into BL1 and BL2 in
TF-A. The following two documents detail how the authentication framework
and firmware update flow work in TF-A and are worth reading if you haven't
done so already:
https://trustedfirmware-a.readthedocs.io/en/latest/design/auth-framework.ht…
https://trustedfirmware-a.readthedocs.io/en/latest/components/firmware-upda…
I reckon you'd want to reuse the cryptographic module in your case. It
provides a callback to verify the signature of some payload, see [2] and
include/drivers/auth/crypto_mod.h. However, I expect it won't be
straightforward to reuse this code outside of its context, as it expects
DER-encoded data structures following a specific ASN.1 format. Typically,
when used in the TF-A trusted boot flow, these come from X.509
certificates, which already provide the data structures in the right format.
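A SiP handler wrapping that callback might be shaped roughly like this (a sketch only: `sip_fip_auth()` and the `SIP_FIP_AUTH_*` codes are hypothetical names, and `crypto_mod_verify_signature()` is stubbed out here so the sketch is self-contained; the real callback in crypto_mod.h expects DER-encoded signature, algorithm and key blobs):

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-in, modelled on the callback declared in
 * include/drivers/auth/crypto_mod.h. Stubbed to always succeed. */
static int crypto_mod_verify_signature(void *data_ptr, unsigned int data_len,
                                       void *sig_ptr, unsigned int sig_len,
                                       void *sig_alg, unsigned int sig_alg_len,
                                       void *pk_ptr, unsigned int pk_len)
{
    (void)data_ptr; (void)data_len; (void)sig_ptr; (void)sig_len;
    (void)sig_alg; (void)sig_alg_len; (void)pk_ptr; (void)pk_len;
    return 0;
}

#define SIP_FIP_AUTH_SUCCESS  0
#define SIP_FIP_AUTH_FAILED  (-1)

/* Hypothetical SiP service entry: fip_base/fip_size would arrive in the
 * SMC arguments and point into OP-TEE's secure RAM. A real handler must
 * also range-check the region against the secure memory map. */
static int sip_fip_auth(uintptr_t fip_base, size_t fip_size,
                        void *sig, unsigned int sig_len,
                        void *alg, unsigned int alg_len,
                        void *pk, unsigned int pk_len)
{
    if (fip_base == 0U || fip_size == 0U)
        return SIP_FIP_AUTH_FAILED;
    if (crypto_mod_verify_signature((void *)fip_base, (unsigned int)fip_size,
                                    sig, sig_len, alg, alg_len,
                                    pk, pk_len) != 0)
        return SIP_FIP_AUTH_FAILED;
    return SIP_FIP_AUTH_SUCCESS;
}
```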
Best regards,
Sandrine Bailleux
[1] https://lists.trustedfirmware.org/mailman/listinfo/tf-a
[2]
https://trustedfirmware-a.readthedocs.io/en/latest/design/auth-framework.ht…
arm-in-cloud added a comment. Edited · Wed, Jun 17, 11:42 AM
Thank you Sandrine for your feedback.
Following are the answers to your questions:
1. Yes, the image will be stored in OPTEE's secure memory. I am guessing
TF-A gets access to this memory.
2. Yes, the OPTEE secure memory address and size will be passed to the
SiP service running in TF-A.
3. These are RSA keys; in my case only TF-A has access to them. On a
regular boot these are the same keys used to authenticate the images
(BL31, BL32 & BL33) in the FIP stored in QSPI.
4. Yes, all images will be authenticated using the same key.
Thanks for the documentation links. From the firmware-update doc, I am
mainly trying to use the IMAGE_AUTH part. In my case the COPY is already
done; I just need the AUTH part, and once AUTH is successful, OPTEE will
pass the payload to the security chip for update. After rebooting, my
device will boot with the new firmware. (In my case we always reboot after
firmware updates.)
Do you want me to post this query again on the TF-A mailing list?
Thank you!
sandrine-bailleux-arm added a comment. Fri, Jun 19, 3:09 AM
Hi,
OK I understand a bit better, thanks for the details.
> Do you want me to post this query again on the TF-A mailing list?
If it's not too much hassle for you then yes, I think it'd be preferable to
continue the discussion on the TF-A mailing list. I will withhold my
comments until then.
Hi Raghu,
> -----Original Message-----
> From: TF-A [mailto:tf-a-bounces@lists.trustedfirmware.org] On Behalf Of
> Raghu Krishnamurthy via TF-A
> Sent: Monday, June 08, 2020 7:09 PM
> To: tf-a(a)lists.trustedfirmware.org
> Subject: Re: [TF-A] GICV3: system interface EOI ordering RFC
>
> Hello,
>
> Sorry for chiming in late. Are we sure that the msr/mrs to the GICv3 EOI
> register is not a context synchronizing event or ensures ordering itself
> ? Also did the addition of DSB() fix the issue ? If yes, can you try
> adding a delay or a few NOP's/dummy instructions or an ISB to see if the
> issue goes away?
Yes, these experiments were done to observe the behavior, which led to
this change.
A definitive confirmation of how the core treats these accesses can only
come from the Arm core team, or by actually probing the transfers.
I had a response from Arm support on this very question, which reads
as follows:
"In theory, the memory barrier should be placed between the clearing of
the peripheral and the access to the EOI register".
In a slightly unrelated case I observed this with an NVIC on an AXI probe,
but there is no feasibility to repeat that with GICv3 at the moment.
>
> We need to know the right thing to do architecturally and confirm that
> the issue is really due to reordering at the core before adding the DSB.
> Adding a DSB might be masking a timing issue.
Is the above response, plus what Soby confirmed, sufficient? Or could
someone from Arm get this reconfirmed with the core design team, unless it
is obvious to them? This is equally applicable to Linux as well.
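For reference, the sequence under discussion, with the barrier proposed between steps 3 and 4, would look something like this on the GICv3 system-register interface (a sketch only; the register contents in x0/w1/x2 are illustrative):

```asm
	/* Step 3: silence the interrupt at the peripheral (Device MMIO).
	 * x0 = peripheral interrupt-clear register, w1 = clear value. */
	str	w1, [x0]
	/* Proposed barrier: ensure the device write has completed and is
	 * visible to all observers before the system-register EOI. */
	dsb	sy
	/* Step 4: EOI at the GICv3 system-register CPU interface. */
	msr	ICC_EOIR1_EL1, x2
```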
>
> -Raghu
>
Thanks
Sandeep
>
>
> On 6/8/20 4:47 AM, Sandeep Tripathy via TF-A wrote:
> >> -----Original Message-----
> >> From: Soby Mathew [mailto:Soby.Mathew@arm.com]
> >> Sent: Monday, June 08, 2020 3:36 PM
> >> To: Sandeep Tripathy; Olivier Deprez; tf-a(a)lists.trustedfirmware.org
> >> Cc: nd
> >> Subject: RE: [TF-A] GICV3: system interface EOI ordering RFC
> >>
> >> Just some inputs from my side:
> >> I agree we need at least a dsb() after the write to MMIO region to
> >> silence
> >> the
> >> peripheral and before the EOI at GIC sysreg interface. Adding it to the
> >> GIC EOI
> >
> > Thanks for the confirmation. Now it’s about choosing the right place.
> >
> >> API seems the logical thing to do but as Olivier mentions, there are
> >> interrupt
> >> handler which do not deal with MMIO (eg: the Systimer interrupt) so
> >> adding
> >> it
> >> to GICv3 API might be arduous for such handlers.
> >>
> >> So there is a choice here to let the interrupt handlers to deal with
> >> ensuring
> >> completeness before EOI at sysreg interface or adding it to GICv3 EOI
> >> API
> >> and
> >> take the overhead for interrupt handlers which do not have to deal with
> >> MMIO.
> >>
> >
> > Yes I feel either of these is a must to guarantee functionality
> > architecturally though
> > both approach end up with some unnecessary overhead.
> >
> > If GICv3 api takes care then it is an overhead for some ISRs dealing
> > with
> > non-MMIO.
> > At present I do not see an active use case in reference implementation
> > where
> > sys timer
> > ISR is in a performance intensive path where one additional DSB will be
> > perceivable.
> > But there may be some I could be totally wrong in this assumption
> (pmu/debug
> > or.. not sure).
> > Whereas I can certainly imagine some MMIO ISRs in performance critical
> > path
> > where unnecessary
> > DSB is not acceptable at all.
> >
> > If the interrupt handler needs to ensure then it will generically add
> > 'DSB'
> > as I think
> > the driver cannot and should not make assumptions about how EOI is done
> > afterwards.
> > That will be overhead for the ISRs dealing with MMIO peripherals and non
> > GIC-v3.
> > If we consider only GICv3+ then good. Otherwise I would prefer the
> > 'plat_ic_end_of_interrupt'
> > like Olivier mentioned with a #if GICv3 instead of each ISRs dealing
> > with
> > it.
> >
> >> The GICv3 legacy MMIO CPU interface is deprecated for TF-A and the sys
> >> interface is the only one GICv3 driver in TF-A supports.
> >
> > Right we can ignore the GICv3 legacy mode but GICv2 will still remain ?
> >
> >>
> >> Best Regards
> >> Soby Mathew
> >>
> >
> > Thanks
> > Sandeep
> >
> >>> -----Original Message-----
> >>> From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of
> Sandeep
> >>> Tripathy via TF-A
> >>> Sent: 08 June 2020 10:34
> >>> To: Olivier Deprez <Olivier.Deprez(a)arm.com>; tf-
> >> a(a)lists.trustedfirmware.org
> >>> Subject: Re: [TF-A] GICV3: system interface EOI ordering RFC
> >>>
> >>> Hi Olivier,
> >>>> -----Original Message-----
> >>>> From: Olivier Deprez [mailto:Olivier.Deprez@arm.com]
> >>>> Sent: Monday, June 08, 2020 1:14 PM
> >>>> To: tf-a(a)lists.trustedfirmware.org; Sandeep Tripathy
> >>>> Subject: Re: [TF-A] GICV3: system interface EOI ordering RFC
> >>>>
> >>>> Hi Sandeep,
> >>>>
> >>>> gicv3_end_of_interrupt_sel1 is a static access helper macro. Its
> >>>> naming precisely tells what it does at gicv3 module level. It is
> >>>> called from
> >>> the weak
> >>>> plat_ic_end_of_interrupt function for BL32 image.
> >>>>
> >>>> I tend to think it is the driver responsibility to ensure the module
> >>> interrupt
> >>>> acknowledge register write is reaching HW in order (or "be visible to
> >>> all
> >>>> observers").
> >>>
> >>> The driver should be agnostic of what interrupt controller is used and
> >>> its
> >>> behavior.
> >>> And since 'all' writes were to mmio ranges mapped Device(nGnRE)
> >>> /strongly-ordered(nGnRnE)
> >>> there was no such need. This is a special case for GICv3 system
> >>> interface only.
> >>> Adding at driver level a DSB is unnecessary for other interrupt
> >>> controllers.
> >>>
> >>>> Also I suspect adding a dsb might not be required generically for all
> >>> kind of IP.
> >>>
> >>> Here are you referring to the peripheral IP or interrupt controller IP
> >>> ?
> >>> The issue is reordering at arm core itself (STR to device address Vs
> >>> msr(sysreg
> >>> write)).
> >>> So I think Its applicable for all IP.
> >>>
> >>>> Adding a barrier in generic code might incur an unnecessary
> >>>> bottleneck.
> >>>
> >>> But if there is a need to *ensure* presence of at least one DSB
> >>> between
> >>> the
> >>> write transfer from core to device clear and gicv3 eoi icc register
> >>> write in a
> >>> generic way then what other option we have.
> >>>>
> >>>> Thus wouldn't it be better to add the barrier to the overridden
> >>>> platform function rather than in generic gicv3 code?
> >>>
> >>> That can be done but I feel this is more to do with gicv3 system
> >>> interface only.
> >>> Inside plat_xxx one has to check #if GICV3 ...and system interface.
> >>>>
> >>>> I have a put a comment in the review, we can continue the discussion
> >>> there.
> >>>>
> >>>> Regards,
> >>>> Olivier.
> >>>>
> >>> Thanks
> >>> Sandeep
> >>>> ________________________________________
> >>>> From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> on behalf of
> >>>> Sandeep Tripathy via TF-A <tf-a(a)lists.trustedfirmware.org>
> >>>> Sent: 05 June 2020 19:43
> >>>> To: tf-a(a)lists.trustedfirmware.org
> >>>> Subject: [TF-A] GICV3: system interface EOI ordering RFC
> >>>>
> >>>> Hi,
> >>>> In a typical interrupt handling sequence we do 1-Read IAR
> >>>> 2-Do interrupt handling
> >>>> 3-Clear the interrupt at source , so that the device de-asserts
> >>>> IRQ
> >>> request to
> >>>> GIC
> >>>> >> I am suggesting a need of DSB here in case of GICv3 +.
> >>>> 4-Write to GIC cpu interface to do EOI.
> >>>>
> >>>> Till GICv2 and with GICv3 legacy interface ICC_EOI write is a write
> >>>> over
> >>> AMBA
> >>>> interface. The
> >>>> Addresses are mapped with (nR) attribute. Hence the write transfers
> >>> from the
> >>>> core will be
> >>>> generated at step 3 and 4 in order. Please ignore the additional
> >>> buffers/bridges
> >>>> in path from
> >>>> core till peripheral which has to be dealt separately as per SOC.
> >>>>
> >>>> Query: I understand GICv3 system interface accesses are not over
> >>>> this
> >>> protocol
> >>>> and core will not
> >>>> follow the ordering rule ?
> >>>>
> >>>> I have observed spurious interrupt issue/manifestation ( I don't have
> >>> the
> >>>> transfers probed) in
> >>>> RTOS environment where I have a primitive GICv3 driver but I wonder
> >>>> why things does not fail in Linux or tf-a. If it is working because
> >>>> from step(3) to
> >>> step(4) we have
> >>>> barriers by chance
> >>>> due to other device register writes then I would suggest to have one
> >>>> in
> >>> the EOI
> >>>> clearing API itself.
> >>>>
> >>>> RFC:
> >>> https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/4454
> >>>>
> >>>> Thanks
> >>>> Sandeep
> >>> --
> >>> TF-A mailing list
> >>> TF-A(a)lists.trustedfirmware.org
> >>> https://lists.trustedfirmware.org/mailman/listinfo/tf-a
> --
> TF-A mailing list
> TF-A(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/tf-a
I think it is clear Madhu has been applying due diligence here, both in his own testing and in notifying the project through the ML, so that consumers can respond and also perform any testing they feel is necessary.
I think it's appropriate for us to pause while we wait for such feedback, but it would be good to have some indication of who feels there might be an issue here and wants to provide feedback.
I appreciate your concerns, Varun, and we need to analyse the risks and what mitigations need to be in place. As you may have picked up from the various tech forums, we take testing seriously and are looking to augment it with different test strategies and tools; however, these will take time to deploy and to spread their coverage through the project codebase.
Until then we will have to analyse the level of risk and apply the appropriate level of mitigations, which I think Madhu has done, but let's see if we have some specific issues needing further mitigations to give more confidence.
Thanks
Joanna
On 19/06/2020, 23:35, "TF-A on behalf of Varun Wadekar via TF-A" <tf-a-bounces(a)lists.trustedfirmware.org on behalf of tf-a(a)lists.trustedfirmware.org> wrote:
Hi,
Thanks for the information.
Adding libfdt to the unit testing framework is a good idea. The verification should give us more confidence on the stability side. Let’s see if platform owners would like to include more tests.
-Varun
-----Original Message-----
From: Madhukar Pappireddy <Madhukar.Pappireddy(a)arm.com>
Sent: Friday, June 19, 2020 1:01 PM
To: Varun Wadekar <vwadekar(a)nvidia.com>
Cc: tf-a(a)lists.trustedfirmware.org
Subject: RE: [TF-A] Upgrading libfdt library
External email: Use caution opening links or attachments
Hi Varun
I tested the patch by running all applicable tf-a-tests suites [1] and Linux boot tests on the FVP and Juno platforms. Since libfdt is platform agnostic, we are planning on adding it to the unit test framework as well (which was described in yesterday's tech forum). I have to admit that I am (over!) confident about libfdt since it is also used in the U-Boot and Linux projects. It seems the release tags don't have much significance in the dtc project; hence, I picked the latest commit at the time.
In our internal CI, we also run Coverity MISRA and other static checks to find issues in TF-A and fix them as needed. However, we don’t fix issues reported in 3rd party libraries such as libfdt. I am waiting to hear feedback from various platform owners if they have any issues with this upgrade.
[1] https://git.trustedfirmware.org/TF-A/tf-a-tests.git/tree/tftf/tests
Thanks,
Madhukar
-----Original Message-----
From: Varun Wadekar <vwadekar(a)nvidia.com>
Sent: Thursday, June 18, 2020 1:30 PM
To: Madhukar Pappireddy <Madhukar.Pappireddy(a)arm.com>
Cc: tf-a(a)lists.trustedfirmware.org
Subject: RE: [TF-A] Upgrading libfdt library
Hi,
I think there are some valid concerns raised in the thread. Since this is an external project I felt we needed to set a criteria for the upgrade. One day, when we start using the unit testing framework and have 100% coverage numbers and solid positive/negative testing, we will be more confident. Until then we just have to live with what we have.
@Madhukar is there any way we can find out all the tests, static analysis checks, and security checks that were run on the upgrade? Just trying to understand how confident we are that this does not introduce any regressions.
-Varun
-----Original Message-----
From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of Raghu K via TF-A
Sent: Thursday, June 18, 2020 9:07 AM
To: tf-a(a)lists.trustedfirmware.org
Subject: Re: [TF-A] Upgrading libfdt library
External email: Use caution opening links or attachments
One thing that might be worth considering is versioning the library, like xlat_tables and xlat_tables_v2, for a few releases, and deprecating the old one if/when there is broad agreement from platforms to move to the new one. Platforms can perform their independent testing, like STM did, and report back over time. Obviously, for those few releases, if there are API changes in libfdt there will have to be support for both the new and old APIs, which will cause temporary ugliness, but this might ease concerns about testing, stability etc.
-Raghu
On 6/18/20 4:12 AM, Soby Mathew via TF-A wrote:
>> -----Original Message-----
>> From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of
>> Sandrine Bailleux via TF-A
>> Sent: 16 June 2020 13:09
>> To: Alexei Fedorov <Alexei.Fedorov(a)arm.com>; Madhukar Pappireddy
>> <Madhukar.Pappireddy(a)arm.com>; Varun Wadekar <vwadekar(a)nvidia.com>
>> Cc: tf-a(a)lists.trustedfirmware.org
>> Subject: Re: [TF-A] Upgrading libfdt library
>>
>> Hi Alexei,
>>
>> On 6/16/20 1:51 PM, Alexei Fedorov wrote:
>>> Thanks for your very detailed explanation of the reason behind this
>>> solution.
>>> This brings the question on how do we monitor libfdt bug fixes, code
>>> cleanup, etc. (which Madhukar pointed out) and integrate these
>>> changes in TF-A source tree.
>> Right now, it is a manual process and not only for libfdt but for all
>> projects TF-A depends on. We keep an eye on them and aim at updating
>> them when necessary. Unfortunately, like any manual process, it is
>> error-prone and things might slip through the cracks. The TF-A
>> release tick is usually a good reminder to update all dependencies
>> but unfortunately libfdt has been left behind for some time (about 2 years)...
> As I recall, when we tried to update libfdt last time, there was significant code bloat in generated binary (feature creep?) and we abandoned the update. The policy was to stick to a stable version and only update if there are any bugfixes or new feature we would want to make use of.
>
> Generally speaking, we might have done several fixes in the imported 3rd party library by running static checks like for MISRA compliance etc and this has to be repeated when the library is updated. Also, for security audits, it is important to be sure of the "security status" of the imported library and hence we tended to not update for every release.
>
> But I agree that we do have to be on top of bug fixes and hence we need to update but not sure how we can balance this with concerns above.
>
> Regarding MbedTLS, being a security critical project, we would like to encourage platform integrators to take ownership of staying upto date rather than relying on update from TF-A repo. Hence the motivation to keep it separate.
>
> Best Regards
> Soby Mathew
>
>
>
>
>> Regards,
>> Sandrine
>> --
>> TF-A mailing list
>> TF-A(a)lists.trustedfirmware.org
>> https://lists.trustedfirmware.org/mailman/listinfo/tf-a
--
TF-A mailing list
TF-A(a)lists.trustedfirmware.org
https://lists.trustedfirmware.org/mailman/listinfo/tf-a
--
TF-A mailing list
TF-A(a)lists.trustedfirmware.org
https://lists.trustedfirmware.org/mailman/listinfo/tf-a