On Thu, 11 Jun 2020 at 23:42, Varun Wadekar via TF-A <
tf-a(a)lists.trustedfirmware.org> wrote:
> Hello Matteo,
>
> Apologies for still using an outdated term. I have trained myself to get
> used to "TF-A" - looks like I am still not there.
>
> >> The idea has also been just raised to the Trusted Firmware project
> Board for initial consideration and we will all be very keen to understand
> how much interest there is from the wider TF-A community of adopters and
> external (non-Arm) maintainers
>
> That is good to hear. For the exact scope, I think we can assume the usual
> expectations from any LTS software stack - stability, performance,
> security, bug fixes along with maintenance support. We are open to
> discussing the cadence and any other operational commitments.
>
> @Francois, from the description of Trusted Substrate it looks like you also
> expect the sub-projects to provide LTS versions for the project as a whole
> to succeed (?)
>
Yes. I assume relevant tf.org projects decide to branch LTSes so that we
can extend the scope to selected OP-TEE TAs for the Trusted Substrate LTS
and maybe extend the duration of support for the tf.org LTSes. (Just to make
sure: this is just early open thinking to understand what it would mean to
build such a service on the Linaro side, should there be tf.org LTSes.)
> -Varun
>
>
> -----Original Message-----
> From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of Matteo
> Carlini via TF-A
> Sent: Thursday, June 11, 2020 4:25 AM
> To: tf-a(a)lists.trustedfirmware.org
> Subject: Re: [TF-A] ATF LTS version
>
>
>
> Hi Francois,
>
> > I'd be happy to know more about what you see as TFA LTS: exact scope,
> number of versions, duration, operational commitments (zero-day...).
> > Do you have other firmware LTS needs?
>
> Agree. That’s precisely what I was hinting to Varun, when mentioning
> concrete requirements for the LTS scheme.
>
> > Trusted Substrate is the aggregation of { TFA, OP-TEE, some TEE apps
> such as firmwareTPM, U-Boot }.
> > Trusted Substrate effort is led by Linaro members and is going to be set
> up as a more open project.
>
> First time I heard about it. Good to know, but I guess we'll need to
> discuss the intersection and collaboration with the Trusted Firmware
> project at some point.
> Having an LTS versioning scheme for the Trusted Firmware hosted projects
> should theoretically be either in the scope of the Project itself or, if
> the Board agrees, delegated to some other project/entity.
>
> > Our end goal is to enable unified, transactional, robust (anti-bricking,
> anti rollback) UEFI OTA on both U-Boot and EDK2.
>
> Fair, but IMHO this has little to do with Arm Secure world software LTS
> releases (TF-A/Hafnium/OP-TEE/TAs, TF-M)... probably best to discuss separately,
> as this is not in the scope of what Varun is raising.
>
> Thanks
> Matteo
>
> --
> TF-A mailing list
> TF-A(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/tf-a
--
François-Frédéric Ozog | *Director Linaro Edge & Fog Computing Group*
T: +33.67221.6485
francois.ozog(a)linaro.org | Skype: ffozog
Hi Sandrine,
If my understanding is right, Code-Owner-Review is the equivalent of the old Code-Review+1, and therefore anyone can add a vote there, owner or not. If that is so, I think the current name might be misleading. Wouldn't it be better to just call it "Code-Review", for instance?
As an example, I am not a code owner for Measured Boot, but sometimes I am asked to review patches for it, so it might lead to confusion if I give a Code-Owner-Review score in that case.
What do you think?
Best regards,
Javier
-----Original Message-----
From: Sandrine Bailleux via TF-A <tf-a(a)lists.trustedfirmware.org>
Reply-To: Sandrine Bailleux <sandrine.bailleux(a)arm.com>
To: tf-a <tf-a(a)lists.trustedfirmware.org>
Subject: [TF-A] Announcing some changes around the code review label in Gerrit
Date: Tue, 30 Jun 2020 13:29:00 +0000
Hello all,
If you recall, the tf.org Project Maintenance Process [1] advocates a
code review model where both code owners and maintainers need to review
and approve a patch before it gets merged.
Until now, the way we would record this in Gerrit was a bit cumbersome
and ambiguous.
* Code-Review+1 was for code owners to approve a patch.
* Code-Review+2 was for maintainers to approve a patch.
The submission rules in Gerrit were such that a patch needed a
Code-Review+2 to be merged. This meant that if a maintainer was first to
take a look at a patch and approved it (Code-Review+2), Gerrit would
allow the patch to be merged without any code owners review.
To align better with the Project Maintenance Process, I've made some
changes in our Gerrit configuration. You will notice that the
Code-Review label is gone and has been replaced by two new labels:
* Code-Owner-Review
* Maintainer-Review
Stating the obvious, but code owners are expected to vote on the
Code-Owner-Review label and maintainers on the Maintainer-Review label.
Note that maintainers might also be code owners for some modules. If
they are doing a "code owner" type of review on a patch, they would vote
on the Code-Owner-Review label in this case.
If in any doubt, please ask. I hope I didn't break anything, but if you notice
any permission problems (things you used to be able to do and cannot do
anymore, or vice versa!) please let me know.
One annoying consequence of this change is that if you've got some
patches in review right now and got some votes on the former
'Code-Review' label, these have been partially lost. The history of
review comments in Gerrit will still show that somebody had voted
Code-Review -1/+1 on your patch but this will no longer appear in the
labels summary frame and won't count towards the approvals required to
get the patch merged.
I could not think of a way to avoid this issue... This should only be
temporary, while we transition all in-flight patches to the new code
review model.
Roughly:
* Existing "Code-Review -1/+1" votes will have to be replayed as
"Code-Owner-Review -1/+1".
* Existing "Code-Review -2/+2" votes will have to be replayed as
"Maintainer-Review -1/+1".
Sorry for the inconvenience. I hope, once we've gone through the
transition period, these changes will make the code review process
clearer to everybody.
I will update the TF-A documentation to explain all of this in the
near future.
Best regards,
Sandrine
[1]
https://developer.trustedfirmware.org/w/collaboration/project-maintenance-p…
Hi all
The new TrustedFirmware.org security incident process is now live. This process is described here:
https://developer.trustedfirmware.org/w/collaboration/security_center/repor…
Initially the process will be used for the following projects: TF-A, TF-M, OP-TEE and Mbed TLS. The security documentation for each project will be updated soon to reflect this change.
If you are part of an organization that believes it should receive security vulnerability information before it is made public then please ask your relevant colleagues to register as Trusted Stakeholders as described here:
https://developer.trustedfirmware.org/w/collaboration/security_center/trust…
Note we prefer individuals in each organization to coordinate their registration requests with each other and to provide us with an email alias managed by their organization, rather than have us manage a long list of individual addresses.
Best regards
Dan.
(on behalf of the TrustedFirmware.org security team)
Hi,
Please find the latest report on new defect(s) introduced to ARM-software/arm-trusted-firmware found with Coverity Scan.
1 new defect(s) introduced to ARM-software/arm-trusted-firmware found with Coverity Scan.
New defect(s) Reported-by: Coverity Scan
Showing 1 of 1 defect(s)
** CID 360213: Null pointer dereferences (NULL_RETURNS)
/plat/arm/common/arm_dyn_cfg.c: 96 in arm_bl1_set_mbedtls_heap()
________________________________________________________________________________________________________
*** CID 360213: Null pointer dereferences (NULL_RETURNS)
/plat/arm/common/arm_dyn_cfg.c: 96 in arm_bl1_set_mbedtls_heap()
90 * In the latter case, if we still wanted to write in the DTB the heap
91 * information, we would need to call plat_get_mbedtls_heap to retrieve
92 * the default heap's address and size.
93 */
94
95 tb_fw_config_info = FCONF_GET_PROPERTY(dyn_cfg, dtb, TB_FW_CONFIG_ID);
>>> CID 360213: Null pointer dereferences (NULL_RETURNS)
>>> Dereferencing "tb_fw_config_info", which is known to be "NULL".
96 tb_fw_cfg_dtb = tb_fw_config_info->config_addr;
97
98 if ((tb_fw_cfg_dtb != 0UL) && (mbedtls_heap_addr != NULL)) {
99 /* As libfdt use void *, we can't avoid this cast */
100 void *dtb = (void *)tb_fw_cfg_dtb;
101
________________________________________________________________________________________________________
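For context, defects like this are usually resolved by guarding the lookup result before it is dereferenced. The fragment below is purely an illustrative sketch of that pattern, assuming the surrounding arm_bl1_set_mbedtls_heap() context and TF-A's assert() facility; it is not necessarily the fix that was actually merged for this CID.
    #include <assert.h>
    /* ... inside arm_bl1_set_mbedtls_heap(), sketch only ... */
    tb_fw_config_info = FCONF_GET_PROPERTY(dyn_cfg, dtb, TB_FW_CONFIG_ID);
    /* Guard against a missing TB_FW_CONFIG entry before dereferencing it. */
    assert(tb_fw_config_info != NULL);
    tb_fw_cfg_dtb = tb_fw_config_info->config_addr;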
To view the defects in Coverity Scan, visit https://u2389337.ct.sendgrid.net/ls/click?upn=nJaKvJSIH-2FPAfmty-2BK5tYpPkl…
Hi All,
The next TF-A Tech Forum is scheduled for Thu 2nd July 2020 16:00 – 17:00 (BST). A recurring meeting invite has been sent out to the subscribers of this TF-A mailing list. If you don’t have this please let me know.
Agenda:
* PSA Fuzz Testing - Presented by Gary Morrison
* Originally developed for the TF-M PSA APIs, this is a different approach to fuzz testing from the one previously presented in the TF-A SMC fuzz testing session a few weeks back, and it provides a testing approach for the different levels of PSA API testing coming to TF-A.
* Optional TF-A Mailing List Topic Discussions
If TF-A contributors have anything they wish to present at any future TF-A tech forum please contact me to have that scheduled.
Previous sessions, both recording and presentation material can be found on the trustedfirmware.org TF-A Technical meeting webpage: https://www.trustedfirmware.org/meetings/tf-a-technical-forum/
A scheduling tracking page is also available to help track sessions suggested and being prepared: https://developer.trustedfirmware.org/w/tf_a/tf-a-tech-forum-scheduling/ Final decisions on what will be presented will be made a few days before the next meeting and shared on the TF-A mailing list.
Thanks
Joanna
My take on this is, perhaps not unexpectedly, fairly aligned with Sandrine's.
We as a project now have over 30 platforms upstreamed, and the expectation is that these are managed by the identified platform owners. We look to them to decide which of any new capabilities offered are supported in their platforms. We have a strategy of not breaking upstreamed platforms except where we have clearly announced deprecation; this is all documented here https://trustedfirmware-a.readthedocs.io/en/latest/process/platform-compati… and, where interfaces have changed and the work is trivial, the submitter is expected to make a good-faith effort to migrate platforms.
To support this, the current CI system hosted by Arm can only verify the builds of most platforms, which is one of the Gerrit approvals needed to merge a patch. Of course, that does not mean the executables produced will necessarily run correctly, which is where platform owners would need to assist with their own approvals and their own testing where possible. With the Arm Juno platform and the various Arm FVP models we do have the capability to run tests, but that misses out all the other upstreamed platforms.
With the OpenCI coming (which we owe a Tech Forum session on, BTW), the Juno and Arm FVP models will also be made available, and platform owners will be able to add their platforms to be tested if they want to invest the effort. Sadly, migrating to the OpenCI will take time and will have to be delivered in phases, but the hope is that this will be useful to platform owners as well as for core changes, given the visibility of the CI results. The various Arm platforms are generally owned and managed by different teams and go through the same processes as other, non-Arm platforms for getting patches upstreamed.
So when new core features are added for new Arm hardware IP support, or new software features change behaviour, as before we still look to the platform owners to decide if they want support for these in their platforms. For some trivial changes that may take the form of reviewing changes already prepared; for others it may mean deciding whether they want to invest the effort to implement the support themselves. For core changes, as well as making patches available via Gerrit for review, we have been making more use of the TF-A mailing list to announce that the patches are available. Indeed, we are trying to use the mailing list while work is still in design to openly discuss decisions that need to be made and that may or may not affect platforms. The Tech Forum is also being used to present and discuss ideas and ongoing work. This is all being done to be more open about upcoming changes. We would welcome discussions on the mailing list and Tech Forum sessions from platform owners on any subjects. This could include coordinated support for new capabilities across platforms versus each platform following its own path, which of course holds challenges around coordinating the effort and deciding whose effort it is.
Hopefully the direction of travel here for the project helps with some of the suggestions, but I'm happy to discuss more here on the mailing list or in a Tech Forum meeting.
Thanks
Joanna
On 25/06/2020, 03:16, "TF-A on behalf of Varun Wadekar via TF-A" <tf-a-bounces(a)lists.trustedfirmware.org on behalf of tf-a(a)lists.trustedfirmware.org> wrote:
Hi,
>> As you said, even a small change might break a platform and unfortunately we're not in a position to detect that with the CI yet.
I agree that this is a limitation today. If you remember, I have been advocating for announcing on the email alias when in doubt. That way we minimize surprises for the platform owners.
>> IMO we cannot reasonably announce all such changes and I would think that contributors are responsible for keeping an eye on patches and testing them if they think they might be affected.
I understand. We should announce on the mailing list, when in doubt. I strongly disagree with the second part of the statement though. Asking platform owners to keep an eye on every change defeats the purpose of upstreaming. The burden must lie on the implementer of a change to try and upgrade consumers.
>> But I think that the same change might be considered as an improvement by some, and a useless/undesirable change by others.
Possible. And then we take a call as a community. If it makes sense for some platforms to move ahead, then that should be OK too.
>> We might be able to reduce the number of build options in some cases but there will always be a need to retain most of them IMO
Thinking out loud - maybe moving to runtime checks makes sense in some cases instead of makefile variables, e.g. CPU errata. We should look at each case individually.
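To illustrate the idea with a purely hypothetical sketch (this is not an existing TF-A mechanism; the part number and revision values are made up, and real errata matching also considers the variant field): the code could decide at runtime whether a workaround applies, based on the MIDR_EL1 value read by the caller, instead of compiling the workaround in or out with a makefile flag.
    #include <stdbool.h>
    #include <stdint.h>
    /* Illustrative MIDR fields: revision in bits [3:0], part number in [15:4]. */
    #define MIDR_PN_SHIFT           4U
    #define MIDR_PN_MASK            0xfffU
    #define MIDR_REV_MASK           0xfU
    /* Made-up example values, not a real CPU erratum. */
    #define EXAMPLE_CPU_PN          0xd07U /* hypothetical affected part */
    #define EXAMPLE_FIRST_FIXED_REV 0x3U   /* hypothetical first fixed revision */
    /* Return true if the hypothetical workaround applies to the CPU 'midr' describes. */
    static bool example_errata_applies(uint64_t midr)
    {
            uint32_t part = (uint32_t)((midr >> MIDR_PN_SHIFT) & MIDR_PN_MASK);
            uint32_t rev = (uint32_t)(midr & MIDR_REV_MASK);
            return (part == EXAMPLE_CPU_PN) && (rev < EXAMPLE_FIRST_FIXED_REV);
    }
A build option would then only be needed where the code size or cycle cost of such a check is genuinely unacceptable, which is the kind of case-by-case judgement suggested above.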
I guess the main problem is the low level of testing, so an implementer won't be confident making a change that affects other platforms. That's when sending an email to the community makes sense: we tackle the situation as a team, instead of one person doing all the work.
-Varun
-----Original Message-----
From: Sandrine Bailleux <sandrine.bailleux(a)arm.com>
Sent: Wednesday, June 24, 2020 4:54 AM
To: Varun Wadekar <vwadekar(a)nvidia.com>
Cc: tf-a(a)lists.trustedfirmware.org
Subject: Re: [TF-A] How should we manage patches affecting multiple users?
Hi Varun,
Thanks for sharing your opinion.
On 6/20/20 1:55 AM, Varun Wadekar wrote:
> Hello @Sandrine Bailleux,
>
> Thanks for starting the email chain. My response to the issues you flagged in the email below.
>
> 1. This seems to be a bit tricky. As a consumer of an API, one would expect that it works as advertised. If the behavior changes, thus affecting the expectations or assumptions surrounding the API, then they should be announced. With so many platforms in the tree, we should always assume that even a small change will break a platform.
I agree. Whenever there is clear intention to change the expectations or assumptions of an API, this should be announced and discussed.
However, there are cases where we might want to rework the implementation of an API (to clean the code for example), make some improvements or extend its functionality. All of that might be done with no intent to step outside of the API original scope but still subtly break some code. As you said, even a small change might break a platform and unfortunately we're not in a position to detect that with the CI yet. IMO we cannot reasonably announce all such changes and I would think that contributors are responsible for keeping an eye on patches and testing them if they think they might be affected.
> 2. For improvements, we should strive to upgrade all consumers of the API/makefile/features. This will ensure that all platforms remain current and we reduce maintenance cost in the long run. I assume that such improvements will give rise to additional configs/makefile settings/platform variables, etc. We would be signing up for a lot of maintenance by allowing some platforms to lag.
>
> For straightforward changes (e.g. the change under discussion) I assume that the platforms will respond and we would get the changes merged relatively fast. For controversial features, this will spark a healthy discussion in the community.
I agree with you in principle. But I think that the same change might be considered as an improvement by some, and a useless/undesirable change by others.
For example, the change under discussion is an improvement from our point of view. It will lower the maintenance cost, as any new file added to the GIC driver in the future will be automatically pulled in by platforms that use the newly introduced GIC makefile, without the need to update all platform makefiles. But this might not be what everyone wants; some platform owners might prefer to continue cherry-picking specific GIC source files to retain control over what they include in their firmware. In this case, is it the right thing to do to "upgrade" them? I think this is debatable.
Maybe a better alternative is to simply make people aware of the change, provide a link to how the migration was done for Arm platforms in this case, so that they've got an example to guide them if they wish to embrace the change for their own platform ports.
> 3. For a case where the change breaks platforms, it makes sense to just upgrade all of them.
>
>>> Now the question we were pondering in Gerrit is, where does the responsibility of migrating all platforms/users lie for all these types of changes? Is the patch author responsible to do all the work?
>
> [VW] IMO, the author of such a patch is the best person to upgrade all affected platforms/users. The main reason being that he/she is already an expert for the patch.
Agree that the patch author is an expert for his own patch but that does not mean he's also an expert on how his patch should be applied for another platform port he's not familiar with.
> But, this might not be true in some cases where the author will need help from the community. For such cases, we should ask for help on the mailing list.
Yes, I think such cases require collaboration indeed.
> I also want to highlight the dangers of introducing make variables as a solution. The current situation is already unmanageable with so many build variables and weak handlers. We should try and avoid adding more.
Not sure how this ties with the original discussions but I agree with you, we've got far too many build options in this project. This makes testing a lot harder, as it means we have hundreds of valid combinations of build options to verify. It also makes the code harder to read and understand because it is sprinkled with #ifdefs.
Hopefully, switching to a KConfig-like configuration system in the future will help alleviate this issue, as at least it will allow us to express the valid combinations of build options in a more robust manner (today the build system tries to forbid invalid combinations but I am sure it's far from exhaustive in these types of checks) and also visualize what's been enabled/disabled more easily for a specific build.
Even though I acknowledge this problem, I am not sure we can completely solve it. Firmware is not general-purpose software and it requires a lot more control over the features you include or not because of stricter memory and CPU constraints. I think people need a way to turn on/off features at a fine granularity. We might be able to reduce the number of build options in some cases but there will always be a need to retain most of them IMO.
> Finally, I agree that there's no silver bullet and we have to deal with each situation differently. Communication is key, as you rightly said. As long we involve the community, we should be OK.
>
> -Varun
>
> -----Original Message-----
> From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of
> Sandrine Bailleux via TF-A
> Sent: Friday, June 19, 2020 2:57 AM
> To: tf-a <tf-a(a)lists.trustedfirmware.org>
> Subject: [TF-A] How should we manage patches affecting multiple users?
>
>
>
> Hello everyone,
>
> I would like to start a discussion around the way we want to handle changes that affect all platforms/users. This subject came up in this code review [1].
>
> I'll present my views on the subject and would like to know what others think.
>
> First of all, what do we mean by "changes that affect all platforms"? It would be any change that is not self-contained within a platform port.
> IOW, a change in some file outside of the plat/ directory. It could be in a driver, in a library, in the build system, in the platform interfaces called by the generic code... the list goes on.
>
> 1. Some are just implementation changes, they leave the interfaces unchanged. These do not break other platforms builds since the call sites don't need to be updated. However, they potentially change the responsibilities of the interface, in which case this might introduce behavioral changes that all users of the interface need to be aware of and adapt their code to.
>
> 2. Some other changes might introduce optional improvements. They introduce a new way of doing things, perhaps a cleaner or more efficient one. Users can safely stay with the old way of doing things without fear of breakage but they would benefit from migrating.
>
> This is the case of Alexei's patch [1], which introduces an intermediate
> GICv2 makefile that Arm platforms now include instead of pulling in each individual file themselves. At this point, it is just an improvement that introduces an indirection level in the build system. This is an improvement in terms of maintainability because if we add a new essential file to this driver in the future, we'll just need to add it to this new GICv2 makefile without having to update all platform makefiles. However, platform makefiles can continue to pull in individual files if they wish to.
>
> 3. Some other changes might break backwards compatibility. This is something we should avoid as much as possible and we should always strive to provide a migration path. But there are cases where there might be no other way than to break things. In this case, all users have to be migrated.
>
>
> Now the question we were pondering in Gerrit is, where does the responsibility of migrating all platforms/users lie for all these types of changes? Is the patch author responsible to do all the work?
>
> I think it's hard to come up with a one-size-fits-all answer. It really depends on the nature of changes, their consequences, the amount of work needed to do the migration, the ability for the patch author to test these changes, to name a few.
>
> I believe the patch author should migrate all users on a best-effort basis. If it's manageable then they should do the work and propose a patch to all affected users for them to review and test on their platforms.
>
> On the other hand, if it's too much work or impractical, then I think the best approach would be for the patch author to announce and discuss the changes on this mailing list. In some scenarios, the changes might be controversial and not all platform owners might agree that the patch brings enough benefit compared to the migration effort, in which case the changes might have to be withdrawn or re-thought. In any case, I believe communication is key here and no one should take any unilateral decision on their own if it affects other users.
>
> I realize this is a vast subject and that we probably won't come up with an answer/reach an agreement on all the aspects of the question. But I'd be interested in hearing other contributors' view and if they've got any experience managing these kinds of issues, perhaps in other open source project.
>
> Best regards,
> Sandrine
>
> [1]
> https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/4538
> --
> TF-A mailing list
> TF-A(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/tf-a
>
--
TF-A mailing list
TF-A(a)lists.trustedfirmware.org
https://lists.trustedfirmware.org/mailman/listinfo/tf-a
Hi,
On 6/25/20 3:24 AM, Raghu K via TF-A wrote:
> One option you can look at is in bl1_fwu.c if you already haven't, in
> bl1_fwu_image_auth() function, that uses the auth_mod_verify_img, which
> is provided an image id, address and size. You can repeatedly call this
> function on the different image IDs that you have in your FIP. You will
> likely need to calculate the address to pass to this function based on
> reading the FIP header (maybe you can use the IO framework for this by
> opening different image IDs).
> Don't think there is a straightforward way but what you are trying to
> do should be achievable using existing code, rearranged to fit your needs.
Yes, Raghu's approach sounds right to me. As he pointed out, this is not
something that's supported out of the box because the firmware update
feature is normally invoked through an SMC into BL1 as part of the
firmware update flow early during the boot. Things are different in your
case as you're looking to provide this feature much later as a runtime
service so this will surely require some amount of tweaking but sounds
feasible to me.
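To make Raghu's suggestion a little more concrete, here is a rough, untested sketch of what such a runtime service could look like, assuming TF-A's runtime services framework (DECLARE_RT_SVC) and the auth_mod_verify_img() interface mentioned above. The SMC function ID, the use of FIP_IMAGE_ID and the return codes are placeholders for illustration only, and a real implementation would also need to validate the address range passed in and make sure BL31 actually includes the authentication module and certificate chain it needs (normally a BL1/BL2 feature, as noted above).
    #include <stdint.h>
    #include <common/runtime_svc.h>
    #include <drivers/auth/auth_mod.h>
    #include <smccc_helpers.h>
    /* Placeholder SiP function ID for "authenticate the FIP at (addr, size)". */
    #define PLAT_SIP_FIP_AUTH       0x82000010U
    static uintptr_t plat_sip_handler(uint32_t smc_fid,
                                      u_register_t x1, u_register_t x2,
                                      u_register_t x3, u_register_t x4,
                                      void *cookie, void *handle,
                                      u_register_t flags)
    {
            switch (smc_fid) {
            case PLAT_SIP_FIP_AUTH:
                    /*
                     * x1 = secure address of the FIP copy, x2 = its size.
                     * The caller-supplied region must be validated before use.
                     */
                    if (auth_mod_verify_img(FIP_IMAGE_ID, (void *)x1,
                                            (unsigned int)x2) != 0) {
                            SMC_RET1(handle, SMC_UNK);
                    }
                    SMC_RET1(handle, 0);
            default:
                    SMC_RET1(handle, SMC_UNK);
            }
    }
    /* Register the handler with the runtime services framework. */
    DECLARE_RT_SVC(
            plat_sip_fip_auth,
            OEN_SIP_START,
            OEN_SIP_END,
            SMC_TYPE_FAST,
            NULL,
            plat_sip_handler
    );
If the platform already declares its own SiP service, the new SMC would instead be added to that existing handler rather than registering a second one for the same OEN range.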
Best regards,
Sandrine
>
> Thanks
> Raghu
>
> On 6/24/20 12:42 PM, arm in-cloud via TF-A wrote:
>> Hi Sandrine,
>>
>> Waiting for your suggestions on this query.
>>
>> Thanks
>>
>> On Mon, Jun 22, 2020 at 3:40 PM arm in-cloud via TF-A
>> <tf-a(a)lists.trustedfirmware.org
>> <mailto:tf-a@lists.trustedfirmware.org>> wrote:
>>
>> <Posting the question to tf-a mailing list from Maniphest and
>> copied all previous conversation>
>>
>> Hi,
>>
>> On our boards I have an external security chip which communicates
>> with the AP (application processor) over an encrypted communication
>> channel. I have a communication driver/PTA running in a Secure
>> Payload (OPTEE) running in S-EL1. My firmware update workflow
>> (FIP.BIN) is as below.
>>
>> In our design, we have QSPI from where the AP boots up (regular TF-A
>> workflow); this QSPI is also accessible from the external security chip
>> (which is responsible for the AP/SoC's firmware update).
>> On the software side, I have a Non-Secure application which receives
>> the prebuilt binary (FIP.BIN), chops it into fixed-size
>> buffers and sends them to an OPTEE-PTA, which internally sends the binary to
>> the security chip (which then validates it and writes it to QSPI).
>>
>> Now in this firmware update flow, I want to do following things.
>>
>> 1. I want to validate/authenticate the FIP image before sending it
>> to the security chip. Only if authentication is successful should
>> the image be sent to the security chip.
>>
>> This is what I am planning:-
>>
>> 1. Receive the whole FIP.BIN from NS-APP and store it in RAM (in
>> OPTEE's RAM)
>> 2. I am planning to implement a SIP service in TF-A which will
>> receive the address and size of FIP.BIN in RAM.
>> 3. Call into the auth_mod driver to authenticate the image.
>>
>> I don't want to update the images immediately; I just need to
>> authenticate the images within FIP.BIN.
>>
>> To design this feature, I started looking into the BL1 & BL2 code but
>> I am not sure how much of that code I can reuse.
>> I also looked into the FWU test apps to see if I could mimic something,
>> but it looks like the FWU implementation is quite different.
>>
>> My question is how do I just use the authentication functionality
>> available in TF-A for my purpose.
>>
>> Appreciate your help!
>>
>>
>>
>>
>> sandrine-bailleux-arm added a subscriber: sandrine-bailleux-arm. Tue, Jun 16, 6:50 AM <https://developer.trustedfirmware.org/T763#9015>
>>
>>
>> Hi,
>>
>> Apologies for the delay, I had missed this ticket... Generally
>> speaking, it's better to post questions on the TF-A mailing list
>> [1], it's got far better visibility than Maniphest (which few
>> people monitor I believe) and you are more likely to get a quick
>> answer there.
>>
>> First of all, I've got a few questions that would help me
>> understand your design.
>>
>> 1. You say that OPTEE-PTA would store the FIP image in its RAM. I
>> am assuming this is secure RAM, that only secure software can
>> access, right? If not, this would not seem very secure to me,
>> as the normal world software could easily change the FIP image
>> after it has been authenticated and replace it with some
>> malicious firmware.
>>
>> 2. You'd like to implement a SiP service in TF-A to authenticate
>> the FIP image, given its base address and size. Now I am
>> guessing this would be an address in OPTEE's RAM, right?
>>
>> 3. What cryptographic key would sign the FIP image? Would TF-A
>> have access to this key to authenticate it?
>>
>> 4. What would need authentication? The FIP image as a monolithic
>> blob, or each individual image within this FIP? And in the
>> latter case, would all images be authenticated using the same
>> key or would different images be signed with different keys?
>>
>> Now, coming back to where to look into TF-A source code. You're
>> looking in the right place, all Trusted Boot code is indeed built
>> into BL1 and BL2 in TF-A. The following two documents detail how
>> the authentication framework and firmware update flow work in TF-A
>> and are worth reading if you haven't done so already:
>>
>> https://trustedfirmware-a.readthedocs.io/en/latest/design/auth-framework.ht…
>> https://trustedfirmware-a.readthedocs.io/en/latest/components/firmware-upda…
>>
>> I reckon you'd want to reuse the cryptographic module in your
>> case. It provides a callback to verify the signature of some
>> payload, see [2] and include/drivers/auth/crypto_mod.h. However, I
>> expect it won't be straightforward to reuse this code outside of
>> its context, as it expects DER-encoded data structures following a
>> specific ASN.1 format. Typically when used in the TF-A trusted
>> boot flow, these are coming from X.509 certificates, which already
>> provide the data structures in the right format.
>>
>> Best regards,
>> Sandrine Bailleux
>>
>> [1] https://lists.trustedfirmware.org/mailman/listinfo/tf-a
>> [2] https://trustedfirmware-a.readthedocs.io/en/latest/design/auth-framework.ht…
>>
>>
>> arm-in-cloud added a comment. Edited · Wed, Jun 17, 11:42 AM <https://developer.trustedfirmware.org/T763#9018>
>>
>>
>> Thank you Sandrine for your feedback.
>>
>> Following are the answers to your questions:
>>
>> 1. Yes, the image will be stored in OPTEE's secure memory. I
>> am guessing TF-A gets access to this memory.
>> 2. Yes, the OPTEE's secure memory address and Size will be passed
>> to the SiP service running in TF-A.
>> 3. These are RSA keys, in my case only TF-A has access to these
>> keys. On a regular boot these are the same keys used to
>> authenticate the images (BL31, BL32 & BL33) in the FIP stored
>> in the QSPI.
>> 4. Yes all images will be authenticated using same key.
>>
>> Thanks for the documentation links, from the firmware-update doc.
>> I am mainly trying to use the IMAGE_AUTH part. In my case COPY is
>> already done; I just need the AUTH part, and once AUTH is successful,
>> OPTEE will pass that payload to the security chip for the update. And
>> after rebooting, my device will boot with the new firmware. (In my case
>> we always reboot after firmware updates).
>>
>> Do you want me to post this query again on the TF-A mailing list?
>>
>> Thank you!
>>
>>
>>
>> sandrine-bailleux-arm added a comment. Fri, Jun 19, 3:09 AM <https://developer.trustedfirmware.org/T763#9032>
>>
>>
>> Hi,
>>
>> OK I understand a bit better, thanks for the details.
>>
>> Do you want me to post this query again on the TF-A mailing list?
>>
>> If it's not too much hassle for you then yes, I think it'd be
>> preferable to continue the discussion on the TF-A mailing list. I
>> will withhold my comments until then.
>>
>>
>>
>> --
>> TF-A mailing list
>> TF-A(a)lists.trustedfirmware.org <mailto:TF-A@lists.trustedfirmware.org>
>> https://lists.trustedfirmware.org/mailman/listinfo/tf-a
>>
>>
>
>
Hello Lauren,
Thanks for answering the open items from the tech forum.
Just curious, do you know if CppUTest can also consider a set of C files as a unit instead of a single function? This is the policy we have adopted internally, but with an automated tool to generate the unit tests.
Cheers,
Varun
From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of Lauren Wehrmeister via TF-A
Sent: Thursday, June 25, 2020 10:52 AM
To: tf-a(a)lists.trustedfirmware.org
Subject: [TF-A] Unit Test tech forum follow-up
Following up about some questions from my presentation at the tech forum on the TF-A unit testing framework.
In regards to the other tool that was investigated alongside CppUTest, it was Google Test<https://github.com/google/googletest>. The main reason for choosing CppUTest was that it is easily portable to new platforms and has a small footprint despite its feature set. This could be very useful if we want to use it on hardware (not on the host machine) in the future, even if it only has a small amount of memory. Google test is very good for testing user space applications, but most of its features require pthread, file system, and other services of the operating system which we usually don't have in an embedded environment. So we chose CppUTest because it fits for both host and target based testing so we won't end up using different frameworks in the same project. Something to note is that CppUTest has an option<https://cpputest.github.io/manual.html#gtest> for running Google Test based test cases too, but the intention is to keep the framework clean by using only CppUTest.
Also c-picker can support parsing large data structures since the tool uses PyYAML for parsing the config files and clang for processing the source files.
Thanks,
Lauren