Hi,
Please find the latest report on new defect(s) introduced to ARM-software/arm-trusted-firmware found with Coverity Scan.
1 new defect(s) introduced to ARM-software/arm-trusted-firmware found with Coverity Scan.
New defect(s) Reported-by: Coverity Scan
Showing 1 of 1 defect(s)
** CID 360213: Null pointer dereferences (NULL_RETURNS)
/plat/arm/common/arm_dyn_cfg.c: 96 in arm_bl1_set_mbedtls_heap()
________________________________________________________________________________________________________
*** CID 360213: Null pointer dereferences (NULL_RETURNS)
/plat/arm/common/arm_dyn_cfg.c: 96 in arm_bl1_set_mbedtls_heap()
90 * In the latter case, if we still wanted to write in the DTB the heap
91 * information, we would need to call plat_get_mbedtls_heap to retrieve
92 * the default heap's address and size.
93 */
94
95 tb_fw_config_info = FCONF_GET_PROPERTY(dyn_cfg, dtb, TB_FW_CONFIG_ID);
>>> CID 360213: Null pointer dereferences (NULL_RETURNS)
>>> Dereferencing "tb_fw_config_info", which is known to be "NULL".
96 tb_fw_cfg_dtb = tb_fw_config_info->config_addr;
97
98 if ((tb_fw_cfg_dtb != 0UL) && (mbedtls_heap_addr != NULL)) {
99 /* As libfdt use void *, we can't avoid this cast */
100 void *dtb = (void *)tb_fw_cfg_dtb;
101
________________________________________________________________________________________________________
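The defect is that the pointer returned by FCONF_GET_PROPERTY() (i.e. by dyn_cfg_dtb_info_getter()) is dereferenced without a NULL check. A minimal sketch of a defensive fix is shown below; this is illustrative only, not necessarily the patch that was eventually merged, and it assumes the function may simply return early when no TB_FW config is registered:

    tb_fw_config_info = FCONF_GET_PROPERTY(dyn_cfg, dtb, TB_FW_CONFIG_ID);
    /* The getter returns NULL when no DTB information is registered for
     * TB_FW_CONFIG_ID, so bail out before dereferencing the pointer. */
    if (tb_fw_config_info == NULL) {
        WARN("Unable to get TB_FW_CONFIG information from FCONF\n");
        return;
    }
    tb_fw_cfg_dtb = tb_fw_config_info->config_addr;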
To view the defects in Coverity Scan, visit: https://u2389337.ct.sendgrid.net/ls/click?upn=nJaKvJSIH-2FPAfmty-2BK5tYpPkl…
Hi All,
The next TF-A Tech Forum is scheduled for Thu 2nd July 2020 16:00 – 17:00 (BST). A recurring meeting invite has been sent out to the subscribers of this TF-A mailing list. If you don’t have this please let me know.
Agenda:
* PSA Fuzz Testing - Presented by Gary Morrison
* Originally developed for the TF-M PSA APIs, this is a different approach to fuzz testing from the one presented in the TF-A SMC fuzz testing session a few weeks back, and it provides a testing approach for the different levels of PSA API testing coming to TF-A.
* Optional TF-A Mailing List Topic Discussions
If TF-A contributors have anything they wish to present at any future TF-A tech forum please contact me to have that scheduled.
Previous sessions, both recording and presentation material can be found on the trustedfirmware.org TF-A Technical meeting webpage: https://www.trustedfirmware.org/meetings/tf-a-technical-forum/
A scheduling tracking page is also available to help track sessions suggested and being prepared: https://developer.trustedfirmware.org/w/tf_a/tf-a-tech-forum-scheduling/ Final decisions on what will be presented will be made a few days before the next meeting and shared on the TF-A mailing list.
Thanks
Joanna
My take on this is perhaps not unexpectedly fairly aligned with Sandrine's.
We as a project now have over 30 platforms upstreamed, and the expectation is that these are managed by the identified platform owners. We look to them to decide which, if any, of the new capabilities offered are supported in their platforms. We have a strategy of not breaking upstreamed platforms except where we have clearly announced deprecation; this is all documented here: https://trustedfirmware-a.readthedocs.io/en/latest/process/platform-compati… Where interfaces have changed and the work is trivial, the submitter is expected to make a good-faith effort to migrate platforms.
To support this, the current CI system hosted by Arm can only verify that most platforms build, which is one of the Gerrit approvals needed to merge a patch. Of course, that does not mean the executables produced will necessarily run correctly, which is where platform owners would need to assist with their own approvals and their own testing where possible. With the Arm Juno platform and the various Arm FVP models we do have the capability to test, but that misses out all the other upstreamed platforms.
With the OpenCI coming (which we owe a Tech Forum session on, by the way), the Juno and Arm FVP models will also be made available, and platform owners will be able to add their platforms to be tested if they want to invest the effort. Sadly, migrating to the OpenCI will take time and will have to be delivered in phases, but the hope is that it will be useful to platform owners as well as for core changes, given the visibility of the CI results. The various Arm platforms are generally owned and managed by different teams and go through the same processes as non-Arm platforms for getting patches upstreamed.
So when new core features are added for new Arm hardware IP support, or new software features change behaviour, we still look to the platform owners to decide if they want support for these in their platforms, as before. For some trivial changes that may take the form of reviewing changes already prepared; for others it may mean deciding whether they want to invest the effort to implement the support themselves. For core changes, as well as making patches available via Gerrit for review, we have been making more use of the TF-A mailing list to announce that patches are available.
Indeed, we are trying to use the mailing list while work is still in design, to openly discuss decisions that need to be made and that may or may not affect platforms. The Tech Forum is also being used to present and discuss ideas and ongoing work. This is all being done to be more open about upcoming changes. We would welcome discussions on the mailing list and tech-forum sessions from platform owners on any subject. This could be around coordinated support across platforms for new capabilities versus each platform following its own path; that of course holds challenges in coordinating the effort, and whose effort it is.
Hopefully the direction of travel for the project helps with some of the suggestions, but I'm happy to discuss more here on the mailing list or in a tech-forum meeting.
Thanks
Joanna
On 25/06/2020, 03:16, "TF-A on behalf of Varun Wadekar via TF-A" <tf-a-bounces(a)lists.trustedfirmware.org on behalf of tf-a(a)lists.trustedfirmware.org> wrote:
Hi,
>> As you said, even a small change might break a platform and unfortunately we're not in a position to detect that with the CI yet.
I agree that this is a limitation today. If you remember, I have been advocating for announcing on the email alias when in doubt. That way we minimize surprises for the platform owners.
>> IMO we cannot reasonably announce all such changes and I would think that contributors are responsible for keeping an eye on patches and testing them if they think they might be affected.
I understand. We should announce on the mailing list, when in doubt. I strongly disagree with the second part of the statement though. Asking platform owners to keep an eye on every change defeats the purpose of upstreaming. The burden must lie on the implementer of a change to try and upgrade consumers.
>> But I think that the same change might be considered as an improvement by some, and a useless/undesirable change by others.
Possible. And then we take a call as a community. If it makes sense for some platforms to move ahead, then that should be OK too.
>> We might be able to reduce the number of build options in some cases but there will always be a need to retain most of them IMO
Thinking out loud - maybe moving to runtime checks instead of makefile variables makes sense in some cases, e.g. for CPU errata. We should look at each case individually.
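To illustrate the idea, here is a minimal sketch of a runtime check keyed off MIDR_EL1 rather than a build-time makefile variable. All names below are hypothetical, not existing TF-A identifiers:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical affected part number and highest affected variant. */
    #define AFFECTED_MIDR_PN    0xD0CU
    #define AFFECTED_MAX_VAR    0x1U

    /* Stand-in for the usual MIDR_EL1 system-register accessor. */
    extern uint64_t read_midr(void);

    /* Decide at runtime whether erratum XYZ applies to this CPU,
     * instead of compiling the workaround in or out at build time. */
    static bool erratum_xyz_applies(void)
    {
        uint64_t midr = read_midr();
        uint32_t partnum = (uint32_t)((midr >> 4) & 0xFFFU);  /* MIDR[15:4] */
        uint32_t variant = (uint32_t)((midr >> 20) & 0xFU);   /* MIDR[23:20] */

        return (partnum == AFFECTED_MIDR_PN) && (variant <= AFFECTED_MAX_VAR);
    }

The workaround is then applied conditionally at runtime, at the cost of carrying the code in every image.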
I guess the main problem is the low level of testing, so an implementer won't be confident making a change that affects other platforms. That's when sending an email to the community makes sense. We tackle the situation as a team, instead of one person doing all the work.
-Varun
-----Original Message-----
From: Sandrine Bailleux <sandrine.bailleux(a)arm.com>
Sent: Wednesday, June 24, 2020 4:54 AM
To: Varun Wadekar <vwadekar(a)nvidia.com>
Cc: tf-a(a)lists.trustedfirmware.org
Subject: Re: [TF-A] How should we manage patches affecting multiple users?
External email: Use caution opening links or attachments
Hi Varun,
Thanks for sharing your opinion.
On 6/20/20 1:55 AM, Varun Wadekar wrote:
> Hello @Sandrine Bailleux,
>
> Thanks for starting the email chain. My response to the issues you flagged in the email below.
>
> 1. This seems to be a bit tricky. As a consumer of an API, one would expect that it works as advertised. If the behavior changes, thus affecting the expectations or assumptions surrounding the API, then it should be announced. With so many platforms in the tree, we should always assume that even a small change will break a platform.
I agree. Whenever there is clear intention to change the expectations or assumptions of an API, this should be announced and discussed.
However, there are cases where we might want to rework the implementation of an API (to clean the code for example), make some improvements or extend its functionality. All of that might be done with no intent to step outside of the API original scope but still subtly break some code. As you said, even a small change might break a platform and unfortunately we're not in a position to detect that with the CI yet. IMO we cannot reasonably announce all such changes and I would think that contributors are responsible for keeping an eye on patches and testing them if they think they might be affected.
> 2. For improvements, we should strive to upgrade all consumers of the API/makefile/features. This will ensure that all platforms remain current and we reduce maintenance cost in the long run. I assume that such improvements will give rise to additional configs/makefile settings/platform variables, etc. We would be signing up for a lot of maintenance by allowing some platforms to lag.
>
> For straightforward changes (e.g. the change under discussion) I assume that the platforms will respond and we would get the changes merged relatively fast. For controversial features, this will spark a healthy discussion in the community.
I agree with you in principle. But I think that the same change might be considered as an improvement by some, and a useless/undesirable change by others.
For example, the change under discussion is an improvement from our point of view. It will lower the maintenance cost, as any new file added to the GIC driver in the future will be automatically pulled in by platforms that use the newly introduced GIC makefile, without the need to update all platform makefiles. But this might not be what everyone wants; some platform owners might prefer to continue cherry-picking specific GIC source files to retain control over what they include in their firmware. In this case, is it the right thing to do to "upgrade" them? I think this is debatable.
Maybe a better alternative is to simply make people aware of the change, provide a link to how the migration was done for Arm platforms in this case, so that they've got an example to guide them if they wish to embrace the change for their own platform ports.
> 3. For a case where the change breaks platforms, it makes sense to just upgrade all of them.
>
>>> Now the question we were pondering in Gerrit is, where does the responsibility of migrating all platforms/users lie for all these types of changes? Is the patch author responsible to do all the work?
>
> [VW] IMO, the author of such a patch is the best person to upgrade all affected platforms/users. The main reason being that he/she is already an expert for the patch.
Agree that the patch author is an expert on his own patch, but that does not mean he's also an expert on how his patch should be applied to another platform port he's not familiar with.
> But, this might not be true in some cases where the author will need help from the community. For such cases, we should ask for help on the mailing list.
Yes, I think such cases require collaboration indeed.
> I also want to highlight the dangers of introducing make variables as a solution. The current situation is already unmanageable with so many build variables and weak handlers. We should try and avoid adding more.
Not sure how this ties in with the original discussion, but I agree with you: we've got far too many build options in this project. This makes testing a lot harder, as it means we have hundreds of valid combinations of build options to verify. It also makes the code harder to read and understand because it is sprinkled with #ifdefs.
Hopefully, switching to a KConfig-like configuration system in the future will help alleviate this issue, as at least it will allow us to express the valid combinations of build options in a more robust manner (today the build system tries to forbid invalid combinations but I am sure it's far from exhaustive in these types of checks) and also visualize what's been enabled/disabled more easily for a specific build.
Even though I acknowledge this problem, I am not sure we can completely solve it. Firmware is not general-purpose software and it requires a lot more control over the features you include or not because of stricter memory and CPU constraints. I think people need a way to turn on/off features at a fine granularity. We might be able to reduce the number of build options in some cases but there will always be a need to retain most of them IMO.
> Finally, I agree that there's no silver bullet and we have to deal with each situation differently. Communication is key, as you rightly said. As long as we involve the community, we should be OK.
>
> -Varun
>
> -----Original Message-----
> From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of
> Sandrine Bailleux via TF-A
> Sent: Friday, June 19, 2020 2:57 AM
> To: tf-a <tf-a(a)lists.trustedfirmware.org>
> Subject: [TF-A] How should we manage patches affecting multiple users?
>
> External email: Use caution opening links or attachments
>
>
> Hello everyone,
>
> I would like to start a discussion around the way we want to handle changes that affect all platforms/users. This subject came up in this code review [1].
>
> I'll present my views on the subject and would like to know what others think.
>
> First of all, what do we mean by "changes that affect all platforms"? It would be any change that is not self-contained within a platform port.
> IOW, a change in some file outside of the plat/ directory. It could be in a driver, in a library, in the build system, in the platform interfaces called by the generic code... the list goes on.
>
> 1. Some are just implementation changes, they leave the interfaces unchanged. These do not break other platforms builds since the call sites don't need to be updated. However, they potentially change the responsibilities of the interface, in which case this might introduce behavioral changes that all users of the interface need to be aware of and adapt their code to.
>
> 2. Some other changes might introduce optional improvements. They introduce a new way of doing things, perhaps a cleaner or more efficient one. Users can safely stay with the old way of doing things without fear of breakage but they would benefit from migrating.
>
> This is the case of Alexei's patch [1], which introduces an intermediate
> GICv2 makefile that Arm platforms now include instead of pulling each individual file by themselves. At this point, it is just an improvement that introduces an indirection level in the build system. This is an improvement in terms of maintainability because if we add a new essential file to this driver in the future, we'll just need to add it in this new GICv2 makefile without having to update all platform makefiles. However, platform makefiles can continue to pull in individual files if they wish to.
>
> 3. Some other changes might break backwards compatibility. This is something we should avoid as much as possible and we should always strive to provide a migration path. But there are cases where there might be no other way than to break things. In this case, all users have to be migrated.
>
>
> Now the question we were pondering in Gerrit is, where does the responsibility of migrating all platforms/users lie for all these types of changes? Is the patch author responsible to do all the work?
>
> I think it's hard to come up with a one-size-fits-all answer. It really depends on the nature of changes, their consequences, the amount of work needed to do the migration, the ability for the patch author to test these changes, to name a few.
>
> I believe the patch author should migrate all users on a best-effort basis. If it's manageable then they should do the work and propose a patch to all affected users for them to review and test on their platforms.
>
> On the other hand, if it's too much work or impractical, then I think the best approach would be for the patch author to announce and discuss the changes on this mailing list. In some scenarios, the changes might be controversial and not all platform owners might agree that the patch brings enough benefit compared to the migration effort, in which case the changes might have to be withdrawn or re-thought. In any case, I believe communication is key here and no one should take any unilateral decision on their own if it affects other users.
>
> I realize this is a vast subject and that we probably won't come up with an answer/reach an agreement on all the aspects of the question. But I'd be interested in hearing other contributors' view and if they've got any experience managing these kinds of issues, perhaps in other open source project.
>
> Best regards,
> Sandrine
>
> [1]
> https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/4538
> --
> TF-A mailing list
> TF-A(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/tf-a
>
--
TF-A mailing list
TF-A(a)lists.trustedfirmware.org
https://lists.trustedfirmware.org/mailman/listinfo/tf-a
Hi,
On 6/25/20 3:24 AM, Raghu K via TF-A wrote:
> One option you can look at, if you haven't already, is bl1_fwu.c: the
> bl1_fwu_image_auth() function uses auth_mod_verify_img(), which is
> given an image ID, address and size. You can repeatedly call this
> function on the different image IDs that you have in your FIP. You will
> likely need to calculate the address to pass to this function based on
> reading the FIP header (maybe you can use the IO framework for this by
> opening the different image IDs).
> I don't think there is a straightforward way, but what you are trying
> to do should be achievable using existing code, rearranged to fit your needs.
Yes, Raghu's approach sounds right to me. As he pointed out, this is not
something that's supported out of the box because the firmware update
feature is normally invoked through an SMC into BL1 as part of the
firmware update flow early during the boot. Things are different in your
case as you're looking to provide this feature much later as a runtime
service so this will surely require some amount of tweaking but sounds
feasible to me.
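For reference, the core of such a SiP handler could look roughly like the sketch below, built on the existing auth_mod_verify_img() entry point. The image-ID list and fip_image_lookup() are placeholders rather than existing TF-A code; a real implementation would resolve each image's address and size from the FIP header, e.g. via the IO framework as Raghu suggests:

    #include <stddef.h>
    #include <stdint.h>

    #include <drivers/auth/auth_mod.h>
    #include <lib/utils_def.h>

    /* Placeholder: image IDs this platform expects inside the FIP. */
    static const unsigned int fip_img_ids[] = {
        BL31_IMAGE_ID, BL32_IMAGE_ID, BL33_IMAGE_ID,
    };

    /* Placeholder for the FIP ToC-parsing step. */
    int fip_image_lookup(uintptr_t fip_base, size_t fip_size,
                         unsigned int img_id, uintptr_t *img_base,
                         unsigned int *img_size);

    /* Authenticate every image contained in a FIP staged at fip_base. */
    static int sip_authenticate_fip(uintptr_t fip_base, size_t fip_size)
    {
        for (size_t i = 0U; i < ARRAY_SIZE(fip_img_ids); i++) {
            uintptr_t img_base;
            unsigned int img_size;

            if (fip_image_lookup(fip_base, fip_size, fip_img_ids[i],
                                 &img_base, &img_size) != 0) {
                return -1;
            }

            /* Existing trusted-boot entry point, as used by BL1/BL2. */
            if (auth_mod_verify_img(fip_img_ids[i], (void *)img_base,
                                    img_size) != 0) {
                return -1;
            }
        }

        return 0;
    }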
Best regards,
Sandrine
>
> Thanks
> Raghu
>
> On 6/24/20 12:42 PM, arm in-cloud via TF-A wrote:
>> Hi Sandrine,
>>
>> Waiting for your suggestions on this query.
>>
>> Thanks
>>
>> On Mon, Jun 22, 2020 at 3:40 PM arm in-cloud via TF-A
>> <tf-a(a)lists.trustedfirmware.org> wrote:
>>
>> <Posting the question to tf-a mailing list from Maniphest and
>> copied all previous conversation>
>>
>> Hi,
>>
>> On our boards I have an external security chip which communicates
>> with the AP (application processor) over an encrypted communication
>> channel. I have a communication driver/PTA running in a Secure
>> Payload (OPTEE) running in S-EL1. My firmware update workflow
>> (FIP.BIN) is as below.
>>
>> In our design, we have a QSPI from which the AP boots (regular TF-A
>> workflow); this QSPI is also accessible from the external security
>> chip (which is responsible for the AP/SoC's firmware update).
>> On the software side, I have a Non-Secure application which receives
>> the prebuilt binary (FIP.BIN), chops it into fixed-size buffers and
>> sends them to an OPTEE PTA, which internally sends the binary to the
>> security chip (which then validates and writes to QSPI).
>>
>> Now in this firmware update flow, I want to do following things.
>>
>> 1. I want to validate/authenticate the FIP image before sending
>> it to the security chip. Only if authentication is successful is
>> the image sent to the security chip.
>>
>> This is what I am planning:-
>>
>> 1. Receive the whole FIP.BIN from NS-APP and store it in RAM (in
>> OPTEE's RAM)
>> 2. I am planning to implement a SIP service in TF-A which will
>> receive the address and size of FIP.BIN in RAM.
>> 3. Call into the auth_mod driver to authenticate the image.
>>
>> I don't want to update the images immediately; I just need to
>> authenticate the images within FIP.BIN.
>>
>> To design this feature, I started looking into the BL1 & BL2 code
>> but I'm not sure how much of the code I can reuse.
>> I also looked into the fwu test apps to see if I could mimic
>> something, but it looks like the fwu implementation is quite different.
>>
>> My question is how do I just use the authentication functionality
>> available in TF-A for my purpose.
>>
>> Appreciate your help!
>>
>>
>>
>>
>> sandrine-bailleux-arm
>> <https://developer.trustedfirmware.org/p/sandrine-bailleux-arm/> added
>> a subscriber: sandrine-bailleux-arm
>> <https://developer.trustedfirmware.org/p/sandrine-bailleux-arm/>.Tue,
>> Jun 16, 6:50 AM <https://developer.trustedfirmware.org/T763#9015>
>>
>>
>> Hi,
>>
>> Apologies for the delay, I had missed this ticket... Generally
>> speaking, it's better to post questions on the TF-A mailing list
>> [1], it's got far better visibility than Maniphest (which few
>> people monitor I believe) and you are more likely to get a quick
>> answer there.
>>
>> First of all, I've got a few questions that would help me
>> understand your design.
>>
>> 1. You say that OPTEE-PTA would store the FIP image in its RAM. I
>> am assuming this is secure RAM, that only secure software can
>> access, right? If not, this would not seem very secure to me,
>> as the normal world software could easily change the FIP image
>> after it has been authenticated and replace it with some
>> malicious firmware.
>>
>> 2. You'd like to implement a SiP service in TF-A to authenticate
>> the FIP image, given its base address and size. Now I am
>> guessing this would be an address in OPTEE's RAM, right?
>>
>> 3. What cryptographic key would sign the FIP image? Would TF-A
>> have access to this key to authenticate it?
>>
>> 4. What would need authentication? The FIP image as a monolithic
>> blob, or each individual image within this FIP? And in the
>> latter case, would all images be authenticated using the same
>> key or would different images be signed with different keys?
>>
>> Now, coming back to where to look into TF-A source code. You're
>> looking in the right place, all Trusted Boot code is indeed built
>> into BL1 and BL2 in TF-A. The following two documents detail how
>> the authentication framework and firmware update flow work in TF-A
>> and are worth reading if you haven't done so already:
>>
>> https://trustedfirmware-a.readthedocs.io/en/latest/design/auth-framework.ht…
>> https://trustedfirmware-a.readthedocs.io/en/latest/components/firmware-upda…
>>
>> I reckon you'd want to reuse the cryptographic module in your
>> case. It provides a callback to verify the signature of some
>> payload, see [2] and include/drivers/auth/crypto_mod.h. However, I
>> expect it won't be straight forward to reuse this code outside of
>> its context, as it expects DER-encoded data structures following a
>> specific ASN.1 format. Typically when used in the TF-A trusted
>> boot flow, these are coming from X.509 certificates, which already
>> provide the data structures in the right format.
>>
>> Best regards,
>> Sandrine Bailleux
>>
>> [1] https://lists.trustedfirmware.org/mailman/listinfo/tf-a
>> [2] https://trustedfirmware-a.readthedocs.io/en/latest/design/auth-framework.ht…
>>
>>
>> arm-in-cloud
>> <https://developer.trustedfirmware.org/p/arm-in-cloud/> added a
>> comment.Edited · Wed, Jun 17, 11:42 AM
>> <https://developer.trustedfirmware.org/T763#9018>
>>
>>
>> Thank you Sandrine for your feedback.
>>
>> Following are the answers to your questions:
>>
>> 1. Yes, the image will be stored in OPTEE's secure memory. I
>> am assuming TF-A gets access to this memory.
>> 2. Yes, the OPTEE secure memory address and size will be passed
>> to the SiP service running in TF-A.
>> 3. These are RSA keys, in my case only TF-A has access to these
>> keys. On a regular boot these are the same keys used to
>> authenticate the images (BL31, BL32 & BL33) in the FIP stored
>> in the QSPI.
>> 4. Yes, all images will be authenticated using the same key.
>>
>> Thanks for the documentation links. From the firmware-update doc,
>> I am mainly trying to use the IMAGE_AUTH part. In my case COPY is
>> already done; I just need the AUTH part, and once AUTH is
>> successful, OPTEE will pass that payload to the security chip for
>> update. After rebooting, my device will boot with the new firmware.
>> (In my case we always reboot after firmware updates.)
>>
>> Do you want me to post this query again on the TF-A mailing list?
>>
>> Thank you!
>>
>>
>>
>> sandrine-bailleux-arm
>> <https://developer.trustedfirmware.org/p/sandrine-bailleux-arm/> added
>> a comment.Fri, Jun 19, 3:09 AM
>> <https://developer.trustedfirmware.org/T763#9032>
>>
>>
>> Hi,
>>
>> OK I understand a bit better, thanks for the details.
>>
>> Do you want me to post this query again on the TF-A mailing list?
>>
>> If it's not too much hassle for you then yes, I think it'd be
>> preferable to continue the discussion on the TF-A mailing list. I
>> will withhold my comments until then.
>>
>>
>>
>> --
>> TF-A mailing list
>> TF-A(a)lists.trustedfirmware.org
>> https://lists.trustedfirmware.org/mailman/listinfo/tf-a
>>
>>
>
>
Hello Lauren,
Thanks for answering the open items from the tech forum.
Just curious, do you know if CppUTest can also consider a set of C files as a unit instead of a single function? This is the policy we have adopted internally, but with an automated tool to generate the unit tests.
Cheers,
Varun
From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of Lauren Wehrmeister via TF-A
Sent: Thursday, June 25, 2020 10:52 AM
To: tf-a(a)lists.trustedfirmware.org
Subject: [TF-A] Unit Test tech forum follow-up
External email: Use caution opening links or attachments
Following up about some questions from my presentation at the tech forum on the TF-A unit testing framework.
In regards to the other tool that was investigated alongside CppUTest, it was Google Test <https://github.com/google/googletest>. The main reason for choosing CppUTest was that it is easily portable to new platforms and has a small footprint despite its feature set. This could be very useful if we want to use it on hardware (not on the host machine) in the future, even if it only has a small amount of memory. Google Test is very good for testing user-space applications, but most of its features require pthreads, a file system, and other operating-system services which we usually don't have in an embedded environment. So we chose CppUTest because it fits both host- and target-based testing, so we won't end up using different frameworks in the same project. Something to note is that CppUTest has an option <https://cpputest.github.io/manual.html#gtest> for running Google Test based test cases too, but the intention is to keep the framework clean by using only CppUTest.
Also, c-picker can support parsing large data structures, since the tool uses PyYAML for parsing the config files and clang for processing the source files.
Thanks,
Lauren
Hello @Sandrine Bailleux,
Thanks for starting the email chain. My response to the issues you flagged in the email below.
1. This seems to be a bit tricky. As a consumer of an API, one would expect that it works as advertised. If the behavior changes, thus affecting the expectations or assumptions surrounding the API, then it should be announced. With so many platforms in the tree, we should always assume that even a small change will break a platform.
2. For improvements, we should strive to upgrade all consumers of the API/makefile/features. This will ensure that all platforms remain current and we reduce maintenance cost in the long run. I assume that such improvements will give rise to additional configs/makefile settings/platform variables, etc. We would be signing up for a lot of maintenance by allowing some platforms to lag.
For straightforward changes (e.g. the change under discussion) I assume that the platforms will respond and we would get the changes merged relatively fast. For controversial features, this will spark a healthy discussion in the community.
3. For a case where the change breaks platforms, it makes sense to just upgrade all of them.
>> Now the question we were pondering in Gerrit is, where does the responsibility of migrating all platforms/users lie for all these types of changes? Is the patch author responsible to do all the work?
[VW] IMO, the author of such a patch is the best person to upgrade all affected platforms/users. The main reason being that he/she is already an expert for the patch. But, this might not be true in some cases where the author will need help from the community. For such cases, we should ask for help on the mailing list.
I also want to highlight the dangers of introducing make variables as a solution. The current situation is already unmanageable with so many build variables and weak handlers. We should try and avoid adding more.
Finally, I agree that there's no silver bullet and we have to deal with each situation differently. Communication is key, as you rightly said. As long as we involve the community, we should be OK.
-Varun
-----Original Message-----
From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of Sandrine Bailleux via TF-A
Sent: Friday, June 19, 2020 2:57 AM
To: tf-a <tf-a(a)lists.trustedfirmware.org>
Subject: [TF-A] How should we manage patches affecting multiple users?
External email: Use caution opening links or attachments
Hello everyone,
I would like to start a discussion around the way we want to handle changes that affect all platforms/users. This subject came up in this code review [1].
I'll present my views on the subject and would like to know what others think.
First of all, what do we mean by "changes that affect all platforms"? It would be any change that is not self-contained within a platform port.
IOW, a change in some file outside of the plat/ directory. It could be in a driver, in a library, in the build system, in the platform interfaces called by the generic code... the list goes on.
1. Some are just implementation changes, they leave the interfaces unchanged. These do not break other platforms builds since the call sites don't need to be updated. However, they potentially change the responsibilities of the interface, in which case this might introduce behavioral changes that all users of the interface need to be aware of and adapt their code to.
2. Some other changes might introduce optional improvements. They introduce a new way of doing things, perhaps a cleaner or more efficient one. Users can safely stay with the old way of doing things without fear of breakage but they would benefit from migrating.
This is the case of Alexei's patch [1], which introduces an intermediate GICv2 makefile that Arm platforms now include instead of pulling each individual file by themselves. At this point, it is just an improvement that introduces an indirection level in the build system. This is an improvement in terms of maintainability because if we add a new essential file to this driver in the future, we'll just need to add it in this new GICv2 makefile without having to update all platform makefiles. However, platform makefiles can continue to pull in individual files if they wish to.
3. Some other changes might break backwards compatibility. This is something we should avoid as much as possible and we should always strive to provide a migration path. But there are cases where there might be no other way than to break things. In this case, all users have to be migrated.
Now the question we were pondering in Gerrit is, where does the responsibility of migrating all platforms/users lie for all these types of changes? Is the patch author responsible to do all the work?
I think it's hard to come up with a one-size-fits-all answer. It really depends on the nature of changes, their consequences, the amount of work needed to do the migration, the ability for the patch author to test these changes, to name a few.
I believe the patch author should migrate all users on a best-effort basis. If it's manageable then they should do the work and propose a patch to all affected users for them to review and test on their platforms.
On the other hand, if it's too much work or impractical, then I think the best approach would be for the patch author to announce and discuss the changes on this mailing list. In some scenarios, the changes might be controversial and not all platform owners might agree that the patch brings enough benefit compared to the migration effort, in which case the changes might have to be withdrawn or re-thought. In any case, I believe communication is key here and no one should take any unilateral decision on their own if it affects other users.
I realize this is a vast subject and that we probably won't come up with an answer/reach an agreement on all the aspects of the question. But I'd be interested in hearing other contributors' view and if they've got any experience managing these kinds of issues, perhaps in other open source project.
Best regards,
Sandrine
[1] https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/4538
--
TF-A mailing list
TF-A(a)lists.trustedfirmware.org
https://lists.trustedfirmware.org/mailman/listinfo/tf-a
Hi Sandrine,
Waiting for your suggestions on this query.
Thanks
On Mon, Jun 22, 2020 at 3:40 PM arm in-cloud via TF-A <
tf-a(a)lists.trustedfirmware.org> wrote:
> <Posting the question to tf-a mailing list from Maniphest and copied all
> previous conversation>
>
> Hi,
>
> On our boards I have an external security chip which communicates with the
> AP (application processor) over an encrypted communication channel. I have
> a communication driver/PTA running in a Secure Payload (OPTEE) running in
> S-EL1. My firmware update workflow (FIP.BIN) is as below.
>
> In our design, we have a QSPI from which the AP boots (regular TF-A
> workflow); this QSPI is also accessible from the external security chip
> (which is responsible for the AP/SoC's firmware update).
> On the software side, I have a Non-Secure application which receives the
> prebuilt binary (FIP.BIN), chops it into fixed-size buffers and sends them
> to an OPTEE PTA, which internally sends the binary to the security chip
> (which then validates and writes to QSPI).
>
> Now in this firmware update flow, I want to do following things.
>
> 1. I want to validate/authenticate the FIP image before sending it to the
> security chip. Only if authentication is successful is the image sent
> to the security chip.
>
> This is what I am planning:-
>
> 1. Receive the whole FIP.BIN from NS-APP and store it in RAM (in
> OPTEE's RAM)
> 2. I am planning to implement a SIP service in TF-A which will receive
> the address and size of FIP.BIN in RAM.
> 3. Call into the auth_mod driver to authenticate the image.
>
> I don't want to update the images immediately; I just need to authenticate
> the images within FIP.BIN.
>
> To design this feature, I started looking into the BL1 & BL2 code but I'm
> not sure how much of the code I can reuse.
> I also looked into the fwu test apps to see if I could mimic something,
> but it looks like the fwu implementation is quite different.
>
> My question is how do I just use the authentication functionality
> available in TF-A for my purpose.
>
> Appreciate your help!
>
>
>
>
> sandrine-bailleux-arm
> <https://developer.trustedfirmware.org/p/sandrine-bailleux-arm/> added a
> subscriber: sandrine-bailleux-arm
> <https://developer.trustedfirmware.org/p/sandrine-bailleux-arm/>.Tue, Jun
> 16, 6:50 AM <https://developer.trustedfirmware.org/T763#9015>
>
>
> Hi,
>
> Apologies for the delay, I had missed this ticket... Generally speaking,
> it's better to post questions on the TF-A mailing list [1], it's got far
> better visibility than Maniphest (which few people monitor I believe) and
> you are more likely to get a quick answer there.
>
> First of all, I've got a few questions that would help me understand your
> design.
>
> 1. You say that OPTEE-PTA would store the FIP image in its RAM. I am
> assuming this is secure RAM, that only secure software can access, right?
> If not, this would not seem very secure to me, as the normal world software
> could easily change the FIP image after it has been authenticated and
> replace it with some malicious firmware.
>
>
> 2. You'd like to implement a SiP service in TF-A to authenticate the
> FIP image, given its base address and size. Now I am guessing this would be
> an address in OPTEE's RAM, right?
>
>
> 3. What cryptographic key would sign the FIP image? Would TF-A have
> access to this key to authenticate it?
>
>
> 4. What would need authentication? The FIP image as a monolithic blob,
> or each individual image within this FIP? And in the latter case, would all
> images be authenticated using the same key or would different images be
> signed with different keys?
>
> Now, coming back to where to look into TF-A source code. You're looking in
> the right place, all Trusted Boot code is indeed built into BL1 and BL2 in
> TF-A. The following two documents detail how the authentication framework
> and firmware update flow work in TF-A and are worth reading if you haven't
> done so already:
>
>
> https://trustedfirmware-a.readthedocs.io/en/latest/design/auth-framework.ht…
>
> https://trustedfirmware-a.readthedocs.io/en/latest/components/firmware-upda…
>
> I reckon you'd want to reuse the cryptographic module in your case. It
> provides a callback to verify the signature of some payload, see [2] and
> include/drivers/auth/crypto_mod.h. However, I expect it won't be
> straightforward to reuse this code outside of its context, as it expects
> DER-encoded data structures following a specific ASN.1 format. Typically
> when used in the TF-A trusted boot flow, these are coming from X.509
> certificates, which already provide the data structures in the right format.
>
> Best regards,
> Sandrine Bailleux
>
> [1] https://lists.trustedfirmware.org/mailman/listinfo/tf-a
> [2]
> https://trustedfirmware-a.readthedocs.io/en/latest/design/auth-framework.ht…
>
>
> arm-in-cloud <https://developer.trustedfirmware.org/p/arm-in-cloud/> added
> a comment.Edited · Wed, Jun 17, 11:42 AM
> <https://developer.trustedfirmware.org/T763#9018>
>
>
> Thank you Sandrine for your feedback.
>
> Following are the answers to your questions:
>
> 1. Yes, the image will be stored in OPTEE's secure memory. I am
> assuming TF-A gets access to this memory.
> 2. Yes, the OPTEE secure memory address and size will be passed to
> the SiP service running in TF-A.
> 3. These are RSA keys, in my case only TF-A has access to these keys.
> On a regular boot these are the same keys used to authenticate the images
> (BL31, BL32 & BL33) in the FIP stored in the QSPI.
> 4. Yes, all images will be authenticated using the same key.
>
> Thanks for the documentation links. From the firmware-update doc, I am
> mainly trying to use the IMAGE_AUTH part. In my case COPY is already done;
> I just need the AUTH part, and once AUTH is successful, OPTEE will pass
> that payload to the security chip for update. After rebooting, my device
> will boot with the new firmware. (In my case we always reboot after firmware updates.)
>
> Do you want me to post this query again on the TF-A mailing list?
>
> Thank you!
>
>
>
> sandrine-bailleux-arm
> <https://developer.trustedfirmware.org/p/sandrine-bailleux-arm/> added a
> comment.Fri, Jun 19, 3:09 AM
> <https://developer.trustedfirmware.org/T763#9032>
>
>
> Hi,
>
> OK I understand a bit better, thanks for the details.
>
> Do you want me to post this query again on the TF-A mailing list?
>
> If it's not too much hassle for you then yes, I think it'd be preferable
> to continue the discussion on the TF-A mailing list. I will withhold my
> comments until then.
>
>
>
> --
> TF-A mailing list
> TF-A(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/tf-a
>
Sorry for the delay.
I didn't have the bandwidth to do it earlier.
I asked my teammate Marcin Salnik to test the fix https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/3493 on our Veloce emulator.
Now, I can confirm that this fix works. ACC BL31 booted.
From: Varun Wadekar <vwadekar(a)nvidia.com>
Sent: Monday, June 01, 2020 8:57 PM
To: Witkowski, Andrzej <andrzej.witkowski(a)intel.com>
Cc: Lessnau, Adam <adam.lessnau(a)intel.com>; tf-a(a)lists.trustedfirmware.org
Subject: RE: Unhandled Exception at EL3 in BL31 for Neoverse N1 with direct connect of DSU
Hi,
I believe the fix for this issue is part of https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/3493. Can you please try that change?
-Varun
From: TF-A <tf-a-bounces(a)lists.trustedfirmware.org> On Behalf Of Witkowski, Andrzej via TF-A
Sent: Monday, June 1, 2020 7:08 AM
To: tf-a(a)lists.trustedfirmware.org
Cc: Witkowski, Andrzej <andrzej.witkowski(a)intel.com>; Lessnau, Adam <adam.lessnau(a)intel.com>
Subject: [TF-A] Unhandled Exception at EL3 in BL31 for Neoverse N1 with direct connect of DSU
External email: Use caution opening links or attachments
Hi,
I work on project which uses ARM CPU Neoverse N1 with direct connect of DSU.
I noticed the following error, "Unhandled Exception at EL3", in the BL31 bootloader after switching from ATF 2.1 to 2.2.
The root cause is that, after invoking the neoverse_n1_errata_report function in lib/cpus/aarch64/neoverse_n1.S and reaching line 525, the check_errata_dsu_936184 function in lib/cpus/aarch64/dsu_helpers.S is always invoked, regardless of whether ERRATA_DSU_936184 is set to 0 or to 1.
In our case, ERRATA_DSU_936184 := 0.
The Neoverse N1 with the direct-connect version of the DSU doesn't contain the SCU/L3 cache or the CLUSTERCFR_EL1 register.
The "Unhandled Exception at EL3" error appears in the BL31 bootloader during an attempt to read the CLUSTERCFR_EL1 register, which doesn't exist in our RTL HW.
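In C pseudocode, the behaviour we would expect is roughly the following; the real code is AArch64 assembly in lib/cpus/aarch64/dsu_helpers.S, and the names below (read_clustercfr_el1(), CLUSTERCFR_SCU_BIT) are stand-ins for illustration, not actual TF-A identifiers:

    #include <stdbool.h>
    #include <stdint.h>

    /* Stand-in declarations for illustration only. */
    #define CLUSTERCFR_SCU_BIT   (1ULL << 2)
    extern uint64_t read_clustercfr_el1(void);

    static bool dsu_936184_check(void)
    {
    #if ERRATA_DSU_936184
        /* A platform enabling the workaround has the DSU variant with an
         * SCU/L3, so CLUSTERCFR_EL1 exists and can be read safely. */
        return (read_clustercfr_el1() & CLUSTERCFR_SCU_BIT) != 0ULL;
    #else
        /* Workaround compiled out: never touch CLUSTERCFR_EL1, which is
         * absent on direct-connect DSU configurations like ours. */
        return false;
    #endif
    }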
Andrzej Witkowski
Intel Technology Poland
---------------------------------------------------------------------
Intel Technology Poland sp. z o.o.
ul. Słowackiego 173 | 80-298 Gdańsk | Sąd Rejonowy Gdańsk Północ | VII Wydział Gospodarczy Krajowego Rejestru Sądowego - KRS 101882 | NIP 957-07-52-316 | Kapitał zakładowy 200.000 PLN.
Ta wiadomość wraz z załącznikami jest przeznaczona dla określonego adresata i może zawierać informacje poufne. W razie przypadkowego otrzymania tej wiadomości, prosimy o powiadomienie nadawcy oraz trwałe jej usunięcie; jakiekolwiek przeglądanie lub rozpowszechnianie jest zabronione.
This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). If you are not the intended recipient, please contact the sender and delete all copies; any review or distribution by others is strictly prohibited.
---------------------------------------------------------------------