Hi Raghu,
Thanks for reporting.
This part of the test infrastructure (testing the SPMC) is still very fresh and needs further iterations of improvement, so please bear with us. This is also a reason it's not yet part of the automated non-regression testing with Jenkins (as opposed to the legacy kokoro/test.sh). For the time being we still mostly rely on the TF-A CI for testing on the secure side.
IIUC this change was made to reduce test time, as the FVP takes long to reload for every test.
But indeed it might have the side effect you describe.
So either we revert to reloading the FVP for every test, or another (somewhat hackish) possibility is to clear the affected variables from within the test (or move them to BSS), as sketched below.
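For illustration, a minimal hypothetical partition snippet (not code from the tree) showing why state in BSS is reset on each run while state in .data is not:

#include <stdint.h>

/*
 * Hypothetical partition code illustrating the stale-state issue.
 * Startup code zeroes .bss on every boot, so a zero-initialized
 * variable resets even when the image is not reloaded. A variable
 * with a non-zero initializer lives in .data, whose initial value
 * exists only in the loaded image, so without a reload it keeps
 * whatever the previous test run left behind.
 */
static uint32_t bss_counter;       /* .bss: starts at 0 on each run */
static uint32_t data_counter = 1;  /* .data: may keep a stale value */

void test_entry(void)
{
	bss_counter++;   /* deterministic across runs */
	data_counter++;  /* depends on the previous run without a reload */
}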
To be fair, the both-worlds test scenario is not 100% stable on my machine (for some reason the connection between the FVP and hftest is not always successful), which limits the confidence/robustness of my testing and investigations. So I wonder if the scripting is still somewhat fragile.
Regards,
Olivier.
________________________________________
From: Hafnium <hafnium-bounces(a)lists.trustedfirmware.org> on behalf of Raghu Krishnamurthy via Hafnium <hafnium(a)lists.trustedfirmware.org>
Sent: 03 August 2021 23:47
To: 'Raghu Krishnamurthy via Hafnium'
Subject: [Hafnium] Bug in hftest.py
Hi All,
I wanted to report that commit 18a25f9241f86ba2d637011ff465ce3869e8651b
in Hafnium appears to be broken. The issue with the optimization in this
patch is that the partition images are not reloaded for each test run.
This means a previous test could have written data to, say, SRAM, and a
following test that executes the same image from SRAM would pick up the
stale values left behind. This would be a problem for pretty much
anything in the data section of a partition. In my case, I have a
counter in the data section of my partition which does not get reset
back to its original value.
I've attached a patch to help reproduce the issue. The fix is to
disable the optimization or somehow reload the images for each run.
This affects only "both worlds" tests.
Let me know if I'm missing something here.
Apply the patch and run (the command line is from kokoro/test_spmc.sh):
timeout --foreground 300s ./test/hftest/hftest.py \
  --out_partitions out/reference/secure_aem_v8a_fvp_vm_clang \
  --log out/reference/kokoro_log \
  --spmc out/reference/secure_aem_v8a_fvp_clang/hafnium.bin \
  --driver=fvp \
  --hypervisor out/reference/aem_v8a_fvp_clang/hafnium.bin \
  --partitions_json test/vmapi/ffa_secure_partitions/ffa_both_world_partitions_test.json
Thanks
Raghu
--
Hafnium mailing list
Hafnium(a)lists.trustedfirmware.org
https://lists.trustedfirmware.org/mailman/listinfo/hafnium
Hi Andrew,
I don't think Hafnium implements the different cacheability and shareability types for memory sharing at all, does it?
[JA] No, it doesn't; at least that is my understanding as well. I noticed it mostly due to the lack of support in the mm library. Asking was a way to confirm.
The point (at least for now) was just about validation of the respective fields in the memory transaction descriptor.
Thank you for this! 🙂
Best regards,
João
________________________________
From: Andrew Walbran
Sent: Wednesday, July 14, 2021 5:29 PM
To: Joao Alves
Cc: hafnium(a)lists.trustedfirmware.org; Olivier Deprez; Achin Gupta; Mahesh Reddy Bireddy; Jaykumar Pitambarbhai Patel
Subject: Re: Hafnium - Memory attributes precedence checks for mem share
I don't think Hafnium implements the different cacheability and shareability types for memory sharing at all, does it? We just didn't have a need for it; if you want to add support, that should be fine.
This is mentioned in https://developer.trustedfirmware.org/T827; you can assign that task to yourself if you want to take it on.
On Wed, 14 Jul 2021 at 13:50, Joao Alves <Joao.Alves(a)arm.com> wrote:
Hi Andrew,
We have been revising some aspects of the memory sharing implementation. The specification describes a set of precedence rules for the memory attributes specified in the memory transaction descriptor, covering memory type, cacheability, and shareability.
The sender fills in the memory attributes for the region to be shared. After the memory send, the receiver should retrieve the region, filling in memory attributes in its transaction descriptor that comply with these precedence rules.
The rules can be found in section 10.10.4 of the newly released FF-A v1.1 beta spec <https://developer.arm.com/documentation/den0077/c/?lang=en>, as follows.
Memory type precedence rules (where '<' reads as 'is less permissive than'):
* Device-nGnRnE < Device-nGnRE < Device-nGRE < Device-GRE < Normal
Cacheability precedence rules:
* Non-cacheable < Write-Back Cacheable
Shareability precedence rules:
* Non-Shareable < Inner Shareable < Outer Shareable
These checks are not part of the handling of FFA_MEMORY_RETRIEVE_REQ.
Was there an implementation-defined reason for this? If so, could you please provide the rationale?
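For illustration, a minimal sketch of the kind of check we have in mind (the enum names and helper are hypothetical, not Hafnium's actual API):

#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sketch of the precedence validation from section
 * 10.10.4: attributes map to ranks ordered by permissiveness, and a
 * retrieve request must not ask for a more permissive attribute than
 * the sender specified.
 */
enum cacheability_rank {
	RANK_NON_CACHEABLE = 0,
	RANK_WRITE_BACK_CACHEABLE = 1,
};

enum shareability_rank {
	RANK_NON_SHAREABLE = 0,
	RANK_INNER_SHAREABLE = 1,
	RANK_OUTER_SHAREABLE = 2,
};

/* The receiver's requested rank must not exceed the sender's. */
static bool attribute_rank_valid(uint8_t sent, uint8_t requested)
{
	return requested <= sent;
}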
Thank you in advance for your help.
Best regards,
João Alves
Hi,
>> @Arun, your view on how those two items were solved would help us further elaborate our plans.
@Arunachalam Ganapathy, your comments on this topic would be very helpful.
Thanks.
-----Original Message-----
From: Hafnium <hafnium-bounces(a)lists.trustedfirmware.org> On Behalf Of Varun Wadekar via Hafnium
Sent: Monday, May 31, 2021 1:49 PM
To: Olivier Deprez <Olivier.Deprez(a)arm.com>; hafnium(a)lists.trustedfirmware.org; Arunachalam Ganapathy <Arunachalam.Ganapathy(a)arm.com>
Cc: Bo Yan <byan(a)nvidia.com>
Subject: Re: [Hafnium] .git submodules increase hafnium code size
Hi Olivier,
Thanks for answering my queries.
We are looking to deploy the following use case at NVIDIA.
<snip>
-ability to build only the SPMC (not all reference targets such as qemu, rpi4, fvp)
-A distribution only requiring the Hypervisor/SPMC output binary ("out/reference/.../hafnium.bin") using any toolchain (be it arm64 or x86 host, and arbitrary clang version).
<snip>
>> As you noticed, the Hafnium Hypervisor/SPMC and test environment builds are closely coupled by the use of ninja/gn flow and scripts. We intend to approach those problems in the course of Q3 in Arm OSS roadmap.
[VW] Are there any local changes to decouple Hafnium from its dependencies? We can evaluate Arm's approach against what we use internally. Our changes moved the dependencies out of the tree and passed file locations to the build system via command line arguments.
-Varun
-----Original Message-----
From: Olivier Deprez <Olivier.Deprez(a)arm.com>
Sent: Monday, May 31, 2021 11:03 AM
To: hafnium(a)lists.trustedfirmware.org; Varun Wadekar <vwadekar(a)nvidia.com>; Arunachalam Ganapathy <Arunachalam.Ganapathy(a)arm.com>
Cc: Bo Yan <byan(a)nvidia.com>
Subject: Re: .git submodules increase hafnium code size
Hi Varun,
We had similar requests raised internally.
1- First, in the context of the Total Compute delivery from Arm OSS platforms:
a. ability to build only the SPMC on TC0 platform (not all reference targets such as qemu, rpi4, fvp)
b. use a Yocto provided toolchain.
@Arun, your view on how those two items were solved would help us further elaborate our plans.
2- A similar request to 1.b, to build Hafnium as part of a distribution on an arm64 host: https://developer.trustedfirmware.org/T898
In my view there are two consumers:
-A distribution only requiring the Hypervisor/SPMC output binary ("out/reference/.../hafnium.bin") using any toolchain (be it arm64 or x86 host, and arbitrary clang version).
-The Hf CI framework/automation needs the above, plus the test framework and tests (dependencies on googletest, linux submodules, etc.). It's important to keep this item alive while trying to solve the above item.
As you noticed, the Hafnium Hypervisor/SPMC and test environment builds are closely coupled by the use of ninja/gn flow and scripts.
They are using a fixed toolchain version through prebuilts to ensure builds are "reproducible", in particular with regards to the Hafnium CI.
We intend to approach those problems in the course of Q3 in Arm OSS roadmap.
As an early exploration we already have:
-clang 12 compiler upgrade. This is necessary if willing to use an arbitrary clang version:
https://review.trustedfirmware.org/q/topic:%22od%252Fhf-clang12%22+(status:…
-Ability to build on arm64 host (done, internally).
-Identify the flow/script changes such that external dependencies can be used (ongoing, internally).
I thought of localizing common dependencies for the python/shell scripts by using definition files included in those scripts. This is only an early investigation; I will check how this intersects with the changes you provided.
Regards,
Olivier.
From: Hafnium <hafnium-bounces(a)lists.trustedfirmware.org> on behalf of Varun Wadekar via Hafnium <hafnium(a)lists.trustedfirmware.org>
Sent: 28 May 2021 16:47
To: hafnium(a)lists.trustedfirmware.org <hafnium(a)lists.trustedfirmware.org>
Cc: Bo Yan <byan(a)nvidia.com>
Subject: [Hafnium] .git submodules increase hafnium code size
Hi,
We at NVIDIA are evaluating Hafnium. During the initial investigation, we found that the repository size (in megabytes) is huge. This is mostly because of the "git submodules" used by the project. This is a great way to deliver Hafnium with its dependencies in one go.
But we think the size can be trimmed by moving the toolchain, linux folder, googletest and dtc compiler out, leaving just the Hafnium code in the project. This way, companies like us can pick and choose instead of having to use everything. In a bid to ease the pain internally and use only the Hafnium code base, we have crafted the following changes:
1. hafnium: support external projects (I10a07de3) * Gerrit Code Review (trustedfirmware.org) <https://review.trustedfirmware.org/c/hafnium/hafnium/+/10142>
2. hafnium: build with dtc and googletest out of tree (I057c9ad6) * Gerrit Code Review (trustedfirmware.org) <https://review.trustedfirmware.org/c/hafnium/hafnium/+/10144>
3. build: support external toolchain (Iafd029c1) * Gerrit Code Review (trustedfirmware.org) <https://review.trustedfirmware.org/c/hafnium/hafnium/+/10145>
This series does not include the patch to use an out-of-tree linux codebase. I assume these patches won't be acceptable in their current state, so I would like to know how the community plans to handle this situation.
The code size is a real concern for us, as we already have copies of the dependencies in our codebase and have no use for these duplicates.
Thanks.
--
Hafnium mailing list
Hafnium(a)lists.trustedfirmware.org
https://lists.trustedfirmware.org/mailman/listinfo/hafnium
Hi Rebecca,
No problem asking.
Not immediately related to the TF-A log, but the image specified as BL32 should rather be the secure Hafnium build (hafnium/out/reference/secure_aem_v8a_fvp_clang/hafnium.bin).
The TF-A boot log suggests that the first secure partition is not found within the FIP image.
So there is a problem with provisioning the SP into the FIP and having the BL2 loader find it.
It could be a UUID mismatch, or related to the contents of the JSON layout file.
If it's information you can share, let us know the nature of the secure partition image you wish to have TF-A load and Hafnium boot.
If you wish to reproduce a known working setup, you can use the command lines below, using master for each tree:
TF-A-tests:
make CROSS_COMPILE=aarch64-none-elf- PLAT=fvp DEBUG=1 TESTS=spm -j8
Hafnium:
make PROJECT=reference
TF-A:
make CROSS_COMPILE=aarch64-none-elf- SPD=spmd CTX_INCLUDE_EL2_REGS=1 ARM_ARCH_MINOR=5 BRANCH_PROTECTION=1 CTX_INCLUDE_PAUTH_REGS=1 PLAT=fvp DEBUG=1 BL33=../tf-a-tests/build/fvp/debug/tftf.bin BL32=../hafnium/out/reference/secure_aem_v8a_fvp_clang/hafnium.bin SP_LAYOUT_FILE=../tf-a-tests/build/fvp/debug/sp_layout.json all fip
Run the model:
<path-to-fvp>/FVP_Base_RevC-2xAEMv8A -C pctl.startup=0.0.0.0 -C cluster0.NUM_CORES=4 -C cluster1.NUM_CORES=4 -C bp.secure_memory=1 -C bp.secureflashloader.fname=trusted-firmware-a/build/fvp/debug/bl1.bin -C bp.flashloader0.fname=trusted-firmware-a/build/fvp/debug/fip.bin -C cluster0.has_arm_v8-5=1 -C cluster1.has_arm_v8-5=1 -C cluster0.has_branch_target_exception=1 -C cluster1.has_branch_target_exception=1 -C cluster0.has_pointer_authentication=2 -C cluster1.has_pointer_authentication=2 -C cluster0.restriction_on_speculative_execution=2 -C cluster1.restriction_on_speculative_execution=2 -C pci.pci_smmuv3.mmu.SMMU_AIDR=2 -C pci.pci_smmuv3.mmu.SMMU_IDR0=0x0046123B -C pci.pci_smmuv3.mmu.SMMU_IDR1=0x00600002 -C pci.pci_smmuv3.mmu.SMMU_IDR3=0x1714 -C pci.pci_smmuv3.mmu.SMMU_IDR5=0xFFFF0472 -C pci.pci_smmuv3.mmu.SMMU_S_IDR1=0xA0000002 -C pci.pci_smmuv3.mmu.SMMU_S_IDR2=0 -C pci.pci_smmuv3.mmu.SMMU_S_IDR3=0
It should end up with the following test results:
******************************* Summary *******************************
> Test suite 'FF-A Version'
Passed
> Test suite 'FF-A RXTX Mapping'
Passed
> Test suite 'FF-A Direct messaging'
Passed
> Test suite 'FF-A Power management'
Passed
> Test suite 'FF-A Memory Sharing'
Passed
> Test suite 'FF-A features'
Passed
> Test suite 'SIMD,SVE Registers context'
Passed
> Test suite 'FF-A Interrupt'
Passed
> Test suite 'SMMUv3 tests'
Passed
=================================
Tests Skipped : 0
Tests Passed : 20
Tests Failed : 0
Tests Crashed : 0
Total tests : 20
=================================
NOTICE: Exiting tests.
Regards,
Olivier.
From: Hafnium <hafnium-bounces(a)lists.trustedfirmware.org> on behalf of Rebecca Cran via Hafnium <hafnium(a)lists.trustedfirmware.org>
Sent: 25 May 2021 19:43
To: hafnium(a)lists.trustedfirmware.org <hafnium(a)lists.trustedfirmware.org>
Subject: [Hafnium] Problems running Hafnium at S-EL2 with TF-A master
I'm having problems running TF-A / Hafnium with S-EL2 support.
I can run Hafnium fine using the prebuilt TF-A binary in
prebuilts/linux-aarch64/trusted-firmware-a-trusty/ but it doesn't work
if I use TF-A master.
I'm building TF-A with:
make PLAT=fvp LOG_LEVEL=80 DEBUG=1 SPD=spmd CTX_INCLUDE_EL2_REGS=1 \
ARM_ARCH_MINOR=5 BRANCH_PROTECTION=1 CTX_INCLUDE_PAUTH_REGS=1 all fip \
BL32=../hafnium/out/reference/aem_v8a_fvp_clang/hafnium.bin \
BL33=../uefi/Build/ArmVExpress-FVP-AArch64/DEBUG_GCC5/FV/FVP_AARCH64_EFI.fd \
SP_LAYOUT_FILE=sp_layout.json
And running it with:
../Base_RevC_AEMv8A_pkg/models/Linux64_GCC-6.4/FVP_Base_RevC-2xAEMv8A \
-C pctl.startup=0.0.0.0 \
-C cluster0.NUM_CORES=4 -C cluster1.NUM_CORES=4 -C bp.secure_memory=1 \
-C bp.secureflashloader.fname=../trusted-firmware-a/build/fvp/debug/bl1.bin \
-C bp.flashloader0.fname=../trusted-firmware-a/build/fvp/debug/fip.bin \
-C bp.pl011_uart0.out_file=fvp-uart0.log \
-C bp.pl011_uart1.out_file=fvp-uart1.log \
-C bp.pl011_uart2.out_file=fvp-uart2.log \
-C cluster0.has_arm_v8-5=1 -C cluster1.has_arm_v8-5=1 \
-C pci.pci_smmuv3.mmu.SMMU_AIDR=2 \
-C pci.pci_smmuv3.mmu.SMMU_IDR0=0x0046123B \
-C pci.pci_smmuv3.mmu.SMMU_IDR1=0x00600002 \
-C pci.pci_smmuv3.mmu.SMMU_IDR3=0x1714 \
-C pci.pci_smmuv3.mmu.SMMU_IDR5=0xFFFF0472 \
-C pci.pci_smmuv3.mmu.SMMU_S_IDR1=0xA0000002 \
-C pci.pci_smmuv3.mmu.SMMU_S_IDR2=0 \
-C pci.pci_smmuv3.mmu.SMMU_S_IDR3=0 \
-C cluster0.has_branch_target_exception=1 \
-C cluster1.has_branch_target_exception=1 \
-C cluster0.restriction_on_speculative_execution=2 \
-C cluster1.restriction_on_speculative_execution=2
TF-A prints this and then stops for a minute or two before resetting:
INFO: Loading image id=5 at address 0x88000000
INFO: Image id=5 loaded: 0x88000000 - 0x88280000
INFO: BL2: Skip loading image id 27
INFO: BL2: Loading image id 34
VERBOSE: Using Memmap
VERBOSE: FIP header looks OK.
VERBOSE: Trying alternative IO
I'm fairly new to working with Hafnium, TF-A _and_ Arm's FVP, so there's probably something obvious I'm doing wrong?
--
Rebecca Cran
--
Hafnium mailing list
Hafnium(a)lists.trustedfirmware.org
https://lists.trustedfirmware.org/mailman/listinfo/hafnium