Hi Wanghua,
Happy to help. The RPi4 port should still be working, though we do not test
it continuously, so there is a possibility that the code has rotted a bit.
I can share my notes on how I got it working, but you seem to have figured
out pretty much all of it.
You're seeing the hypervisor abort when accessing address 0x19536ee (FAR) -
that's inside the ramdisk range reported just above the error. The value of
ESR suggests that this is an alignment fault (the address is indeed
unaligned), so Hafnium probably crashed trying to parse something in the
ramdisk.
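In case it helps, here is a quick standalone sketch (plain C, not Hafnium
code) of how I read those ESR fields; the field layout comes from the Arm
ARM:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
        /* Values from the log below. */
        uint64_t esr = 0x96000021;
        uint64_t ec = (esr >> 26) & 0x3f; /* 0x25: data abort taken from EL2 itself */
        uint64_t dfsc = esr & 0x3f;       /* 0x21: alignment fault */

        printf("EC=0x%" PRIx64 " DFSC=0x%" PRIx64 "\n", ec, dfsc);
        return 0;
}

That, together with FAR = 0x19536ee falling inside the reported ramdisk
range, is why an unaligned read while parsing the ramdisk looks like the
most likely cause.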
Before we try anything else, could you try to update your RPi4 firmware? I
remember that I had issues with the 'initramfs' option in config.txt - it
would load random data from the SD card. The error that you're seeing could
be a result of that.
Instructions on how to update your firmware:
https://www.raspberrypi.org/documentation/hardware/raspberrypi/booteeprom.md
If that doesn't help, could you please attach
your out/reference/rpi4_clang/hafnium.elf? It would be good to know which
function is at the crashing PC 0x8b0dc.
David
On Sun, Jun 7, 2020 at 2:08 PM 王华 via Hafnium <
hafnium(a)lists.trustedfirmware.org> wrote:
> Hi all,
>
> Has anyone brought up Hafnium on a Raspberry Pi 4 board? I tried the
> following steps but failed:
>
> 1. Make the Hafnium RAM disk with an aarch64 build of the Pi 4 kernel, a
> RAM disk for Linux, and manifest.dtb.
>
> The manifest.dtb is built from:
>
> /dts-v1/;
>
>
> / {
>     hypervisor {
>         compatible = "hafnium,hafnium";
>         vm1 {
>             debug_name = "Linux VM";
>             kernel_filename = "vmlinuz";
>             ramdisk_filename = "initrd.img";
>         };
>     };
> };
>
> 2. Copy bl31.bin, hafnium.bin and initrd.img to the FAT32 boot partition,
> and configure config.txt. Power up the unit; the UART error log shows:
>
> NOTICE: BL31: v2.3(debug):v2.3-109-g771c676b1
>
> NOTICE: BL31: Built : 15:49:37, Jun 1 2020
>
> INFO: Changed device tree to advertise PSCI.
>
> INFO: ARM GICv2 driver initialized
>
> INFO: BL31: Initializing runtime services
>
> INFO: BL31: cortex_a72: CPU workaround for 859971 was applied
>
> INFO: BL31: cortex_a72: CPU workaround for cve_2017_5715 was applied
>
> INFO: BL31: cortex_a72: CPU workaround for cve_2018_3639 was applied
>
> INFO: BL31: Preparing for EL3 exit to normal world
>
> INFO: Entry point address = 0x80000
>
> INFO: SPSR = 0x3c9
>
> NOTICE: Initialising hafnium
>
> INFO: text: 0x80000 - 0x97000
>
> INFO: rodata: 0x97000 - 0x9a000
>
> INFO: data: 0x9a000 - 0x117000
>
> INFO: Supported bits in physical address: 44
>
> INFO: Stage 2 has 4 page table levels with 1 pages at the root.
>
> INFO: Found PSCI version: 0x10001
>
> INFO: Memory range: 0x0 - 0x3b3fffff
>
> INFO: Memory range: 0x40000000 - 0xfbffffff
>
> INFO: Ramdisk range: 0x1800000 - 0x353dbff
>
> ERROR: Data abort: pc=0x8b0dc, esr=0x96000021, ec=0x25, far=0x19536ee
>
> Panic: EL2 exception
>
> Can anyone help me with this issue, or share how to bring up Hafnium on
> the Pi 4? Thanks very much!
>
>
> ------
>
> By Wanghua
>
>
>
>
> Best Regards!
>
Hi Raghu,
Howdy! Comments inline below.
> On 4 Jun 2020, at 16:21, Raghu K via Hafnium <hafnium(a)lists.trustedfirmware.org> wrote:
>
> Hi Achin,
>
> Would you mind elaborating on why the SPM needs to determine the security state, and why it is important to do this without trusting the SP? When you say SPM, it sounds like you are talking about, for example, the SPMD running in EL3, which is not part of the SPMC that perhaps runs in S-EL2, and which may need to know this to figure out how to map a particular physical page. Is that the use case you are thinking about?
So this is in the context of the PSA FF-A memory management ABIs. Also, I have the S-EL2 SPMC case in mind. The SPMD in EL3 does not participate in memory management in this case when it comes to managing any architectural state, i.e. translation tables, control registers etc.
Say SP0 invokes FFA_MEM_SHARE to share a single page A with SP1. The SPMC would need to map page A in SP1's stage 2 tables. To do this, it would need to determine whether the IPA of page A belongs to the Secure or Non-secure IPA space. This is under the assumption that some memory ranges in SP0's IPA space will be Non-secure.
IMO, this information can be determined in one of the following ways:
1. Perform a page table walk in software to determine whether the IPA is mapped in the tables referenced by VSTTBR_EL2 or VTTBR_EL2. I am assuming the SPMC maintains separate S2 translations for the Secure and NS address spaces.
2. Through an internal data structure which tracks the attributes of a memory region assigned to a guest.
3. SP0 specifies the security state of page A in FFA_MEM_SHARE. The spec does not cover this currently. However, the SPMC cannot trust that SP0 is providing the right security state and must verify this independently anyway.
Option 1 seems clunky. Option 2 is not done in upstream Hafnium. Option 3 does not really help on its own.
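To illustrate what I mean by option 2, here is a very rough sketch of the kind of bookkeeping the SPMC could keep per memory region. None of these names exist in upstream Hafnium; they are made up for illustration only:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical record kept by the SPMC for each region it has mapped. */
struct mem_region_record {
        uint64_t ipa_begin;   /* start of the region in the SP's IPA space */
        uint64_t ipa_end;     /* exclusive end of the region */
        uint16_t owner_vm_id; /* FF-A ID of the current owner */
        bool nonsecure;       /* true if the region is in the NS IPA space */
};

/*
 * On FFA_MEM_SHARE the SPMC would consult these records instead of trusting
 * the security state claimed by SP0 or re-walking the stage 2 tables.
 */
bool ipa_is_nonsecure(const struct mem_region_record *records, size_t count,
                      uint64_t ipa)
{
        for (size_t i = 0; i < count; i++) {
                if (ipa >= records[i].ipa_begin && ipa < records[i].ipa_end) {
                        return records[i].nonsecure;
                }
        }
        return false; /* treat anything unknown as Secure by default */
}

The cost is that the SPMC has to keep these records up to date on every donate/lend/share/relinquish, which is extra state, but it avoids both the software walk and trusting the SP.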
I think I had misunderstood that an AT* instruction could be used. There does not seem to be one in the Arm ARM that performs only an IPA to PA translation, i.e. a stage 2 only translation.
So I am wondering what can be done to solve this problem assuming we agree that this is a problem in the first place.
Hth,
Cheers,
Achin
>
> Thanks
> Raghu
>
> On 6/4/20 3:07 AM, Achin Gupta via Hafnium wrote:
>> Hi All,
>>
>> I am thinking of a scenario where an SP shares Non-secure memory with one or more SPs or VMs. The NS memory region could have been donated to the SP by a VM earlier (far-fetched but possible).
>>
>> The question is: how does the SPM determine the security state of the memory region being shared by the SP?
>>
>> It is especially important that the SPM does this without trusting the SP.
>>
>> I don't think it should rely on the AT* instructions. The SP could change the security state of the region in S1. AFAIK, there are no AT* instructions that only do S2 walks with an IPA as the input.
>>
>> So is the only option to perform a walk in both the Secure and Non-secure S2 tables to determine where the address is mapped?
>>
>> This seems a bit clunky, so I am wondering if I am missing anything and there is an easier way to do this.
>>
>> What do you reckon?
>>
>> cheers,
>> Achin
>
Hi all,
Has anyone brought up Hafnium on a Raspberry Pi 4 board? I tried the following steps but failed:
1. Make the Hafnium RAM disk with an aarch64 build of the Pi 4 kernel, a RAM disk for Linux, and manifest.dtb.
The manifest.dtb is built from:
/dts-v1/;
/ {
    hypervisor {
        compatible = "hafnium,hafnium";
        vm1 {
            debug_name = "Linux VM";
            kernel_filename = "vmlinuz";
            ramdisk_filename = "initrd.img";
        };
    };
};
2. Copy bl31.bin, hafnium.bin and initrd.img to the FAT32 boot partition, and configure config.txt. Power up the unit; the UART error log shows:
NOTICE: BL31: v2.3(debug):v2.3-109-g771c676b1
NOTICE: BL31: Built : 15:49:37, Jun 1 2020
INFO: Changed device tree to advertise PSCI.
INFO: ARM GICv2 driver initialized
INFO: BL31: Initializing runtime services
INFO: BL31: cortex_a72: CPU workaround for 859971 was applied
INFO: BL31: cortex_a72: CPU workaround for cve_2017_5715 was applied
INFO: BL31: cortex_a72: CPU workaround for cve_2018_3639 was applied
INFO: BL31: Preparing for EL3 exit to normal world
INFO: Entry point address = 0x80000
INFO: SPSR = 0x3c9
NOTICE: Initialising hafnium
INFO: text: 0x80000 - 0x97000
INFO: rodata: 0x97000 - 0x9a000
INFO: data: 0x9a000 - 0x117000
INFO: Supported bits in physical address: 44
INFO: Stage 2 has 4 page table levels with 1 pages at the root.
INFO: Found PSCI version: 0x10001
INFO: Memory range: 0x0 - 0x3b3fffff
INFO: Memory range: 0x40000000 - 0xfbffffff
INFO: Ramdisk range: 0x1800000 - 0x353dbff
ERROR: Data abort: pc=0x8b0dc, esr=0x96000021, ec=0x25, far=0x19536ee
Panic: EL2 exception
Can anyone help me with this issue, or share how to bring up Hafnium on the Pi 4? Thanks very much!
------
By Wanghua
Best Regards!
Hi Achin,
Would you mind elaborating on why the SPM needs to determine the
security state, and why it is important to do this without trusting the
SP? When you say SPM, it sounds like you are talking about, for example,
the SPMD running in EL3, which is not part of the SPMC that perhaps runs
in S-EL2, and which may need to know this to figure out how to map a
particular physical page. Is that the use case you are thinking about?
Thanks
Raghu
On 6/4/20 3:07 AM, Achin Gupta via Hafnium wrote:
> Hi All,
>
> I am thinking of a scenario where an SP shares Non-secure memory with one or more SPs or VMs. The NS memory region could have been donated to the SP by a VM earlier (far-fetched but possible).
>
> The question is: how does the SPM determine the security state of the memory region being shared by the SP?
>
> It is especially important that the SPM does this without trusting the SP.
>
> I don't think it should rely on the AT* instructions. The SP could change the security state of the region in S1. AFAIK, there are no AT* instructions that only do S2 walks with an IPA as the input.
>
> So is the only option to perform a walk in both the Secure and Non-secure S2 tables to determine where the address is mapped?
>
> This seems a bit clunky, so I am wondering if I am missing anything and there is an easier way to do this.
>
> What do you reckon?
>
> cheers,
> Achin
Hi All,
I am thinking of a scenario where an SP shares Non-secure memory with one or more SPs or VMs. The NS memory region could have been donated to the SP by a VM earlier (far-fetched but possible).
The question is: how does the SPM determine the security state of the memory region being shared by the SP?
It is especially important that the SPM does this without trusting the SP.
I don't think it should rely on the AT* instructions. The SP could change the security state of the region in S1. AFAIK, there are no AT* instructions that only do S2 walks with an IPA as the input.
So is the only option to perform a walk in both the Secure and Non-secure S2 tables to determine where the address is mapped?
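To spell out what that dual walk would look like, here is a rough sketch. The helpers and types are hypothetical; a real implementation would have to walk the tables referenced by VSTTBR_EL2 and VTTBR_EL2 respectively:

#include <stdbool.h>
#include <stdint.h>

/* Stub for illustration: a real SPMC would walk the given S2 tables here. */
static bool s2_is_mapped(const uint64_t *s2_root, uint64_t ipa)
{
        (void)s2_root;
        (void)ipa;
        return false;
}

/*
 * Decide which IPA space an address belongs to by checking both sets of
 * stage 2 tables. Returns 0 on success, -1 if the IPA is not mapped at all.
 */
int classify_ipa(const uint64_t *secure_s2_root,
                 const uint64_t *nonsecure_s2_root, uint64_t ipa,
                 bool *nonsecure)
{
        if (s2_is_mapped(secure_s2_root, ipa)) {    /* VSTTBR_EL2 tables */
                *nonsecure = false;
                return 0;
        }
        if (s2_is_mapped(nonsecure_s2_root, ipa)) { /* VTTBR_EL2 tables */
                *nonsecure = true;
                return 0;
        }
        return -1;
}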
This seems a bit clunky, so I am wondering if I am missing anything and there is an easier way to do this.
What do you reckon?
cheers,
Achin
Hi,
Hafnium sets CPTR_EL2.TTA (bit 28), which traps accesses to trace system registers to EL2.
https://git.trustedfirmware.org/hafnium/hafnium.git/tree/src/arch/aarch64/s…
However, the CPTR_EL2 register has a different bit field layout depending on the HCR_EL2.E2H state.
When HCR_EL2.E2H=0 (the Hafnium case), the CPTR_EL2.TTA bit position is 20.
Is this a slight issue needing a fix?
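For reference, a minimal sketch of the two layouts (illustrative defines, not the names used in the Hafnium sources):

#include <stdint.h>

/* CPTR_EL2.TTA position depends on HCR_EL2.E2H (Arm ARM, CPTR_EL2 description). */
#define CPTR_EL2_VHE_TTA  (UINT64_C(1) << 28) /* layout when HCR_EL2.E2H == 1 */
#define CPTR_EL2_NVHE_TTA (UINT64_C(1) << 20) /* layout when HCR_EL2.E2H == 0, i.e. the Hafnium case */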
Regards,
Olivier.
Hi all,
It would be useful to have a chat group of some sort for quick questions
that don't warrant an email thread. What is the preferred solution for this
for Arm and Trusted Firmware? I saw that there is already an Arm Developer
Ecosystem Discord server; would a channel there be appropriate? What does
TF-A use?
Hello,
I've run into a few issues while implementing fragmentation for memory
sharing based on the FF-A 1.0 spec, which I think need clarifying or fixing.
1. From section 13.2.2.3, points 2-6, it sounds like the 'sender' in the
case of an FFA_MEM_RETRIEVE_RESP_32 from the SPM to the non-secure
hypervisor is the SPM. Am I correct in understanding that this means that
the sender ID for FFA_MEM_FRAG_TX and FFA_MEM_FRAG_RX should thus always be
the ID of the SPM when the hypervisor is retrieving a memory region from
the SPM for the purposes of a reclaim operation from a normal world VM?
2. When a normal world VM tries to share memory with a secure partition via
FFA_MEM_SHARE, it may be that the buffer between the hypervisor and the SPM
is busy because another VM is also sharing memory or doing something else
that uses the buffer, on a different physical CPU. This could happen either
for the initial fragment sent via FFA_MEM_SHARE or for a subsequent
fragment sent with FFA_MEM_FRAG_TX. The spec currently says in section
12.3.1.2, point 13.2 that the hypervisor must return ABORTED in the
FFA_MEM_SHARE case, and doesn't allow any relevant error codes in the
description of FFA_MEM_FRAG_TX in section 13.2.2.5, though it does mention
ABORTED in section 13.2.2.3 point 8.
However, the buffer being busy need not mean that the whole transaction is
aborted; it should be possible for the sender to try again after a short
time, when hopefully the buffer is available again. So it would make more
sense for the hypervisor to return an FFA_BUSY error in this case, as it
does in other cases where a buffer is currently unavailable but the caller
can try again.
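To make the suggestion concrete, here is a rough sketch of the behaviour I
have in mind when forwarding a fragment to the SPM. The mailbox type and
helper are made up for illustration; the error values follow the FF-A 1.0
error code list:

#include <stdbool.h>
#include <stdint.h>

/* FF-A 1.0 error codes relevant here. */
#define FFA_SUCCESS        INT32_C(0)
#define FFA_ERROR_BUSY     INT32_C(-4)
#define FFA_ERROR_ABORTED  INT32_C(-8)

/* Hypothetical state guarding the hypervisor<->SPM RX/TX buffer. */
struct spm_mailbox {
        bool tx_in_use;
};

/*
 * Forward one FFA_MEM_FRAG_TX fragment to the SPM. If the shared buffer is
 * currently held by another CPU, return BUSY so the sender can retry the
 * same fragment later, rather than ABORTED, which would tear down the whole
 * transaction.
 */
int32_t forward_fragment_to_spm(struct spm_mailbox *mb)
{
        if (mb->tx_in_use) {
                return FFA_ERROR_BUSY;
        }
        mb->tx_in_use = true;
        /* ... copy the fragment into the buffer and invoke FFA_MEM_FRAG_TX ... */
        mb->tx_in_use = false;
        return FFA_SUCCESS;
}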
Hi Andrew,
Note that I do git clean -fdx before trying with CHECKPATCH set/unset, because it looks like there is some dependency on local ninja build files.
When CHECKPATCH is set, I'm still seeing the issue mentioned earlier.
When CHECKPATCH is not set, I got failures related to missing Python dependencies on my Ubuntu host (ply, pathlib, python-git). I don't recall seeing those dependencies listed in the Hafnium pages, but that's fair enough. After installing those Python libs, the checkpatch step passes.
Obviously I'm not facing this issue when running through the Docker hermetic build.
So I guess it's a low-priority glitch; I'm not sure if it really requires a fix or just a notice in the documentation.
Regards,
Olivier.
________________________________________
From: Andrew Walbran
Sent: Monday, May 18, 2020 13:32
To: Olivier Deprez
Cc: hafnium(a)lists.trustedfirmware.org
Subject: Re: [Hafnium] kokoro/build and CHECKPATCH
That's weird, I haven't seen that before. Running either kokoro/build.sh or 'make checkpatch' locally works for me, whether or not CHECKPATCH is set.
The Makefile in the root directory is setting CHECKPATCH unconditionally, but it looks like the one under driver/linux uses the existing value if there is one. Maybe we should change that?
What error do you get when CHECKPATCH is not set?
On Fri, 15 May 2020 at 18:18, Olivier Deprez via Hafnium <hafnium(a)lists.trustedfirmware.org> wrote:
Hi,
Is there a known problem around checkpatch when running kokoro/build.sh locally?
I noticed that if the CHECKPATCH env variable is already set then kokoro/build.sh fails with:
+ export CROSS_COMPILE=aarch64-linux-gnu-
+ CROSS_COMPILE=aarch64-linux-gnu-
+ cd driver/linux
+ make HAFNIUM_PATH=/home/olidep01/WORK/hafnium-upstream checkpatch
<my-own-checkpatch-dir>/checkpatch.pl -f main.c
Must be run from the top-level dir. of a kernel tree
Makefile:43: recipe for target 'checkpatch' failed
make: *** [checkpatch] Error 2
If I unset CHECKPATCH before running build.sh then it fails at the driver/linux check.
Is this something known and/or needing a fix?
Regards,
Olivier.
That's weird, I haven't seen that before. Running either kokoro/build.sh or
'make checkpatch' locally works for me, whether or not CHECKPATCH is set.
The Makefile in the root directory is setting CHECKPATCH unconditionally,
but it looks like the one under driver/linux uses the existing value if
there is one. Maybe we should change that?
What error do you get when CHECKPATCH is not set?
On Fri, 15 May 2020 at 18:18, Olivier Deprez via Hafnium <
hafnium(a)lists.trustedfirmware.org> wrote:
> Hi,
>
> Is there a known problem around checkpatch when running kokoro/build.sh
> locally?
>
> I noticed that if the CHECKPATCH env variable is already set then
> kokoro/build.sh fails with:
>
> + export CROSS_COMPILE=aarch64-linux-gnu-
> + CROSS_COMPILE=aarch64-linux-gnu-
> + cd driver/linux
> + make HAFNIUM_PATH=/home/olidep01/WORK/hafnium-upstream checkpatch
> <my-own-checkpatch-dir>/checkpatch.pl -f main.c
> Must be run from the top-level dir. of a kernel tree
> Makefile:43: recipe for target 'checkpatch' failed
> make: *** [checkpatch] Error 2
>
> If I unset CHECKPATCH before running build.sh then it fails at the
> driver/linux check.
>
> Is this something known and/or needing a fix?
>
> Regards,
> Olivier.