Hi all,
Has anyone brought up Hafnium on a Raspberry Pi 4 board? I tried but failed with the following steps:
1. Make the Hafnium RAM disk from the aarch64 Pi 4 kernel build, the Linux RAM disk, and manifest.dtb (a rough sketch of the commands follows the manifest source below).
The manifest.dtb is built from:
/dts-v1/;
/ {
	hypervisor {
		compatible = "hafnium,hafnium";
		vm1 {
			debug_name = "Linux VM";
			kernel_filename = "vmlinuz";
			ramdisk_filename = "initrd.img";
		};
	};
};
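For reference, the manifest and the Hafnium RAM disk were assembled roughly as below (a sketch from memory; the cpio packing follows my reading of the Hafnium RAM disk docs, and the file names are just the ones I used):
dtc -I dts -O dtb -o manifest.dtb manifest.dts   # compile the manifest source above
mkdir ramdisk
cp manifest.dtb vmlinuz initrd.img ramdisk/      # Pi 4 kernel image plus the Linux RAM disk
cd ramdisk
find . | cpio -o > ../hafnium-ramdisk.img        # cpio archive that Hafnium reads as its RAM disk; this is the image copied to the boot partition in step 2
cd ..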
2. Copy bl31.bin, hafnium.bin and initrd.img to the FAT32 boot partition and configure config.txt. Powering up the unit gives the following UART error log:
NOTICE: BL31: v2.3(debug):v2.3-109-g771c676b1
NOTICE: BL31: Built : 15:49:37, Jun 1 2020
INFO: Changed device tree to advertise PSCI.
INFO: ARM GICv2 driver initialized
INFO: BL31: Initializing runtime services
INFO: BL31: cortex_a72: CPU workaround for 859971 was applied
INFO: BL31: cortex_a72: CPU workaround for cve_2017_5715 was applied
INFO: BL31: cortex_a72: CPU workaround for cve_2018_3639 was applied
INFO: BL31: Preparing for EL3 exit to normal world
INFO: Entry point address = 0x80000
INFO: SPSR = 0x3c9
NOTICE: Initialising hafnium
INFO: text: 0x80000 - 0x97000
INFO: rodata: 0x97000 - 0x9a000
INFO: data: 0x9a000 - 0x117000
INFO: Supported bits in physical address: 44
INFO: Stage 2 has 4 page table levels with 1 pages at the root.
INFO: Found PSCI version: 0x10001
INFO: Memory range: 0x0 - 0x3b3fffff
INFO: Memory range: 0x40000000 - 0xfbffffff
INFO: Ramdisk range: 0x1800000 - 0x353dbff
ERROR: Data abort: pc=0x8b0dc, esr=0x96000021, ec=0x25, far=0x19536ee
Panic: EL2 exception
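In case it helps, the faulting pc can be mapped back to the Hafnium sources with addr2line (the ELF path and name below are placeholders for whatever the build produces alongside hafnium.bin):
aarch64-linux-gnu-addr2line -f -e <hafnium-build-dir>/hafnium.elf 0x8b0dc   # prints the function and source line for the aborting pc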
Can anyone help me with this issue or share how to bring up Hafnium on the Pi 4? Thanks very much!
------
By Wanghua
Best Regards!
Hi Achin,
Would you mind elaborating on why the SPM needs to determine the
security state, and why it is important to do this without trusting the
SP? When you say SPM, it sounds like you are talking about, for example,
the SPMD running in EL3, which is not part of the SPMC that perhaps runs
at S-EL2, and the SPMD may need to know the security state to figure out
how to map a particular physical page. Is that the use case you are
thinking about?
Thanks
Raghu
On 6/4/20 3:07 AM, Achin Gupta via Hafnium wrote:
> Hi All,
>
> I am thinking of a scenario where an SP shares Non-secure memory with one or more SPs or VMs. The NS memory region could have been donated to the SP by a VM earlier (far-fetched but possible).
>
> The question is: how does the SPM determine the security state of the memory region being shared by the SP?
>
> It is especially important that the SPM does this without trusting the SP.
>
> I don't think it should rely on the AT* instructions. The SP could change the security state of the region in S1. AFAIK, there are no AT* instructions that only do S2 walks with an IPA as an input.
>
> So is the only option to perform a walk of both the Secure and Non-secure S2 tables to determine where the address is mapped?
>
> This seems a bit clunky, so I am wondering if I am missing anything and there is an easier way to do this.
>
> What do you reckon?
>
> cheers,
> Achin
Hi All,
I am thinking of a scenario where an SP shares Non-secure memory with one or more SPs or VMs. The NS memory region could have been donated to the SP by a VM earlier (far-fetched but possible).
The question is: how does the SPM determine the security state of the memory region being shared by the SP?
It is especially important that the SPM does this without trusting the SP.
I don't think it should rely on the AT* instructions. The SP could change the security state of the region in S1. AFAIK, there are no AT* instructions that only do S2 walks with an IPA as an input.
So is the only option to perform a walk of both the Secure and Non-secure S2 tables to determine where the address is mapped?
This seems a bit clunky, so I am wondering if I am missing anything and there is an easier way to do this.
What do you reckon?
cheers,
Achin
Hi,
Hafnium sets CPTR_EL2.TTA (bit 28), which traps accesses to trace system registers to EL2.
https://git.trustedfirmware.org/hafnium/hafnium.git/tree/src/arch/aarch64/s…
However, the CPTR_EL2 register has different bit field definitions depending on the HCR_EL2.E2H state.
When HCR_EL2.E2H=0 (the Hafnium case), the CPTR_EL2.TTA bit position is 20.
Is this a minor issue needing a fix?
Regards,
Olivier.
Hi all,
It would be useful to have a chat group of some sort for quick questions
that don't warrant an email thread. What is the preferred solution for this
within Arm and Trusted Firmware? I saw that there is already an Arm Developer
Ecosystem Discord server; would a channel there be appropriate? What does
TF-A use?
Hello,
I've run into a few issues while implementing fragmentation for memory
sharing based on the FF-A 1.0 spec, which I think need clarifying or fixing.
1. From section 13.2.2.3 points 2-6, it sounds like the 'sender' in the
case of an FFA_MEM_RETRIEVE_RESP_32 from the SPM to the non-secure
hypervisor is the SPM. Am I correct in understanding that the sender ID
for FFA_MEM_FRAG_TX and FFA_MEM_FRAG_RX should therefore always be the ID
of the SPM when the hypervisor is retrieving a memory region from the SPM
for the purposes of a reclaim operation from a normal world VM?
2. When a normal world VM tries to share memory with a secure partition via
FFA_MEM_SHARE, it may be that the buffer between the hypervisor and the SPM
is busy because another VM is also sharing memory or doing something else
that uses the buffer, on a different physical CPU. This could happen either
for the initial fragment sent via FFA_MEM_SHARE or for a subsequent
fragment sent with FFA_MEM_FRAG_TX. The spec currently says in section
12.3.1.2, point 13.2 that the hypervisor must return ABORTED in the
FFA_MEM_SHARE case, and doesn't allow any relevant error codes in the
description of FFA_MEM_FRAG_TX in section 13.2.2.5, though it does mention
ABORTED in section 13.2.2.3 point 8.
However, the buffer being busy need not mean that the whole transaction is
aborted; it should be possible for the sender to try again after a short
time, when hopefully the buffer is available again. So it would make more
sense for the hypervisor to return an FFA_BUSY error in this case, as it
does in other cases where a buffer is currently unavailable but the caller
can try again.
Hi Andrew,
Note that I do a git clean -fdx prior to trying CHECKPATCH set/not set, because it looks like there's some dependency on local ninja build files.
When CHECKPATCH is set, I'm still seeing the issue mentioned earlier.
When CHECKPATCH is not set, I get failures related to missing Python dependencies on my Ubuntu host (ply, pathlib, python-git). I don't recall seeing those dependencies listed in the Hafnium pages, but that's fair enough. After installing those Python libs, the checkpatch step now passes.
Obviously I'm not facing this issue when running through the hermetic Docker build.
So I guess it's a low-priority glitch; I'm not sure if it really requires a fix or a note in the documentation.
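For the record, the local sequence that now gets the checkpatch step passing for me (after installing the Python deps above) is simply:
git clean -fdx        # drop the stale ninja build files mentioned above
unset CHECKPATCH      # let the Hafnium makefiles pick their own checkpatch settings
./kokoro/build.sh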
Regards,
Olivier.
________________________________________
From: Andrew Walbran
Sent: Monday, May 18, 2020 13:32
To: Olivier Deprez
Cc: hafnium(a)lists.trustedfirmware.org
Subject: Re: [Hafnium] kokoro/build and CHECKPATCH
That's weird, I haven't seen that before. Running either kokoro/build.sh or 'make checkpatch' locally works for me, whether or not CHECKPATCH is set.
The Makefile in the root directory is setting CHECKPATCH unconditionally, but it looks like the one under driver/linux uses the existing value if there is one. Maybe we should change that?
What error do you get when CHECKPATCH is not set?
On Fri, 15 May 2020 at 18:18, Olivier Deprez via Hafnium <hafnium(a)lists.trustedfirmware.org> wrote:
Hi,
Is there a known problem around checkpatch when running kokoro/build.sh locally?
I noticed if CHECKPATCH env variable is already set then kokoro/build.sh fails with:
+ export CROSS_COMPILE=aarch64-linux-gnu-
+ CROSS_COMPILE=aarch64-linux-gnu-
+ cd driver/linux
+ make HAFNIUM_PATH=/home/olidep01/WORK/hafnium-upstream checkpatch
<my-own-checkpatch-dir>/checkpatch.pl -f main.c
Must be run from the top-level dir. of a kernel tree
Makefile:43: recipe for target 'checkpatch' failed
make: *** [checkpatch] Error 2
If I unset CHECKPATCH before running build.sh then it fails at the driver/linux check.
Is this something known and/or needing fix?
Regards,
Olivier.
That's weird, I haven't seen that before. Running either kokoro/build.sh or
'make checkpatch' locally works for me, whether or not CHECKPATCH is set.
The Makefile in the root directory is setting CHECKPATCH unconditionally,
but it looks like the one under driver/linux uses the existing value if
there is one. Maybe we should change that?
What error do you get when CHECKPATCH is not set?
On Fri, 15 May 2020 at 18:18, Olivier Deprez via Hafnium <
hafnium(a)lists.trustedfirmware.org> wrote:
> Hi,
>
> Is there a known problem around checkpatch when running kokoro/build.sh
> locally?
>
> I noticed if CHECKPATCH env variable is already set then kokoro/build.sh
> fails with:
>
> + export CROSS_COMPILE=aarch64-linux-gnu-
> + CROSS_COMPILE=aarch64-linux-gnu-
> + cd driver/linux
> + make HAFNIUM_PATH=/home/olidep01/WORK/hafnium-upstream checkpatch
> <my-own-checkpatch-dir>/checkpatch.pl -f main.c
> Must be run from the top-level dir. of a kernel tree
> Makefile:43: recipe for target 'checkpatch' failed
> make: *** [checkpatch] Error 2
>
> If I unset CHECKPATCH before running build.sh then it fails at the
> driver/linux check.
>
> Is this something known and/or needing fix?
>
> Regards,
> Olivier.
Hi,
Is there a known problem around checkpatch when running kokoro/build.sh locally?
I noticed if CHECKPATCH env variable is already set then kokoro/build.sh fails with:
+ export CROSS_COMPILE=aarch64-linux-gnu-
+ CROSS_COMPILE=aarch64-linux-gnu-
+ cd driver/linux
+ make HAFNIUM_PATH=/home/olidep01/WORK/hafnium-upstream checkpatch
<my-own-checkpatch-dir>/checkpatch.pl -f main.c
Must be run from the top-level dir. of a kernel tree
Makefile:43: recipe for target 'checkpatch' failed
make: *** [checkpatch] Error 2
If I unset CHECKPATCH before running build.sh then it fails at the driver/linux check.
Is this something known and/or needing fix?
Regards,
Olivier.
On Tue, 21 Apr 2020 at 21:18, Raghu K via Hafnium <
hafnium(a)lists.trustedfirmware.org> wrote:
> All,
>
> I was looking at the Hafnium repositories on trustedfirmware.org and was
> wondering how we envision managing the project on trustedfirmware.org vs
> googlesource. Is the plan to set up this project to be like
> googlesource, i.e. using repo to manage the multiple repositories? There
> appear to be dependencies on third-party tools/prebuilts in the
> BUILD.gn files, so using something like repo even on trustedfirmware.org
> sounds useful.
>
The plan is to continue using git submodules for this. The main Hafnium
repository has submodules for:
prebuilts
driver/linux
project/reference
third_party/dtc
third_party/googletest
third_party/linux
The .gitmodules file currently still points to the old URLs but this will
be updated once the migration is complete. (We are currently blocked on
getting CI running on the new infrastructure.) The advantage of submodules
over repo for this is that it ties specific versions together, which is
important for bisection, or otherwise for checking out old versions and
having them work properly. Often we make changes in one repository which
need to happen at the same time as a change in another repository for the
build to work. Repo doesn't provide a way to link versions together like
this. We were using
repo for some internal Google tools which don't support submodules, but
this shouldn't be necessary anymore.
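For example, checking out an older revision together with the submodule versions it was built against is just standard git (nothing Hafnium-specific assumed here, and <some-older-commit> is a placeholder):
git clone --recurse-submodules https://git.trustedfirmware.org/hafnium/hafnium.git
cd hafnium
git checkout <some-older-commit>
git submodule update --init --recursive   # moves each submodule to the version pinned by that commit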