I'm not sure about the exact issue you're seeing, but one thing that could
be helpful for debugging is to increase the log verbosity level, by
changing the log_level in build/BUILDCONFIG.gn to LOG_LEVEL_VERBOSE and
rebuilding.
Can you also share the command you are using to make the ramdisk? Changing
the order of files in it (e.g. putting the manifest file first) might work
around the issue if it is caused by alignment.
On Mon, 8 Jun 2020 at 04:26, 王华 via Hafnium <
hafnium(a)lists.trustedfirmware.org> wrote:
>
>
>
> Hi David,
> Thanks very much for your help.
> I checked the firmware version with the following command. It seems that
> it is the newest stable FW, but it still fails.
> pi@raspberrypi:~$ vcgencmd bootloader_version
> Apr 16 2020 18:11:26
> version
> a5e1b95f320810c69441557c5f5f0a7f2460dfb8 (release)
>
> timestamp 1587057086
>
>
> I have attached my config.txt, and hafnium.elf is shared at
> https://drive.google.com/file/d/12M-qqTQTF7BampvLOZLhMDZDNuQpoiEB/view?usp=…
>
>
>
>
>
>
>
> On 2020-06-07 22:05:15, "David Brazdil" <dbrazdil(a)google.com> wrote:
>
> Hi Wanghua,
>
>
> Happy to help, the RPi4 port should still be working. We do not
> continuously test it, though, so there is a possibility that the code has
> rotted a bit.
>
> I can share my notes on how I got it working, but you seem to have figured
> out pretty much all of it.
>
>
> You're seeing the hypervisor abort when accessing address 0x19536ee (FAR)
> - that's inside the ramdisk as reported above the error. The value of ESR
> suggests that this is an alignment error (the address is indeed unaligned).
> So Hafnium probably crashed trying to parse something in the ramdisk.
>
>
> Before we try anything else, could you try to update your RPi4 firmware? I
> remember that I had issues with the 'initramfs' option in config.txt - it
> would load random data from the SD card. The error that you're seeing could
> be a result of that.
> Instructions on how to update your firmware:
> https://www.raspberrypi.org/documentation/hardware/raspberrypi/booteeprom.md
>
>
> If that doesn't help, could you please attach your
> out/reference/rpi4_clang/hafnium.elf? It would be good to know which
> function is at the crashing PC 0x8b0dc.
>
>
> David
>
>
>
>
> On Sun, Jun 7, 2020 at 2:08 PM 王华 via Hafnium <
> hafnium(a)lists.trustedfirmware.org> wrote:
>
> Hi all :
>
> Has anyone brought up Hafnium on a Raspberry Pi 4 board? I tried the
> following steps but failed:
>
> 1. Make the Hafnium RAM disk from an aarch64 build of the Pi 4 kernel, a
> RAM disk for Linux, and manifest.dtb.
>
> The manifest.dtb is built from:
>
> /dts-v1/;
>
>
> / {
>
> │ hypervisor {
>
> │ │ compatible = "hafnium,hafnium";
>
> │ │ vm1 {
>
> │ │ │ debug_name = "Linux VM";
>
> │ │ │ kernel_filename = "vmlinuz";
>
> │ │ │ ramdisk_filename = "initrd.img";
>
> │ │ };
>
> │ };
>
> };
>
> 2. Copy bl31.bin, hafnium.bin and initrd.img to the FAT32 boot partition
> and configure config.txt. Power up the unit; the UART error log is:
>
> NOTICE: BL31: v2.3(debug):v2.3-109-g771c676b1
>
> NOTICE: BL31: Built : 15:49:37, Jun 1 2020
>
> INFO: Changed device tree to advertise PSCI.
>
> INFO: ARM GICv2 driver initialized
>
> INFO: BL31: Initializing runtime services
>
> INFO: BL31: cortex_a72: CPU workaround for 859971 was applied
>
> INFO: BL31: cortex_a72: CPU workaround for cve_2017_5715 was applied
>
> INFO: BL31: cortex_a72: CPU workaround for cve_2018_3639 was applied
>
> INFO: BL31: Preparing for EL3 exit to normal world
>
> INFO: Entry point address = 0x80000
>
> INFO: SPSR = 0x3c9
>
> NOTICE: Initialising hafnium
>
> INFO: text: 0x80000 - 0x97000
>
> INFO: rodata: 0x97000 - 0x9a000
>
> INFO: data: 0x9a000 - 0x117000
>
> INFO: Supported bits in physical address: 44
>
> INFO: Stage 2 has 4 page table levels with 1 pages at the root.
>
> INFO: Found PSCI version: 0x10001
>
> INFO: Memory range: 0x0 - 0x3b3fffff
>
> INFO: Memory range: 0x40000000 - 0xfbffffff
>
> INFO: Ramdisk range: 0x1800000 - 0x353dbff
>
> ERROR: Data abort: pc=0x8b0dc, esr=0x96000021, ec=0x25, far=0x19536ee
>
> Panic: EL2 exception
>
> Can anyone help me with this issue, or share how to bring up Hafnium on
> the Pi 4? Thanks very much!
>
>
> ------
>
> By Wanghua
>
>
>
>
> Best Regards!
>
> --
> Hafnium mailing list
> Hafnium(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/hafnium
>
>
>
>
>
>
> --
> Hafnium mailing list
> Hafnium(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/hafnium
>
Hi All,
At Arm, we are experimenting with running OP-TEE under Hafnium as the SPMC in S-EL2. We have been debugging a Stage 2 fault that OP-TEE runs into during a test that shares memory (xtest 1003). It seems this is due to a bug in Hafnium, but we want to be sure before posting a fix. Some thoughts below to this end. Apologies for the verbosity, but I hope you will appreciate that it is required.
The fault occurs when OP-TEE tries to access a memory region that was shared with it by the OP-TEE driver in Linux, i.e. the driver has called FFA_MEM_SHARE to share the memory, OP-TEE has called FFA_MEM_RETRIEVE_REQ to map it in its S2, and Hf has called FFA_MEM_RETRIEVE_RESP to describe the IPA range to OP-TEE. So the S2 tables are created correctly before OP-TEE tries to use them.
The S2 fault is an L3 Translation fault. The L3 descriptor in the S2 tables is NULL when the fault occurs, so this makes sense. It also implies that the translation is not cached in the TLBs.
The key thing is that the fault only occurs when cache state modelling is turned on in the FVP_Base_RevC-2xAEMv8A model we are using for development. The fault occurs both when the S2 tables are created and accessed on the same PE and when they are created and accessed on different PEs; it does not matter whether the PEs are in the same or different clusters. The fault also occurs both with and without a hypervisor (Hf) in the Normal world, so the presence of Hf in EL2 is not a factor.
We noticed that Hf marks its internal memory as outer-shareable. See [1] and [2]. It uses inner-shareable for S2 PTWs though. See [3]. This is a mismatch of memory attributes as per Page 2563 in ARM DDI 0487F.b. The start of the text is quoted below.
"The rules about mismatched attributes given in Mismatched memory attributes…”
And indeed, the fault is not seen if we mark Hf's internal memory as inner-shareable to match the PTWs. The DSBs after creating the S2 tables in [4] are for the inner-shareable domain. It seems that the inner-shareable PTW is unable to observe the outer-shareable page table write; using the inner-shareable attribute for the internal memory makes the write observable.
Alternatively, if we change the shareability of PTWs in VTCR_EL2 to outer-shareable, then the fault is no longer observed. It is not clear how the PTWs and page table writes are synchronised in this case without a DSB OSH, though this is not a violation of the architecture AFAIU.
It seems that it would be worth aligning these attributes.
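To make that concrete, aligning the attributes would amount to something like the sketch below. The SH[1:0] encodings are architectural; the macro names are only illustrative and not necessarily the ones Hafnium defines:

#include <stdint.h>

/* Architectural SH[1:0] encodings in block/page descriptors (Arm ARM). */
#define SH_NON_SHAREABLE   UINT64_C(0) /* 0b00 */
#define SH_OUTER_SHAREABLE UINT64_C(2) /* 0b10 */
#define SH_INNER_SHAREABLE UINT64_C(3) /* 0b11 */

/* Today: Hf maps its own (stage 1) normal memory as outer-shareable. */
#define STAGE1_NORMAL_MEM_SH SH_OUTER_SHAREABLE

/*
 * Proposed: inner-shareable, so the attribute matches VTCR_EL2.SH0 (which
 * selects inner-shareable stage 2 PTWs) and the DSB ISH barriers issued
 * after the S2 tables are updated.
 */
/* #define STAGE1_NORMAL_MEM_SH SH_INNER_SHAREABLE */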
The next question is why Hf uses the outer-shareable attribute for its internal memory in the first place. The recommendation seems to be to use inner-shareable; see [5] and [6].
So we are wondering if this should be fixed too. Please let me know if we have misunderstood anything so far. Happy to post a patch if not, or to provide more information.
Cheers,
Achin
[1] https://hafnium.googlesource.com/hafnium/+/refs/heads/master/src/arch/aarch…
[2] https://hafnium.googlesource.com/hafnium/+/refs/heads/master/src/mm.c#1043
[3] https://hafnium.googlesource.com/hafnium/+/refs/heads/master/src/arch/aarch…
[4] https://hafnium.googlesource.com/hafnium/+/refs/heads/master/src/arch/aarch…
[5] "Shareable Normal memory” in Pg. 154 in ARM DDI 0487F.b
[6] https://linux-arm-kernel.infradead.narkive.com/RZHvk1cT/question-how-can-we…
Hi Wanghua,
Happy to help, the RPi4 port should still be working. We do not
continuously test it, though, so there is a possibility that the code has
rotted a bit.
I can share my notes on how I got it working, but you seem to have figured
out pretty much all of it.
You're seeing the hypervisor abort when accessing address 0x19536ee (FAR) -
that's inside the ramdisk as reported above the error. The value of ESR
suggests that this is an alignment error (the address is indeed unaligned).
So Hafnium probably crashed trying to parse something in the ramdisk.
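For reference, the syndrome decodes roughly as follows; this is a minimal sketch based on the ESR_EL2 field layout in the Arm ARM, not Hafnium code:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t esr = 0x96000021;
	uint64_t ec = (esr >> 26) & 0x3f; /* Exception Class */
	uint64_t dfsc = esr & 0x3f;       /* Data Fault Status Code */

	/* EC 0x25: data abort taken without a change in exception level. */
	printf("EC   = 0x%llx\n", (unsigned long long)ec);
	/* DFSC 0x21: alignment fault, which matches the unaligned FAR. */
	printf("DFSC = 0x%llx\n", (unsigned long long)dfsc);
	return 0;
}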
Before we try anything else, could you try to update your RPi4 firmware? I
remember that I had issues with the 'initramfs' option in config.txt - it
would load random data from the SD card. The error that you're seeing could
be a result of that.
Instructions on how to update your firmware:
https://www.raspberrypi.org/documentation/hardware/raspberrypi/booteeprom.md
If that doesn't help, could you please attach
your out/reference/rpi4_clang/hafnium.elf? It would be good to know which
function is at the crashing PC 0x8b0dc.
David
On Sun, Jun 7, 2020 at 2:08 PM 王华 via Hafnium <
hafnium(a)lists.trustedfirmware.org> wrote:
> Hi all :
>
> Has anyone brought up Hafnium on a Raspberry Pi 4 board? I tried the
> following steps but failed:
>
> 1. Make the Hafnium RAM disk from an aarch64 build of the Pi 4 kernel, a
> RAM disk for Linux, and manifest.dtb.
>
> The manifest.dtb is built from:
>
> /dts-v1/;
>
>
> / {
>
> │ hypervisor {
>
> │ │ compatible = "hafnium,hafnium";
>
> │ │ vm1 {
>
> │ │ │ debug_name = "Linux VM";
>
> │ │ │ kernel_filename = "vmlinuz";
>
> │ │ │ ramdisk_filename = "initrd.img";
>
> │ │ };
>
> │ };
>
> };
>
> 2. Copy bl31.bin, hafnium.bin and initrd.img to the FAT32 boot partition
> and configure config.txt. Power up the unit; the UART error log is:
>
> NOTICE: BL31: v2.3(debug):v2.3-109-g771c676b1
>
> NOTICE: BL31: Built : 15:49:37, Jun 1 2020
>
> INFO: Changed device tree to advertise PSCI.
>
> INFO: ARM GICv2 driver initialized
>
> INFO: BL31: Initializing runtime services
>
> INFO: BL31: cortex_a72: CPU workaround for 859971 was applied
>
> INFO: BL31: cortex_a72: CPU workaround for cve_2017_5715 was applied
>
> INFO: BL31: cortex_a72: CPU workaround for cve_2018_3639 was applied
>
> INFO: BL31: Preparing for EL3 exit to normal world
>
> INFO: Entry point address = 0x80000
>
> INFO: SPSR = 0x3c9
>
> NOTICE: Initialising hafnium
>
> INFO: text: 0x80000 - 0x97000
>
> INFO: rodata: 0x97000 - 0x9a000
>
> INFO: data: 0x9a000 - 0x117000
>
> INFO: Supported bits in physical address: 44
>
> INFO: Stage 2 has 4 page table levels with 1 pages at the root.
>
> INFO: Found PSCI version: 0x10001
>
> INFO: Memory range: 0x0 - 0x3b3fffff
>
> INFO: Memory range: 0x40000000 - 0xfbffffff
>
> INFO: Ramdisk range: 0x1800000 - 0x353dbff
>
> ERROR: Data abort: pc=0x8b0dc, esr=0x96000021, ec=0x25, far=0x19536ee
>
> Panic: EL2 exception
>
> Can anyone help me with this issue, or share how to bring up Hafnium on
> the Pi 4? Thanks very much!
>
>
> ------
>
> By Wanghua
>
>
>
>
> Best Regards!
>
> --
> Hafnium mailing list
> Hafnium(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/hafnium
>
Hi Raghu,
Howdy! Comments in line below.
> On 4 Jun 2020, at 16:21, Raghu K via Hafnium <hafnium(a)lists.trustedfirmware.org> wrote:
>
> Hi Achin,
>
> Would you mind elaborating more on why the SPM needs to determine the security state and why it is important to do this without trusting the SP? When you say SPM, it sounds like you are talking about, for example, the SPMD running in EL3, which is not part of the SPMC that perhaps runs in S-EL2, and the SPMD may need to know this to figure out how to map a particular physical page. Is that the use case you are thinking about?
So this is in the context of the PSA FF-A memory management ABIs. Also, I have the S-EL2 SPMC case in mind. The SPMD in EL3 does not participate in memory management in this case when it comes to managing any architectural state, i.e. translation tables, control registers, etc.
Say SP0 invokes FFA_MEM_SHARE to share a single page A with SP1. The SPMC would need to map page A in SP1's stage 2 tables. To do this, it would need to determine whether the IPA of page A belongs to the Secure or Non-secure IPA space. This is under the assumption that some memory ranges in SP0's IPA space will be Non-secure.
IMO, this information can be determined in one of the following ways:
1. Perform a PTW in SW to determine whether the IPA is mapped in the tables referenced by VSTTBR_EL2 or VTTBR_EL2. I am assuming the SPMC maintains separate S2 translations for the Secure and NS address spaces.
2. Through an internal data structure which tracks the attributes of a memory region assigned to a guest.
3. SP0 specifies the security state of page A in FFA_MEM_SHARE. The spec does not cover this currently. However, the SPMC cannot trust that the SP0 is providing the right security state and must verify this independently anyways.
1 seems clunky. 2 is not done in upstream Hf. 3 does not really help.
I think I had misunderstood that an AT* instruction could be used. There do not seem to be any in the Arm ARM that only perform an IPA-to-PA, i.e. S2-only, translation.
So I am wondering what can be done to solve this problem assuming we agree that this is a problem in the first place.
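For what it's worth, option 2 could look something like the sketch below. This is purely illustrative and does not exist in upstream Hf; all of the names are hypothetical:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical record of an IPA range and its security state. */
struct ipa_range {
	uint64_t begin; /* inclusive */
	uint64_t end;   /* exclusive */
	bool ns;        /* true if the range is in the NS IPA space */
};

/* Hypothetical per-SP table of the ranges assigned to it. */
struct sp_mem_map {
	const struct ipa_range *ranges;
	size_t count;
};

/*
 * Returns true and sets *ns if the IPA is tracked for this SP; false if the
 * SPMC has no record of it, in which case the FFA_MEM_SHARE would be
 * rejected.
 */
static bool ipa_security_state(const struct sp_mem_map *map, uint64_t ipa,
			       bool *ns)
{
	for (size_t i = 0; i < map->count; i++) {
		if (ipa >= map->ranges[i].begin && ipa < map->ranges[i].end) {
			*ns = map->ranges[i].ns;
			return true;
		}
	}
	return false;
}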
Hth,
Cheers,
Achin
>
> Thanks
> Raghu
>
> On 6/4/20 3:07 AM, Achin Gupta via Hafnium wrote:
>> Hi All,
>>
>> I am thinking of a scenario where an SP shares Non-secure memory with one or more SPs or VMs. The NS memory region could have been donated to the SP by a VM earlier (far-fetched but possible).
>>
>> The question is how the SPM determines the security state of the memory region being shared by the SP.
>>
>> It is especially important that the SPM does this without trusting the SP.
>>
>> I don't think it should rely on the AT* instructions. The SP could change the security state of the region in S1. AFAIK, there are no AT* instructions that only do S2 walks with an IPA as input.
>>
>> So is the only option to perform a walk in both the Secure and Non-secure S2 tables to determine where the address is mapped?
>>
>> This seems a bit clunky, so I am wondering if I am missing anything and there is an easier way to do this.
>>
>> What do you reckon?
>>
>> cheers,
>> Achin
>
> --
> Hafnium mailing list
> Hafnium(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/hafnium
Hi all :
Has anyone brought up Hafnium on a Raspberry Pi 4 board? I tried the following steps but failed:
1. Make the Hafnium RAM disk from an aarch64 build of the Pi 4 kernel, a RAM disk for Linux, and manifest.dtb.
The manifest.dtb is built from:
/dts-v1/;
/ {
│ hypervisor {
│ │ compatible = "hafnium,hafnium";
│ │ vm1 {
│ │ │ debug_name = "Linux VM";
│ │ │ kernel_filename = "vmlinuz";
│ │ │ ramdisk_filename = "initrd.img";
│ │ };
│ };
};
2. Copy bl31.bin, hafnium.bin and initrd.img to the FAT32 boot partition and configure config.txt. Power up the unit; the UART error log is:
NOTICE: BL31: v2.3(debug):v2.3-109-g771c676b1
NOTICE: BL31: Built : 15:49:37, Jun 1 2020
INFO: Changed device tree to advertise PSCI.
INFO: ARM GICv2 driver initialized
INFO: BL31: Initializing runtime services
INFO: BL31: cortex_a72: CPU workaround for 859971 was applied
INFO: BL31: cortex_a72: CPU workaround for cve_2017_5715 was applied
INFO: BL31: cortex_a72: CPU workaround for cve_2018_3639 was applied
INFO: BL31: Preparing for EL3 exit to normal world
INFO: Entry point address = 0x80000
INFO: SPSR = 0x3c9
NOTICE: Initialising hafnium
INFO: text: 0x80000 - 0x97000
INFO: rodata: 0x97000 - 0x9a000
INFO: data: 0x9a000 - 0x117000
INFO: Supported bits in physical address: 44
INFO: Stage 2 has 4 page table levels with 1 pages at the root.
INFO: Found PSCI version: 0x10001
INFO: Memory range: 0x0 - 0x3b3fffff
INFO: Memory range: 0x40000000 - 0xfbffffff
INFO: Ramdisk range: 0x1800000 - 0x353dbff
ERROR: Data abort: pc=0x8b0dc, esr=0x96000021, ec=0x25, far=0x19536ee
Panic: EL2 exception
Can anyone help me with this issue, or share how to bring up Hafnium on the Pi 4? Thanks very much!
------
By Wanghua
Best Regards!
Hi Achin,
Would you mind elaborating more on why the SPM needs to determine the
security state and why it is important to do this without trusting the
SP? When you say SPM, it sounds like you are talking about, for example, the
SPMD running in EL3, which is not part of the SPMC that perhaps runs in
S-EL2, and the SPMD may need to know this to figure out how to map a
particular physical page. Is that the use case you are thinking about?
Thanks
Raghu
On 6/4/20 3:07 AM, Achin Gupta via Hafnium wrote:
> Hi All,
>
> I am thinking of a scenario where an SP shares Non-secure memory with one or more SPs or VMs. The NS memory region could have been donated to the SP by a VM earlier (far-fetched but possible).
>
> The question is how the SPM determines the security state of the memory region being shared by the SP.
>
> It is especially important that the SPM does this without trusting the SP.
>
> I don't think it should rely on the AT* instructions. The SP could change the security state of the region in S1. AFAIK, there are no AT* instructions that only do S2 walks with an IPA as input.
>
> So is the only option to perform a walk in both the Secure and Non-secure S2 tables to determine where the address is mapped?
>
> This seems a bit clunky, so I am wondering if I am missing anything and there is an easier way to do this.
>
> What do you reckon?
>
> cheers,
> Achin
Hi All,
I am thinking of a scenario where an SP shares Non-secure memory with one or more SPs or VMs. The NS memory region could have been donated to the SP by a VM earlier (far-fetched but possible).
The question is how the SPM determines the security state of the memory region being shared by the SP.
It is especially important that the SPM does this without trusting the SP.
I don't think it should rely on the AT* instructions. The SP could change the security state of the region in S1. AFAIK, there are no AT* instructions that only do S2 walks with an IPA as input.
So is the only option to perform a walk in both the Secure and Non-secure S2 tables to determine where the address is mapped?
This seems a bit clunky, so I am wondering if I am missing anything and there is an easier way to do this.
What do you reckon?
cheers,
Achin
Hi,
Hafnium sets CPTR_EL2.TTA (bit 28), which traps accesses to trace system registers to EL2.
https://git.trustedfirmware.org/hafnium/hafnium.git/tree/src/arch/aarch64/s…
However, the CPTR_EL2 register has a different bit-field layout depending on the HCR_EL2.E2H state.
When HCR_EL2.E2H=0 (the Hafnium case), the CPTR_EL2.TTA bit position is 20.
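For reference, the two layouts differ as below; the bit positions are from the Arm ARM register descriptions and the macro names are only illustrative:

#include <stdint.h>

/* CPTR_EL2.TTA when HCR_EL2.E2H == 0 (the layout Hafnium runs with). */
#define CPTR_EL2_TTA_E2H0 (UINT64_C(1) << 20)

/* CPTR_EL2.TTA when HCR_EL2.E2H == 1 (CPACR_EL1-style layout). */
#define CPTR_EL2_TTA_E2H1 (UINT64_C(1) << 28)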
Is this a slight issue needing a fix?
Regards,
Olivier.
Hi all,
It would be useful to have a chat group of some sort for quick questions
that don't warrant an email thread. What is the preferred solution for this
for Arm and Trusted Firmware? I saw that there is already an Arm Developer
Ecosystem Discord server; would a channel there be appropriate? What does
TF-A use?
Hello,
I've run into a few issues while implementing fragmentation for memory
sharing based on the FF-A 1.0 spec, which I think need clarifying or fixing.
1. From section 13.2.2.3 points 2-6, it sounds like the 'sender' in the
case of a FFA_MEM_RETRIEVE_RESP_32 from the SPM to the non-secure
hypervisor is the SPM. Am I correct in understanding that this means that
the sender id for FFA_MEM_FRAG_TX and FFA_MEM_FRAG_RX should thus always be
the ID of the SPM, when the hypervisor is retrieving a memory region from
the SPM for the purposes of a reclaim operation from a normal world VM?
2. When a normal world VM tries to share memory with a secure partition via
FFA_MEM_SHARE, it may be that the buffer between the hypervisor and the SPM
is busy because another VM is also sharing memory or doing something else
that uses the buffer, on a different physical CPU. This could happen either
for the initial fragment sent via FFA_MEM_SHARE or for a subsequent
fragment sent with FFA_MEM_FRAG_TX. The spec currently says in section
12.3.1.2, point 13.2 that the hypervisor must return ABORTED in the
FFA_MEM_SHARE case, and doesn't allow any relevant error codes in the
description of FFA_MEM_FRAG_TX in section 13.2.2.5, though it does mention
ABORTED in section 13.2.2.3 point 8.
However, the buffer being busy need not mean that the whole transaction is
aborted; it should be possible for the sender to try again after a short
time, when hopefully the buffer is available again. So it would make more
sense for the hypervisor to return an FFA_BUSY error in this case, as it
does in other cases where a buffer is currently unavailable but the caller
can try again.
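Concretely, the handling we have in mind would look roughly like the sketch below; every name here is invented for illustration and the status codes merely stand in for the FF-A ones:

#include <stdbool.h>

enum ffa_status { FFA_SUCCESS, FFA_BUSY, FFA_ABORTED };

/* Stub: would really check whether the hypervisor/SPM buffer is in use. */
static bool spm_buffer_busy(void)
{
	return false;
}

/* Stub: would really copy the fragment into the buffer and call the SPM. */
static enum ffa_status forward_fragment_to_spm(const void *frag, unsigned len)
{
	(void)frag;
	(void)len;
	return FFA_SUCCESS;
}

static enum ffa_status handle_mem_frag_tx(const void *frag, unsigned len)
{
	if (spm_buffer_busy()) {
		/*
		 * Don't abort the whole transaction: the sender can retry
		 * this fragment once the buffer is free again.
		 */
		return FFA_BUSY;
	}
	return forward_fragment_to_spm(frag, len);
}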
Hi Andrew,
Note that I do git clean -fdx prior to trying with CHECKPATCH set/unset, because it looks like there is some dependency on local ninja build files.
When CHECKPATCH is set I'm still seeing the issue mentioned earlier.
When CHECKPATCH is not set, I got failures related to missing Python dependencies on my Ubuntu host (ply, pathlib, python-git). I don't recall seeing those dependencies listed in the Hafnium pages, but that's fair enough. After installing those Python libs, the checkpatch step now passes.
Obviously I'm not facing this issue when running through the hermetic Docker build.
So I guess it's a low-priority glitch; I'm not sure if it really requires a fix or a note in the documentation.
Regards,
Olivier.
________________________________________
From: Andrew Walbran
Sent: Monday, May 18, 2020 13:32
To: Olivier Deprez
Cc: hafnium(a)lists.trustedfirmware.org
Subject: Re: [Hafnium] kokoro/build and CHECKPATCH
That's weird, I haven't seen that before. Running either kokoro/build.sh or 'make checkpatch' locally works for me, whether or not CHECKPATCH is set.
The Makefile in the root directory is setting CHECKPATCH unconditionally, but it looks like the one under driver/linux uses the existing value if there is one. Maybe we should change that?
What error do you get when CHECKPATCH is not set?
On Fri, 15 May 2020 at 18:18, Olivier Deprez via Hafnium <hafnium(a)lists.trustedfirmware.org> wrote:
Hi,
Is there a known problem around checkpatch when running kokoro/build.sh locally?
I noticed if CHECKPATCH env variable is already set then kokoro/build.sh fails with:
+ export CROSS_COMPILE=aarch64-linux-gnu-
+ CROSS_COMPILE=aarch64-linux-gnu-
+ cd driver/linux
+ make HAFNIUM_PATH=/home/olidep01/WORK/hafnium-upstream checkpatch
<my-own-checkpatch-dir>/checkpatch.pl -f main.c
Must be run from the top-level dir. of a kernel tree
Makefile:43: recipe for target 'checkpatch' failed
make: *** [checkpatch] Error 2
If I unset CHECKPATCH before running build.sh then it fails at the driver/linux check.
Is this something known and/or needing fix?
Regards,
Olivier.
--
Hafnium mailing list
Hafnium(a)lists.trustedfirmware.org
https://lists.trustedfirmware.org/mailman/listinfo/hafnium
That's weird, I haven't seen that before. Running either kokoro/build.sh or
'make checkpatch' locally works for me, whether or not CHECKPATCH is set.
The Makefile in the root directory is setting CHECKPATCH unconditionally,
but it looks like the one under driver/linux uses the existing value if
there is one. Maybe we should change that?
What error do you get when CHECKPATCH is not set?
On Fri, 15 May 2020 at 18:18, Olivier Deprez via Hafnium <
hafnium(a)lists.trustedfirmware.org> wrote:
> Hi,
>
> Is there a known problem around checkpatch when running kokoro/build.sh
> locally?
>
> I noticed if CHECKPATCH env variable is already set then kokoro/build.sh
> fails with:
>
> + export CROSS_COMPILE=aarch64-linux-gnu-
> + CROSS_COMPILE=aarch64-linux-gnu-
> + cd driver/linux
> + make HAFNIUM_PATH=/home/olidep01/WORK/hafnium-upstream checkpatch
> <my-own-checkpatch-dir>/checkpatch.pl -f main.c
> Must be run from the top-level dir. of a kernel tree
> Makefile:43: recipe for target 'checkpatch' failed
> make: *** [checkpatch] Error 2
>
> If I unset CHECKPATCH before running build.sh then it fails at the
> driver/linux check.
>
> Is this something known and/or needing fix?
>
> Regards,
> Olivier.
> --
> Hafnium mailing list
> Hafnium(a)lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/hafnium
>
Hi,
Is there a known problem around checkpatch when running kokoro/build.sh locally?
I noticed if CHECKPATCH env variable is already set then kokoro/build.sh fails with:
+ export CROSS_COMPILE=aarch64-linux-gnu-
+ CROSS_COMPILE=aarch64-linux-gnu-
+ cd driver/linux
+ make HAFNIUM_PATH=/home/olidep01/WORK/hafnium-upstream checkpatch
<my-own-checkpatch-dir>/checkpatch.pl -f main.c
Must be run from the top-level dir. of a kernel tree
Makefile:43: recipe for target 'checkpatch' failed
make: *** [checkpatch] Error 2
If I unset CHECKPATCH before running build.sh then it fails at the driver/linux check.
Is this something known and/or needing fix?
Regards,
Olivier.
On Tue, 21 Apr 2020 at 21:18, Raghu K via Hafnium <
hafnium(a)lists.trustedfirmware.org> wrote:
> All,
>
> I was looking at the hafnium repositories on trusted firmware and was
> wondering how we envision managing the project on trustedfirmware.org vs
> googlesource. Is the plan to set up this project to be like
> googlesource, i.e. using repo to manage the multiple repositories? There
> appear to be dependencies on third-party tools/prebuilts in the
> BUILD.gn files, so using something like repo even on trustedfirmware.org
> sounds useful.
>
The plan is to continue using git submodules for this. The main Hafnium
repository has submodules for:
prebuilts
driver/linux
project/reference
third_party/dtc
third_party/googletest
third_party/linux
The .gitmodules file currently still points to the old URLs but this will
be updated once the migration is complete. (We are currently blocked on
getting CI running on the new infrastructure.) The advantage of submodules
over repo for this is that it ties specific versions together, which is
important for bisection or otherwise checking out old versions to work
properly. Often we make changes in one repository which need to happen at
the same time as a change in another repository for the build to work. Repo
doesn't provide a way to link versions together like this. We were using
repo for some internal Google tools which don't support submodules, but
this shouldn't be necessary anymore.
Hi Raghu,
The transition of the repos, CI and tools is still taking place. In the shorter term we can expect the project management to be very similar to what it was previously, although on TF.org servers; longer term, tighter integration with TF-A will occur, but the details are still being worked out. We expect a TF-A Tech Forum meeting in the near future to cover more details of Hafnium integration and other subjects.
Thanks
Joanna
On 21/04/2020, 21:19, "Hafnium on behalf of Raghu K via Hafnium" <hafnium-bounces(a)lists.trustedfirmware.org on behalf of hafnium(a)lists.trustedfirmware.org> wrote:
All,
I was looking at the hafnium repositories on trusted firmware and was
wondering how we envision managing the project on trustedfirmware.org vs
googlesource. Is the plan to set up this project to be like
googlesource, i.e. using repo to manage the multiple repositories? There
appear to be dependencies on third-party tools/prebuilts in the
BUILD.gn files, so using something like repo even on trustedfirmware.org
sounds useful.
Thanks
Raghu
--
Hafnium mailing list
Hafnium(a)lists.trustedfirmware.org
https://lists.trustedfirmware.org/mailman/listinfo/hafnium
All,
I was looking at the hafnium repositories on trusted firmware and was
wondering how we envision managing the project on trustedfirmware.org vs
googlesource. Is the plan to set up this project to be like
googlesource, i.e. using repo to manage the multiple repositories? There
appear to be dependencies on third-party tools/prebuilts in the
BUILD.gn files, so using something like repo even on trustedfirmware.org
sounds useful.
Thanks
Raghu