A small update on the numbers. As mentioned, the latency increase varies, and I sometimes observe an even bigger one.
The baseline maximum interrupt latency is 9 μs (under normal conditions, i.e. when not calling the PSA PS API), and during a call it sometimes jumps as high as 400 μs, an almost 50-fold increase.
So the maximum latency increase apparently fluctuates quite a bit: roughly between ~90 μs (a 10-fold increase) and ~400 μs (an almost 50-fold increase).

For additional context: nothing else is running in parallel in the application. This is all happening in a simple application built specifically to reproduce the issue.


On Tue, 2024-10-22 at 07:37 +0000, Fontanilles, Tomi via TF-M wrote:
Hi all,

I'm currently looking into an issue reported internally where the maximum latency for a zero-latency, priority 0 interrupt dramatically increases during a call to the PSA Protected Storage (PS) API.
The maximum interrupt latency goes up roughly 10-fold, and sometimes more (it varies), which is not acceptable for code running on the NS side in normal operation.

It's as simple as calling psa_ps_set(1, 4, buf, PSA_STORAGE_FLAG_NONE) once.
During that call (which I am told was also seen to be unreasonably long: 600 ms for 4 bytes, 1 s for 1000 bytes), the maximum interrupt latency goes up.
The maximum does not seem to vary much as the size of the stored asset increases (tested up to 4000 bytes), though it can reach up to 20 times the normal interrupt latency.
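For reference, the reproduction boils down to something like the sketch below (the function name and buffer contents are illustrative; the call itself is the standard PSA Protected Storage API, and this fragment only builds in an NS application linked against the TF-M NS interface):

```c
/* Minimal repro sketch (illustrative): a single 4-byte psa_ps_set()
 * call from the NS side, during which the zero-latency interrupt's
 * maximum latency is observed to spike. */
#include <stdint.h>
#include "psa/protected_storage.h"

void reproduce_latency_spike(void)
{
    uint8_t buf[4] = {0x01, 0x02, 0x03, 0x04}; /* arbitrary payload */

    /* UID 1, 4 bytes, no special create flags, as in the call above. */
    psa_status_t status =
        psa_ps_set(1, sizeof(buf), buf, PSA_STORAGE_FLAG_NONE);

    (void)status; /* the latency spike is observed regardless of the result */
}
```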

Other functions were also tested (psa_generate_random(), psa_hash_compute()); those do not provoke an increased interrupt latency. So perhaps the issue is specific to PS, or perhaps some other calls can trigger it as well.

Does anyone have an explanation for this?
And even more importantly, can that interrupt latency be reduced, ideally down to normal levels?

Thanks in advance.
Tomi