Hi Joakim
Thanks for reviewing. Some further comments below.
From: Joakim Bech <joakim.bech@linaro.org>
Sent: 11 July 2019 13:11
To: Dan Handley <Dan.Handley@arm.com>
Cc: tsc@lists.trustedfirmware.org
Subject: Re: [TF-TSC] tf.org security incident handling process (v0.3)
Hi Dan and TF reps,
Thanks for putting this together Dan. I like it; it's short and concise. Nothing big from me, but please find a few comments inline below.
On Wed, 10 Jul 2019 at 18:35, Dan Handley via TSC <tsc@lists.trustedfirmware.org> wrote:
Hi TF TSC
This is a v0.3 update to the proposed tf.org security incident handling process, which I sent previously, incorporating the comments I've received since then.
<snip>
If you would like replies to be encrypted, please provide your public key. Please note the Trusted Firmware security team cannot guarantee that encrypted information will remain encrypted when shared with trusted third parties.
[DH: Is this acceptable? It allows reporters to use encryption without adding too much admin burden on the security team. The alternative is to force everyone receiving embargoed information to provide an encryption key and decrypt/re-encrypt as information is passed around. I think this adds too much overhead without significant security benefit.]
[JB: I do think this is the right level of dealing with this. The communication with the one reporting the issue can (optionally) be encrypted, but for sharing between members of the security team I think it's easier to use things like LDAP protections, Google docs, GitHub private security advisories (https://help.github.com/en/articles/collaborating-in-a-temporary-private-fork-to-resolve-a-security-vulnerability) etc.]
I'm glad we're in agreement. I hadn't seen that GitHub security advisory support before - thanks!
<snip>
- After the primary embargo period, the fix and relevant vulnerability information are shared with registered Trusted Stakeholders (see below section). Fix release may be deferred further if a Trusted Stakeholder requests it within 1 working day (Monday to Friday) of being notified,
[JB: One day is not much time; there is a chance that people on the receiving end miss this, for both good and bad reasons. However, I don't know how many days would be sufficient. In the end, these kinds of things require some discipline and extra effort from both sides.]
I think you're right this is a bit too aggressive. The window for requesting the primary embargo period is extremely short because we expect ESS security teams to operate 24/7 and we want to minimise the delay in proceeding to the next stage when ESSes are unaffected. But Trusted Stakeholders may need more time. We don't want to unnecessarily delay release of fixes but increasing this window a bit is unlikely to cause a problem in practice: If there is no primary embargo period it's likely we'll need at least a few days to prepare a fix anyway; if there is a primary embargo period, it's likely we'll need a secondary embargo period as well. How about making this window 3 working days?
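As a side note, counting the window in working days rather than calendar days matters at weekends. A minimal sketch of how a "3 working days" deferral deadline could be computed (the function name, default, and example dates are illustrative, not part of the process text):

```python
from datetime import date, timedelta

def deferral_deadline(notified: date, working_days: int = 3) -> date:
    """Return the last date by which a Trusted Stakeholder could request
    a deferral, counting only working days (Monday to Friday).

    Illustrative helper only; `working_days=3` reflects the window
    proposed above, not agreed process wording.
    """
    d = notified
    remaining = working_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

# Notified on Friday 12 July 2019: the 3-working-day window runs
# Monday to Wednesday, so the deadline lands on Wednesday 17 July.
print(deferral_deadline(date(2019, 7, 12)))  # → 2019-07-17
```

With a 1-working-day window and a Friday notification, the same logic would push the deadline to the following Monday, which is part of why the short window feels tight in practice.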
<snip>
[DH: Note, this aggressive release of security fixes is aligned with the kernel process. The existing TF-A process allows for early release of fixes, which we generally do in practice, but that process doesn't really specify a fix embargo period. However it does specify a 4 week embargo on the subsequent security advisory. Also note that OP-TEE's fix embargo period is 90 days, which is aligned with Google's policy.]
[JB: Right, our initial OP-TEE embargo period was much shorter, but after getting feedback from our members it was clear that they preferred a longer embargo, which was the reason we went back to 90 days. Did we end up asking other people? TF members? Companies that we know are using TF-A/M?]
Sorry, I haven't sought any wider feedback. This is purely based on the Google and OP-TEE policies, which I (and I think TF-A partners would) find reasonable. Given that the fixes are made much earlier, I don't see anyone objecting to this longer advisory embargo, except perhaps some reporters.
- After the fix is released, details of the security vulnerability are consolidated into a security advisory. This includes a CVE number; the security team will request one if the reporter has not already done so. It also includes credit to the reporter unless they indicate otherwise. Drafts of the security advisory are circulated to the reporter, ESSes and Trusted Stakeholders as they become available.
[JB: Do we want to have our own numbering scheme also? If we believe _all_ issues will get a CVE number then it's not needed. But if there are issues that for whatever reason wouldn't get a CVE, then it's quite useful to have some internal numbering system. From experience, I know that it can become quite messy if you get a report containing lots of potential security issues. Having an internal number to use in commits etc. has proven quite useful.]
Yes, I do expect to have our own numbering scheme for the reasons you state. I thought this could be covered by a supplementary internal process, which we could create after this external process is approved. Let me know if you think we should say something here instead.
- 90 days after the vulnerability was reported, the security advisory is made public at https://www.trustedfirmware.org/. This 90 day window is the public embargo period.
[DH: This public embargo period aligns with Google and OP-TEE processes, although this proposal releases fixes earlier.]
In exceptionally rare cases, the above disclosure plan may be extended in consultation with the reporter, ESSes and Trusted Stakeholders, for example if it is very difficult to develop or deploy a fix, or wider industry consultation is needed.
[DH: I'm accommodating for the Spectre/Meltdown case here]
[JB: So, is it fair to write the complete timeline as:

Fast case:
- Report given to TF: <X days>
- <X days + 1>: first embargo
- <+ 1 day>: second embargo
- <+ 90>: public security advisory
i.e. "X days + 1 + 1 + 90 = X days + 92"

"Worst" case (mentioned in your attached PDF):
- Report given to TF: <X days>
- <X days + 14>: first embargo
- <+ 14 days>: second embargo
- <+ 90>: public security advisory
i.e. "X days + 14 + 14 + 90 = X days + 118"

Or are the dates running in parallel, so to say? Example: public advisory is always X days + 90?]
The latter! It should always be X days + 90. The above says "90 days after the vulnerability was reported". Let me know if I can make this clearer somehow.
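To make the parallel-clock point concrete, a minimal sketch (the helper name is illustrative, not from the process document): the public date depends only on the report date, not on how the embargo stages play out.

```python
from datetime import date, timedelta

PUBLIC_EMBARGO_DAYS = 90  # fixed window from the report date, per the process

def public_advisory_date(reported: date) -> date:
    """The advisory goes public 90 calendar days after the report,
    regardless of whether primary/secondary embargo periods were used.
    (Illustrative helper; the name is not from the process text.)"""
    return reported + timedelta(days=PUBLIC_EMBARGO_DAYS)

# Reported 10 July 2019 → public advisory 8 October 2019, whether or not
# any embargo deferrals happened in between.
print(public_advisory_date(date(2019, 7, 10)))  # → 2019-10-08
```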
<snip>
Note, the security team reserves the right to deny registration to, or revoke membership of, the Trusted Stakeholders list, for example if it has concerns about the confidentiality of embargoed information.
[JB: Only stakeholders list? What about ESS list?]
OK, I'll include that too.
[DH: Note, I've not included severity scoring in this proposal, as I think the only value of a score is helping to determine whether a bug is a security vulnerability or not, which in the end has a subjective element. I'm open to the idea of adding this to the process but I'd prefer it to be optional and aligned with CVSSv3 as used by CVE.]
[JB: Agree, and regarding the process I'm quite open to having a look at CVSSv3. The current OP-TEE process is better than the first one, but it's still a bit rough around the edges. If CVSSv3 works for TF-A/M, then it'll probably also work for OP-TEE. Also, something that I've been missing when doing work like this is an internal process as well. But that's definitely out of scope for this discussion.]
OK, but the process currently doesn't say anything about scoring. I guess we can look at CVSSv3 later if there's a need.
I agree we will need internal process documentation as well but that can be worked through later. I can ask the current TF-A security team to help with this.
Regards
Dan.
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.