[TF-TSC] tf.org security incident handling process (v0.3)

Joakim Bech joakim.bech at linaro.org
Thu Jul 11 12:10:49 UTC 2019

Hi Dan and TF reps,

Thanks for putting this together, Dan. I like it; it's short and
concise. Nothing big from me, but please find a few comments inline.

On Wed, 10 Jul 2019 at 18:35, Dan Handley via TSC <
tsc at lists.trustedfirmware.org> wrote:

> This is a v0.3 update to the proposed tf.org security incident handling
> process, which I sent previously, incorporating the comments I've had since
> then.
> Changes:
> * Changed ESS 1 day response window to be from date of notification, not
> from date of fix availability.
> * Restored the 7 days embargo limit with 14 days in exceptional cases, to
> align with kernel process.
> * Explicitly stated that it is permitted to point others to a public
> vulnerability fix during an embargo period, as long as this is not
> identified as a vulnerability.
> I propose that the deadline for feedback is the end of next week (19th
> July), with a view to approving the process at the August TSC meeting. We
> can discuss this more at tomorrow's TSC meeting.
> Regards
> Dan.
> ---
> This security incident handling process proposal is broadly based on the
> kernel process [1], with influence from the existing TF-A [2] and OP-TEE
> [3] processes.

> [DH: Note the contrast with the kernel process: mailto:security at kernel.org
> is exclusively about fixing the vulnerability; disclosure is delegated to
> mailto:linux-distros at vs.openwall.org [4]. This proposal combines these
> activities.]
> Reporting
> =========
> If you think you have found a security vulnerability, then please send an
> email to the Trusted Firmware security team at
> security at trustedfirmware.org. This is a private team of security officers
> who will help verify the security vulnerability, develop and release a fix,
> and disclose the vulnerability details responsibly. Please give us time to
> implement the disclosure plan described in the next section before going
> public. We do our best to respond and fix any issues as soon as possible.
> As with any bug, the more information you provide, the easier it is to
> diagnose and fix. If you already have a fix, please include it with your
> report, as that can speed up the process considerably. Any exploit code is
> very helpful. The security team may bring in extra help from area
> maintainers to understand and fix the vulnerability. The security team may
> share any information you provide with trusted third parties and eventually
> the public unless you request otherwise.
> [DH: Note, mailto:security at kernel.org provides stronger confidentiality
> guarantees because it is only interested in fixes, not disclosure. In
> practice, I'd expect members of the security team to be sensitive with
> confidential information as with any other open source interactions, and to
> get explicit approval from the reporter for disclosure of sensitive
> information, e.g. identity, organization, product information,...]
> You may use this PGP/GPG key [insert link] for encrypting the
> vulnerability information. This key is also available at
> http://keyserver.pgp.com and LDAP port 389 of the same server. The
> fingerprint for this key is:
> [Insert Fingerprint]
> If you would like replies to be encrypted, please provide your public key.
> Please note the Trusted Firmware security team cannot guarantee that
> encrypted information will remain encrypted when shared with trusted third
> parties.
> [DH: Is this acceptable? It allows reporters to use encryption without
> adding too much admin burden on the security team. The alternative is to
> force everyone receiving embargoed information to provide an encryption key
> and decrypt/re-encrypt as information is passed around. I think this adds
> too much overhead without significant security benefit.]
[JB: I do think this is the right level of dealing with this. The
communication with the one reporting the issue can (optionally) be
encrypted, but for sharing between members of the security team I think
it's easier to use things like LDAP protections, Google Docs, or GitHub
private security advisories (…).]

> If the security team consider the bug not to be a security vulnerability,
> you will be informed and the bug directed to the standard bug fixing
> process.
> Disclosure
> ==========
> The general security vulnerability disclosure plan is as follows:
> 1. For confirmed security vulnerabilities, develop a robust fix as soon as
> possible. During this time, information is only shared with the reporter,
> those needed to develop the fix and Especially Sensitive Stakeholders
> (ESSes). See the "ESS and Trusted Stakeholder registration" section below
> for more information about ESSes.
> 2. After a robust fix becomes available, our preference is to publicly
> release it as soon as possible. This will automatically happen if the
> vulnerability is already publicly known. However, release may be deferred
> if the reporter or an ESS requests it within 1 calendar day of being
> notified, and the security team agree the criticality of the vulnerability
> requires more time. The requested deferral period should be as short as
> possible, up to 7 calendar days after the fix becomes available, with an
> exceptional extension to 14 calendar days. The only valid reason for
> release deferral is to accommodate deployment of the fix by ESSes. If it is
> immediately clear that ESSes are unaffected by the vulnerability then this
> stage is skipped. This 0-14 day deferral is the primary embargo period.
> [DH: Note, this stage is only relevant for TF-A currently.]
> [DH: Note, this assumes that ESS security teams operate 7 days a week.]
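A tiny sketch of the step-2 arithmetic as I read it (the function name and
dates below are mine, purely illustrative; the process text above is
authoritative): the release date is the date the fix becomes available plus
any granted deferral of 0-7 days, or up to 14 in exceptional cases.

```python
from datetime import date, timedelta

def primary_embargo_release(fix_ready, deferral_days):
    """Earliest public release date for a fix under step 2 (illustrative).

    deferral_days is what the reporter or an ESS requested within
    1 calendar day of notification: 0 (no deferral), up to 7 normally,
    or up to 14 in exceptional cases.
    """
    if not 0 <= deferral_days <= 14:
        raise ValueError("deferral must be 0-14 calendar days")
    return fix_ready + timedelta(days=deferral_days)

# Fix ready on 1 Aug; an ESS is granted the full exceptional deferral.
print(primary_embargo_release(date(2019, 8, 1), 14))  # 2019-08-15
```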
> 3. After the primary embargo period, the fix and relevant vulnerability
> information are shared with registered Trusted Stakeholders (see below
> section). Fix release may be deferred further if a Trusted Stakeholder
> requests it within 1 working day (Monday to Friday) of being notified,

[JB: One day is not much time; there is a chance that people on the
receiving end miss this, for both good and bad reasons. However, I don't
know how many days would be sufficient. In the end, these kinds of
things require some discipline and extra effort from both sides.]

> and the security team agree the criticality of the vulnerability requires
> more time. The requested deferral period should be as short as possible, up
> to 7 calendar days after the Trusted Stakeholder is notified, with an
> exceptional extension to 14 days. The only valid reason for further release
> deferral is to accommodate deployment of the fix by a Trusted Stakeholder.
> This further 1-14 day deferral is the secondary embargo period.
> Note, security fixes contain the minimum information required to fix the
> bug. The accompanying vulnerability details are disclosed later.
> [DH: Note, the Trusted Stakeholder required response time is slightly
> relaxed compared to the ESS response time; 1 working day as opposed to 1
> calendar day.]
> [DH: Note, this aggressive release of security fixes is aligned with the
> kernel process. The existing TF-A process allows for early release of
> fixes, which we generally do in practice, but that process doesn't really
> specify a fix embargo period. However it does specify a 4 week embargo on
> the subsequent security advisory. Also note that OP-TEE's fix embargo
> period is 90 days, which is aligned with Google's policy.]
[JB: Right, our initial OP-TEE embargo period was much shorter, but after
getting feedback from our members it was clear that they preferred a longer
embargo, which was the reason we went back to 90 days. Did we end up asking
other people? TF members? Companies that we know are using TF-A/M?]

> 4. After the fix is released, details of the security vulnerability are
> consolidated into a security advisory. This includes a CVE number; the
> security team will request one if the reporter has not already done so. It
> also includes credit to the reporter unless they indicate otherwise. Drafts of
> the security advisory are circulated with the reporter, ESSes and Trusted
> Stakeholders as they become available.
[JB: Do we want to have our own numbering scheme as well? If we believe _all_
issues will get a CVE number then it's not needed. But if there are issues
that for whatever reason wouldn't get a CVE, then it's quite useful to
have some internal numbering system. From experience, I know that it can
become quite messy if you get a report containing lots of potential
security issues. Having an internal number to use in commits etc. has
proven quite useful.]
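If an internal scheme is wanted, it could be as simple as a monotonically
increasing counter per project. A toy sketch (the "TFSA" prefix and class
below are made up for illustration, not part of the proposal):

```python
import itertools

class AdvisoryRegistry:
    """Toy internal-ID allocator of the kind suggested above.

    The prefix and scheme are hypothetical, not part of the proposal.
    """

    def __init__(self, prefix):
        self._prefix = prefix
        self._counter = itertools.count(1)
        self._entries = {}

    def register(self, summary, cve=None):
        # Allocate the next internal ID; a CVE can be attached later.
        internal_id = "%s-%d" % (self._prefix, next(self._counter))
        self._entries[internal_id] = {"summary": summary, "cve": cve}
        return internal_id

reg = AdvisoryRegistry("TFSA")
print(reg.register("example issue"))                       # TFSA-1
print(reg.register("another issue", cve="CVE-2019-0000"))  # TFSA-2
```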

> [DH: Note, the existing TF-A process only shares information with Trusted
> Stakeholders in the form of security advisories. This proposal is more
> aligned with the kernel process and means we can focus on fix development
> by sharing raw vulnerability information in the early stages.]
> 5. 90 days after the vulnerability was reported, the security advisory is
> made public at https://www.trustedfirmware.org/. This 90 day window is
> the public embargo period.
> [DH: This public embargo period aligns with Google and OP-TEE processes,
> although this proposal releases fixes earlier.]
> In exceptionally rare cases, the above disclosure plan may be extended in
> consultation with the reporter, ESSes and Trusted Stakeholders, for example
> if it is very difficult to develop or deploy a fix, or wider industry
> consultation is needed.
> [DH: I'm accommodating for the Spectre/Meltdown case here]

[JB: So, is it fair to write the complete timeline as

Fast case:
| Report given to TF: <X days> | <X days + 1>: first embargo | <+ 1 day>:
second embargo | <+ 90>: public security advisory |
i.e. "X days + 1 + 1 + 90 = X days + 92"

"Worst" case (mentioned in your attached PDF):
| Report given to TF: <X days> | <X days + 14>: first embargo | <+ 14 days>:
second embargo | <+ 90>: public security advisory |
i.e. "X days + 14 + 14 + 90 = X days + 118"

Or are the dates running in parallel, so to speak? Example: public advisory is
always X days + 90?]
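The two scenarios above, read sequentially, reduce to simple sums (whether
the windows stack or run in parallel is exactly the open question; the
function name is mine):

```python
def total_days_sequential(primary, secondary, public=90):
    """Days from report to public advisory if the two embargo periods
    and the 90-day advisory window stack sequentially. Under the
    parallel reading, the answer would instead always be 90."""
    return primary + secondary + public

print(total_days_sequential(1, 1))    # 92  (fast case)
print(total_days_sequential(14, 14))  # 118 ("worst" case)
```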

> Handling embargoed information
> ==============================
> On receipt of embargoed information, you must not disclose any of the
> provided information beyond the group of people in your organization that
> need to know about it. During the primary and secondary embargo periods,
> that group of people should be limited to those entrusted to assess the
> impact of the vulnerability on your organization and deploy fixes to your
> products. After the secondary embargo period but during the public embargo
> period, that group of people may be expanded in order to prepare your
> organization's public response. The embargoed information must not be
> shared outside your organization during the public embargo period under any
> circumstances. It is permitted to point others to a public vulnerability
> fix during an embargo period, as long as this is not identified as a
> vulnerability.
> If you think another individual/organization requires access to the
> embargoed information, then please ask them to register as a Trusted
> Stakeholder (see next section). If you believe there has been a leak of
> embargoed information then please notify the security team immediately.
> [DH: This section is stronger than the existing TF-A and OP-TEE processes,
> but not as strong as the linux distros policy [4]. I hope I've struck the
> right balance here.]
> The security team welcomes feedback on embargoed information at any time.
> ESS and Trusted Stakeholder registration
> ========================================
> [DH: This is broadly based on the OP-TEE policy.]
> The security team maintains a vetted list of organizations and individuals
> who are considered ESSes and Trusted Stakeholders of Trusted Firmware
> security vulnerabilities. Contact <mailto:security at trustedfirmware.org>
> if you wish to be added to one of the lists, providing the following
> information:
> 1. Which list you want to be on. This will almost always be the Trusted
> Stakeholder list.
> 2. A justification of why you should be on the list. That is, why you
> should know about security vulnerabilities and have access to security
> fixes before they are made public. A valid reason to be on the Trusted
> Stakeholder list for example, is that you use Trusted Firmware in a
> deployed product. The ESS list is strictly limited to those organizations
> with large scale deployments of Trusted Firmware that provide bare-metal
> access on multi-tenancy systems.
> 3. An organization email address (not gmail, yahoo or similar addresses).
> It is preferable for each organization to provide an email alias that you
> can manage yourselves rather than providing a long list of individual
> addresses.
> 4. Confirmation that the individuals in your organization will handle
> embargoed information responsibly as described in the previous section.
> Note, the security team reserves the right to deny registration or revoke
> membership to the Trusted Stakeholders list, for example if it has concerns
> about the confidentiality of embargoed information.
[JB: Only the Trusted Stakeholders list? What about the ESS list?]

> [DH: Note, becoming a Trusted Stakeholder in the current TF-A process
> requires having a valid NDA with Arm and requesting to be added via Arm
> account management. I propose that Arm send a mail to all existing
> stakeholders to invite them to register for the new process.]
> [DH: Note, I expect each TF project to maintain its own ESS/Trusted
> Stakeholder lists.]
[JB: Makes sense!]

> [DH: Note, I've not included severity scoring in this proposal, as I think
> the only value of a score is helping to determine whether a bug is a
> security vulnerability or not, which in the end has a subjective element.
> I'm open to the idea of adding this to the process but I'd prefer it to be
> optional and aligned with CVSSv3 as used by CVE.]
[JB: Agreed, and regarding the process I'm quite open to having a look at
CVSSv3. The current OP-TEE process is better than the first one, but it's
still a bit rough around the edges. If CVSSv3 works for TF-A/M, then it'll
probably also work for OP-TEE. Also, something that I've been missing when
doing work like this is an internal process as well. But that's definitely
out of scope for this discussion.]

> [1] https://www.kernel.org/doc/html/latest/admin-guide/security-bugs.html
> [2]
> https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/about/docs/security-center.rst
> [3] https://optee.readthedocs.io/general/disclosure.html
> [4]
> https://oss-security.openwall.org/wiki/mailing-lists/distros#how-to-use-the-lists
> --
> TSC mailing list
> TSC at lists.trustedfirmware.org
> https://lists.trustedfirmware.org/mailman/listinfo/tsc
