Hi TF TSC,
This is a write-up of the proposed tf.org security incident handling process, which is one of my actions from the last TSC. I wanted to circulate this before Friday's TSC meeting. I know there's not much time for you to review before then, but perhaps you can have some initial discussion (without me) and we can go into more detail at the next TSC?
I've included some comment/rationale in square brackets to aid discussion. All comments welcome.
Regards
Dan.
----
This security incident handling process proposal is broadly based on the kernel process [1], with influence from the existing TF-A [2] and OP-TEE [3] processes.
Reporting
=========
If you think you have found a security vulnerability, then please send an email to the Trusted Firmware security team at <security@trusted-firmware.org>. This is a private team of security officers who will help verify the security vulnerability, develop and release a fix, and disclose the vulnerability details responsibly. Please give us time to implement the disclosure plan described in the next section before going public. We do our best to respond and fix any issues quickly.
[DH: Note security@kernel.org is exclusively about fixing the vulnerability; disclosure is delegated to linux-distros@vs.openwall.org [4]. This proposal combines these activities.]
As with any bug, the more information you provide, the easier it is to diagnose and fix. If you already have a fix, please include it with your report, as that can speed up the process considerably. Any exploit code is very helpful. The security team may bring in extra help from area maintainers to understand and fix the vulnerability. The security team may share any information you provide with trusted third parties and eventually the public unless you request otherwise.
[DH: Note, security@kernel.org provides stronger confidentiality guarantees because it is only interested in fixes, not disclosure. In practice, I'd expect members of the security team to be sensitive with confidential information as with any other open source interactions, and to get explicit approval from the reporter for disclosure of sensitive information, e.g. identity, organization, product information, etc.]
If the security team consider the bug not to be a security vulnerability, you will be informed and the bug will be redirected to the standard bug-fixing process.
[DH: Do we need to indicate target reporter response times? TF-A has an internal target of 1 day for an initial response and 1 week for a considered response. Other processes (including TF-A) do not specify a target here. Perhaps one can assume it will always be ASAP?]
Disclosure
==========
The general security vulnerability disclosure plan is as follows (a worked example of the timings is sketched after the plan):
1. For confirmed security vulnerabilities, develop a robust fix as soon as possible. During this time, information is only shared with the reporter, those needed to develop the fix and Especially Sensitive Stakeholders (ESSes), in particular organizations with large scale deployments of Trusted Firmware providing bare-metal access on multi-tenancy systems.
2. After a robust fix becomes available, our preference is to publicly release it as soon as possible. This will automatically happen if the vulnerability is already publicly known. However, release may be deferred if the reporter or an ESS requests it within 1 calendar day of the fix becoming available, and the security team agree the criticality of the vulnerability requires more time. The requested deferral period should be as short as possible and no more than 14 calendar days after the fix becomes available. The only valid reason for this release deferral is to accommodate deployment of the fix by ESSes. If it is immediately clear that ESSes are unaffected by the vulnerability then this stage is skipped. This 0-14 day deferral is the primary embargo period.
[DH: Note, this stage is only relevant for TF-A currently.]
[DH: I removed the nuance between 7 and 14 days used in the kernel process. I don't think it essentially changes the process.]
[DH: Note, this assumes that ESS security teams operate 7 days a week.]
3. After the primary embargo period, the fix and relevant vulnerability information are shared with registered Trusted Stakeholders (see the section below). Fix release may be deferred further if a Trusted Stakeholder requests it within 1 working day (Monday to Friday) of being notified, and the security team agree the criticality of the vulnerability requires more time. The requested deferral period should be as short as possible and no more than 14 calendar days after the Trusted Stakeholder receives the fix. The only valid reason for this further release deferral is to accommodate deployment of the fix by a Trusted Stakeholder. This further 1-14 day deferral is the secondary embargo period.
Note, security fixes contain the minimum information required to fix the bug. The accompanying vulnerability details are disclosed later.
[DH: Note, I've slightly relaxed the Trusted Stakeholder required response time to 1 working day, as opposed to 1 calendar day. Is this reasonable?]
[DH: Note, this aggressive release of security fixes is aligned with the kernel process. The existing TF-A process allows for early release of fixes, which we generally do in practice, but that process doesn't really specify a fix embargo period. However it does specify a 4 week embargo on the subsequent security advisory. Also note that OP-TEE's fix embargo period is 90 days, which is aligned with Google's policy.]
4. After the fix is released, details of the security vulnerability are consolidated into a security advisory. This includes a CVE number; the security team will request one if the reporter has not already done so. It also includes credit to the reporter unless they indicate otherwise. Drafts of the security advisory are circulated to the reporter, ESSes and Trusted Stakeholders as they become available.
[DH: Note, the existing TF-A process only shares information with Trusted Stakeholders in the form of security advisories. This proposal is more aligned with the kernel process and means we can focus on fix development by sharing raw vulnerability information in the early stages.]
5. 90 days after the vulnerability was reported, the security advisory is made public on https://www.trustedfirmware.org/. This 90 day window is the public embargo period.
[DH: This public embargo period aligns with Google and OP-TEE processes, although this proposal releases fixes earlier.]
In exceptional cases, the above disclosure plan may be extended in consultation with the reporter, ESSes and Trusted Stakeholders, for example if it is very difficult to develop or deploy a fix, or wider industry consultation is needed.
[DH: I'm accommodating for the Spectre/Meltdown case here, but does this encourage Trusted Stakeholders to invoke "exceptional cases" casually?]
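To make the timings above concrete, here is a rough sketch of the deadline arithmetic in steps 2-5, assuming no exceptional-case extension. It is written in Python purely for illustration; the function names, date handling and example dates are assumptions made for this sketch rather than part of the proposal, and in practice all deferral decisions rest with the security team.

from datetime import date, timedelta

def add_working_days(start, days):
    # Advance 'start' by 'days' working days (Monday to Friday).
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # 0-4 are Monday to Friday
            days -= 1
    return current

def disclosure_timeline(report_date, fix_date,
                        ess_deferral_days=0,           # step 2: 0-14 calendar days
                        stakeholder_deferral_days=0):  # step 3: up to 14 calendar days
    # Primary embargo: ESS-requested deferral, capped at 14 calendar days
    # after the fix becomes available.
    primary_embargo_end = fix_date + timedelta(days=min(ess_deferral_days, 14))

    # Trusted Stakeholders are notified when the primary embargo ends and
    # have 1 working day to request a further deferral.
    stakeholder_response_deadline = add_working_days(primary_embargo_end, 1)

    # Secondary embargo: capped at 14 calendar days after the Trusted
    # Stakeholders receive the fix.
    fix_release = primary_embargo_end + timedelta(days=min(stakeholder_deferral_days, 14))

    # Public embargo: the security advisory is published 90 days after the
    # vulnerability was reported.
    advisory_public = report_date + timedelta(days=90)

    return primary_embargo_end, stakeholder_response_deadline, fix_release, advisory_public

# Example: reported 1st March, fix ready 15th March, ESSes request a 7 day
# deferral and a Trusted Stakeholder requests a further 5 days.
print(disclosure_timeline(date(2019, 3, 1), date(2019, 3, 15),
                          ess_deferral_days=7, stakeholder_deferral_days=5))

With those example inputs the sketch gives a primary embargo ending on 22nd March, fix release on 27th March and a public advisory on 30th May (90 days after the report).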
Handling embargoed information
==============================
On receipt of embargoed information, you must not disclose any of the provided information beyond the group of people in your organization that need to know about it. During the primary and secondary embargo periods, that group of people should be limited to those entrusted to assess the impact of the vulnerability on your organization and deploy fixes to your products. After the secondary embargo period but during the public embargo period, that group of people may be expanded in order to prepare your organization's public response. The embargoed information must not be shared outside your organization during the public embargo period under any circumstances. If you think another individual/organization requires access to the embargoed information, then please ask them to register as a Trusted Stakeholder (see next section). If you believe there has been a leak of embargoed information then please notify the security team immediately.
[DH: This section is stronger than the existing TF-A and OP-TEE processes, but not as strong as the linux distros policy [4]. I hope I've struck the right balance here.]
The security team welcomes feedback on embargoed information at any time.
Trusted Stakeholder registration
================================
[DH: This is broadly based on the OP-TEE policy.]
The security team maintains a vetted list of organizations and individuals who are considered Trusted Stakeholders of Trusted Firmware security vulnerabilities. Contact <security@trusted-firmware.org> if you wish to be added to the list, providing the following information:
1. A justification of why you need to know about security vulnerabilities and have access to security fixes before they are made public. A valid reason, for example, is that you use Trusted Firmware in a deployed product.
2. An organization email address (not Gmail, Yahoo or similar personal addresses). It is preferable for each organization to provide an email alias that it can manage itself, rather than a long list of individual addresses.
3. Confirmation that you and your organization will handle embargoed information responsibly, as described in the previous section.
Note, the security team reserves the right to deny registration to, or revoke membership of, the Trusted Stakeholders list, for example if it has concerns about the confidentiality of embargoed information.
[DH: Note, becoming a Trusted Stakeholder in the current TF-A process requires having a valid NDA with Arm and requesting to be added via Arm account management. Should Arm automatically add existing stakeholders to the new process or invite them to be part of it?]
[DH: Note, I expect each TF project to have its own Trusted Stakeholder list.]
[DH: Note, I've not included severity scoring in this proposal, as I think the only value of a score is helping to determine whether a bug is a security vulnerability or not, which in the end has a subjective element. I'm open to the idea of adding this to the process but I'd prefer it to be optional and aligned with CVSSv3 as used by CVE.]
[DH: Note, I haven't specified use of PGP/GPG in this proposal. It makes the process much more difficult to administer in practice for questionable additional security IMO. If we did allow reporters to use PGP, then it implies all recipients of embargoed information should provide a PGP key, and that embargoed information should be decrypted and re-encrypted with recipient keys as it is passed around. I think this is too much effort but if we don't have it, will it put off reporters? The current TF-A process allows optional use of PGP but relies on GitHub security for access to embargoed information. security@kernel.org does not support PGP, though linux-distros@vs.openwall.org does. OP-TEE does not support it.]
[DH: Note, this proposal assumes a single security email alias for all tf.org projects, which was agreed at a previous board/TSC meeting to make things simpler from a reporter's perspective. However, I'm having second thoughts about that. The additional triage stage could unnecessarily delay handling and there will need to be project-specific security aliases anyway, whether they're publicly visible or not.]
[1] https://www.kernel.org/doc/html/latest/admin-guide/security-bugs.html
[2] https://github.com/ARM-software/arm-trusted-firmware/wiki/ARM-Trusted-Firmwa...
[3] https://optee.readthedocs.io/general/disclosure.html
[4] https://oss-security.openwall.org/wiki/mailing-lists/distros#how-to-use-the-...