This policy change makes sense to me; I'm also sympathetic to the P0 team's struggle in getting vendors to take patching seriously.
At the same time, I think publicly sharing that some vulnerability was discovered can be valuable information to attackers, particularly in the context of disclosure on open source projects: it's been my experience that maintaining a completely hermetic embargo on an OSS component is extremely difficult, both because of the number of people involved and because fixing the vulnerability sometimes requires advance changes to other public components.
On the contrary: if Project Zero finds a 0-day in a product I know I use, and I know that product is internet-facing, I can immediately take action and firewall it off. They don't always find things like this, but an early warning signal can be really beneficial.
For customers, it also gives them leverage to contact vendors and ask politely for news on the patch.
Maybe I don't understand the threat model here: what kind of public-facing services are you running that are simultaneously (1) not already access-limited, and (2) not load-bearing such that they need to be public-facing?
(And to be clear: I see the benefit here. But I'm talking principally about open source projects, not the vendors you're presumably paying.)
There really isn't a great solution here. The notice that a vulnerability has been discovered puts even more pressure on the fix to be deployed as close to instantly as possible, throughout the entire supply chain.
Why is this? Especially for smaller or more stable open-source projects, the number of commits in a 90-day period that could possibly be security-relevant is likely to be quite low, perhaps as low as single digits. So the specific commit that fixes the reported security issue is highly likely to be identified immediately, and now there's a race to develop and use an exploit.
As one example, a stable project that's been the target of significant security hardening and analysis is the libpng decoder. Over the past 3 months (May 1 - Jul 29), its main branch has seen 41 total commits. Of those, at least 25 were non-code changes: documentation updates, release engineering activities, and build system / cross-platform support. If Project Zero had announced a vulnerability in this project on May 1 with a disclosure embargo ending today, there would be at most 16 commits to inspect over 3 months to find the bug. That's not a lot of work for a dedicated team.
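The triage step described above is trivial to automate. Here's a minimal sketch of how an attacker might filter a window of commits down to the few worth diffing; the commit data and file names are illustrative, not real libpng history:

```python
# Sketch: triage a window of commits for security relevance, the way an
# attacker might after a "vulnerability exists" announcement. Commits
# that touch no source files (docs, build system) can be discarded.

CODE_SUFFIXES = (".c", ".h")

def security_relevant(commit):
    """A commit is worth inspecting if it touches any source file."""
    return any(path.endswith(CODE_SUFFIXES) for path in commit["files"])

# Hypothetical commit log for the embargo window.
commits = [
    {"sha": "a1b2c3", "files": ["README.md"]},            # docs only
    {"sha": "d4e5f6", "files": ["pngrutil.c", "png.h"]},  # code change
    {"sha": "0a1b2c", "files": ["CMakeLists.txt"]},       # build system
    {"sha": "9f8e7d", "files": ["pngread.c"]},            # code change
]

suspects = [c["sha"] for c in commits if security_relevant(c)]
print(suspects)  # → ['d4e5f6', '9f8e7d']
```

In a real repo the same filter is a one-line `git log` invocation restricted to source paths, which is why a small set of code commits offers so little cover.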
So now, do we delay publishing security fixes to public repos and try to maintain private infrastructure and testing for all of this? And then try to get a release made, propagated through multiple layers of downstream vendors, have them make releases, etc... all within a day or two? That's pretty hard, just organizationally. No great answers here.
This seems like a good move. I do hope that slow-moving consumers of the software in question can start anticipating an upcoming release and construct remediation plans in advance, instead of only after the release.
I love it; it's a big-company reformulation of the classic vulnerability researcher's "reporting transparency" process: post "Found a nasty vuln in XYZ: 6f0c848159d46104fba17e02906f52aef460ee17d1962f5ea05d2478600fce8a" (the SHA2 hash of a report artifact confirming the vuln).
> This data will make it easier for researchers and the public to track how long it takes for a fix to travel from the initial report, all the way to a user's device (which is especially important if the fix never arrives!)
This paragraph is very confusing: what data is meant by "this data"? If they mean the announcement that "there's something", isn't the disclosure timeline already made public under the current reporting policy once everything has been opened up?
In other words, the date of the initial report is not new data. Sure, the delay before it's published is reduced, but the data itself isn't new at all, in contrast to what the paragraph suggests.
I find the stated goal of alerting downstreams a bit odd. Most downstreams scan upstream web pages for releases and automatically open an issue when a new release appears.
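That kind of downstream automation is usually just a version comparison behind a feed poller. Here's a minimal sketch of the core check, with the feed-fetching stubbed out (real watchers poll a releases page or Atom feed); version strings are illustrative:

```python
# Sketch of downstream release tracking: compare the packaged version
# against the latest upstream release tag and flag when an update
# (possibly a security fix) needs picking up.

def needs_update(packaged: str, upstream: str) -> bool:
    """Compare dotted version strings numerically, e.g. '1.6.39' < '1.6.40'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(upstream) > as_tuple(packaged)

print(needs_update("1.6.39", "1.6.40"))  # → True: open an update issue
print(needs_update("1.6.40", "1.6.40"))  # → False: already current
```

The point being: downstreams already get a signal the moment a release lands, so the incremental value of a pre-release heads-up is mostly in planning, not detection.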
Project Zero could also open a mailing list for trusted downstreams and publish new-finding announcements there.
The real goal seems to be to increase pressure on upstream, which in our modern times ranks lowest on the open source ladder: below distributors, corporations, security pundits (some of whom do not write software themselves and have never been upstream for anything), and demanding users.
Not sure what the measurable metric is here, or what will count as success in this trial period.
Propagating the fix downstream depends on the release cycles of all the downstream vendors. Giving them a heads-up will help with planning, but I doubt it will significantly impact the patching timeline.
It is far more likely that companies will get stressed that the public knows they have a vulnerability while they are still working on a fix. Pressure from these companies will probably shut this policy change down.
Also, will this policy also apply to Google's own products?
The measure would probably be whether any of the reports led downstreams to sync early, either through security-sharing channels they didn't already have established or by preparing an out-of-schedule sync ahead of time, regardless of whether that change is small or large. How companies would prefer the public hear about a vulnerability has always been the lowest concern in disclosure decisions, so I don't expect this to bring anything new there.
Google's own products account for 3 of the 6 initial vulnerabilities listed under this new reporting policy on the linked reporting page.
- Vulnerability Disclosure FAQ ( https://googleprojectzero.blogspot.com/p/vulnerability-discl... )
- Reporting Transparency ( https://googleprojectzero.blogspot.com/p/reporting-transpare... )