Vulnerability Reporting Is Dysfunctional
Vulnerability reporting is still dysfunctional, but let's acknowledge that it's a lot less dysfunctional than it used to be. At the very least, none of the companies initiated criminal proceedings against the researchers for the vulnerabilities they disclosed.
I mean, this is well articulated, but it's also one of the best-known problems in computer security. Whole research projects have been done on it; I was (along with presumably dozens of other researchers) recruited to work on one, where I was asked to stand up a fake security research firm and inquire with vendors about how to submit vulnerability reports.
A lot of people have burnt a lot of energy pointlessly on technical solutions to this (such as well-known URLs pointing to vulnerability report pages), but the fundamental problem is simply that most vendors don't know that they need to do something here, and until they're educated, nothing else will help them.
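(For reference, the best-known example of the well-known-URL approach is security.txt, standardised in RFC 9116: a plain-text file served at /.well-known/security.txt that tells researchers where to send reports. A minimal sketch, with a placeholder contact address, expiry date, and policy URL:

Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Preferred-Languages: en
Policy: https://example.com/security-policy

Which, of course, only helps if someone at the vendor actually reads the mailbox it points to.)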
The dynamic is exemplified by The Formula.
" Narrator: A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one."
Is there an aspect of this movie quote that does not apply to vulnerabilities?
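Translated to software, the same expected-cost calculus decides whether a vulnerability gets fixed. A rough sketch; the variable names and numbers below are hypothetical illustrations, not from the article:

# The Formula applied to a vulnerability (hypothetical numbers).
def worth_fixing(installs, exploit_probability, cost_per_incident, fix_cost):
    expected_loss = installs * exploit_probability * cost_per_incident  # A * B * C = X
    return expected_loss > fix_cost  # if X is less than the cost of the fix, they "don't do one"

# e.g. 1_000_000 installs, 0.1% chance of exploitation, $500 per incident, $2M to fix:
# expected_loss is $500,000, which is less than $2M, so the vendor shrugs.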
I never understood why vulnerability reporting became a social practice. The reason is that I see computer hacking in this particular context (breaking into computers, or reversing binaries and cracking them) as more or less equivalent to breaking into something physical, be it opening a box or breaking into a home.
But people don't walk up to my home and tell me how they could break in. Nor do people go to companies and say, "Listen, if I walk in here as some repair guy with a walkie-talkie, security will let me right in! Then I changed into a suit and talked to Janet in accounting, and she handed me your private financials just because I asked. Train your reception, and train Janet."
I understand that you want to keep open source software safe, because everyone is using it, so helping make it more secure is a win. But why doesn't the same thing happen with companies in the physical sense? The public interacts with them too.
Or maybe there are 'vulnerability reports' (or whatever you'd call them) for those things, and they simply don't get posted here?
I suspect that there are organisations that deliberately design their reporting systems to prevent reporting.
This article covers some big-name companies, but there are others that are critical too: 1. National tax collection organisations. 2. Banks. Both, in my experience, prevent problems from being fixed by offering reporting channels that are actively unhelpful.
I've not yet found any third party that it's sensible to use for reporting.
I think the original article should have a better title. The root issue was not the reporting, but that some of the companies involved honestly thought it was OK to base their security, in whole or in part, on the security of wireless providers.
So is bug reporting in general. It should (and can) be as easy as clicking a button, not require you to sign up for a new account on yet another Bugzilla instance or something.