On Mon, 2020-11-23 at 07:03 -0600, Gustavo A. R. Silva wrote:
> On Sun, Nov 22, 2020 at 11:53:55AM -0800, James Bottomley wrote:
> > On Sun, 2020-11-22 at 11:22 -0800, Joe Perches wrote:
> > > On Sun, 2020-11-22 at 11:12 -0800, James Bottomley wrote:
> > > > On Sun, 2020-11-22 at 10:25 -0800, Joe Perches wrote:
> > > > > On Sun, 2020-11-22 at 10:21 -0800, James Bottomley wrote:
> > > > > > Please tell me our reward for all this effort isn't a single missing error print.
> > > > > There were quite literally dozens of logical defects found by the fallthrough additions. Very few were logging only.
> > > > So can you give us the best examples (or indeed all of them if someone is keeping score)? Hopefully this isn't a US election situation ...
> > > Gustavo? Are you running for congress now?
> > That's 21 reported fixes, of which about 50% seem to produce no change in code behaviour at all, a quarter seem to have no user-visible effect, and the remaining quarter produce unexpected errors on obscure configuration parameters, which is why no-one really noticed them before.
> The really important point here is the number of bugs this has prevented and will prevent in the future. See an example of this, below:
> https://lore.kernel.org/linux-iio/20190813135802.GB27392@kroah.com/
I think this falls into the same category as the other six bugs: it changes the output/input for parameters but no-one has really noticed, usually because the command is obscure or the bias effect is minor.
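
For anyone who hasn't been following the series, the defect class we're arguing about looks roughly like this (a made-up sketch with invented names, not one of the actual 21 fixes):

	static int foo_ioctl(struct foo_dev *dev, unsigned int cmd,
			     unsigned long arg)
	{
		int ret;

		switch (cmd) {
		case FOO_GET:
			ret = foo_get(dev, arg);
			break;
		case FOO_SET:
			ret = foo_set(dev, arg);
			/*
			 * Missing break: every FOO_SET silently drops
			 * into the default case and ret is clobbered
			 * with -EINVAL, which is exactly the sort of
			 * thing nobody notices on an obscure command.
			 */
		default:
			ret = -EINVAL;
		}

		return ret;
	}

The whole point of annotating every intentional fall through with the fallthrough; pseudo-keyword is that -Wimplicit-fallthrough can then be enabled tree-wide, at which point the compiler flags the unannotated drop from FOO_SET into default and forces someone to decide whether it was intended.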
> This work is still relevant, even if the total number of issues/bugs we find in the process is zero (which is not the case).
Really, no ... something which produces no improvement has no value at all, and we really shouldn't be wasting maintainer time on it because it has a cost to merge. I'm not sure we understand where the balance lies in value vs cost to merge, but I am confident about the zero-value case.
"The sucky thing about doing hard work to deploy hardening is that the result is totally invisible by definition (things not happening) [..]"
- Dmitry Vyukov
Really, no. Something that can't be measured at all doesn't exist.
And actually hardening is one of those things you can measure (which I do have to admit isn't true for everything in the security space): it's the number of exploitable bugs found before you did it vs the number found after. Usually hardening eliminates a class of bug, so the way I've measured hardening before is to go through the CVE list for the last couple of years for product X, find all the bugs of the class we're looking to eliminate, and say that if we had hardened X against this class of bug we'd have eliminated Y% of the exploits. It can be quite impressive if Y is a suitably big number.
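
(Entirely made-up numbers, just to show the arithmetic: if 15 of the last 60 CVEs against product X turn out to be in the class you hardened away, you get to claim Y = 15/60 = 25% of the exploits eliminated.)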
James