Aside from the sad fact that it's impossible to try Coverity without first getting the Seal Of Coverity Sales Force Approval, the next big difference between Coverity and PVS-Studio that anyone can see is...
TEH MARKETING MATERIALS
How This Could Be Done
Let's look at a typical PVS-Studio scan report. This one happened to cross my path, so I link to it here.
A misprint... Yes, I see. Risk of array overrun... I see. Several more subtle defects... I see. No freaking idea how those defects affect the program's functioning, but they are presented quite well and are easily assessable by anyone who is willing to pay attention to them.
One might wonder how the program could run with those horrible defects.
This is quite simple. Defects that actually manifest themselves have been identified earlier using other methods – plain old debugging, unit tests, peer review, whatever else. All the rest require some effort to get exposed. That might be some unusual dataset. That might be some unusual sequence of user actions. That might be some unusual error indication. That might be upgrading a compiler or a C++ runtime.
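To make that concrete, here is a sketch of my own (not taken from the linked report) of the kind of defect that only an unusual dataset exposes:

    #include <cstdio>

    // A counter table sized for the "normal" number of categories.
    static int g_counters[12];

    void countCategory(int category)
    {
        // No range check: any category >= 12 silently corrupts adjacent memory.
        ++g_counters[category];
    }

    int main()
    {
        for (int c = 0; c < 12; ++c)   // the usual dataset - works fine for years
            countCategory(c);
        countCategory(13);             // the unusual dataset - quiet memory corruption
        std::printf("%d\n", g_counters[0]);
    }

The usual test data never reaches category 12, so the overrun just sits there until the one customer with an exotic dataset shows up.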
Defects are defects. You can't lie to the compiler, they say, but not all defects are created equal. Those subtle things will sit in the codebase for years, and then all of a sudden someone runs PVS-Studio on it and thinks WOW WHAT A HANDFUL OF HORRIBLE BUGS ZOMG ELEVENELEVEN!!!
So a scan report alone is worth nothing – it still takes a developer who is familiar with the codebase to assess and possibly address each reported defect. The PVS-Studio scan report does exactly the right thing – it presents defects one by one together with some analysis, nothing more.
How This Should Not Be Done
Now look at Coverity marketing materials. You will have a very hard time finding a scan report like the one linked above that contains Coverity scan results. Yet once in a while the Coverity dudes will issue an Integrity Report.
An Integrity Report is a very enthusiastic document containing such words as mission, seamlessly, and focused on innovation. Not bad for a starter – at least the presence of those keywords makes it clear that there's too much marketing in the first three pages.
Moving on to Table A... Oh, this table shows a distribution of project sizes. Using the word distribution somehow implies that the data gathered has some statistical significance and so deserves extra trust. Well, with 45 projects total, trying to build a chart and call it a distribution is very silly. You see, they had TWO projects with more than 7 million lines of code. That's unbelievable, I'm breathless.
All the rest of the Report is also full of similarly meaningless tables. Yes, it is cool you found 9.7654 defects per square foot of some project. But until you let me try your program, I don't care – those figures don't matter any more than a 132 percent efficiency claim (the post is five years old, yet still relevant).
Fast forward to Appendix A. Tables 2 and 3 summarize defects by assigning each a category and an impact. Let's see...
Control flow issues. What's that? Is it when I forget to put a "break;" at the end of a "case" in a "switch" statement? So you say it has medium impact... Okay. What about "main()" returning immediately? That's a control flow issue as well, and don't tell me it has medium impact. Not all control flow issues are created equal.
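To put the two side by side, here is a sketch of my own (neither snippet comes from the Report):

    #include <cstdio>

    void printSeverity(int level)
    {
        switch (level)
        {
        case 0:
            std::printf("info\n");
            // forgotten "break;" - falls through and prints "warning" too
        case 1:
            std::printf("warning\n");
            break;
        default:
            std::printf("error\n");
            break;
        }
    }

    int main()
    {
        return 0;          // imagine the real work sits below this line
        printSeverity(0);  // unreachable - the program does nothing at all
    }

The first defect prints one extra line; the second one means the program never does anything. Same category, same "medium impact".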
Null pointer dereferences have medium impact, don't they? Sure, my code dereferences null pointers here and there, and each time that happens users get a candy. Perhaps the Report authors meant potential null pointer dereferences – a situation where code dereferences a pointer without first checking that it is not null. The catch is that checking every pointer before each and every dereference clutters the code big time. Again, not all null pointer dereferences are created equal.
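For the record, a minimal sketch of the potential flavour (again my own, not taken from the Report):

    #include <cstdio>
    #include <map>
    #include <string>

    const std::string* findTitle(const std::map<int, std::string>& titles, int id)
    {
        auto it = titles.find(id);
        return it == titles.end() ? nullptr : &it->second;
    }

    void printTitle(const std::map<int, std::string>& titles, int id)
    {
        // Fine for every id present in the map, a crash for any id that is not.
        std::printf("%s\n", findTitle(titles, id)->c_str());
    }

    int main()
    {
        std::map<int, std::string> titles{{1, "Integrity Report"}};
        printTitle(titles, 1);   // works
        printTitle(titles, 2);   // null pointer dereference - "users get a candy"
    }

Whether that second call can ever happen in the real program is exactly the question a developer familiar with the codebase has to answer; a category label answers nothing.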
Error handling issues have medium impact. What is that? Not checking the error codes of Win32 API functions? Sure, any time a program opens a file without checking whether the attempt failed and just proceeds to read it, that's almost no big deal for the user. No access to the folder? We'll pretend we've saved the file. Whatever. Not all error handling issues are created equal.
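A minimal Win32-flavoured sketch of my own of what that looks like in practice:

    #include <windows.h>

    void saveSettings(const wchar_t* path, const char* data, DWORD size)
    {
        HANDLE file = CreateFileW(path, GENERIC_WRITE, 0, nullptr,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        // If the folder is not writable, file is INVALID_HANDLE_VALUE and the
        // calls below quietly do nothing - and the caller is told nothing either.
        DWORD written = 0;
        WriteFile(file, data, size, &written, nullptr);
        CloseHandle(file);
    }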
Integer handling issues have medium impact. Sure, overflowing an integer while computing a memory allocation size is no big deal. Just allocate whatever amount it happens to be and pretend it's the right size. Not all integer handling issues are created equal.
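A sketch of my own of that very scenario (the multiplication wraps around in 32-bit unsigned arithmetic):

    #include <cstdlib>
    #include <cstring>

    char* duplicatePixels(const char* src, unsigned width, unsigned height)
    {
        unsigned size = width * height * 4;                // wraps for big enough images
        char* dst = static_cast<char*>(std::malloc(size)); // so a tiny buffer gets allocated
        if (dst != nullptr)
            std::memcpy(dst, src,
                        static_cast<size_t>(width) * height * 4); // the real size (on a 64-bit size_t)
        return dst;
    }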
Insecure data handling has medium impact. What's that? No freaking idea, but something tells me not all cases of insecure data handling are created equal.
Incorrect expression – medium impact. Sure, misplace braces wherever you want, no big deal.
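One of my own favourite incorrect expressions, sketched from memory rather than copied from the Report:

    enum { MODE_READ = 1, MODE_WRITE = 2 };

    bool canRead(unsigned flags)
    {
        // Intended: (flags & MODE_READ) == 0 means "no read access".
        // Actual: MODE_READ == 0 is evaluated first and is always false,
        // so the condition is flags & 0, which is never true.
        if (flags & MODE_READ == 0)
            return false;
        return true;
    }

That function grants read access to absolutely everybody, and the compiler is perfectly happy with it.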
Concurrent access violations – medium impact. You just spend the rest of your life debugging them, no big deal.
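A sketch of my own, just to remember what that medium impact looks like:

    #include <cstdio>
    #include <thread>

    static long g_hits = 0;   // shared between threads, no synchronization at all

    void worker()
    {
        for (int i = 0; i < 1000000; ++i)
            ++g_hits;          // read-modify-write racing against the other thread
    }

    int main()
    {
        std::thread a(worker), b(worker);
        a.join();
        b.join();
        std::printf("%ld\n", g_hits);   // almost never 2000000
    }

The number printed is wrong almost every run, and wrong by a different amount every time – exactly the kind of thing you spend the rest of your life debugging.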
API usage errors – medium impact. Your code forgets to specify a path, and that causes the entire contents of Windows\System32 to be deleted. No big deal.
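A purely hypothetical sketch (names and paths invented) of how that kind of thing happens:

    #include <filesystem>
    #include <string>

    void purgeTempFiles(const std::string& installDir)
    {
        // With installDir == "" this becomes "/temp" - a recursive delete
        // in entirely the wrong place, and nothing here checks for that.
        std::filesystem::remove_all(installDir + "/temp");
    }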
Program hangs – medium impact. The program hangs only when run on a computer outside a Windows NT domain. You run it just fine inside your corporate network, then go to a trade show and it stops working – your laptop turns into a thousand-dollar space heater with a screen. No big deal.
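A sketch of my own of a hang of exactly that flavour – a blocking wait with no timeout on a reply that never comes once you leave the corporate network (the directory-server call is a made-up stand-in):

    #include <chrono>
    #include <future>
    #include <string>
    #include <thread>

    // Stand-in for a call to an unreachable domain controller: it never returns.
    std::string queryDirectoryServer()
    {
        for (;;)
            std::this_thread::sleep_for(std::chrono::hours(1));
    }

    std::string currentUserDisplayName()
    {
        auto reply = std::async(std::launch::async, queryDirectoryServer);
        // wait_for() with a timeout would let the program degrade gracefully;
        // get() with no timeout is the thousand-dollar space heater.
        return reply.get();
    }

    int main()
    {
        currentUserDisplayName();   // hangs outside the "domain"
    }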
Why is no category assigned low impact, I wonder? Is it because the authors didn't dare call a software defect low-impact just because it belongs to some category?
This doesn't work. You can't throw several thousand defects into several categories and then assign each category an impact level. This is just impossible. If you're a software developer you must realise that beyond the shadow of a doubt; otherwise just quit your job immediately and go to the nearest McDonald's outlet – they have a "help needed" sign waiting for you.
The whole Integrity Report is just a big mess of numbers and diagrams. Its usability is not even zero – it is negative. The report scares the hell out of anyone who is concerned about software quality and stops there.
The Outcome
So what's the difference between PVS-Studio marketing materials and Coverity marketing materials? The former present facts that one can interpret and verify. The latter just try to scare with summaries that offer no chance of verification.
Because not everyone deserves a free Coverity trial.