Coverity has been doing scans of OpenSSL for a while, and the OpenSSL team has access to the results: https://scan.coverity.com/projects/294
The problem is that there are so many false positives and so much noise that it's impossible to interpret the results in any meaningful way. See https://groups.google.com/forum/#!topic/mailing.openssl.dev/4o_XHzEQX90 for one developer's take. I've seen Coverity results for a large project and they're almost completely useless. You could get similar results by printing out the source code and throwing darts to figure out which lines to manually audit.
I don't know whether the Coverity scan detected this issue, though; it would be interesting if it did.
They give free licenses for open source projects.
http://www.viva64.com/en/b/0092/
There are other options if you're interested in doing that. The Clang Static Analyzer is free and open source. Coverity has a free Scan service for open source projects. Visual Studio has a built-in analyzer in some editions (it used to be Ultimate only, but I think it's available in more editions now).
Is GnuTLS in much cleaner shape though? (seriously asking, I haven't looked)
Edit: Some commentary from an OpenLDAP developer in 2008. TL;DR: "I strongly recommend that GnuTLS not be used." The bug mentioned appears to have been fixed long ago, though.
Additional notes: Coverity indicates a similar defect density for GnuTLS and OpenSSL, but the fixed-vs-outstanding ratio looks better for GnuTLS.
I am not familiar with Coverity, but this page seems to indicate that it is publicly available. I added one of my GitHub projects, and they seem to allow automatic builds via the Travis-CI build system. I couldn't try it, though, because my project doesn't have a Travis-CI build file yet.
> You're going to have to explain that one to me
This seems to be the theme yeah.
> Bitcoin uses energy in the form of electricity but it doesn't have any impact on 'public computing infrastructure' - what a laugh.
You clearly have no clue what I'm talking about, and that's okay; I'll try to explain it to you, even though you sound like you think you already know everything.
Software engineers, especially open source ones, rely on third-party computing services for all sorts of things like platform testing, deployment, etc. Bad actors are submitting jobs to CI services that literally just download a miner and mine bitcoin until they're killed.
Here's ddevault, who built and runs sourcehut, on this topic. Crypto mining has been a plague on CI infrastructure for a while now.
I personally remember Coverity Scan being completely offline for like 6 months while they tried to deal with infrastructure abuse from people mining bitcoin on their computing clusters.
By the way, in the future, if you don't know something it's okay to admit it and just ask. You just look like a dumbass when you pretend.
From reading the article I get the impression the author is not very well informed about the project he's reporting on, but he does go on to show off his own product. That's okay, as long as the article is read in that context: a product showcase.
Some critiques of statements made:
> The project's authors definitely haven't done their best in fixing bugs before the release.
That in itself is a presumptuous statement; the fact that bugs remain does not mean that developers didn't try their best given their constraints (most notably: spare time).
> It is typical for the projects, which generally don't use static analysis tools, even free ones.
Here the author has missed that Coverity has been used to analyse the project for over a decade.
>Such inserts also don't give a boost to code quality:
#line 1 "./asn1/acse/packet-acse-template.c"
The author seems to have missed that a lot of the source code is actually generated from specifications, in this case ASN.1. The generated code is not readable or maintainable, and that was never the intention. What needs to be maintained are the specification and configuration files, while the code generation tooling is vetted for correctness.
The article then goes on to show a series of more or less severe issues, which range from unclean code and sloppy exception handling to logic errors in dissecting protocol fields and possible mishandling of memory. These issues do exist and should be addressed, preferably the most pressing first.
The conclusion is right in that there are numerous defects to be addressed. The latest figures (as of April 1st, no joke) show 5,192 defects in 3,748,761 lines of code, giving a 0.22 defect density, which is less than half the average of the roughly 6,000 OSS projects scanned by Coverity. Maybe it's not as bad as the author, a self-proclaimed perfectionist, seems to portray.
Which is interesting, because when you objectively measure it ... Coverity's static analysis shows Enlightenment with 0 bugs found: https://scan.coverity.com/projects/enlightenment-window-manager ... compared to Qt at around 0.7 and glib at around 0.4. An objective measurement of the code shows it to be pretty clean. And EFL is at 0.04 ... 1/10th of glib and close to 1/20th of Qt ... you won't find many projects with a lower "bug rate" (for what static analyzers can find): https://scan.coverity.com/projects/enlightenment-foundation-libraries ... so the numbers disagree. :/
Being free helps with adoption quite a bit. I don't think there are any good static C++ code checkers that happen to be Open Source yet (cppcheck only seems to catch really basic stuff, and while Coverity can be used on Open Source projects, it is not Open Source itself).
Total FUD. The number of defects has stayed relatively stable for years now, which is more indicative of unmaintained code or false positives. On the whole, defect density has been steadily decreasing over time for the kernel, despite the line count increasing dramatically over the last decade. The kernel is also well below the average for other OSS projects over 1M LOC:
If the project you are working on is open source, you can use Coverity Scan.
It will help find null reference exceptions along with a host of other problems like copy/paste bugs, code duplication, and more.
Thanks for your reply and for clarifying how Checkmarx works. I do think Checkmarx is a great option in the market for any organization evaluating solutions in this space.
Does Checkmarx see any value in free open source software scanning projects like Coverity's https://scan.coverity.com/projects ?
For linting, there's splint [http://www.splint.org/], though sadly it seems to have fallen into disrepair lately (and on OS X it's completely unusable, because the system headers make it fall over). Maybe Linux has better-maintained ports.
In OS X there's Instruments for performance profiling [https://developer.apple.com/library/mac/documentation/DeveloperTools/Conceptual/InstrumentsUserGuide/Introduction/Introduction.html]
If your code is open source, you can sign up for Coverity Scan. [https://scan.coverity.com/]
It doesn't sound like an advert. This seems to be pretty commonplace nowadays; https://scan.coverity.com does essentially the same. If you gather insights (i.e. test data that helps improve your service) or gain popularity because famous Open Source projects use it and write about it, that is a fair deal.
> Why is this (missing context) a problem of C++ Exception Handling?
One of the core ideas behind exception handling was that you could get rid of having to check return values all the time. The problem is that when you code that way, you end up with random exceptions without any clue where they came from. An exception without context is kind of useless.
On the other hand, if you catch them early, add context, and rethrow them, then you are doing okay, but you haven't really won much over using return values, as the code ends up with try/catch all over the place, which is not much different from if(retval)...else.... There is also the problem of documentation: return values are part of a function's signature, while exceptions can just hide away (static code analysis like Coverity can find some of those).
I am not saying that exceptions are all evil; a context-less exception is still nicer than an unchecked return value that silently does undefined things. But the amount of complications they cause gets pretty close to removing all the benefits they provide, which is why Go left them out and instead improved return values a little.
PS: I just noticed that C++11 has std::nested_exception, so exception chaining is now supported by the language itself, this should make things a little easier.
It simply isn't true that security concerns aren't a top priority for the systemd developers. It's in their coding style: https://github.com/systemd/systemd/blob/master/CODING_STYLE They use Coverity static code checking scans: https://scan.coverity.com/projects/350 And they have a Jenkins back end to check builds too.
The Red Hat security team is also continuously checking systemd.
Also, it should be considered that systemd can replace some other programs that can have security issues: think of the Bash and Dash exploits, or of running ntpd as an SNTP client, etc.
All in all, systemd code is probably better written and audited than most other competing projects.