By Marko Ivanković, Google Zürich
Introduction
Code coverage is a very interesting metric, covered by a large body of research that reaches somewhat contradictory results. Some people think it is an extremely useful metric and that a certain percentage of coverage should be enforced on all code. Some think it is a useful tool for identifying areas that need more testing, but don’t necessarily trust that covered code is truly well tested. Still others think that measuring coverage is actively harmful because it provides a false sense of security.
Our team’s mission was to collect coverage-related data, then develop and champion code coverage practices across Google. We designed an opt-in system where engineers could enable two different types of coverage measurement for their projects: daily and per-commit. With daily coverage we run all tests for the project, whereas with per-commit coverage we run only the tests affected by the commit. The two measurements are independent, and many projects opted into both.
While we did experiment with branch, function and statement coverage, we ended up focusing mostly on statement coverage because of its relative simplicity and ease of visualization.
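For readers less familiar with the distinction, here is a minimal illustrative sketch (not taken from our tooling; the clamp function and the single test call are hypothetical) of why statement coverage reads directly off the source, while branch coverage demands more from the tests:

```python
def clamp(value, low, high):
    """Return value limited to the range [low, high]."""
    if value < low:
        value = low
    if value > high:
        value = high
    return value

# A single call such as clamp(-5, 0, 10) executes every statement except
# "value = high", so statement coverage maps directly onto source lines.
# Branch coverage additionally asks whether each `if` was observed both
# taken and not taken (here only 2 of the 4 outcomes are), which requires
# more test cases and is harder to visualize on a per-line basis.
```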
How we measured
Our job was made significantly easier by the wonderful Google build system, whose parallelism and flexibility allowed us to scale our measurements to all of Google. The build system had integrated various language-specific open-source coverage measurement tools such as Gcov (C++), Emma / JaCoCo (Java) and Coverage.py (Python), and we provided a central system where teams could sign up for coverage measurement.
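As a rough sketch of what one of those open-source tools does under the hood, the following uses Coverage.py’s public API to collect statement coverage around a test run. Here `run_project_tests` and the module name `my_project_tests` are hypothetical placeholders; the real build-system integration is far more involved.

```python
import coverage

def run_project_tests():
    # Hypothetical placeholder: in reality the build system runs each
    # test target in its own instrumented invocation.
    import unittest
    unittest.main(module="my_project_tests", exit=False)

cov = coverage.Coverage()  # statement coverage; Coverage(branch=True) would also record branch data
cov.start()
run_project_tests()
cov.stop()
cov.save()
cov.report()  # prints per-file statement coverage percentages
```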
For daily whole-project coverage measurement, each team was provided with a simple cronjob that runs all tests across the project’s codebase. The results of these runs are available to teams in a centralized dashboard that displays charts of coverage over time and allows daily, weekly, quarterly and yearly aggregation as well as per-language slicing. On this dashboard, teams can also compare their project (or projects) with any other project, or with Google as a whole.
For per-commit measurement, we hook into the Google code review process (briefly explained in this article) and display the data visually to both the commit author and the reviewers. We display the data on two levels: color-coded lines right next to the color-coded diff, and a total aggregate number for the entire commit.
Displayed above is a screenshot of the code review tool. The green line coloring is the standard diff coloring for added lines. The orange and lighter green coloring on the line numbers is the coverage information. We use light green for covered lines, orange for non-covered lines and white for non-instrumented lines.
It’s important to note that we surface the coverage information before the commit is submitted to the codebase, because this is the time when engineers are most likely to be interested in improving it.
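A minimal sketch of the kind of mapping this involves, assuming hypothetical inputs (sets of changed, executed and instrumentable line numbers); this illustrates the three-way annotation described above, not the internal implementation of the review tool:

```python
from enum import Enum

class LineStatus(Enum):
    COVERED = "light green"
    NOT_COVERED = "orange"
    NOT_INSTRUMENTED = "white"

def annotate_changed_lines(changed, executed, instrumented):
    """Map each changed line number to its coverage annotation.

    changed: line numbers added or modified by the commit.
    executed: line numbers hit by at least one test affected by the commit.
    instrumented: line numbers the coverage tool could instrument at all.
    """
    annotations = {}
    for line in changed:
        if line not in instrumented:
            annotations[line] = LineStatus.NOT_INSTRUMENTED
        elif line in executed:
            annotations[line] = LineStatus.COVERED
        else:
            annotations[line] = LineStatus.NOT_COVERED
    return annotations

def commit_coverage(annotations):
    """Aggregate number for the commit: covered instrumented lines / all instrumented lines."""
    instrumented = [s for s in annotations.values() if s is not LineStatus.NOT_INSTRUMENTED]
    covered = [s for s in instrumented if s is LineStatus.COVERED]
    return 100.0 * len(covered) / len(instrumented) if instrumented else None
```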
Results
One of the main benefits of working at Google is the scale at which we operate. We have been running the coverage measurement system for some time now and we have collected data for more than 650 different projects, spanning 100,000+ commits. We have a significant amount of data for C++, Java, Python, Go and JavaScript code.
I am happy to say that we can share some preliminary results with you today:
The chart above is the histogram of average values of measured absolute coverage across Google. The median (50th percentile) code coverage is 78%, the 75th percentile is 85%, and the 90th percentile is 90%. We believe that these numbers represent a very healthy codebase.
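As a small sketch of how such percentiles can be computed from per-project averages (the function below is illustrative, not the pipeline we actually use):

```python
import statistics

def coverage_percentiles(per_project_coverage):
    """per_project_coverage: one average absolute coverage value (in %) per project."""
    # quantiles(n=100) returns the 99 percentile cut points P1..P99.
    cuts = statistics.quantiles(per_project_coverage, n=100)
    return {"p50": cuts[49], "p75": cuts[74], "p90": cuts[89]}
```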
We have also found it very interesting that there are significant differences between languages:
C++         56.6%
Java        61.2%
Go          63.0%
JavaScript  76.9%
Python      84.2%
The table above shows the total coverage of all analyzed code for each language, averaged over the past quarter. We believe that the large differences are due to differences in structure, paradigms and best practices between languages, as well as to how precisely coverage can be measured in each language.
Note that these numbers should not be interpreted as guidelines for a particular language; the aggregation method used is too simple for that. Instead, this finding is simply a data point for any future research that analyzes samples from a single programming language.
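To make concrete why the aggregation method matters, here is an illustrative sketch (with assumed per-project fields `covered` and `instrumented`) contrasting two natural ways to aggregate, which can yield quite different numbers for the same data:

```python
def per_project_mean(projects):
    """Unweighted mean of per-project coverage percentages."""
    percents = [100.0 * p["covered"] / p["instrumented"] for p in projects]
    return sum(percents) / len(percents)

def line_weighted_total(projects):
    """Total covered lines over total instrumented lines across all projects."""
    covered = sum(p["covered"] for p in projects)
    instrumented = sum(p["instrumented"] for p in projects)
    return 100.0 * covered / instrumented

# One very large, modestly covered project dominates the line-weighted total
# but counts as a single data point in the per-project mean, so neither
# number alone is a good per-language guideline.
```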
The feedback from our fellow engineers was overwhelmingly positive. The best-loved feature was surfacing the coverage information at code review time. This early surfacing of coverage had a statistically significant impact: our initial analysis suggests that it increased coverage by 10% (averaged across all commits).
Future work
We are aware that there are a few problems with the dataset we collected. In particular, the individual tools we use to measure coverage are not perfect. Large integration tests, end-to-end tests and UI tests are difficult to instrument, so large parts of code exercised by such tests can be misreported as non-covered.
We are working on improving the tools, and also on analyzing the impact of unit tests, integration tests and other types of tests individually.
In addition to languages, we will also investigate other factors that might influence coverage, such as platforms and frameworks, to allow all future research to account for their effect.
We will be publishing more of our findings in the future, so stay tuned.
And if this sounds like something you would like to work on, why not apply on our job site?
Comments
Don't know if my first comment got through, so I'll try again. Did you try correlating a project's code coverage with the number of open issues?
Branch coverage is more important than statement coverage when measuring the thoroughness of tests.
+1
I am curious whether you tried correlating branch coverage (instead of statement coverage) with the presence of defects, and whether that would result in a stronger correlation, but I am guessing it would.
Did you try that? Can you share your findings?
We did. We haven't found any correlation yet, but we are looking into it.
OK. That sounds like it might be interesting for clarifying some of the myths you mention in the first paragraph. Keep us posted if you make any discoveries there ;)
Yes, knowing how coverage relates to problems encountered by users would be great information to have.
We are working on this. We will publish the results as soon as we have concrete findings. So far we have neither definite positive nor definite negative results for that question.
How do you measure JS code coverage?
Your first paragraph mentions the controversy around code coverage, but it looks like your results and research don't attempt to address that core issue, e.g. whether code coverage actually leads to fewer errors in the resulting project.
Which system do you use for code reviews? If it's Gerrit: how did you integrate the coverage metrics into Gerrit?
In some ways it's not unexpected that coverage is inversely correlated with language type strength.
I am confused by the statement below from the future work section. The code that is hit by the tests needs to be instrumented; why do we need to instrument the tests?
"In particular, the individual tools we use to measure coverage are not perfect. Large integration tests, end to end tests and UI tests are difficult to instrument, so large parts of code exercised by such tests can be misreported as non-covered."
Apart from unit and integration testing, are you also capturing statement coverage through manual functional tests? What toolset do you use for Java?
ReplyDeleteHi!
Do you have any news about open-sourcing the central system you developed to measure test coverage?
How do you measure code coverage for your functional tests?