Improving unfamiliar code with unit tests: An empirical investigation on tool-supported and human-based testing
|Authors|Dietmar Winkler|
|Editors|O. Dieste|
|Title|Improving unfamiliar code with unit tests: An empirical investigation on tool-supported and human-based testing|
|Book title|Product-Focused Software Process Improvement - Proc. PROFES 2012|
|Series|Lecture Notes in Computer Science|
Software testing is a well-established approach in modern software engineering practice for improving software products by systematically introducing unit tests at different levels during software development projects. Nevertheless, existing software solutions often suffer from a lack of unit tests that were not implemented during development because of time and/or resource constraints. A lack of unit tests can hinder effective and efficient maintenance processes. Introducing unit tests after deployment is a promising approach for (a) enabling systematic and automation-supported testing after deployment and (b) significantly increasing product quality. An important question is whether unit tests should be introduced manually by humans or generated automatically by tools. This paper presents an empirical investigation of tool-supported and human-based unit testing in a controlled experiment, focusing on the defect detection effectiveness, false positives, and test coverage of the two testing approaches when applied to unfamiliar source code. The main results were that (a) the individual testing approaches (human-based and tool-supported testing) showed advantages for different defect classes, (b) tools delivered a higher number of false positives, and (c) tools achieved higher test coverage.
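To illustrate what introducing unit tests into unfamiliar code after deployment might look like, here is a minimal Python sketch. The function and all names are hypothetical, chosen only for illustration; they are not taken from the study or its experimental material.

```python
import unittest

# Hypothetical legacy function from unfamiliar code; names are illustrative only.
def parse_price(text):
    """Parse a price string such as '12.50 EUR' into a float."""
    return float(text.split()[0])

class TestParsePrice(unittest.TestCase):
    # A characterization test: pin down the current behavior before changing it.
    def test_plain_value(self):
        self.assertEqual(parse_price("12.50 EUR"), 12.5)

    # A boundary case a human tester might add, which generated tests can miss.
    def test_zero(self):
        self.assertEqual(parse_price("0 EUR"), 0.0)

if __name__ == "__main__":
    unittest.main()
```

A human tester typically writes such tests from an understanding of the intended behavior, whereas generation tools derive tests from the code itself, which tends to raise coverage but can also pin down defective behavior as "expected" and produce false positives.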