Separating passing and failing test executions by clustering anomalies

Rafig Almaghairbe, Marc Roper

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)
132 Downloads (Pure)

Abstract

Developments in the automation of test data generation have greatly improved the efficiency of the software testing process, but the so-called oracle problem (deciding the pass or fail outcome of a test execution) remains primarily an expensive and error-prone manual activity. We present an approach that automatically separates passing and failing executions using cluster-based anomaly detection on dynamic execution data: firstly, on a system’s input/output pairs alone and, secondly, on combinations of input/output pairs and execution traces. The key hypothesis is that failures group into small clusters, whereas passing executions form larger ones. Evaluation on three systems with a range of faults supports this hypothesis: in many cases the small clusters were composed of at least 60% failures (and often more). Concentrating the failures in these small clusters substantially reduces the number of outputs a developer would need to examine manually after a test run, and shows that the approach has the potential to improve both the effectiveness and the efficiency of the testing process.
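The core idea — cluster executions by their dynamic data and treat small clusters as likely failures — can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the authors' actual algorithm or subject systems: it uses toy (input, output) pairs and a simple greedy single-linkage clustering with a distance threshold as a stand-in for the paper's clustering techniques.

```python
# Hypothetical sketch: cluster executions by their input/output vectors
# and flag small clusters as anomalies (candidate failures).
# Data, distance, and clustering method are illustrative assumptions.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster(points, radius):
    """Greedy single-linkage clustering: a point joins the first cluster
    containing a point within `radius`, otherwise it starts a new one."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(euclidean(p, q) <= radius for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Toy execution data: (input, output) pairs. Most executions follow the
# expected behaviour (output = 2 * input); two deviate and should fail.
executions = [(x, 2 * x) for x in range(10)] + [(3, 99), (7, -50)]
clusters = cluster(executions, radius=3.0)

# Per the key hypothesis, small clusters are the ones worth inspecting.
suspicious = [c for c in clusters if len(c) <= 2]
print(suspicious)  # the two deviant executions end up in singleton clusters
```

In this toy run the ten well-behaved executions chain into one large cluster, while each deviant execution is isolated, so a developer would only need to inspect the two flagged outputs rather than all twelve.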
Original language: English
Pages (from-to): 1-38
Number of pages: 38
Journal: Software Quality Journal
Early online date: 3 Oct 2016
Publication status: E-pub ahead of print - 3 Oct 2016

Keywords

  • software testing
  • test oracles
  • anomaly detection
  • clustering
  • automation

