Interaction effects on common measures of sensitivity: choice of measure, type I error, and power

Stephen Rhodes, Nelson Cowan, Mario A. Parra, Robert H. Logie

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Here we use simulation to assess previously unaddressed problems in the assessment of statistical interactions in detection and recognition tasks. The proportions of hits and false alarms made by an observer on such tasks are affected by both their sensitivity and bias, and numerous measures have been developed to separate out these two factors. Each of these measures makes different assumptions regarding the underlying process and different predictions as to how false-alarm and hit rates should covary. Previous simulations have shown that choice of an inappropriate measure can lead to inflated type I error rates, or reduced power, for main effects, provided there are differences in response bias between the conditions being compared. Interaction effects pose a particular problem in this context. We show that spurious interaction effects in analysis of variance can be produced, or true interactions missed, even in the absence of variation in bias. Additional simulations show that variation in bias complicates patterns of type I error and power further. This under-appreciated fact has the potential to greatly distort the assessment of interactions in detection and recognition experiments. We discuss steps researchers can take to reduce their chances of making an error.
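The core point of the abstract — that an interaction can appear or vanish depending on the scale of the chosen measure — can be illustrated with a minimal sketch. The example below assumes an equal-variance Gaussian signal-detection model with a fixed criterion; the d' values and design are hypothetical, not taken from the paper's simulations:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical 2x2 design in which true sensitivity (d') is purely
# additive: the interaction contrast on the d' scale is exactly zero.
dprime = [[1.0, 2.0],
          [2.0, 3.0]]

c = 0.0  # one fixed response criterion -> no bias differences at all

# Equal-variance Gaussian model: hit rate = Phi(d'/2 - c).
hits = [[phi(d / 2.0 - c) for d in row] for row in dprime]

# 2x2 interaction contrast (cell22 - cell21 - cell12 + cell11) on each scale.
int_dprime = dprime[1][1] - dprime[1][0] - dprime[0][1] + dprime[0][0]
int_hits = hits[1][1] - hits[1][0] - hits[0][1] + hits[0][0]

print(int_dprime)  # 0.0: no interaction on the d' scale
print(int_hits)    # ≈ -0.058: a "spurious" interaction on the hit-rate scale
```

Because the hit rate is a nonlinear (compressive) transform of d', equal steps in sensitivity produce unequal steps in hit rate, so an ANOVA run on hit rates (or proportion correct) can detect an interaction that does not exist on the underlying sensitivity scale — even, as the abstract stresses, with no variation in bias.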
Language: English
Number of pages: 19
Journal: Behavior Research Methods
ISSN: 1554-351X
Early online date: 18 Jul 2018
DOI: 10.3758/s13428-018-1081-0
Publication status: E-pub ahead of print - 18 Jul 2018


Keywords

  • recognition
  • detection
  • interactions
  • type I error
  • type II error
  • power
  • sensitivity
  • bias

Cite this

Rhodes, S., Cowan, N., Parra, M. A., & Logie, R. H. (2018). Interaction effects on common measures of sensitivity: choice of measure, type I error, and power. Behavior Research Methods. https://doi.org/10.3758/s13428-018-1081-0