Abstract
Many commonly used data sources in the social sciences suffer from non-random measurement error, understood as mis-measurement of a variable that is systematically related to another variable. We argue that studies relying on potentially suspect data should take the threat this poses to inference seriously and address it routinely in a principled manner. In this article, we aid researchers in this task by introducing a sensitivity analysis approach to non-random measurement error. The method can be used with any type of data or statistical model, is simple to execute, and is straightforward to communicate. This makes it possible for researchers to routinely report the robustness of their inferences to the presence of non-random measurement error. We demonstrate the sensitivity analysis approach by applying it to two recent studies.
| Field | Value |
|---|---|
| Original language | English |
| Journal | Political Science Research and Methods |
| Early online date | 16 Jan 2017 |
| DOIs | |
| Publication status | E-pub ahead of print - 16 Jan 2017 |
Keywords
- sensitivity analysis
- measurement error
- statistical methods