This paper presents a surprising result: changing a seemingly innocuous aspect of an experimental setup can lead a systems researcher to draw wrong conclusions from an experiment. What appears innocuous may in fact introduce a significant bias into an evaluation. For example, consider an experiment to determine whether idea I is beneficial for system S. If the researcher measures S and S+I in an experimental setup that is biased towards S+I, she may conclude that I is beneficial even when it is not. In the natural and social sciences, this phenomenon is called measurement bias.
Our results demonstrate that measurement bias is both significant and commonplace. By significant we mean that measurement bias can lead to incorrect conclusions. By commonplace we mean that measurement bias occurred on every architecture we tried (Pentium 4, Core 2, and m5 O3CPU), with every compiler we tried (gcc and Intel's C compiler), and for all of the SPEC CPU2006 C programs. Thus, we cannot ignore measurement bias. Nevertheless, in a literature survey of 133 recent papers from ASPLOS, PACT, PLDI, and CGO, we determined that none of the papers with experimental results adequately considers measurement bias.
Inspired by similar problems and their solutions in other sciences, we describe and demonstrate two methods: one for detecting measurement bias (causal analysis) and one for avoiding it (setup randomization).
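To make the second method concrete, here is a minimal sketch of what setup randomization might look like in practice. It is not the paper's tooling; the benchmark binaries (./system_S, ./system_S_plus_I), the trial count, and the 0-4096 byte padding range are all illustrative assumptions. The sketch perturbs one known source of bias, the size of the UNIX environment, on each trial, so that conclusions are drawn from a distribution of setups rather than a single, possibly biased one.

```python
import os
import random
import statistics
import subprocess
import time

def run_once(binary, env_pad_bytes):
    # Pad the UNIX environment with a dummy variable. Environment size shifts
    # the program's initial stack placement, a known source of measurement bias.
    env = dict(os.environ)
    env["PADDING"] = "x" * env_pad_bytes  # hypothetical padding variable
    start = time.perf_counter()
    subprocess.run([binary], env=env, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start

def randomized_measurements(binary, trials=30):
    # Randomize the setup for each trial instead of fixing it once;
    # the padding range here is an assumed illustrative value.
    return [run_once(binary, random.randrange(0, 4096)) for _ in range(trials)]

if __name__ == "__main__":
    base = randomized_measurements("./system_S")          # hypothetical binary: S
    with_i = randomized_measurements("./system_S_plus_I") # hypothetical binary: S+I
    print(f"S:   mean={statistics.mean(base):.3f}s  stdev={statistics.stdev(base):.3f}s")
    print(f"S+I: mean={statistics.mean(with_i):.3f}s  stdev={statistics.stdev(with_i):.3f}s")
```

Comparing the resulting distributions (rather than two single measurements taken in one fixed setup) is what guards against concluding that I helps when the apparent speedup is really an artifact of the setup.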