Accuracy

Accurate and precise performance measurement is hard. We study approaches that make experimental evaluations of computer systems more reliable and repeatable, and that improve the accuracy and precision of performance measurements. We aim to answer questions such as:

  • How can we overcome measurement context bias?
  • Which aspects of measurement contexts (beyond environment variable sizes; see the sketch after this list) affect measurements, and how?
  • How do measurement infrastructures perturb system behavior?
  • How accurate and precise are commonly used performance measurement infrastructures?

Evaluate Collaboratory

We have co-organized the Evaluate workshop series (Evaluate'10 on gathering issues and pitfalls, Evaluate'11 on formulating evaluation anti-patterns, Evaluate'12 on education about evaluation), and we have started the Evaluate Collaboratory, which aims to bring together a community of leading systems and software researchers to analyze and improve the state of the art in experimental evaluation.

Artifact Evaluation Committees

Over the last few years, Artifact Evaluation Committees (AECs) have become a standard evaluation component at conferences such as OOPSLA and PLDI. We helped organize the first OOPSLA AEC, we have served as AEC members, and we track AECs at other programming languages and software engineering conferences.

Publications