Precise and accurate performance measurement is hard. We investigate approaches that make experimental evaluations of computer systems more reliable and repeatable, and that improve the accuracy and precision of performance measurements. We aim to answer questions such as:
We have co-organized the Evaluate workshop series (Evaluate'10 on gathering issues and pitfalls, Evaluate'11 on formulating evaluation anti-patterns, Evaluate'12 on education about evaluation), and we have started the Evaluate Collaboratory, an effort to bring together a community of leading systems and software researchers to analyze and improve the state of the art in experimental evaluation.
Over the last few years, Artifact Evaluation Committees (AECs) have become a standard evaluation component at conferences such as OOPSLA and PLDI. We helped organize the first OOPSLA AEC, have served as AEC members, and keep track of other AECs at programming-languages and software-engineering conferences.