When designers of Java runtime systems evaluate the performance of their systems for the purpose of running client-side Java applications, they normally use the DaCapo and SPECjvm benchmark suites.
However, the client applications that users of those Java runtime systems actually run are typically interactive, such as Eclipse or NetBeans.
In this paper we study whether this mismatch is a problem: Do the prevalent Java client-side benchmark suites faithfully represent the characteristics of real-world Java client applications? To answer this question we characterize benchmarks and applications using three kinds of metrics: static metrics, architecture-independent dynamic metrics, and hardware performance counters.
We find that real-world applications differ significantly from existing benchmarks. Our findings indicate that the current benchmark suites should be augmented to more faithfully represent the large segment of interactive applications.