Benchmarking, narrowed to software, is the process of assessing certain behavioral characteristics of a given piece of code. These characteristics can be memory consumption, utilisation of resources, or simply the average execution time on some input data (typically randomized or sampled). Micro-benchmarking can help answer many design or implementation questions: which routine performs best under a given set of circumstances, which routine should be chosen for which type of data. It is, in other words, yet another tool in a good programmer's toolbox.

From a technical perspective, micro-benchmarking requires some careful preparation in order to be relevant and valuable. In Java, for example, care should be taken to ensure that the HotSpot compiler is "warmed up" before actual measurements are taken. For more helpful clues, see this HotSpot microbenchmarks wiki page.
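To make the warm-up idea concrete, here is a minimal, self-contained sketch of the warm-up-then-measure pattern. The workload, round counts, and class name are arbitrary illustrations of ours, not part of any framework; real harnesses also guard against more subtle effects than this sketch does.

```java
public class WarmupBenchmark {
    // Sink that consumes workload results, so HotSpot cannot
    // eliminate the measured code as dead.
    static volatile long sink;

    // The workload under test: a hypothetical example (summing an array).
    static long workload(int[] data) {
        long sum = 0;
        for (int v : data) sum += v;
        return sum;
    }

    // Average execution time in nanoseconds over the given number of rounds.
    static double measure(int[] data, int rounds) {
        long total = 0;
        for (int i = 0; i < rounds; i++) {
            long start = System.nanoTime();
            sink = workload(data);
            total += System.nanoTime() - start;
        }
        return (double) total / rounds;
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        // Warm-up rounds: give HotSpot a chance to JIT-compile the
        // workload; these samples are thrown away.
        measure(data, 50);

        // Measured rounds: only these samples should be reported.
        double avgNanos = measure(data, 50);
        System.out.printf("average: %.1f us%n", avgNanos / 1000.0);
    }
}
```

If you drop the warm-up call and plot per-round timings instead of the average, the first rounds typically stand out as much slower: that is interpretation and compilation overhead, not the steady-state cost of the code.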

At Carrot Search, we use micro-benchmarks a lot. Over time we have developed several different solutions for collecting samples, averaging the results, or performing further analyses. However, none of these were portable or convenient. Since we use JUnit a lot too, we decided to experiment with combining JUnit and benchmarking. The outcome turned out to be very helpful and easy to use, especially in RAD environments like Eclipse. Try it.
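The post-processing side of this workflow can be as simple as reducing the collected samples to a mean and a standard deviation; the spread is a quick sanity check on how stable the results are. A minimal sketch (class and method names are ours, not taken from any library):

```java
public class BenchmarkStats {
    // Arithmetic mean of the collected samples (e.g. nanoseconds per round).
    static double mean(long[] samples) {
        double sum = 0;
        for (long s : samples) sum += s;
        return sum / samples.length;
    }

    // Sample standard deviation: a large value relative to the mean
    // suggests noisy measurements (GC pauses, background load).
    static double stdDev(long[] samples) {
        double m = mean(samples);
        double sq = 0;
        for (long s : samples) sq += (s - m) * (s - m);
        return Math.sqrt(sq / (samples.length - 1));
    }

    public static void main(String[] args) {
        // Hypothetical per-round timings with one outlier (say, a GC pause).
        long[] samples = {120, 118, 125, 119, 305, 121};
        System.out.printf("mean=%.1f stddev=%.1f%n",
                mean(samples), stdDev(samples));
    }
}
```

Wiring this kind of collection and reporting into a JUnit rule is what makes the approach pleasant: the benchmark runs like any other test, and the statistics come out for free.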

The Java Virtual Machine is a very particular and fragile environment. Micro-benchmark results on one machine (architecture, JVM vendor, operating system) may not correspond at all to results from another machine. Be careful when extrapolating the conclusions of benchmarks done on one machine to other systems and environments; your assumptions are most likely wrong. A much better way of ensuring things work the way you expect them to is to re-run the same tests on various systems and compare the results.