Measuring software performance with repeatable and accurate results is hard. The state of both the hardware and the operating system can influence results, and for systems running on JIT-compiled runtimes, non-deterministic behaviour such as just-in-time compilation and garbage collection introduces further variability.
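As a minimal sketch of the warmup problem on a JIT-compiled runtime (this example is illustrative, not taken from the talk; the workload, iteration counts, and helper names are all invented): early iterations of a benchmark may run interpreted or only partially compiled, so discarding them before summarizing, and reporting a robust statistic such as the median, reduces run-to-run variance.

```javascript
// Illustrative Node.js micro-benchmark harness (hypothetical, not from the talk).
// Times a workload repeatedly, discards warmup iterations so the JIT can
// settle, then reports the median, which is robust to one-off GC pauses.

function workload() {
  let sum = 0;
  for (let i = 0; i < 1e6; i++) sum += Math.sqrt(i);
  return sum;
}

function benchmark(fn, { warmup = 20, samples = 30 } = {}) {
  for (let i = 0; i < warmup; i++) fn(); // warmup: let the JIT compile hot code

  const times = [];
  for (let i = 0; i < samples; i++) {
    const start = process.hrtime.bigint();
    fn();
    // nanoseconds -> milliseconds
    times.push(Number(process.hrtime.bigint() - start) / 1e6);
  }

  times.sort((a, b) => a - b);
  return {
    min: times[0],
    median: times[Math.floor(times.length / 2)],
    max: times[times.length - 1],
  };
}

const r = benchmark(workload);
console.log(
  `median ${r.median.toFixed(2)} ms (min ${r.min.toFixed(2)}, max ${r.max.toFixed(2)})`
);
```

Reporting min/median/max rather than a single mean makes it easier to spot when outliers (for example a GC cycle landing mid-sample) are skewing a result.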

Repeatability is vital to assess whether changes to application code, hardware, or the software stack are beneficial or detrimental to performance.

In this talk, an experienced performance tester will discuss tips and techniques for getting reliable and repeatable data, procedures for avoiding being fooled by misleading results, and tools you can use to narrow down the cause of a regression.

About Gareth:

Gareth spent the past four years as a Runtime Performance Analyst at IBM, where he concentrated on the performance of the IBM J9 JVM and Node.js. He was part of a team responsible for the continuous performance monitoring of 6 concurrent Java releases and 4 Node.js releases spanning 14 platforms. He recently moved to Linköping, Sweden.