To start with, let us see the impact of the work done to improve the performance of hash indexes. Below is the performance data for the pgbench read-only workload, comparing hash index performance between 9.6 and HEAD on an IBM POWER-8 with 24 cores, 192 hardware threads, and 492GB of RAM.
The workload is such that all the data fits in shared buffers (scale factor is 300 (~4.5GB) and shared_buffers is 8GB).
And the chart itself says it is a median of three 5-minute runs.
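For reference, the setup described in the quote could be reproduced with something like the following. The client/thread counts and database name here are assumptions (the quoted text does not state them), and `-S` selects pgbench's built-in read-only SELECT workload:

```shell
# Initialize pgbench tables at scale factor 300 (~4.5GB of data).
# Database name "bench" is a placeholder, not from the original post.
pgbench -i -s 300 bench

# Three read-only 5-minute runs; print the median tps.
# -c/-j values are assumptions -- the post does not state them.
for i in 1 2 3; do
  pgbench -S -c 64 -j 64 -T 300 bench | awk '/^tps/ {print $3; exit}'
done | sort -n | sed -n '2p'
```

Whether the original runs were taken back to back like this, or interleaved with other work, is exactly the kind of methodological detail being questioned below.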
Why would I assume that concurrent runs == separate runs, or that other caching mechanisms aren't in play, or really anything? Computers do really odd things trying to optimize, and assuming your system behaves the same over a 30-minute window, when you don't even know whether those runs shared the same 30 minutes, is a stretch. All sorts of things get in the way of performance tests, and I would like to know how they were mitigated. Again, I am sure there is a proper process, but why wouldn't one want to know what that is?