Relation between pg_total_iterations and pg_duration #427
-
Hello, I am trying to evaluate an SSD drive with PostgreSQL using HammerDB. I have set pg_total_iterations to 1000000 and pg_duration to 2. From what I understand, HammerDB workloads run for the time specified by pg_duration, which is 2 minutes in my case. If so, what is the purpose of specifying pg_total_iterations? As an experiment, I set the test duration to 0 to check whether the configured virtual users would perform the 1000000 operations and then exit, but in that case the test never ends. So how do I get the configured virtual users to perform the specified number of operations and then exit, regardless of time? Thanks
-
There are a number of questions here.
> I am trying to evaluate an SSD drive with PostgreSQL using HammerDB
With the default TPROC-C workload (see the cached vs. scaled section here), most of the data will be cached in memory, so what you will mostly be testing is write throughput to the WAL. In particular, the key dependency will be commit latency, and therefore the synchronous_commit setting will have the biggest impact. If you want to increase the I/O to the data area, you should look at the advanced options "use all warehouses" and "asynchronous scaling".
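For reference, a minimal sketch of setting those two options from the HammerDB CLI. The pg_* parameter names are assumptions based on recent HammerDB releases and should be verified with `print dict` on your build:

```tcl
# Sketch: increase I/O to the data area for PostgreSQL TPROC-C.
# Parameter names assume a recent HammerDB release; verify with "print dict".
dbset db pg
dbset bm TPROC-C

# "Use all warehouses": virtual users pick a new warehouse per transaction
# rather than keeping a fixed home warehouse, spreading I/O across the data area.
diset tpcc pg_allwarehouse true

# "Asynchronous scaling": each virtual user multiplexes multiple asynchronous
# clients with keying and thinking time.
diset tpcc pg_async_scale true
diset tpcc pg_async_client 10
```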
> Basically, what I am trying to achieve is to get my test to run a specific number of transactions per virtual user.
This is what the asyn…
-
Thanks, Steve, for the detailed explanation. The hack of setting the ramp-up time and duration to 0 does exactly what I wanted.
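For later readers, a hedged sketch of that workaround in the HammerDB CLI. The parameter names and the virtual-user count are illustrative (they assume a recent HammerDB release) and should be checked against `print dict` on your build:

```tcl
# Sketch: run a fixed number of transactions per virtual user, then exit.
# Parameter names assume a recent HammerDB release; verify with "print dict".
dbset db pg
dbset bm TPROC-C
diset tpcc pg_driver timed             ;# timed driver, as in the original setup
diset tpcc pg_total_iterations 1000000
diset tpcc pg_rampup 0                 ;# the workaround: zero ramp-up...
diset tpcc pg_duration 0               ;# ...and zero duration, so the iteration count bounds the run
loadscript
vuset vu 4                             ;# illustrative virtual user count
vucreate
vurun
```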