I am trying to benchmark MySQL with HammerDB using its TPROC-C benchmark. This is part of a research project in which I am comparing MySQL and PostgreSQL. After running all the benchmarks against PostgreSQL, I started benchmarking MySQL and was surprised by the difference! According to HammerDB's site, NOPM should be comparable between RDBMSs, but the numbers were so low that I started wondering whether my methodology was wrong or my tuning parameters were hurting performance.

My intention was to run the benchmark with 100 warehouses and a varying number of virtual users. With PostgreSQL I got around 14000 NOPM with a single virtual user, but with MySQL I get around 3800. Honestly, I was expecting both to perform about the same. It would be really helpful if someone could point me in the right direction.

I am running both MySQL and PostgreSQL on Docker, with the latest images. Both containers were limited to 12GB of RAM, and HammerDB is running on the host machine. MySQL is using InnoDB. These are the settings I changed in MySQL:
I am a complete MySQL noob, but I want to get these tests right; that's why I've come here. I am running these on a laptop, though, which is less than ideal. Here are the specs:

- i7-1165G7 (4 cores / 8 threads, HT)
- 16GB RAM
- M.2 NVMe SSD

As for the HammerDB options:
I'll be more than happy to answer any questions! I'm not a native speaker, so please forgive any mistakes.
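For reference, the 12GB memory limit described above can be expressed as a Docker Compose fragment like the one below. This is an illustrative sketch, not the original setup: the service names and image tags are assumptions.

```yaml
# Illustrative sketch: cap each database container at 12GB of RAM,
# matching the limit described in the question.
services:
  mysql:
    image: mysql:latest
    mem_limit: 12g
  postgres:
    image: postgres:latest
    mem_limit: 12g
```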
Hi,

The key things to learn about are: how data is cached in the buffer cache; how checkpoints write dirty data out to the data area of the disk; how the redo/WAL/transaction logs work; how the database will wait for transactions to be written to disk when using synchronous commit; and how the log size can affect checkpointing. You will also want to learn about MVCC (multi-version concurrency control) and how it impacts undo. You will also want to consider locking and latching and how they impact scalability: performance at one virtual user may not be the same as you ramp up to multiple virtual users. See the blog posts for examples of "performance profiles" for measuring performance right across the system and finding the peak. As you scale up, especially on a server with multiple sockets, the database software will limit how much CPU you can use due to locking and latching, and some CPU will remain idle. The commercial databases can deliver higher performance because the software scales better; the vendors have worked specifically in this area.

Once you have this grounding, you will be better able to understand the parameters you are setting and how they impact performance. For example, make sure your redo/WAL files are on your SSD (which gives good performance), size the buffer cache to 8GB (half of your RAM), and put the CPU in performance mode. One thing you do not mention is the OS you are running. If you are running Linux, please check the pstate and cstate drivers and run the single-threaded performance calibration https://www.hammerdb.com/docs/ch04s01.html just to make sure you are not running in powersave mode; this is the default in Ubuntu, for example, and many people do not change it. You should be able to run this test and, with a tool such as turbostat, see a frequency of 4.7GHz for your CPU.
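As a concrete starting point for the sizing advice above, a my.cnf fragment along these lines would set the InnoDB buffer pool to half of a 16GB machine's RAM. Only the 8GB figure comes from this thread; the redo log size and flush setting are illustrative assumptions to experiment with.

```ini
# Sketch of InnoDB starting parameters for a 16GB machine.
# Only the 8GB buffer pool comes from the advice above; the rest
# are assumptions to tune from, not values from this thread.
[mysqld]
innodb_buffer_pool_size = 8G        # 1/2 of RAM, as suggested
innodb_log_file_size    = 2G        # larger redo log eases checkpoint pressure (assumption)
innodb_flush_log_at_trx_commit = 1  # full durability on commit; the default, shown explicitly
```

On MySQL 8.0.30 and later, `innodb_redo_log_capacity` supersedes `innodb_log_file_size` for sizing the redo log.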
Also be aware of HWP (hardware-managed P-states): you may want to check your BIOS settings for HWP or Speed Shift to make sure it is enabled, in which case the CPU can manage throttling itself. If comparing performance, I would also recommend starting on bare metal, i.e. the base OS without containers, virtualization etc. The approach is to start at the point of maximum performance and work your way up by adding layers, to determine the performance impact each one has. For simple single-VU performance tests, see the introductory presentation on how many warehouses you need, as each virtual user does 90% of its work on its home warehouse (I created 20 for a simple single-VU test, but you need more as you scale up). I would also recommend testing for a shorter period (e.g. 2 to 5 minutes) to start, and using the GUI transaction monitor to observe any variability in performance. I hope this gives you a good starting point for tuning, as you should be able to exceed these values with your hardware configuration.
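To check the powersave-versus-performance point on Linux, a minimal shell sketch like this reads the current cpufreq governor. The sysfs path is standard on Linux, but it may be absent inside containers or on systems without the cpufreq subsystem, so the snippet guards for that:

```shell
# Print the current CPU frequency governor (Linux).
# "powersave" is the Ubuntu default; "performance" is what you
# want while benchmarking.
GOV_FILE=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$GOV_FILE" ]; then
    cat "$GOV_FILE"
else
    echo "cpufreq sysfs not available"
fi
```

If it prints `powersave`, switching to the `performance` governor (for example with `cpupower frequency-set -g performance` as root) is worth trying before re-running the calibration test.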
Hi,
If you are new to databases in general, the first thing I would recommend is getting familiar with database architecture. All of the relational databases that HammerDB tests are similar in many respects, so having this grounding can help you tune them. There is an introductory presentation linked here https://www.hammerdb.com/about.html and also a number of blog posts that include links to example parameters and potential performance.
https://www.hammerdb.com/blog/uncategorized/hammerdb-best-practice-for-postgresql-performance-and-scalability/
https://www.hammerdb.com/blog/uncategorized/hammerdb-mysql-and-mariadb-best-practice-for-performance-and-scalability/
https://www.hammerdb.com/blog/uncat…