The test server has 24 CPU cores, 48 HW threads with hyperthreading enabled, 256G of RAM and a fast SSD. For sysbench I used version 1.0 with support for Lua. Tests were run in two configurations -- cached and IO-bound. For the cached configuration I used 8 tables with 1M rows/table and the database cache was large enough to cache all data. For the IO-bound configuration I used 8 tables with 10M rows/table, a 2G database cache and buffered IO so that all data was in the OS page cache. The database was ~2G for the cached configuration and ~20G for the IO-bound configuration. InnoDB table compression was not used and jemalloc was the memory allocator. The binlog was enabled but sync-on-commit was disabled for both the binlog and the InnoDB redo log.
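For the jemalloc note above, a minimal sketch of one common way to start mysqld with jemalloc preloaded, assuming mysqld_safe is the launcher and the library lives at the path shown (both are assumptions for this host, not the actual startup command used for these tests):

```
# A sketch: preload jemalloc via mysqld_safe's --malloc-lib option.
# The my.cnf and library paths are assumptions.
bin/mysqld_safe --defaults-file=/path/to/my.cnf \
  --malloc-lib=/usr/lib64/libjemalloc.so.1 &
```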
With 8 tables and 1M rows per table the database is very small -- a few GB. I am wary of drawing too many conclusions from sysbench results for such a small database but other people will use it to evaluate MyRocks so I have been spending more time to understand sysbench performance.
Both the cached and IO-bound configurations require data to be written to storage. For the cached test all page reads are served from the database cache. For the IO-bound test some reads are served from the database cache and misses are served from the OS page cache. In both cases nothing is read from storage.
tl;dr
- For InnoDB, using repeatable-read can help performance.
- Disabling the performance schema can help performance.
- My tests are from MySQL 5.6.26. It will be interesting to understand what has changed in 5.7 and 8. I know that the overhead from the first two items should be much less in modern MySQL.
I have a script that uses sysbench to run tests in an interesting pattern and used it for this test. The pattern is in the all.sh script and is explained below. The all.sh script uses the run.sh script to set the sysbench command line options. I run most of the read-write tests before the read-only tests to fragment the database before evaluating query performance.
- load - load the 8 tables
- update-index - uses oltp.lua and each transaction is an UPDATE statement that finds & changes 1 row by PK. Secondary index maintenance is required for the update statement.
- update-nonindex - like update-index but secondary index maintenance is not required.
- read-write - uses oltp.lua in read-write mode, a classic sysbench workload. Run with oltp-range-size=100.
- read-only - uses oltp.lua in read-only mode, a classic sysbench workload. Run four times with oltp-range-size set to 10, 100, 1000 and 10000. A sketch of one such invocation follows this list.
- point-query - uses oltp.lua to fetch one row by PK per query
- select - uses select.lua to fetch one row per query
- insert - uses insert.lua to insert one row per transaction. This grows the database and the growth amount depends on the insert rate, so a faster engine will grow the database more than a slower engine. It also means that when I run the test for a long time the database won't fit in the database cache or the OS page cache. For these reasons I run this test last.
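To make the steps above concrete, here is a minimal sketch of a run.sh-style invocation for the read-only step with oltp-range-size=100. The script path, connection options, thread count and option spellings are assumptions that depend on which copy of the Lua tests is installed; this is not the actual run.sh.

```
# A sketch of a single sysbench invocation using the legacy oltp.lua in read-only
# mode. Paths, credentials and thread count are assumptions, not copied from run.sh.
LUA=/usr/share/sysbench/tests/db/oltp.lua   # assumed location of the Lua script

sysbench $LUA \
  --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-db=sbtest \
  --oltp-tables-count=8 --oltp-table-size=1000000 \
  --oltp-read-only=on --oltp-range-size=100 \
  --num-threads=16 --max-time=180 --max-requests=0 \
  run
```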
The test was run for 1, 2, 4, 8, 16, 24, 32, 40, 48, 64, 80, 96 and 128 concurrent clients. This is currently hardwired in the all.sh script. For each level of concurrency I ran sysbench for 3 minutes for the read-only tests and 5 minutes for the read-write tests. Eventually I will run it for more time at each level of concurrency but I had a large number of tests to run and am trying to figure out which configurations are interesting.
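The all.sh loop is roughly of this shape. This is a sketch rather than the actual script, the run.sh argument order is an assumption, and the four read-only runs with different oltp-range-size values are omitted for brevity.

```
# A sketch of an all.sh-style driver: run each test at each concurrency level.
tests="update-index update-nonindex read-write read-only point-query select insert"
threads="1 2 4 8 16 24 32 40 48 64 80 96 128"

for t in $tests; do
  # read-only tests run for 3 minutes, read-write tests for 5 minutes
  case $t in
    read-only|point-query|select) secs=180 ;;
    *)                            secs=300 ;;
  esac
  for n in $threads; do
    bash run.sh "$t" "$n" "$secs"   # run.sh builds the sysbench command line
  done
done
```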
The my.cnf for these tests is here.
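For readers who don't follow the link, this is a sketch of the settings described above -- binlog enabled, no sync on commit for the binlog or redo log, and a buffer pool sized for the cached or IO-bound configuration. The values and file name are assumptions, not the contents of the actual my.cnf.

```
# A sketch of the relevant my.cnf fragment; values and the binlog name are assumptions.
cat > my.cnf.fragment <<'EOF'
[mysqld]
innodb_buffer_pool_size = 180G       # cached config; 2G for the IO-bound config
innodb_flush_log_at_trx_commit = 2   # redo log is not synced on commit
sync_binlog = 0                      # binlog enabled but not synced on commit
log_bin = binlog                     # enables the binlog
EOF
```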
Results
Data for the results is here. The numbers provided are queries per second (QPS), not transactions per second (TPS). The configurations tested are described below; a sketch of the two my.cnf settings that distinguish them follows the list:
- innodb.8t.1m.rr.ps0 - 8 tables, 1M rows/table, repeatable-read, performance_schema=0
- innodb.8t.1m.rr.ps1 - 8 tables, 1M rows/table, repeatable-read, performance_schema=1
- innodb.8t.1m.rc.ps0 - 8 tables, 1M rows/table, read-committed, performance_schema=0
- innodb.8t.1m.rc.ps1 - 8 tables, 1M rows/table, read-committed, performance_schema=1
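In my.cnf terms the four configurations differ only in the isolation level and whether the performance schema is enabled. The option names below are standard MySQL 5.6 spellings, but treat the fragment as a sketch, shown here for innodb.8t.1m.rc.ps0:

```
# A sketch of the two settings that vary across the configurations above,
# shown for the rc.ps0 combination.
cat > my.cnf.rc.ps0 <<'EOF'
[mysqld]
transaction-isolation = READ-COMMITTED   # "rr" configs use REPEATABLE-READ
performance_schema = OFF                 # "ps1" configs use ON
EOF
```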
My summary of performance is:
- update-index - disabling the performance schema has a small impact on QPS (between 1% and 5% more QPS is common). Changing transaction isolation has no impact on QPS.
- update-nonindex - same as update-index
- read-write - disabling the performance schema frequently boosts QPS by 5% to 10% and the impact is greater at high concurrency. Using repeatable-read boosts performance because it reduces the mutex contention from getting a consistent read snapshot, as that is done once per transaction rather than once per statement (see the sketch after this list).
- read-only - see the conclusions for read-write for shorter range scans (oltp-range-size set to 10 or 100 and maybe 1000). For longer range scans (oltp-range-size set to 10000) transaction isolation and the performance schema have little impact because the overhead is elsewhere.
- point-query - disabling the performance schema has a big impact on performance (between 5% and 10% more QPS) and the benefit is larger at high concurrency. Transaction isolation has no impact on performance because each transaction is a single statement.
- select - see point-query
- insert - disabling the performance schema has a smaller impact on QPS. Transaction isolation doesn't have an impact on performance because each transaction is a single statement.
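To show why the isolation level matters for the multi-statement tests but not the single-statement ones, here is a sketch of roughly what one oltp.lua read-only transaction issues, assuming the classic defaults of ten point selects plus four range queries. Under read-committed, InnoDB builds a consistent-read snapshot per statement, so roughly 14 times per transaction; under repeatable-read it builds one per transaction.

```
# A sketch of one oltp.lua read-only transaction (classic defaults assumed);
# credentials and key values are placeholders.
mysql -h 127.0.0.1 -u sbtest sbtest <<'SQL'
BEGIN;
SELECT c FROM sbtest1 WHERE id = 500000;                          -- repeated 10 times
SELECT c FROM sbtest1 WHERE id BETWEEN 5000 AND 5099;             -- simple range
SELECT SUM(k) FROM sbtest1 WHERE id BETWEEN 5000 AND 5099;        -- sum range
SELECT c FROM sbtest1 WHERE id BETWEEN 5000 AND 5099 ORDER BY c;  -- order range
SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN 5000 AND 5099;    -- distinct range
COMMIT;
SQL
```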
I assume the results labeled "rc" were "read-committed", not "repeatable-read", right?
yes, sorry for the typo. Fixed the data page and the blog post; rc is read-committed
And which one do you prefer?