Tuesday, December 29, 2020

Sysbench: MyRocks in MySQL 5.6 vs 8.0

I revived my sysbench helper scripts and have started to get results for MySQL (InnoDB, MyRocks) and Postgres. The results are from all_small.sh, which uses a few Lua files that are in my sysbench fork but have yet to be contributed. Tests were run on my home NUC servers for in-memory and IO-bound workloads, and this post is limited to the in-memory workloads.

Disclaimer -- the tests here are microbenchmarks. While useful, be careful about assuming that the regressions here will reproduce on your workloads. The in-memory tests help me understand CPU overhead which tends to increase in newer releases.

Summary:

  • In most cases, MyRocks in 8.0.17 achieves between 75% and 100% of the throughput that it gets in 5.6.35. The regressions tend to be larger for write-heavy tests than for read-only ones.
  • Many of the regressions are explained by new CPU overhead. But there is also variance that is likely caused by stalls from RocksDB and my scripts don't do a good job of showing that.
  • I don't understand why MyRocks was reading from storage for some of the tests. That doesn't reproduce with InnoDB and the database should have been cached.
  • There was one lousy but still unexplained result -- a test that does point queries via a secondary index was much slower in 8.0.17, getting between 41% and 54% of the 5.6.35 throughput (search for 0.41 below). The problem occurs when the test is run after the write-heavy tests but not when it is run before them. I need to debug this live but my test servers are busy.

Overview

The all_small.sh script runs a variety of tests in a sequence that is interesting to me. One example is to run the read-only tests before and then after write-heavy tests to understand the impact of fragmentation on query performance. The tests include the traditional result from oltp_read_write.lua, and results from other Lua scripts focused on a single command (point lookup, range query, etc). These are microbenchmarks and the purpose is to spot regressions between releases. MySQL 8 has many improvements and a few regressions (like bug 102037, found via these scripts).

For the in-memory workload there is 1 table with 10M rows and each test is run for 90 seconds. For the IO-bound setup there is 1 table with 200M rows and each test is run for 300 seconds. In both cases tests are repeated for 1, 2, 3 and 4 threads. The test servers have 4 CPU cores with HT disabled. The in-memory table is small enough to fit in RAM, the IO-bound table is not.
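
As a rough sketch (the real commands come from all_small.sh and run.sh, so treat the names and options below as an approximation), one in-memory test invocation looks something like this:

  # Load 1 table with 10M rows, then run a point-query test for 90 seconds
  # at 1, 2, 3 and 4 threads. Connection options are abbreviated.
  sysbench oltp_point_select --mysql-user=root --mysql-db=test \
      --tables=1 --table-size=10000000 prepare
  for t in 1 2 3 4 ; do
    sysbench oltp_point_select --mysql-user=root --mysql-db=test \
        --tables=1 --table-size=10000000 --threads=$t --time=90 run
  done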

Tests used FB MySQL 5.6.35 at git hash 4911e0 and 8.0.17 at git hash cf9dbc. These were the latest as of early December 2020. The NUC servers use Ubuntu 20.04 and XFS.

Results

I pushed results to github with:

  • Result directories for in-memory and IO-bound workloads
  • Summaries with results for 1, 2, 3 and 4 threads
    • Absolute QPS by thread count for in-memory and IO-bound workloads
    • Relative QPS by thread count for in-memory and IO-bound workloads. The results are relative to MyRocks in 5.6.35 (> 1.0 is an improvement, < 1.0 is a regression)
    • Metrics (CPU and IO per operation) for in-memory and IO-bound
  • The my.cnf files for 5.6 and 8.0 are as similar as possible. Fsync is not done on commit to keep this from being an fsync benchmark, and compression was not used. A guess at the relevant settings is sketched after this list.
  • Sysbench output and test scripts. The run.sh script maps the test names that are in the result summaries to the Lua scripts and per-script options.
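
As a guess at what the relevant my.cnf lines look like (the real files are in the github repo and these are not the exact settings), a fragment like the one below avoids an fsync per commit and disables compression for MyRocks:

  [mysqld]
  # do not fsync the RocksDB WAL on commit
  rocksdb_flush_log_at_trx_commit=0
  # do not fsync the binlog on commit
  sync_binlog=0
  # no compression for the default column family
  rocksdb_default_cf_options=compression=kNoCompression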
In-memory

The tests are in 4 groups based on the sequence in which they are run: read-only run before write-heavy, write-heavy, read-only run after write-heavy and insert/delete. 

The summaries focus on relative throughput and HW efficiency for the test with 1 thread. The results for more than 1 thread are similar although at 4 threads there were some tests with larger regressions for 8.0 and the likely cause was CPU saturation.

I use ratios to explain performance. In this case MyRocks in 5.6.35 is the denominator and 8.0.17 is the numerator. A QPS ratio < 1 means that 8.0.17 is slower. For the HW efficiency metric, CPU/query, a ratio > 1 means that 8.0.17 uses more CPU per query. I focus on CPU/query because it is correlated with QPS: CPU overhead usually increases in new MySQL releases, and that reduces QPS for low-concurrency workloads, especially when the database is cached and queries are simple.
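
To make the direction of the ratios concrete, here is a toy calculation with made-up numbers (these are not from the results below):

  # If 5.6.35 gets 10000 QPS and 8.0.17 gets 8900 QPS, while 8.0.17 uses
  # 46us of CPU per query vs 40us for 5.6.35, then:
  awk 'BEGIN {
    printf "QPS ratio = %.2f\n", 8900 / 10000   # 0.89 -> 8.0.17 is slower
    printf "CPU/query ratio = %.2f\n", 46 / 40  # 1.15 -> 8.0.17 uses more CPU per query
  }'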

Read-only before write-heavy:
  • QPS ratios are 0.89, 0.89, 0.90, 0.89 for the first 4 tests (up to here)
    • These do point queries
    • CPU/query ratios are 1.15, 1.14, 1.12, 1.13. See here to here.
  • QPS ratios are 0.94, 1.00, 1.12 for the next 3 tests (here to here)
    • These have the range scans from oltp_read_write.lua with ranges of size 10, 100 & 10,000
    • CPU/query ratios are 1.09, 1.03, 0.92. Long scans are better in 8.0.17 while shorter ones are better in 5.6.35. See here to here.
  • QPS ratios are 0.91, 0.89 for the next 2 tests (here to here)
    • These do point queries via in-lists that are covering and then not covering for the primary index (approximate query shapes are sketched after this list).
    • CPU/query ratios are 1.12, 1.15 (here to here)
  • QPS ratios are 0.82, 0.84 for the next 2 tests (here to here)
    • These are similar to the previous test, but use the secondary index. The regression in 8.0.17 is larger here than for the tests that use the PK index, and CPU/query is the problem.
    • CPU/query ratios are 1.23, 1.23 (here to here)
  • QPS ratios are 0.95, 0.96 for the next 2 tests (here to here)
    • These do range queries that are covering and then not covering for the primary index 
    • CPU/query ratios are 1.09, 1.04 (here to here)
  • QPS ratios are 0.98, 0.94 for the next 2 tests (here to here)
    • These are similar to the previous test, but use the secondary index. 
    • CPU/query ratios are 1.04, 1.07 (here to here)
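
To show what the in-list point queries above do, here are approximate query shapes against the standard sysbench table sbtest1, where id is the PK and k has a secondary index. The exact SQL comes from the Lua scripts, so treat these as sketches:

  # covering for the PK index: only the PK column is selected
  mysql -e 'SELECT id FROM sbtest1 WHERE id IN (1, 500, 1000)' test
  # not covering: non-indexed columns must be fetched from the row
  mysql -e 'SELECT id, c FROM sbtest1 WHERE id IN (1, 500, 1000)' test
  # covering for the secondary index on k
  mysql -e 'SELECT k FROM sbtest1 WHERE k IN (1, 500, 1000)' test
  # not covering for the secondary index: needs a lookup back to the PK
  mysql -e 'SELECT k, c FROM sbtest1 WHERE k IN (1, 500, 1000)' test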
Write-heavy - many of these tests have more read IO than I expected:
  • QPS ratios are 0.80, 0.84, 0.81, 0.77, 0.72 for the next 5 tests (here to here)
    • These are update-only tests (approximate statement shapes are sketched after this list)
    • CPU/statement ratios are 1.40, 1.23, 1.39, 1.37, 1.38 (here to here). 
    • IO/operation ratios are worse for 8.0.17. I don't understand why there are so many reads as the database should be cached. The update-zipf tests were the worst for IO and the IO problem occurs for both 5.6 and 8.0. I assume the problem is that new compaction output isn't in the OS page cache.
  • QPS ratio is 0.92 for the next test, write-only. See here.
    • This has the writes from oltp_read_write.lua. 
    • CPU/transaction ratio is 1.17. See here. Read/write IO is also worse.
  • QPS ratios are 1.01, 0.98 for the next two tests, read-write (here to here)
    • These are the traditional sysbench tests (oltp_read_write.lua) with ranges of size 10 and 100
    • CPU/transaction ratios are 1.05, 1.06 (here to here)
    • IO/operation results confuse me as the test table should be cached (here to here)
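
For reference, the update-only tests use statements with shapes similar to the standard sysbench update scripts; approximately:

  # update that changes an indexed column, so the secondary index on k must be maintained
  mysql -e 'UPDATE sbtest1 SET k = k + 1 WHERE id = 500' test
  # update that changes a non-indexed column
  mysql -e "UPDATE sbtest1 SET c = 'abcdefghij' WHERE id = 500" test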
Read-only after write-heavy includes tests that were run before write-heavy:
  • QPS ratios are 0.98, 0.94, 0.98 for the next 3 tests, read-only (here to here)
    • These have the range scans from oltp_read_write.lua with ranges of size 10, 100 and 10,000. These were also run before write-heavy; results here are slightly worse.
    • CPU/transaction ratios are 1.06, 1.09, 1.04 (here to here)
  • QPS ratios are 0.90, 0.88, 0.85, 0.85, 0.81 for the next 5 tests (here to here)
    • These do a variety of point queries. The first 4 were run in Read-only before write-heavy, and results here are similar.
    • CPU/query ratios are 1.14, 1.15, 1.18, 1.18, 1.25 (here to here)
  • QPS ratios are 0.89, 0.87 for the next 2 tests (here to here)
    • These do point queries via in-lists that are covering and then not covering for the primary index. These were also run before write-heavy. Results here are slightly worse.
    • CPU/query ratios are 1.14, 1.16 (here to here)
  • QPS ratios are 0.41, 0.54 for the next 2 tests (here to here)
    • These are similar to the previous test, but use the secondary index. These were also run before write-heavy. The results here show a serious regression for 8.0.17.
    • CPU/query ratios are 2.26, 1.90 (here to here). This starts to explain the problem but I need stack traces and/or CPU profiles to really explain it; a sketch of how to collect them follows this list.
  • QPS ratios are 0.74, 0.76 for the next 2 tests (here to here)
    • These do range queries that are covering and then not covering for the primary index
    • These were also run before write-heavy. 
    • CPU/query ratios are 1.31, 1.27 (here to here)
  • QPS ratios are 0.80, 0.74 for the next 2 tests (here to here)
    • These are similar to the previous test, but use the secondary index. These were also run before write-heavy.
    • CPU/query ratios are 1.25, 1.25 (here to here)
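
When the test servers are free, collecting the CPU profiles should be easy with Linux perf; something like:

  # sample on-CPU stacks from mysqld for 60 seconds while the slow test runs
  perf record -F 99 -g -p $(pgrep -x mysqld) -- sleep 60
  # then see where the CPU time goes
  perf report --stdio
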
Insert/delete:
  • QPS ratio is 0.78 for the delete test and 0.72 for the insert test
  • CPU/statement ratio is 1.27 for delete and 1.49 for insert

