Tuesday, December 29, 2020

Sysbench: IO-bound MyRocks in 5.6 vs 8.0

This post has results for an IO-bound workload. The previous post has results for an in-memory workload. 

Summary:

  • Many of the regressions are explained by new CPU overhead. But there is also variance that is likely caused by stalls from RocksDB, and my scripts don't do a good job of showing that. 
  • In most cases, MyRocks in 8.0.17 gets between 80% and 100% of the throughput vs 5.6.35.

Overview

Things that are different for this benchmark from the previous post:

  • The test table has 200M rows rather than 10M. Tests here are likely to be IO-bound.
  • Each test step is run for 300 seconds rather than 90.

The IO-bound results and shell scripts used to run the tests are on GitHub.
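
For context, this is a rough sketch of what one load step and one test step look like with stock sysbench. My runs use my own helper scripts and Lua files, so the test name, connection options and database name below are illustrative rather than what the repo does.

  # Sketch only: create and load one 200M-row table with the rocksdb engine,
  # then run a 300-second point-query step at 1 thread.
  sysbench oltp_point_select --mysql-host=127.0.0.1 --mysql-user=root \
      --mysql-db=test --mysql-storage-engine=rocksdb \
      --tables=1 --table-size=200000000 --threads=1 prepare

  sysbench oltp_point_select --mysql-host=127.0.0.1 --mysql-user=root \
      --mysql-db=test --tables=1 --table-size=200000000 \
      --threads=1 --time=300 run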

Results

The tests are in 4 groups based on the sequence in which they are run: read-only run before write-heavy, write-heavy, read-only run after write-heavy and insert/delete. 

The summaries focus on relative throughput and HW efficiency for the test with 1 thread, although tests were run for 1, 2, 3 and 4 threads. 

I use ratios to explain performance. In this case MyRocks in 5.6.35 is the denominator and 8.0.17 is the numerator. A QPS ratio < 1 means that 8.0.17 is slower. For HW efficiency I use CPU/operation and IO/operation (read & write). For CPU and IO per operation a ratio > 1 means that 8.0.17 uses more CPU or IO per query. Below I only mention IO/operation ratios when 5.6 and 8.0 don't have similar values.
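
To make the arithmetic concrete: a ratio is just the 8.0.17 measurement divided by the 5.6.35 measurement for the same test and thread count. The numbers below are made up to show the formula, not taken from these results.

  # Hypothetical numbers for one 300-second test step at 1 thread.
  qps_5635=4000  qps_8017=3700        # queries/second
  cpu_5635=120   cpu_8017=135         # CPU seconds consumed during the step
  awk -v q56=$qps_5635 -v q80=$qps_8017 -v c56=$cpu_5635 -v c80=$cpu_8017 'BEGIN {
      printf "QPS ratio (8.0/5.6):       %.2f\n", q80 / q56
      # CPU/query = CPU seconds / total queries, and total queries = QPS * 300
      printf "CPU/query ratio (8.0/5.6): %.2f\n", (c80 / (q80 * 300)) / (c56 / (q56 * 300))
  }'
  # Prints a QPS ratio of 0.93 (8.0.17 is slower) and a CPU/query ratio of 1.22
  # (8.0.17 uses more CPU per query). IO/operation ratios are computed the same
  # way, from IO counters instead of CPU seconds.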

Read-only before write-heavy:
  • QPS ratios are 0.95, 0.92, 0.91, 0.91 for the first 4 tests (up to here)
    • These do point queries
    • CPU/query ratios are 1.12, 1.23, 1.27, 1.19. See here to here.
  • QPS ratios are 0.97, 0.95, 1.09 for the next 3 tests (here to here)
    • These have the range scans from oltp_read_write.lua with ranges of size 10, 100 & 10,000
    • CPU/query ratios are 1.08, 1.10, 0.93 (here to here). Long scans are better in 8.0.17.
  • QPS ratios are 0.97, 0.96 for the next 2 tests (here to here)
    • These do point queries via in-lists that are covering and then not covering for the primary index (the query shapes are sketched after this list). 
    • CPU/query ratios are 1.10, 1.13 (here to here).
  • QPS ratios are 0.74, 0.98 for the next 2 tests (here to here)
    • These are similar to the previous test, but use the secondary index. The regression in 8.0.17 is larger here than for the tests that use the primary key index, and CPU/query is the problem.
    • CPU/query ratios are 1.31, 1.08 (here to here)
    • Read IO/query ratios are 1.35, 1.01. I don't know why the first is 1.35 rather than closer to 1.
  • QPS ratios are 1.03, 1.04 for the next 2 tests (here to here)
    • These do range queries that are covering and then not covering for the primary index 
    • CPU/query ratios are 0.99, 0.97 (here to here)
  • QPS ratios are 1.01, 0.93 for the next 2 tests (here to here)
    • These are similar to the previous test, but use the secondary index. 
    • CPU/query ratios are 1.01, 1.17 (here to here)
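
As a refresher on what covering means for these tests: with the usual sbtest schema, id is the primary key, k has a secondary index and c is not indexed. The statements below are only sketches of the query shapes; the Lua scripts use longer in-lists with random keys, and the database name is an assumption.

  # Covering for the primary index: only the key column is selected.
  mysql -e "SELECT id FROM sbtest1 WHERE id IN (1, 2, 3, 4, 5)" test
  # Not covering: a non-indexed column is fetched as well.
  mysql -e "SELECT c FROM sbtest1 WHERE id IN (1, 2, 3, 4, 5)" test

  # Same idea for the secondary index on k. The not-covering form must also
  # fetch the row via the primary key, which costs extra CPU and IO.
  mysql -e "SELECT k, id FROM sbtest1 WHERE k IN (1, 2, 3, 4, 5)" test
  mysql -e "SELECT c FROM sbtest1 WHERE k IN (1, 2, 3, 4, 5)" test
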
Write-heavy:
  • QPS ratios are 0.95, 0.84, 1.06, 0.80, 0.76 for the next 5 tests (here to here)
    • These are update-only
    • CPU/statement ratios are 1.13, 1.23, 0.90, 1.27, 1.25 (here to here). 
    • Read and write IO/operation ratios are better and worse in 8.0.17 depending on the test
  • QPS ratio is 1.00 for the next test, write-only. See here.
    • This has the writes from oltp_read_write.lua. 
    • CPU/transaction ratio is 1.10. See here.
  • QPS ratios are 0.30, 1.47 for the next two tests, read-write (here to here)
    • These are the traditional sysbench tests (oltp_read_write.lua) with ranges of size 10 and 100 (the two invocations are sketched after this list). This result is odd and it would be great to explain it.
    • CPU/transaction ratios are 3.11, 0.79 (here to here)
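
For reference, the only intended difference between those two read-write steps is the range size passed to oltp_read_write.lua. A stock-sysbench approximation, with connection options and table size as assumptions, looks like this:

  # Run the classic read-write transaction with ranges of size 10, then 100.
  for rs in 10 100; do
      sysbench oltp_read_write --mysql-host=127.0.0.1 --mysql-user=root \
          --mysql-db=test --tables=1 --table-size=200000000 \
          --range-size=$rs --threads=1 --time=300 run
  done
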
Read-only after write-heavy includes tests that were run before write-heavy:
  • QPS ratios are 1.06, 1.11, 1.10 for the next 3 tests, read-only (here to here)
    • These have the range scans from oltp_read_write.lua with ranges of size 10, 100 and 10,000. These were also run before write-heavy; results here are slightly worse. 
    • CPU/transaction ratios are 0.96, 0.84, 0.94 (here to here)
  • QPS ratios are 1.93, 1.03, 1.01, 1.10, 1.02 for the next 5 tests (here to here)
    • These do a variety of point queries. The first 4 were run in Read-only before write-heavy, and results here are similar. 
    • CPU/query ratios are 0.56, 1.00, 1.01, 1.00, 0.99 (here to here)
    • The 1.93 result is an example of the variance I mentioned above. This ratio is for the 1-thread test and the ratios for the 2, 3 and 4 thread tests are close to 1. Something is creating variance. I have to debug this live.
  • QPS ratios are 0.94, 0.96 for the next 2 tests (here to here)
    • These do point queries via in-lists that are covering and then not covering for the primary index. These were also run before write-heavy. 
    • CPU/query ratios are 1.17, 1.11 (here to here)
  • QPS ratios are 1.19, 1.00 for the next 2 tests (here to here)
    • These are similar to the previous test, but use the secondary index. These were also run before write-heavy. 
    • CPU/query ratios are 2.26, 1.90 (here to here). 
  • QPS ratios are 1.04, 1.10 for the next 2 tests (here to here)
    • These do range queries that are covering and then not covering for the primary index
    • These were also run before write-heavy. 
    • CPU/query ratios are 0.97, 0.88 (here to here)
  • QPS ratios are 0.98, 0.94 for the next 2 tests (here to here)
    • These are similar to the previous test, but use the secondary index. These were also run before write-heavy.
    • CPU/query ratios are 1.02, 1.17 (here to here)
Insert/delete:
  • QPS ratio is 0.84 for the delete test and 0.77 for the insert test (rough stock-sysbench equivalents are sketched below)
  • CPU/statement ratio is 1.22 for delete and 1.28 for insert
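
These steps use my own Lua scripts, but stock sysbench ships similar single-statement tests. A rough equivalent, with the options again as assumptions, would be:

  # Approximate stand-ins for the delete and insert steps.
  sysbench oltp_delete --mysql-host=127.0.0.1 --mysql-user=root --mysql-db=test \
      --tables=1 --table-size=200000000 --threads=1 --time=300 run
  sysbench oltp_insert --mysql-host=127.0.0.1 --mysql-user=root --mysql-db=test \
      --tables=1 --table-size=200000000 --threads=1 --time=300 run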
