Friday, January 12, 2024

Updated Insert benchmark: MyRocks 5.6 and 8.0, small server, cached database, v2

This has results for the Insert Benchmark using MyRocks 5.6 and 8.0, a small server and a cached workload. I have two small servers -- a Beelink SER4 with 16G of RAM and a Beelink SER7 with 32G of RAM. This report uses the SER7. A recent report from the Beelink SER4 is here, but that report will be replaced in a few days.

tl;dr

  • Some of the regressions between MyRocks 5.6 and 8.0 come from upstream. Here that shows up on the l.i0, qp100, qp500 and qr1000 benchmark steps.
  • There is too much noise in the range query benchmark steps (qr*) that I have yet to explain.

Noise

I recently improved the benchmark scripts to remove writeback and compaction debt after the l.i2 benchmark step to reduce noise in the read-write steps that follow. At least for MyRocks, the range query benchmark steps (qr100, qr500, qr1000) still have more noise than the point query steps. The worst case for noise with MyRocks is the qr100 step, and this is more obvious on a small server.

For MyRocks, the benchmark script now does the following after l.i2:

  • wait for X seconds where X = min(1200, 60 + #rows / 1M)
  • while waiting: flush the memtable, wait 20 seconds, then compact L0 into L1. Compacting L0 into L1 is only done for MyRocks builds from mid-2023 or newer because the feature used for it was buggy before then (a sketch of this sequence is below).

When the qr100 benchmark step starts, the memtable is empty and the L0 might be empty. On small servers, where I run the benchmark step for less than one hour, the memtable never fills and there are no memtable flushes. On larger servers the memtable is likely to be flushed many times.
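
Below is a minimal sketch of that sequence, assuming a Python client that uses mysql-connector-python. The two system variables (rocksdb_force_flush_memtable_now, rocksdb_compact_lzero_now) are the ones named in the Benchmark section below; the function name, connection details and parameters are made up for illustration and this is not the actual benchmark client.

    # Minimal sketch of the post-l.i2 cleanup, assuming mysql-connector-python.
    # The two system variables are real MyRocks variables; everything else here is made up.
    import time
    import mysql.connector

    def post_li2_cleanup(wait_secs, can_compact_lzero=True):
        # wait_secs is the X described above, computed from the table size
        conn = mysql.connector.connect(host="127.0.0.1", user="root", database="test")
        cur = conn.cursor()
        start = time.time()

        # Flush the memtable so the read-write steps start with an empty memtable
        cur.execute("SET GLOBAL rocksdb_force_flush_memtable_now = 1")
        time.sleep(20)

        # Compact L0 into L1 -- only for MyRocks builds from mid-2023 or newer,
        # because the feature was buggy before then
        if can_compact_lzero:
            cur.execute("SET GLOBAL rocksdb_compact_lzero_now = 1")

        # Sleep for whatever remains of the X-second wait
        remaining = wait_secs - (time.time() - start)
        if remaining > 0:
            time.sleep(remaining)

        cur.close()
        conn.close()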

Regardless, I have yet to figure out why there is more noise with MyRocks on the range query benchmark steps. Until then, for MyRocks I focus on qr500 and qr1000, or on results from larger servers, when searching for regressions in range queries. What I see now is that the CPU overhead per query changes significantly, but I have yet to explain why.

Build + Configuration

I tested MyRocks 5.6.35, 8.0.28 and 8.0.32 using the latest code as of December 2023. I also repeated tests for older builds of MyRocks 5.6.35 and 8.0.28. These were compiled from source. All builds use CMAKE_BUILD_TYPE=Release.

MyRocks 5.6.35 builds:
  • fbmy5635_rel_202104072149
    • from code as of 2021-04-07 at git hash f896415f with RocksDB 6.19.0
  • fbmy5635_rel_202203072101
    • from code as of 2022-03-07 at git hash e7d976ee with RocksDB 6.28.2
  • fbmy5635_rel_202205192101
    • from code as of 2022-05-19 at git hash d503bd77 with RocksDB 7.2.2
  • fbmy5635_rel_202208092101
    • from code as of 2022-08-09 at git hash 877a0e58 with RocksDB 7.3.1
  • fbmy5635_rel_202210112144
    • from code as of 2022-10-11 at git hash c691c716 with RocksDB 7.3.1
  • fbmy5635_rel_202302162102
    • from code as of 2023-02-16 at git hash 21a2b0aa with RocksDB 7.10.0
  • fbmy5635_rel_202304122154
    • from code as of 2023-04-12 at git hash 205c31dd with RocksDB 7.10.2
  • fbmy5635_rel_202305292102
    • from code as of 2023-05-29 at git hash b739eac1 with RocksDB 8.2.1
  • fbmy5635_rel_20230529_832
    • from code as of 2023-05-29 at git hash b739eac1 with RocksDB 8.3.2
  • fbmy5635_rel_20230529_843
    • from code as of 2023-05-29 at git hash b739eac1 with RocksDB 8.4.3
  • fbmy5635_rel_20230529_850
    • from code as of 2023-05-29 at git hash b739eac1 with RocksDB 8.5.0
  • fbmy5635_rel_221222
    • from code as of 2023-12-22 at git hash 4f3a57a1, RocksDB 8.7.0 at git hash 29005f0b

MyRocks 8.0.28 builds:
  • fbmy8028_rel_20220829_752
    • from code as of 2022-08-29 at git hash a35c8dfeab, RocksDB 7.5.2
  • fbmy8028_rel_20230129_754
    • from code as of 2023-01-29 at git hash 4d3d44a0459, RocksDB 7.5.4
  • fbmy8028_rel_20230502_810
    • from code as of 2023-05-02 at git hash d1ca8b276d, RocksDB 8.1.0
  • fbmy8028_rel_20230523_821
    • from code as of 2023-05-23 at git hash b08cc536f1, RocksDB 8.2.1
  • fbmy8028_rel_20230619_831
    • from code as of 2023-06-19 at git hash 6164cf0274, RocksDB 8.3.1
  • fbmy8028_rel_20230629_831
    • from code as of 2023-06-29 at git hash ab522f6df7c, RocksDB 8.3.1
  • fbmy8028_rel_221222
    • from code as of 2023-12-22 at git hash 2ad105fc, RocksDB 8.7.0 at git hash 29005f0b

MyRocks 8.0.32 builds:
  • fbmy8032_rel_221222
    • from code as of 2023-12-22 at git hash 76707b44, RocksDB 8.7.0 at git hash 29005f0b

Benchmark

The server is a Beelink SER7, described here, with 8 cores, 32G RAM, Ubuntu 22.04 and XFS on a fast M.2 NVMe device. The benchmark is run with 1 client.

The benchmark is a sequence of steps that are run in order:
  • l.i0
    • insert 60M rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One does inserts as fast as possible and the other does deletes at the same rate as the inserts to avoid changing the number of rows in the table. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions).
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow, where X is max(1200, 60 + #nrows/1M). While waiting, do things to reduce writeback and compaction debt:
      • MyRocks (see here) - set rocksdb_force_flush_memtable_now to flush the memtable, wait 20 seconds and then set rocksdb_compact_lzero_now to compact L0 into L1. Note that rocksdb_compact_lzero_now wasn't supported until mid-2023.
  • qr100
    • use 3 connections/client. One does range queries as fast as possible and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. A sketch of the rate-limited background writers is after this list.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
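
The insert and delete connections used by the qr* and qp* steps are rate limited (100, 500 or 1000 operations/s). Below is a minimal sketch of such a rate-limited writer, assuming a Python client; this is not the actual benchmark client, and the function name, arguments and statement are made up for illustration.

    # Minimal sketch of a rate-limited background writer for the qr*/qp* steps.
    # One connection would run this with an insert statement, another with a delete.
    import time

    def run_rate_limited(cur, statement, target_rate, duration_secs=1800):
        # Issue `statement` target_rate times per second for duration_secs
        interval = 1.0 / target_rate
        end = time.time() + duration_secs
        next_at = time.time()
        done = 0
        while time.time() < end:
            cur.execute(statement)
            done += 1
            next_at += interval
            sleep_for = next_at - time.time()
            if sleep_for > 0:
                time.sleep(sleep_for)
        # If the loop keeps falling behind (sleep_for stays negative) then the
        # target rate was not sustained, which the benchmark treats as an SLA failure
        return done

Because the delete connection runs at the same rate as the insert connection, the number of rows in the table stays roughly constant during these steps.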

Results

The performance reports are here for MyRocks 5.6, for MyRocks 8.0, for MyRocks 5.6 and 8.0 with many versions, and for MyRocks 5.6 and 8.0 with latest versions.
The summary has 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version on the first row of the table, which makes it easy to see how performance changes over time. The third shows the background insert rate for benchmark steps that have background inserts; all systems sustained the target rates.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is my version and $base is the version of the base case. When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.
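
As a worked example (with made-up numbers, not values from the tables below): if the base case does 100,000 queries/s and my version does 95,000 queries/s then relative QPS is 0.95, which gets the red color. A small Python sketch of the computation and the color buckets:

    # Relative QPS and the color buckets described above; the numbers are illustrative
    def relative_qps(qps_me, qps_base):
        return qps_me / qps_base

    def color(rel):
        if rel <= 0.95:
            return "red"    # regression
        if rel >= 1.05:
            return "green"  # improvement
        return "grey"       # neither

    rel = relative_qps(95_000, 100_000)
    print(rel, color(rel))  # 0.95 red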

The range query benchmark steps suffer from too much noise that I have yet to explain.

From the summary for 5.6
  • The base case is fbmy5635_rel_202104072149
  • Comparing throughput in fbmy5635_rel_221222 to the base case
    • Write-heavy
      • l.i0, l.x, l.i1, l.i2 - relative QPS is 0.95, 0.98, 0.97, 0.95
    • Range queries
      • qr100, qr500, qr1000 - relative QPS is 0.65, 1.11, 0.70
    • Point queries
      • qp100, qp500, qp1000 - relative QPS is 1.00, 0.99, 0.99
From the summary for 8.0
  • The base case is fbmy8028_rel_20220829_752
  • Comparing throughput in fbmy8032_rel_221222 to the base case
    • Write-heavy
      • l.i0, l.x, l.i1, l.i2 - relative QPS is 0.95, 1.01, 1.00, 0.97
    • Range queries
      • qr100, qr500, qr1000 - relative QPS is 0.98, 0.72, 1.04
    • Point queries
      • qp100, qp500, qp1000 - relative QPS is 0.99, 1.00, 0.99
From the summary for 5.6, 8.0 with many versions
  • The base case is fbmy5635_rel_202104072149
  • Comparing throughput in fbmy8032_rel_221222 to the base case
    • Write-heavy
      • l.i0, l.x, l.i1, l.i2 - relative QPS is 0.66, 0.89, 0.82, 0.81
    • Range queries
      • qr100, qr500, qr1000 - relative QPS is 0.93, 1.04, 0.69
    • Point queries
      • qp100, qp500, qp1000 - relative QPS is 0.86, 0.86, 0.83
From the summary for 5.6, 8.0 with latest versions
  • The base case is fbmy5635_rel_221222
  • Comparing throughput in fbmy8032_rel_221222 to the base case
    • Write-heavy
      • l.i0, l.x, l.i1, l.i2 - relative QPS is 0.69, 0.91, 0.85, 0.84
    • Range queries
      • qr100, qr500, qr1000 - relative QPS is 1.44, 0.93, 0.98
    • Point queries
      • qp100, qp500, qp1000 - relative QPS is 0.86, 0.87, 0.84




