Thursday, January 11, 2024

Updated Insert benchmark: MyRocks 5.6 and 8.0, medium server, cached database, v2

This has results for the Insert Benchmark using MyRocks 5.6 and 8.0, a medium server and a cached workload. This replaces a recent report. The difference between this and the recent report is that I changed the benchmark scripts to reduce writeback and compaction debt between the last write-only benchmark step (l.i2) and the first read-write benchmark step (qr100). The intention is to reduce variance and make it easier to spot regressions. Alas, that is still an unsolved problem, especially on the range query benchmark steps.

tl;dr - context matters

The biggest concerns I have are the ~16% slowdown on the initial load (l.i0) benchmark step from MyRocks 5.6.35 to 8.0.32 and the ~5% slowdown for benchmark steps that do point queries (qp*) from MyRocks 8.0.28 to 8.0.32.

Comparing latest MyRocks 8.0.32 relative to latest MyRocks 5.6.35
  • Initial load is ~17% slower
  • Other write-heavy benchmark steps are ~3% slower
  • Range queries are between 6% and 14% faster
  • Point queries are ~7% faster
Comparing latest MyRocks 8.0.32 to an old build of MyRocks 5.6.35
  • Initial load is ~16% slower
  • Other write-heavy benchmark steps are between 2% and 6% slower
  • Range queries are between 5% slower and 5% faster
  • Point queries are 5% to 11% faster
Comparing latest MyRocks 8.0.32 to latest MyRocks 8.0.28
  • Initial load is ~4% slower
  • Other write-heavy benchmark steps are between 3% slower and 2% faster
  • Range queries are between 1% slower and 6% faster
  • Point queries are ~5% slower

Build + Configuration

See the previous report.

Benchmark

See the previous report.

Benchmark steps

The benchmark is run with 8 clients and a client per table.

The benchmark is a sequence of steps that are run in order:
  • l.i0
    • insert 20M rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One does inserts as fast as possible and the other does deletes at the same rate as the inserts to avoid changing the number of rows in the table. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions).
    • Wait for X seconds after the step finishes, where X is max(1200, 60 + #nrows/1M), to reduce variance during the read-write benchmark steps that follow. While waiting, do the following to reduce writeback and compaction debt:
      • MyRocks (see here) - set rocksdb_force_flush_memtable_now to flush the memtable, wait 20 seconds and then set rocksdb_compact_lzero_now to compact L0. Note that rocksdb_compact_lzero_now wasn't supported until mid-2023.
  • qr100
    • use 3 connections/client. One does range queries as fast as possible and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1200 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except it uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
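The post-l.i2 wait and flush logic described above can be sketched roughly as follows. This is my own sketch, not the actual benchmark scripts: the function names and the use of the mysql CLI are assumptions, while the formula and the server variables come from the post.

```python
import subprocess
import time

def post_l_i2_wait_seconds(nrows: int) -> int:
    # X = max(1200, 60 + #nrows/1M), where nrows is the total rows inserted
    return max(1200, 60 + nrows // 1_000_000)

def reduce_writeback_debt_myrocks():
    # Hypothetical sketch, assuming a mysql CLI with login defaults configured:
    # flush the memtable, wait 20 seconds, then compact L0
    # (rocksdb_compact_lzero_now wasn't supported until mid-2023).
    subprocess.run(
        ["mysql", "-e", "SET GLOBAL rocksdb_force_flush_memtable_now = 1"],
        check=True)
    time.sleep(20)
    subprocess.run(
        ["mysql", "-e", "SET GLOBAL rocksdb_compact_lzero_now = 1"],
        check=True)
```

For example, with 8 tables and 20M rows per table there are 160M rows, 60 + 160 = 220 is below the floor, so the wait is 1200 seconds.
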
Results

The performance reports are here, one for each of the comparisons described below.
The summary has 3 tables. The first shows absolute throughput by (DBMS tested) x (benchmark step). The second has throughput relative to the version on the first row of the table. The third shows the background insert rate for benchmark steps that have background inserts; all systems sustained the target rates. The second table makes it easy to see how performance changes over time.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is my version and $base is the version of the base case. When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.
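The relative QPS computation and the color thresholds above can be expressed as a short sketch. This is my own illustration, not code from the benchmark scripts:

```python
def relative_qps(qps_me: float, qps_base: float) -> float:
    # (QPS for $me / QPS for $base): > 1.0 means performance improved,
    # < 1.0 means there is a regression
    return qps_me / qps_base

def highlight(r: float) -> str:
    # red for <= 0.95, green for >= 1.05, grey for values in between
    if r <= 0.95:
        return "red"
    if r >= 1.05:
        return "green"
    return "grey"
```

For example, a step where $me does 830 operations/s against a base case of 1000 operations/s has relative QPS 0.83 and is highlighted red.
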

From the summary for 5.6
  • The base case is fbmy5635_rel_202104072149
  • Comparing throughput in fbmy5635_rel_221222 to the base case
    • Write-heavy
      • l.i0, l.x, l.i1, l.i2 - relative QPS is 1.02, 0.97, 0.97, 1.01
    • Range queries
      • qr100, qr500, qr1000 - relative QPS is 0.93, 0.92, 0.99
    • Point queries
      • qp100, qp500, qp1000 - relative QPS is 0.98, 1.03, 1.01
From the summary for 8.0
  • The base case is fbmy8028_rel_221222
  • The cost of the perf schema is <= 3% for write-heavy, <= 14% for range queries and <= 5% for point queries
  • Comparing throughput in fbmy8032_rel_221222 to the base case
    • Write-heavy
      • l.i0, l.x, l.i1, l.i2 - relative QPS is 0.96, 1.02, 0.98, 0.97
    • Range queries
      • qr100, qr500, qr1000 - relative QPS is 1.01, 1.06, 0.99
    • Point queries
      • qp100, qp500, qp1000 - relative QPS is 0.95, 0.96, 0.95
From the summary for 5.6, 8.0 with many versions
  • The base case is fbmy5635_rel_202104072149
  • Comparing throughput in fbmy8032_rel_221222 to the base case
    • Write-heavy
      • l.i0, l.x, l.i1, l.i2 - relative QPS is 0.84, 0.94, 0.95, 0.98
    • Range queries
      • qr100, qr500, qr1000 - relative QPS is 0.95, 1.05, 1.00
    • Point queries
      • qp100, qp500, qp1000 - relative QPS is 1.05, 1.11, 1.09
From the summary for 5.6, 8.0 with latest versions
  • The base case is fbmy5635_rel_221222
  • Comparing throughput in fbmy8032_rel_221222 to the base case
    • Write-heavy
      • l.i0, l.x, l.i1, l.i2 - relative QPS is 0.83, 0.97, 0.97, 0.97
    • Range queries
      • qr100, qr500, qr1000 - relative QPS is 1.06, 1.06, 1.14
    • Point queries
      • qp100, qp500, qp1000 - relative QPS is 1.07, 1.07, 1.07
