Wednesday, June 12, 2024

The Insert Benchmark vs MyRocks and InnoDB, small server, cached database

This post has results from the Insert Benchmark for many versions of MyRocks and upstream MySQL (InnoDB) on a small server with a cached database and a low-concurrency workload.

The goal here is to document performance regressions over time for both upstream MySQL with InnoDB and FB MyRocks. If they get slower at a similar rate from MySQL 5.6 to 8.0 then the culprit is code above the storage engine. Otherwise, the regressions are from the storage engine.

tl;dr

  • Regressions from 5.6 to 8.0 are worse for InnoDB than for MyRocks
  • InnoDB is faster than MyRocks here because the workload is CPU-bound and InnoDB uses less CPU than MyRocks. A result from an IO-bound setup will be different.
  • The worst case for MyRocks vs InnoDB here is on the range query benchmark steps (qr100, qr500, qr1000) because range queries with an LSM usually don't benefit from a bloom filter and do suffer from merging iterators -- all of which increases the CPU overhead.
  • I need to spend more time getting flamegraphs to document the differences.
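The merging-iterator overhead mentioned above can be sketched in a few lines. This is a toy model, not RocksDB code: each LSM level is a sorted run, a range scan must merge an iterator from every run and keep only the newest version of each key, while a point lookup can skip most runs via bloom filters. The level contents here are made up for illustration.

```python
import heapq

# Hypothetical sorted runs, one per LSM level, newest first.
levels = [
    [(3, "c-new"), (7, "g")],            # memtable
    [(1, "a"), (3, "c-old"), (5, "e")],  # L1 SSTable
    [(2, "b"), (4, "d"), (6, "f")],      # L2 SSTable
]

def range_scan(levels, lo, hi):
    """Merge every level; CPU cost grows with the number of sorted
    runs because each run contributes an iterator to the merge."""
    merged = heapq.merge(*levels, key=lambda kv: kv[0])
    seen, out = set(), []
    for k, v in merged:
        if lo <= k <= hi and k not in seen:
            seen.add(k)  # the first (newest) version of a key wins
            out.append((k, v))
    return out

print(range_scan(levels, 2, 5))
# → [(2, 'b'), (3, 'c-new'), (4, 'd'), (5, 'e')]
```

Note that the scan cannot skip a level even when that level contributes nothing to the result, which is why range queries pay a per-run CPU tax that bloom filters cannot remove.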
Build + Configuration

All DBMS were compiled from source.

I tested InnoDB from upstream MySQL 5.6.51, 5.7.44, 8.0.28, 8.0.32, 8.0.36 and 8.0.37. 

I tested MyRocks from FB MySQL 5.6.35, 8.0.28 and 8.0.32.
The my.cnf files are here for the SER4 and the PN53 servers.

The Benchmark

The benchmark is run with 1 client, a cached workload and 1 table. It is explained here.

There were two server types. The SER4 server has 8 cores and 16G of RAM and is named v3 here. The PN53 has 8 cores and 32G of RAM and is named v8 here. Both run Ubuntu 22.04 and use XFS with one m.2 device.

The benchmark steps are:

  • l.i0
    • insert X rows in PK order. The table has a PK index but no secondary indexes. There is one connection per client. The value of X is 20M for SER4 and 30M for PN53.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 40M rows and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 10M rows are inserted and deleted.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
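The write-heavy steps above (l.i1, l.i2) pair an inserter with a deleter running at the same rate so the table size stays roughly constant. A minimal sketch of that driver loop, with a deque standing in for the table and `run_li1` as a hypothetical name (the real benchmark client issues SQL over two connections):

```python
from collections import deque

def run_li1(total_rows, rows_per_txn, table):
    """Sketch of the l.i1 step: one writer inserts rows in PK order
    while a second deletes the oldest rows at the same rate, so the
    table size stays roughly constant. 'table' stands in for the DB."""
    next_pk = max(table) + 1 if table else 0
    inserted = 0
    while inserted < total_rows:
        # insert transaction: rows_per_txn rows in PK order
        for _ in range(rows_per_txn):
            table.append(next_pk)
            next_pk += 1
        # matching delete transaction: remove the oldest rows
        for _ in range(rows_per_txn):
            table.popleft()
        inserted += rows_per_txn
    return inserted

table = deque(range(1000))  # pretend l.i0 loaded 1000 rows
done = run_li1(total_rows=200, rows_per_txn=50, table=table)
print(done, len(table))     # 200 inserted, table size unchanged
```

With rows_per_txn=50 this models the big transactions of l.i1; l.i2 is the same loop with rows_per_txn=5. Because the step runs for a fixed number of inserts, a slower DBMS simply takes longer to finish.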
Results

The performance reports are here:
The summary in each performance report has 3 tables. The first shows absolute throughput for each DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table, which makes it easy to see how performance changes over time. The third shows the background insert rate for benchmark steps that have background inserts, which makes it easy to see which DBMS+configs failed to meet the SLA. Here all systems sustained the target rates.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is my version and $base is the version of the base case. When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.
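The relative QPS metric and the color thresholds above are simple enough to state as code. A minimal sketch (function names are mine, not from the benchmark client):

```python
def relative_qps(qps_me, qps_base):
    """relative QPS = (QPS for $me / QPS for $base).
    > 1.0 means performance improved, < 1.0 means a regression."""
    return qps_me / qps_base

def color(rel):
    """Color convention from the post: red <= 0.95,
    green >= 1.05, grey in between."""
    if rel <= 0.95:
        return "red"    # regression
    if rel >= 1.05:
        return "green"  # improvement
    return "grey"       # within noise

print(color(relative_qps(570, 1000)))  # 0.57 -> red
```

For example, if MySQL 8.0.37 does 570 inserts/s on a step where 5.6.51 did 1000 inserts/s, the relative QPS is 0.57 and the cell is red.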

Results: InnoDB

The base case is MySQL 5.6.51 - my5651_rel.cz11a_bee or my5651_rel.cz11a_c8r32. 

It is compared with MySQL 5.7.44 and 8.0.37 (my5744_rel.cz11a_bee, my5744_rel.cz11a_c8r32, my8037_rel.cz11a_bee, my8037_rel.cz11a_c8r32).

tl;dr
  • There is a large regression from 5.6 to 5.7 and then again from 5.7 to 8.0
  • MySQL 8.0.37 gets ~30% less throughput than 5.6.51
From the summary for SER4 the relative throughput per benchmark step is:
  • l.i0
    • relative QPS is 0.84 in 5.7.44 with SER4
    • relative QPS is 0.57 in 8.0.37 with SER4
  • l.x - I ignore this for now
  • l.i1, l.i2
    • relative QPS is 1.15, 0.87 in 5.7.44 with SER4
    • relative QPS is 1.02, 0.71 in 8.0.37 with SER4
  • qr100, qr500, qr1000
    • relative QPS is 0.75, 0.74, 0.73 in 5.7.44 with SER4
    • relative QPS is 0.64, 0.64, 0.63 in 8.0.37 with SER4
  • qp100, qp500, qp1000
    • relative QPS is 0.81, 0.82, 0.83 in 5.7.44 with SER4
    • relative QPS is 0.61, 0.61, 0.63 in 8.0.37 with SER4
From the summary for PN53 the relative throughput per benchmark step is:
  • l.i0
    • relative QPS is 0.89 in 5.7.44 with PN53
    • relative QPS is 0.61 in 8.0.37 with PN53
  • l.x - I ignore this for now
  • l.i1, l.i2
    • relative QPS is 1.06, 1.01 in 5.7.44 with PN53
    • relative QPS is 0.98, 0.84 in 8.0.37 with PN53
  • qr100, qr500, qr1000
    • relative QPS is 0.83, 0.85, 0.83 in 5.7.44 with PN53
    • relative QPS is 0.75, 0.76, 0.74 in 8.0.37 with PN53
  • qp100, qp500, qp1000
    • relative QPS is 0.87, 0.86, 0.86 in 5.7.44 with PN53
    • relative QPS is 0.70, 0.69, 0.69 in 8.0.37 with PN53
Results: MyRocks

The base case is MyRocks 5.6.35:
  • fbmy5635_rel_240606_4f3a57a1.cza1_bee or fbmy5635_rel_240606_4f3a57a1.cza1_c8r32
The base case is compared with MyRocks 8.0.28 and 8.0.32:
  • fbmy8028_rel_240606_c6c83b18.cza1_bee, fbmy8028_rel_240606_c6c83b18.cza1_c8r32
  • fbmy8032_rel_240606_59f03d5a.cza1_bee, fbmy8032_rel_240606_59f03d5a.cza1_c8r32
tl;dr
  • These results have more variance than the InnoDB results above, especially for range queries. It might be that I need to run the qr* benchmark steps for more than 1800 seconds.
From the summary for SER4 the relative throughput per benchmark step is:
  • l.i0
    • relative QPS is 0.72 in 8.0.28 with SER4
    • relative QPS is 0.67 in 8.0.32 with SER4
  • l.x - I ignore this for now
  • l.i1, l.i2
    • relative QPS is 0.90, 0.85 in 8.0.28 with SER4
    • relative QPS is 0.91, 0.82 in 8.0.32 with SER4
  • qr100, qr500, qr1000
    • relative QPS is 0.73, 1.04, 1.06 in 8.0.28 with SER4
    • relative QPS is 0.64, 0.92, 0.85 in 8.0.32 with SER4
  • qp100, qp500, qp1000
    • relative QPS is 0.97, 0.95, 0.92 in 8.0.28 with SER4
    • relative QPS is 0.87, 0.88, 0.88 in 8.0.32 with SER4
From the summary for PN53 the relative throughput per benchmark step is:
  • l.i0
    • relative QPS is 0.70 in 8.0.28 with PN53
    • relative QPS is 0.66 in 8.0.32 with PN53
  • l.x - I ignore this for now
  • l.i1, l.i2
    • relative QPS is 0.87, 0.90 in 8.0.28 with PN53
    • relative QPS is 0.92, 0.88 in 8.0.32 with PN53
  • qr100, qr500, qr1000
    • relative QPS is 1.40, 1.12, 0.91 in 8.0.28 with PN53
    • relative QPS is 0.81, 1.00, 0.83 in 8.0.32 with PN53
  • qp100, qp500, qp1000
    • relative QPS is 0.89, 0.89, 0.90 in 8.0.28 with PN53
    • relative QPS is 0.87, 0.87, 0.88 in 8.0.32 with PN53
Results: MyRocks vs InnoDB

The base case is MySQL 8.0.32 - my8032_rel.cz11a_bee or my8032_rel.cz11a_c8r32.

It is compared with MyRocks 8.0.32 - fbmy8032_rel_240606_59f03d5a.cza1_bee or fbmy8032_rel_240606_59f03d5a.cza1_c8r32.

tl;dr
  • For l.i0
    • CPU overhead (see cpupq for SER4 and for PN53) increases a lot from 5.6 to 8.0 for both InnoDB and MyRocks, and the regressions are probably from code above the storage engine
  • For l.i1 and l.i2
    • CPU overhead (see cpupq for SER4 and for PN53) increases from 5.6 to 8.0 for MyRocks, while for InnoDB there is more variance but less of an increase. Thus, while MyRocks was faster on l.i1 it became slower on l.i2 -- perhaps from tombstones, compaction and other MVCC GC overheads. Another difference between l.i1 and l.i2 is that the fraction of time spent in the optimizer is relatively larger with l.i2 because less work is done per DELETE statement. But I am speculating here and need more time to debug it.
    • The charts with per-second rates for inserts and deletes show more variance for MyRocks (for SER4 and PN53) than for InnoDB (for SER4 and PN53). 
  • For qr100
    • On the SER4 server the CPU overhead increases by ~1.5X from 5.6 to 8.0 for both MyRocks and InnoDB. But in absolute terms the increase is ~3X larger for MyRocks. On PN53 the relative increase is ~1.3X for both but the absolute increase for MyRocks is ~2X larger than for InnoDB. I need flamegraphs to understand why. See the cpupq column for SER4 and for PN53.
  • For qp100 
    • For InnoDB and MyRocks the relative increases in CPU overhead from 5.6 to 8.0 are 1.6X and 1.2X on SER4, and 1.5X and 1.2X on PN53. So InnoDB gets more new CPU overhead than MyRocks. See cpupq for SER4 and for PN53.
From the summaries for SER4 and for PN53
  • l.i0
    • relative QPS is 0.95 in MyRocks 8.0.32 with SER4
    • relative QPS is 0.92 in MyRocks 8.0.32 with PN53
  • l.x - I ignore this for now
  • l.i1, l.i2
    • relative QPS is 1.18, 0.79 in MyRocks 8.0.32 with SER4
    • relative QPS is 1.30, 0.85 in MyRocks 8.0.32 with PN53
  • qr100, qr500, qr1000
    • relative QPS is 0.40, 0.44, 0.41 in MyRocks 8.0.32 with SER4
    • relative QPS is 0.36, 0.40, 0.43 in MyRocks 8.0.32 with PN53
  • qp100, qp500, qp1000
    • relative QPS is 0.83, 0.85, 0.83 in MyRocks 8.0.32 with SER4
    • relative QPS is 0.82, 0.82, 0.82 in MyRocks 8.0.32 with PN53

