Friday, June 14, 2024

The Insert Benchmark vs MyRocks and InnoDB, small server, IO-bound database

This has results from the Insert Benchmark for many versions of MyRocks and MySQL (InnoDB) on a small server with an IO-bound database and low-concurrency workload. It complements the results from the same setup but with a cached database.

The goal here is to document performance regressions over time for both upstream MySQL with InnoDB and FB MyRocks. If they get slower at a similar rate from MySQL 5.6 to 8.0 then the culprit is code above the storage engine. Otherwise, the regressions are from the storage engine.

tl;dr

  • MyRocks and InnoDB have similar throughput on the initial load (l.i0). Something changed from MyRocks 5.6 to MyRocks 8.0 that increases write-amp.
  • MyRocks is much faster on random writes (l.i1, l.i2) thanks to read free secondary index maintenance
  • InnoDB gets more range-query QPS (qr100, ...) because MyRocks uses a lot more CPU per query
  • MyRocks gets more point-query QPS (qp100, ...) thanks to bloom filters
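The read free secondary index maintenance mentioned above is the key difference for random writes. A minimal sketch of the idea (not MyRocks code; names and data structures here are illustrative): a B-tree must read the old index entry before it can delete it, while an LSM handling a non-unique secondary index can blindly write a tombstone and the new entry.

```python
# Conceptual sketch of read-free secondary index maintenance.
# A B-tree reads the old entry before deleting it; an LSM for a
# non-unique index appends a tombstone and a new entry with no read.

reads_from_storage = {"btree": 0, "lsm": 0}

def btree_update(index, old_key, new_key, row_id):
    # B-tree style: locate and remove the old entry, then insert the new one.
    reads_from_storage["btree"] += 1          # leaf-page read for old_key
    index.discard((old_key, row_id))
    index.add((new_key, row_id))

def lsm_update(log, old_key, new_key, row_id):
    # LSM style: append a delete marker and the new entry; no read needed.
    log.append(("del", old_key, row_id))
    log.append(("put", new_key, row_id))

btree = {("a", 1)}
btree_update(btree, "a", "b", 1)

lsm = [("put", "a", 1)]
lsm_update(lsm, "a", "b", 1)

assert reads_from_storage == {"btree": 1, "lsm": 0}
```

When the secondary index does not fit in cache, that saved read is a storage read per index change, which is why the l.i1 and l.i2 gaps below are so large.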

Build + Configuration

All DBMS were compiled from source.

I tested InnoDB from upstream MySQL 5.6.51, 5.7.44, 8.0.28, 8.0.32, 8.0.36 and 8.0.37. 

I tested MyRocks from FB MySQL 5.6.35, 8.0.28 and 8.0.32.
The my.cnf files are here for the SER4 server.

The Benchmark

The benchmark is run with 1 client, an IO-bound workload and 1 table. It is explained here.

While the results for the cached database used two types of servers (SER4, PN53), here I only have results from SER4. The SER4 server has 8 cores and 16G of RAM and is named v3 here. It uses Ubuntu 22.04 and XFS with 1 m.2 device.

The benchmark steps are:

  • l.i0
    • insert 800 million rows in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 40M rows and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 10M rows are inserted and deleted.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
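The qr* and qp* steps above pace the background insert and delete connections at a fixed rate and treat a missed rate as an SLA failure. A minimal sketch of that pacing loop, under my own assumptions about how the client schedules work (function names are mine, not from the benchmark client):

```python
import time

def paced_ops(target_rate, duration_s, do_op, now=time.monotonic, sleep=time.sleep):
    """Issue do_op() at target_rate ops/second for duration_s seconds.
    Returns the number of ops completed so the caller can check the SLA."""
    start = now()
    done = 0
    while True:
        elapsed = now() - start
        if elapsed >= duration_s:
            return done
        # Catch up to the number of ops that should have been issued by now.
        due = int(elapsed * target_rate)
        while done < due:
            do_op()
            done += 1
        sleep(1.0 / target_rate)

def meets_sla(completed, target_rate, duration_s, tolerance=0.99):
    """The step fails the SLA when the sustained rate falls below target."""
    return completed >= tolerance * target_rate * duration_s
```

Because the step runs until the target number of inserts is done when the rate is sustained, every system that passes the SLA ends the step with the same amount of new data, which keeps the read-write steps comparable.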
Results

The performance reports are here: all DBMS, MyRocks only, InnoDB only, MyRocks vs InnoDB.

The summary in each performance report has 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps that have background inserts; all systems sustained the target rates. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is my version and $base is the version of the base case. When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.
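The relative QPS and color rules above can be stated in a few lines. This is just a restatement of the definitions in this post, with the same thresholds (red <= 0.95, green >= 1.05, grey in between):

```python
# Relative QPS and the color buckets used in the summaries.

def relative_qps(qps_me, qps_base):
    # $me is the version under test, $base is the base case version.
    return qps_me / qps_base

def color(rel):
    if rel <= 0.95:
        return "red"     # regression
    if rel >= 1.05:
        return "green"   # improvement
    return "grey"        # within the noise band

# Example: a version at 0.55x the base case throughput is a regression.
assert color(relative_qps(55.0, 100.0)) == "red"
assert color(1.00) == "grey"
```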

Results: InnoDB

The base case is MySQL 5.6.51 - my5651_rel.cz11a_bee.

It is compared with MySQL 5.7.44 and 8.0.37 - my5744_rel.cz11a_bee, my8037_rel.cz11a_bee.

tl;dr
  • Results from MySQL 5.6.51 were not great and much was improved for InnoDB in MySQL 5.7
  • MySQL 8.0 suffers from more CPU overhead, see the cpupq column here (CPU/operation)
From the summary the relative throughput per benchmark step is:
  • l.i0
    • relative QPS is 0.83 in 5.7.44
    • relative QPS is 0.55 in 8.0.37
  • l.x - I ignore this for now
  • l.i1, l.i2
    • relative QPS is 1.75, 1.62 in 5.7.44
    • relative QPS is 2.47, 0.70 in 8.0.37
  • qr100, qr500, qr1000
    • relative QPS is 0.76, 0.83, 0.95 in 5.7.44
    • relative QPS is 0.70, 0.76, 0.87 in 8.0.37
  • qp100, qp500, qp1000
    • relative QPS is 0.97, 1.03, 1.28 in 5.7.44
    • relative QPS is 0.93, 1.01, 1.26 in 8.0.37
Results: MyRocks

The base case is MyRocks 5.6.35 - fbmy5635_rel_240606_4f3a57a1.cza1_bee. Note that MyRocks 5.6.35 and 8.0.28 here use RocksDB version 8.7.0 while MyRocks 8.0.32 used RocksDB version 9.3.1.

The base case is compared with MyRocks 8.0.28 and 8.0.32 
  • fbmy8028_rel_240606_c6c83b18.cza1_bee
  • fbmy8032_rel_240606_59f03d5a.cza1_bee
tl;dr
  • l.i0
    • 8.0.28 and 8.0.32 use more CPU (cpupq is CPU/insert) - see here
    • 8.0.28 and 8.0.32 have more write-amp (wkbpi is KB written to storage/insert) - see here. From the compaction IO stats taken at the end of the benchmark step, the extra writes go to logical L1 (physical name is L4). I am confused because both 5.6.35 and 8.0.28 use RocksDB version 8.7.0, but it looks like trivial move wasn't used as expected for MyRocks 8.0
    • Max insert response time charts are much better for 5.6.35 than for 8.0.28 or 8.0.32
  • l.i1
    • 8.0.32 has more write-amp (wkbpi is KB written to storage/insert) - see here.
    • 8.0.32 does better at avoiding write stalls than 5.6.35 or 8.0.28
  • l.i2
    • 8.0.32 has more write-amp (wkbpi is KB written to storage/insert) - see here.
    • 8.0.32 does better at avoiding write stalls than 5.6.35 or 8.0.28
  • qr100, qr500, qr1000
    • Results here have more variance, but 8.0.28 and 8.0.32 often use more CPU (cpupq) and do more reads from storage (rkbpi, rpq) relative to 5.6.35. Perhaps I need to run these benchmark steps for longer than 1800 seconds to reduce the variance. Metrics are here.
  • qp100, qp500, qp1000
    • 8.0.28 and 8.0.32 use more CPU (cpupq) - see here.
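The per-operation metrics cited above (cpupq, wkbpi, rkbpi, rpq) are all counters divided by the operation count for the step. A sketch of the arithmetic, using the definitions from this post (the counter names and collection details here are my assumption):

```python
# Per-operation efficiency metrics: a counter for the step divided by
# the number of operations (inserts or queries) in that step.

def efficiency(cpu_usecs, kb_written, kb_read, storage_reads, operations):
    return {
        "cpupq": cpu_usecs / operations,    # CPU per operation
        "wkbpi": kb_written / operations,   # KB written to storage per insert
        "rkbpi": kb_read / operations,      # KB read from storage per operation
        "rpq": storage_reads / operations,  # storage reads per query
    }

# Illustrative numbers only, not from the benchmark results.
m = efficiency(cpu_usecs=5_000_000, kb_written=40_000, kb_read=80_000,
               storage_reads=2_000, operations=10_000)
assert m["cpupq"] == 500.0 and m["wkbpi"] == 4.0
```

This is why a DBMS can have similar throughput but very different efficiency: cpupq and wkbpi expose the cost per operation even when the rates match.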
From the summary the relative throughput per benchmark step is:
  • l.i0
    • relative QPS is 0.73 in 8.0.28
    • relative QPS is 0.68 in 8.0.32
  • l.x - I ignore this for now
  • l.i1, l.i2
    • relative QPS is 0.88, 0.98 in 8.0.28
    • relative QPS is 1.29, 1.29 in 8.0.32
  • qr100, qr500, qr1000
    • relative QPS is 0.72, 0.36, 0.72 in 8.0.28
    • relative QPS is 0.53, 0.66, 0.98 in 8.0.32
  • qp100, qp500, qp1000
    • relative QPS is 0.87, 0.89, 0.89 in 8.0.28
    • relative QPS is 0.91, 0.92, 0.92 in 8.0.32
Results: MyRocks vs InnoDB

The base case is MySQL 8.0.32 - my8032_rel.cz11a_bee.
It is compared with MyRocks 8.0.32 - fbmy8032_rel_240606_59f03d5a.cza1_bee.

tl;dr 
  • l.i0
    • Per insert, InnoDB and MyRocks use a similar amount of CPU (cpupq) but InnoDB writes ~2X more to storage (wkbpi) - see here
  • l.i1, l.i2
    • MyRocks benefits a lot from read free secondary index maintenance because InnoDB does ~20X more IO (rkbpi, wkbpi) than MyRocks - see here
  • qr100, qr500, qr1000 
    • Relative to each other, InnoDB uses too much read and write IO (rkbpi, wkbpi) while MyRocks uses too much CPU (cpupq) - see here. In this case the CPU overhead is the bottleneck and InnoDB gets more QPS. For reasons I don't fully understand, InnoDB struggles to cache the indexes here while MyRocks and Postgres do not.
  • qp100, qp500, qp1000
    • InnoDB uses more CPU (cpupq) and does more IO (rkbpi, wkbpi) - see here
From the summary the relative throughput per benchmark step is:
  • l.i0
    • relative QPS is 0.97 in MyRocks 8.0.32
  • l.x - I ignore this for now
  • l.i1, l.i2
    • relative QPS is 5.11, 5.44 in MyRocks 8.0.32
  • qr100, qr500, qr1000
    • relative QPS is 0.16, 0.32, 0.25 in MyRocks 8.0.32
  • qp100, qp500, qp1000
    • relative QPS is 1.22, 1.31, 1.39 in MyRocks 8.0.32

