This post compares performance from sysbench read-only microbenchmarks before and after the database has been subjected to random writes. Results are shared for MyRocks, InnoDB and Postgres.
tl;dr
- If you care about read-write performance then run read-write benchmarks and be wary of read-only benchmarks. Alas, a common benchmark practice is load and go (load, create indexes, then run read-only or read-heavy tests).
- The penalty from the random writes (the post result) was largest for MyRocks and smallest for InnoDB. The penalty for Postgres was closer to MyRocks for range queries and closer to InnoDB for point queries.
- The penalties were larger on the c2 server than on the Beelink. At least for MyRocks this isn't a surprise because the LSM tree on the c2 server has more levels.
- In the worst case, after random writes MyRocks gets ~50% of the QPS for point queries and ~61% of the QPS for range queries relative to before the writes were done.
Editorial
Measuring read-only performance for a database can be misleading because much context affects the results and that context isn't always documented. The context is the state of the various database structures that will be accessed by queries, and this state has a significant impact on the CPU overhead per query.
The problem is that the database structures can be in one of two states:
- dynamic - by dynamic I mean there is background work in progress -- memtable flush and compaction for an LSM, vacuum and writeback for Postgres, purge and writeback for InnoDB. While that work is in progress there is competition for resources, including CPU and IO. And until the work has completed, queries might be more expensive - for example, a query on a covering secondary index might still have to fetch the base row with Postgres and InnoDB.
- static - by static I mean that background work is not in progress. The problem here is that the database can settle into a state that isn't deterministic across benchmarks. One example is the amount of data in the memtable and the number of L0 files with an LSM. When looking for CPU regressions it is hard to distinguish between a real regression and more or less overhead from these non-deterministic states.
For an LSM there are at least two static states of interest, illustrated by the sketch after this list:
- Load in key order - while the LSM tree probably uses multiple levels to store the data, the SST files will not overlap and a bloom filter check will be done for at most one SST because the min/max key per SST can prune all but the one SST that might have the data. Thus, some CPU per query is saved. But even in this state the amount of data in the memtable and the number of SSTs in the L0 can have a significant impact on CPU per query.
- After random writes - once enough random writes have been done there will be overlap between SST files and bloom filter checks must be done for more than one SST per point query.
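Here is a minimal sketch (my own illustration, not RocksDB code) of that pruning: with non-overlapping SSTs the per-file min/max keys limit a point lookup to at most one bloom filter check, while overlapping SSTs force a check per file that might contain the key. The file shapes and keys are invented.

```cpp
#include <iostream>
#include <string>
#include <vector>

struct Sst {
  std::string min_key, max_key;  // smallest/largest key in the file
};

// Count how many bloom filter checks a point lookup for `key` needs:
// a file can be skipped only if `key` is outside its min/max range.
int BloomChecks(const std::vector<Sst>& ssts, const std::string& key) {
  int checks = 0;
  for (const auto& s : ssts)
    if (key >= s.min_key && key <= s.max_key) ++checks;  // can't prune
  return checks;
}

int main() {
  // After a key-order load: the files do not overlap.
  std::vector<Sst> loaded = {{"a", "f"}, {"g", "m"}, {"n", "z"}};
  // After random writes: files from different levels overlap.
  std::vector<Sst> written = {{"a", "z"}, {"c", "p"}, {"b", "y"}};
  std::cout << "pre:  " << BloomChecks(loaded, "h") << " bloom check(s)\n"
            << "post: " << BloomChecks(written, "h") << " bloom check(s)\n";
}
```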
If you want to understand how a database performs for a read-write workload then ignore read-only benchmark results and get the storage/DBMS into a steady state before taking measurements. For more advice on how to do LSM benchmarks see here and here.
I hope that RocksDB gets adaptive behavior that detects when a workload shifts to read-only or read-mostly mode and does the right thing (flush the memtable, flush the L0, maybe flush the L1). Long ago RocksDB inherited read-triggered compaction from LevelDB, which is adaptive behavior, but that was disabled years ago. Issue 8004 is a request to restore it, but the behavior I seek is more comprehensive.
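Until something like that exists, an application that knows a read-only phase is coming can approximate the behavior by hand. Below is a minimal sketch using the public RocksDB C++ API - flush the memtable, then run a manual full-range compaction, which also drains the L0. Error handling is abbreviated and the database path is a placeholder.

```cpp
#include <cassert>
#include <rocksdb/db.h>

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/example_db", &db);
  assert(s.ok());

  // ... the write-heavy phase runs here ...

  // Persist memtable contents to L0 so the memtable is empty for reads.
  s = db->Flush(rocksdb::FlushOptions());
  assert(s.ok());

  // Compact the whole key range (nullptr == unbounded) so L0 files are
  // merged into lower levels and point lookups probe fewer SSTs.
  s = db->CompactRange(rocksdb::CompactRangeOptions(), nullptr, nullptr);
  assert(s.ok());

  // ... the read-only phase runs here ...

  delete db;
}
```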
Benchmarks
A description of how I run sysbench is here. This uses the results from benchmarks I recently ran using my home Beelink servers and larger c2-standard-60 servers from GCP. In both cases the database is cached by the DBMS. For the Beelink server there is 1 table with 20M rows and for the c2 server there are 4 tables with 50M rows per table. There were 42 microbenchmarks put into 3 groups -- point queries, range queries and writes. I focus on the point and range query microbenchmarks here. Each of the 42 was run for 10 minutes. For an overview of the benchmarks see here and here.
Results are provided for MyRocks from FB MySQL 8.0.28, InnoDB from upstream MySQL 8.0.28 and Postgres 15.2.
The benchmark sequence is:
- Load tables in PK order
- Create secondary indexes
- Do the post-write work
- Run read-only microbenchmarks. These are the pre results.
- Do random writes
- Do the post-write work
- Run read-only microbenchmarks. These are the post results.
The post-write work is explained in the How I run sysbench blog post, which includes links to the code for the post-write work. It is done to get database structures into a more deterministic state. As described above, there are pre and post results: the pre (post) results are from the read-only microbenchmarks run before (after) subjecting the database to many random writes. The post-write work includes sleep time that is a function of table size: 20 minutes for the c2 server and 3 minutes for the Beelink server.
This is the shape of the MyRocks LSM tree:
- from pre: for the Beelink and the c2 server
- from post: for the Beelink and the c2 server
Graphs
A spreadsheet with the results is here. The pre results are from the read-only microbenchmarks run prior to the random writes and the post results are from after the random writes.
Results from the c2 server.
Summary statistics
Summary statistics are provided for the point and range query benchmarks to compare the pre and post results.
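For reference, the comparison is just the ratio of post QPS to pre QPS per microbenchmark; a minimal sketch of that arithmetic is below. The microbenchmark names and QPS values are invented, not taken from the spreadsheet.

```cpp
#include <cstdio>
#include <string>
#include <vector>

struct Result {
  std::string name;
  double pre_qps;   // QPS before the random writes
  double post_qps;  // QPS after the random writes
};

int main() {
  // Hypothetical values; ratios below 1.0 show the write penalty.
  std::vector<Result> results = {
      {"point-query", 100000.0, 50000.0},
      {"range-query", 80000.0, 48800.0},
  };
  for (const auto& r : results)
    std::printf("%s: post/pre = %.2f\n", r.name.c_str(),
                r.post_qps / r.pre_qps);
}
```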
Summary of the summary stats:
- The penalty from the random writes (the post result) was largest for MyRocks and smallest for InnoDB. The penalty for Postgres was closer to MyRocks for range queries and closer to InnoDB for point queries.
- The penalties were larger on the c2 server than on the Beelink. At least for MyRocks this isn't a surprise because the LSM tree on the c2 server has more levels.
Summary statistics from the Beelink server: