Sunday, May 18, 2025

RocksDB 10.2 benchmarks: large & small servers with a cached workload

I previously shared benchmark results for RocksDB using the larger server that I have. In this post I share more results from two other large servers and one small server. The split is arbitrary, but by large I mean >= 20 cores, medium means 10 to 19 cores and small means fewer than 10 cores.

tl;dr

  • There are several big improvements
  • There might be a small regression in fillseq performance; I will revisit this
  • For the block cache, hyperclock does much better than LRU on CPU-bound tests
  • I am curious about issue 13546 but I am not sure whether the builds I tested include it

Software

I used RocksDB versions 6.29.5, 7.10.2, 8.11.4, 9.0.1, 9.1.2, 9.2.2, 9.3.2, 9.4.1, 9.5.2, 9.6.2, 9.7.4, 9.8.4, 9.9.3, 9.10.0, 9.11.2, 10.0.1, 10.1.3 and 10.2.1. Everything was compiled with gcc 11.4.0.

For 8.x, 9.x and 10.x the benchmark was repeated using both the LRU block cache (older code) and hyperclock (newer code). That was done by setting the --cache_type argument (a command-line sketch follows the list):

  • lru_cache was used for versions 7.6 and earlier
  • hyper_clock_cache was used for versions 7.7 through 8.5
  • auto_hyper_clock_cache was used for versions 8.6+
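
To make that concrete, below is a sketch of a pair of db_bench command lines where only --cache_type changes. The flag names are real db_bench flags, but the database path, cache size and choice of test are placeholders; my real runs use helper scripts (linked in the Benchmark section below) that set many more options.

    # A sketch, not my actual command lines: the same test with each block cache
    DB=/data/rocksdb                             # placeholder path
    CACHE_BYTES=$(( 32 * 1024 * 1024 * 1024 ))   # placeholder cache size

    db_bench --benchmarks=readwhilewriting --use_existing_db=1 --db=$DB \
      --cache_size=$CACHE_BYTES --cache_type=lru_cache --duration=1800

    db_bench --benchmarks=readwhilewriting --use_existing_db=1 --db=$DB \
      --cache_size=$CACHE_BYTES --cache_type=auto_hyper_clock_cache --duration=1800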

Hardware

My servers are described here. From that list I used:

  • The small server has an AMD Ryzen 7 CPU with 8 cores and 32G of RAM. It is v5 in the blog post.
  • The first large server has 24 cores with 64G of RAM. It is v6 in the blog post.
  • The other large server has 32 cores and 128G of RAM. It is v7 in the blog post.

Benchmark

Overviews on how I use db_bench are here and here.

Tests were run for a workload with the database cached by RocksDB that I call byrx in my scripts.

The benchmark steps that I focus on are listed below, with a rough db_bench sketch after the list:
  • fillseq
    • load RocksDB in key order with 1 thread
  • revrangeww, fwdrangeww
    • do reverse or forward range queries with a rate-limited writer. Report performance for the range queries.
  • readww
    • do point queries with a rate-limited writer. Report performance for the point queries.
  • overwrite
    • overwrite (via Put) random keys using many threads
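
The sketch below maps those step names to plain db_bench invocations. It is approximate: my runs go through wrapper scripts that set many more flags, and the key count and writer rate limit here are placeholders.

    # Approximate db_bench equivalents of the steps above (a sketch, not my scripts)
    NUM=20000000                   # 20M, 40M or 50M KV pairs depending on the server
    RATE=$(( 2 * 1024 * 1024 ))    # placeholder rate limit for the background writer

    # fillseq: load in key order with 1 thread, WAL disabled
    db_bench --benchmarks=fillseq --num=$NUM --threads=1 --disable_wal=1

    # fwdrangeww / revrangeww: range queries plus a rate-limited writer
    # (add --reverse_iterator=1 for the reverse variant)
    db_bench --benchmarks=seekrandomwhilewriting --use_existing_db=1 --num=$NUM \
      --duration=1800 --benchmark_write_rate_limit=$RATE

    # readww: point queries plus a rate-limited writer
    db_bench --benchmarks=readwhilewriting --use_existing_db=1 --num=$NUM \
      --duration=1800 --benchmark_write_rate_limit=$RATE

    # overwrite: random Puts from many threads (my step also waits for compaction to finish)
    db_bench --benchmarks=overwrite --use_existing_db=1 --num=$NUM --duration=1800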

Relative QPS

Many of the tables below (inlined and via URL) show the relative QPS which is:
    (QPS for my version / QPS for base version)

The base version varies and is listed below. When the relative QPS is > 1.0 then my version is faster than the base version. When it is < 1.0 then there might be a performance regression or there might just be noise. A worked example is below.
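
As a worked example with made-up numbers: if the base version did 500,000 QPS on a test and my version did 570,000 QPS, then the relative QPS is 570000 / 500000 = 1.14 and my version is ~14% faster.

    # The same arithmetic as a one-liner, using the made-up numbers above
    echo "570000 500000" | awk '{ printf "relative QPS = %.2f\n", $1 / $2 }'
    # prints: relative QPS = 1.14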

Small server

The benchmark was run using 1 client thread and 20M KV pairs. Each benchmark step was run for 1800 seconds. Performance summaries are here.

For the byrx (cached database) workload with the LRU block cache:

  • see relative and absolute performance summaries; the base version is RocksDB 6.29.5
  • fillseq is ~14% faster in 10.2 vs 6.29 with improvements in 7.x and 9.x
  • revrangeww and fwdrangeww are ~6% slower in 10.2 vs 6.29; I might revisit this
  • readww has similar perf from 6.29 through 10.2
  • overwrite is ~14% faster in 10.2 vs 6.29 with most of the improvement in 7.x

For the byrx (cached database) workload with the Hyper Clock block cache:

  • see relative and absolute performance summaries; the base version is RocksDB 8.11.4
  • there might be a small regression (~3%) or there might be noise in the results

The results below are from RocksDB 10.2.1 and show the relative QPS for 10.2 with the Hyper Clock block cache versus 10.2 with the LRU block cache. Here the QPS for revrangeww, fwdrangeww and readww is ~10% to 15% better with Hyper Clock.

relQPS  test
0.99    fillseq.wal_disabled.v400
1.09    revrangewhilewriting.t1
1.13    fwdrangewhilewriting.t1
1.15    readwhilewriting.t1
0.96    overwriteandwait.t1.s0

Large server (24 cores)

The benchmark was run using 16 client threads and 40M KV pairs. Each benchmark step was run for 1800 seconds. Performance summaries are here.

For the byrx (cached database) workload with the LRU block cache:

  • see relative and absolute performance summaries; the base version is RocksDB 6.29.5
  • fillseq might have a new regression of ~4% in 10.2.1, or that might be noise; I will revisit this
  • revrangeww, fwdrangeww, readww and overwrite are mostly unchanged since 8.x

For the byrx (cached database) workload with the Hyper Clock block cache:

  • see relative and absolute performance summaries; the base version is RocksDB 8.11.4
  • fillseq might have a new regression of ~8% in 10.2.1, or that might be noise; I will revisit this
  • revrangeww, fwdrangeww, readww and overwrite are mostly unchanged since 8.x

The results below are from RocksDB 10.2.1 and show the relative QPS for 10.2 with the Hyper Clock block cache versus 10.2 with the LRU block cache. Hyper Clock is much better for workloads where multiple threads frequently access the block cache.

relQPS  test
0.97    fillseq.wal_disabled.v400
1.35    revrangewhilewriting.t16
1.43    fwdrangewhilewriting.t16
1.69    readwhilewriting.t16
0.97    overwriteandwait.t16.s0

Large server (32 cores)

The benchmark was run using 24 client threads and 50M KV pairs. Each benchmark step was run for 1800 seconds. Performance summaries are here.

For the byrx (cached database) workload with the LRU block cache:

  • see relative and absolute performance summaries; the base version is RocksDB 6.29.5
  • fillseq might have a regression of ~10% from 7.10 through 10.2; I will revisit this
  • revrangeww, fwdrangeww, readww and overwrite are mostly unchanged since 8.x

For the byrx (cached database) workload with the Hyper Clock block cache:

  • see relative and absolute performance summaries; the base version is RocksDB 7.10.2
  • fillseq might have a regression of ~10% from 7.10 through 10.2; I will revisit this
  • revrangeww, fwdrangeww, readww and overwrite are mostly unchanged since 8.x

The results below are from RocksDB 10.2.1 and show the relative QPS for 10.2 with the Hyper Clock block cache versus 10.2 with the LRU block cache. Hyper Clock is much better for workloads where multiple threads frequently access the block cache.

relQPS  test
1.02    fillseq.wal_disabled.v400
1.39    revrangewhilewriting.t24
1.55    fwdrangewhilewriting.t24
1.77    readwhilewriting.t24
1.00    overwriteandwait.t24.s0

Tuesday, May 6, 2025

RocksDB 10.2 benchmarks: large server

This post has benchmark results for RocksDB 10.x, 9.x, 8.11, 7.10 and 6.29 on a large server.

tl;dr

  • There are several big improvements
  • There are no new regressions
  • For the block cache hyperclock does much better than LRU on CPU-bound tests

Software

I used RocksDB versions 6.0.2, 6.29.5, 7.10.2, 8.11.4, 9.0.1, 9.1.2, 9.2.2, 9.3.2, 9.4.1, 9.5.2, 9.6.2, 9.7.4, 9.8.4, 9.9.3, 9.10.0, 9.11.2, 10.0.1, 10.1.3, 10.2.1. Everything was compiled with gcc 11.4.0.

For 8.x, 9.x and 10.x the benchmark was repeated using both the LRU block cache (older code) and hyperclock (newer code). That was done by setting the --cache_type argument (a sketch of the version-to-cache mapping follows the list):

  • lru_cache was used for versions 7.6 and earlier
  • hyper_clock_cache was used for versions 7.7 through 8.5
  • auto_hyper_clock_cache was used for versions 8.6+
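
The per-version choice is easy to script from the RocksDB version string. The sketch below is an illustration, not my actual helper scripts.

    # Sketch: pick --cache_type from the RocksDB version string
    v="$ROCKSDB_VERSION"     # e.g. 6.29.5, 7.10.2, 8.11.4, 10.2.1
    case "$v" in
      6.*|7.[0-6].*)  cache_type=lru_cache ;;               # 7.6 and earlier
      7.*|8.[0-5].*)  cache_type=hyper_clock_cache ;;       # 7.7 through 8.5
      *)              cache_type=auto_hyper_clock_cache ;;  # 8.6 and later
    esac
    echo "--cache_type=$cache_type"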

Hardware

The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4.

Benchmark

Overviews on how I use db_bench are here and here.

All of my tests here are run with 36 client threads (fillseq is the exception, as noted below).

Tests were repeated for 3 workload+configuration setups, with a sketch of the flag differences after the list:

  • byrx - database is cached by RocksDB
  • iobuf - database is larger than RAM and RocksDB uses buffered IO
  • iodir - database is larger than RAM and RocksDB uses O_DIRECT
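
The sketch below shows roughly how the three setups differ in db_bench flags. The cache sizes are placeholders, the database size (set via --num) is what makes iobuf and iodir larger than RAM, and my real configs set many more options.

    COMMON="--benchmarks=readwhilewriting --use_existing_db=1 --threads=36 --duration=1800"

    # byrx: the block cache is large enough to cache the database
    db_bench $COMMON --cache_size=$(( 100 * 1024 * 1024 * 1024 ))

    # iobuf: database larger than RAM, buffered IO, much smaller block cache
    db_bench $COMMON --cache_size=$(( 8 * 1024 * 1024 * 1024 ))

    # iodir: like iobuf but O_DIRECT for user reads and compaction IO
    db_bench $COMMON --cache_size=$(( 8 * 1024 * 1024 * 1024 )) \
      --use_direct_reads=1 --use_direct_io_for_flush_and_compaction=1
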
The benchmark steps named on the charts are:
  • fillseq
    • load RocksDB in key order with 1 thread
  • revrangeww, fwdrangeww
    • do reverse or forward range queries with a rate-limited writer. Report performance for the range queries.
  • readww
    • do point queries with a rate-limited writer. Report performance for the point queries.
  • overwrite
    • overwrite (via Put) random keys using many threads
Results: byrx

Performance summaries are here for: LRU block cache, hyperclock and LRU vs hyperclock. A spreadsheet with relative QPS and charts is here.

The graphs below show the relative QPS, which is: (QPS for me / QPS for base case). When the relative QPS is greater than one then performance improved relative to the base case. The y-axis doesn't start at zero in most graphs to make it easier to see changes.

This chart has results for the LRU block cache and the base case is RocksDB 6.29.5:
  • overwrite
    • ~1.2X faster in modern RocksDB
  • revrangeww, fwdrangeww, readww
    • slightly faster in modern RocksDB
  • fillseq
    • ~15% slower in modern RocksDB most likely from new code added for correctness checks
This chart has results for the hyperclock block cache and the base case is RocksDB 8.11.4:
  • there are approximately zero regressions. The changes are small and might be normal variance.
This chart has results from RocksDB 10.2.1. The base case uses the LRU block cache and that is compared with hyperclock:
  • readww
    • almost 3X faster with hyperclock because it suffers the most from block cache contention
  • revrangeww, fwdrangeww
    • almost 2X faster with hyperclock
  • fillseq
    • no change with hyperclock because the workload uses only 1 thread
  • overwrite
    • no benefit from hyperclock because write stalls are the bottleneck
Results: iobuf

Performance summaries are here for: LRU block cache, hyperclock and LRU vs hyperclock. A spreadsheet with relative QPS and charts is here.

The graphs below show the relative QPS, which is: (QPS for me / QPS for base case). When the relative QPS is greater than one then performance improved relative to the base case. The y-axis doesn't start at zero in most graphs to make it easier to see changes.

This chart has results for the LRU block cache and the base case is RocksDB 6.29.5.
  • fillseq
    • ~1.6X faster since RocksDB 7.x
  • readww
    • ~6% faster in modern RocksDB
  • overwrite
  • revrangeww, fwdrangeww
    • ~5% slower since early 8.x
This chart has results for the hyperclock block cache and the base case is RocksDB 8.11.4.
  • overwrite
    • suffered from issue 12038 in versions 8.6 through 9.8. The line would be similar to what I show above had the base case been 8.5 or earlier
  • fillseq
    • ~7% faster in 10.2 relative to 8.11
  • revrangeww, fwdrangeww, readww
    • unchanged from 8.11 to 10.2

This chart has results from RocksDB 10.2.1. The base case uses the LRU block cache and that is compared with hyperclock.

  • readww
    • ~8% faster with hyperclock. The benefit here is smaller than above for byrx because the workload here is less CPU-bound
  • revrangeww, fwdrangeww, overwrite
    • slightly faster with hyperclock
  • fillseq
    • no change with hyperclock because the workload uses only 1 thread

Results: iodir

Performance summaries are here for: LRU block cache, hyperclock and LRU vs hyperclock. A spreadsheet with relative QPS and charts is here.

The graphs below show the relative QPS, which is: (QPS for me / QPS for base case). When the relative QPS is greater than one then performance improved relative to the base case. The y-axis doesn't start at zero in most graphs to make it easier to see changes.

This chart has results for the LRU block cache and the base case is RocksDB 6.29.5.

  • fillseq
    • ~1.6X faster since RocksDB 7.x (see results above for iobuf)
  • overwrite
    • ~1.2X faster in modern RocksDB
  • revrangeww, fwdrangeww, readww
    • unchanged from 6.29 to 10.2

This chart has results for the hyperclock block cache and the base case is RocksDB 8.11.4.

  • overwrite
    • might have a small regression (~3%) from 8.11 to 10.2
  • revrangeww, fwdrangeww, readww, fillseq
    • unchanged from 8.11 to 10.2

This chart has results from RocksDB 10.2.1. The base case uses the LRU block cache and that is compared with hyperclock.

  • there are small regressions and/or small improvements and/or normal variance



Thursday, May 1, 2025

The impact of innodb_doublewrite_pages in MySQL 8.0.41

After reading a blog post from JFG on changes to innodb_doublewrite_pages and bug 111353, I wanted to understand the impact from that on the Insert Benchmark using a large server.

I test the impact from:

  • using a larger (non-default) value for innodb_doublewrite_pages
  • disabling the doublewrite buffer

tl;dr

  • Using a larger value for innodb_doublewrite_pages improves QPS by up to 10%
  • Disabling the InnoDB doublewrite buffer is great for performance, but bad for durability. I don't suggest you do this in production.

Builds, configuration and hardware

I compiled upstream MySQL 8.0.41 from source.

The server is an ax162-s from Hetzner with 48 cores, 128G RAM and AMD SMT disabled. It uses Ubuntu 22.04 and storage is ext4 using SW RAID 1 over 2 locally attached NVMe devices. More details on it are here. At list prices a similar server from Google Cloud costs 10X more than from Hetzner.

The MySQL configuration files are listed below, with a command-line sketch of the deltas after the list:
  • cz11a_c32r128 - the base configuration file; it does not set innodb_doublewrite_pages and ends up with innodb_doublewrite_pages=8
  • cz11e_c32r128 - adds innodb_doublewrite_pages=128 to the base config
  • cz11f_c32r128 - adds innodb_doublewrite=0 to the base config (disables doublewrite)
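
The two non-base configs each differ from the base config by one setting. Because both options can also be given at server startup, the command lines below sketch the deltas; base.cnf is a hypothetical stand-in for the cz11a_c32r128 file.

    # Sketch of the config deltas, not my actual cnf files
    mysqld --defaults-file=base.cnf --innodb_doublewrite_pages=128   # cz11e_c32r128
    mysqld --defaults-file=base.cnf --innodb_doublewrite=0           # cz11f_c32r128, doublewrite disabled
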
The Benchmark

The benchmark is explained here and is run with 20 clients and a table per client with an IO-bound workload. The database is larger than memory with 200M rows per table and 20 tables.

The benchmark steps are:

  • l.i0
    • insert 200 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance report is here.

The summary section in the performance report has 3 tables. The first shows absolute throughput for each DBMS+config tested and each benchmark step. The second shows throughput relative to the result in the first row of the table, which makes it easy to see how performance changes across configs. The third shows the background insert rate for the benchmark steps that have background inserts, which makes it easy to see which DBMS+configs failed to meet the SLA; here all of them sustained the target rates.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result with the cz11e_c32r128 or cz11f_c32r128 configs and $base is the result from the cz11a_c32r128 config. The configs are explained above: cz11e_c32r128 increases innodb_doublewrite_pages and cz11f_c32r128 disables the doublewrite buffer.

When the relative QPS is > 1.0 then the config improves performance relative to the base config. When it is < 1.0 then the config hurts performance. The Q in relative QPS measures:
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.

Results: more IO-bound

The performance summary is here.

From the cz11e_c32r128 config that increases innodb_doublewrite_pages to 128:
  • the impact on write-heavy steps is mixed: create index was ~7% slower and l.i2 was ~10% faster
  • the impact on range query + write steps is positive but small. The improvements were 0%, 0% and 4%. Note that these steps are not as IO-bound as point query + write steps and the range queries do ~0.3 reads per query (see here).
  • the impact on point query + write steps is positive and larger. The improvements were 3%, 8% and 9%. These benchmark steps are much more IO-bound than the steps that do range queries.
From the cz11f_c32r128 config that disables the InnoDB doublewrite buffer:
  • the impact on write-heavy steps is large -- from 1% to 36% faster.
  • the impact on range query + write steps is positive but small. The improvements were 0%, 2% and 15%. Note that these steps are not as IO-bound as point query + write steps and the range queries do ~0.3 reads per query (see here).
  • the impact on point query + write steps is positive and larger. The improvements were 14%, 41% and 42%.
