Tuesday, November 7, 2023

Checking RocksDB 4.x thru 8.x for performance regressions on a small server: IO-bound with buffered IO

This post checks for performance regressions in most versions of RocksDB 4.x and all versions of 5.x, 6.x, 7.x and 8.x using a small server. The workload is IO-bound and RocksDB uses buffered IO. In a previous post I shared results for RocksDB 7.x and 8.x on a larger server. A post for a cached workload is here and one for IO-bound with O_DIRECT is here.

The workload here is IO-bound and has low concurrency. When performance changes across versions, one common cause is a change in CPU overhead, but it isn't the only one.

From RocksDB 4.4 to 8.8
  • fillseq QPS decreased by 10%, mostly between RocksDB 6.10 and 7.10
  • read-only QPS increased by ~50%, mostly in late 5.x and from 5.18 to 6.0
  • read QPS in read-write workloads increased by ~40%, mostly in 5.17 and 5.18
  • random write-only (overwriteandwait) QPS increased by ~2X in RocksDB 5.x, then declined in 6.x and 7.x, and has been stable in 8.x at ~1.7X the QPS of RocksDB 4.4. I hope to do more to explain this.

Other notes

  • For RocksDB 8.7 and 8.8 you might need to reduce compaction_readahead_size as explained here
  • A spreadsheet with all of the charts is here

Builds

I compiled most versions of RocksDB 4.x and all versions of 5.x, 6.x, 7.x and 8.x using gcc. I wasn't able to compile RocksDB 4.0 thru 4.3. The build command line is:
make DISABLE_WARNING_AS_ERROR=1 DEBUG_LEVEL=0 static_lib db_bench
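Each version is built the same way, so a loop over release tags works. Below is a minimal sketch, assuming release tags follow the vX.Y.Z pattern and that the upstream repo is checked out; each binary gets a per-version suffix:

for v in 4.4.1 5.10.5 6.29.5 7.10.2 8.8.0 ; do
  git checkout v$v
  make clean
  make DISABLE_WARNING_AS_ERROR=1 DEBUG_LEVEL=0 static_lib db_bench
  cp db_bench db_bench.$v
done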
The versions tested were:
  • 4.4.1, 4.5.1, 4.6.1, 4.7.1, 4.8.0, 4.9.0, 4.10.2, 4.11.2, 4.12.0, 4.13.5
  • 5.0.2, 5.1.5, 5.2.1, 5.3.7, 5.4.12, 5.5.6, 5.6.2, 5.7.5, 5.8.8, 5.9.3, 5.10.5, 5.11.4, 5.12.5, 5.13.4, 5.14.3, 5.15.10, 5.16.6, 5.17.4, 5.18.4
  • 6.0.2, 6.1.2, 6.2.4, 6.3.6, 6.4.6, 6.5.3, 6.6.4, 6.7.3, 6.8.1, 6.9.4, 6.10.4, 6.11.7, 6.12.8, 6.13.4, 6.14.6, 6.15.5, 6.16.5, 6.17.3, 6.18.1, 6.19.4, 6.20.4, 6.21.3, 6.22.3, 6.23.3, 6.24.2, 6.25.3, 6.26.1, 6.27.3, 6.28.2, 6.29.5
  • 7.0.4, 7.1.2, 7.2.2, 7.3.2, 7.4.5, 7.5.4, 7.6.0, 7.7.8, 7.8.3, 7.9.3, 7.10.2
  • 8.0.0, 8.1.1, 8.2.1, 8.3.3, 8.4.4, 8.5.4, 8.6.7, 8.7.2, 8.8.0
Benchmark

The benchmark used the Beelink server explained here that has 8 cores, 16G RAM and 1TB of NVMe SSD with XFS and Ubuntu 22.04 with the 5.15.0-79-generic kernel. There is just one storage device and no RAID. The value of max_sectors_kb is 512. For RocksDB 8.7 and 8.8 I reduced the value of compaction_readahead_size from 2MB (the default) to 480KB. Everything used the LRU block cache.
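The sketch below shows how those two settings are applied, assuming the device is nvme0n1 and that db_bench is invoked directly (the real runs go through wrapper scripts):

# check the block layer limit on IO request size; 512 on this server
cat /sys/block/nvme0n1/queue/max_sectors_kb

# for RocksDB 8.7 and 8.8, reduce readahead from the 2MB default to 480KB (491520 bytes)
./db_bench --benchmarks=overwrite --use_existing_db=1 --compaction_readahead_size=491520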

I used my fork of the RocksDB benchmark scripts that are wrappers to run db_bench. These run db_bench tests in a special sequence -- load in key order, read-only, do some overwrites, read-write and then write-only. The benchmark was run using 1 client thread. How I do benchmarks for RocksDB is explained here and here.
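Roughly, that sequence maps to db_bench benchmarks as sketched below, with --threads=1 for the single client. This is a minimal sketch: --num=4000000 is a placeholder and the wrapper scripts set many more flags:

./db_bench --benchmarks=fillseq --num=4000000 --threads=1                               # load in key order
./db_bench --benchmarks=readrandom --use_existing_db=1 --num=4000000 --threads=1        # read-only
./db_bench --benchmarks=overwrite --use_existing_db=1 --num=4000000 --threads=1         # some overwrites
./db_bench --benchmarks=readwhilewriting --use_existing_db=1 --num=4000000 --threads=1  # read-write
./db_bench --benchmarks=overwrite --use_existing_db=1 --num=4000000 --threads=1         # write-only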

The benchmark was repeated in three setups, but in this post I only share results for iobuf (a sketch of the distinguishing db_bench flags follows the list):
  • cached - database fits in the RocksDB block cache
  • iobuf - IO-bound, working set doesn't fit in memory, uses buffered IO
  • iodir - IO-bound, working set doesn't fit in memory, uses O_DIRECT
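A minimal sketch of how the three setups differ in db_bench flags, holding everything else constant; the cache sizes here are hypothetical:

# cached: block cache larger than the database
./db_bench --benchmarks=readrandom --use_existing_db=1 --cache_size=12884901888

# iobuf: buffered IO (the default) with a block cache smaller than the working set
./db_bench --benchmarks=readrandom --use_existing_db=1 --cache_size=1073741824

# iodir: O_DIRECT for user reads and for flush/compaction writes
./db_bench --benchmarks=readrandom --use_existing_db=1 --cache_size=1073741824 \
  --use_direct_reads=1 --use_direct_io_for_flush_and_compaction=1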
Results: from 4.x to 8.x

The charts use relative QPS which is: (QPS for a given version / QPS for RocksDB 4.4.1)
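For example, if a version gets 30000 QPS on a test where RocksDB 4.4.1 gets 20000, the relative QPS is 1.5, and anything above 1.0 is an improvement. The numbers here are hypothetical:

echo 30000 20000 | awk '{ printf "%.2f\n", $1 / $2 }'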

This has results for RocksDB versions: 4.4.1, 4.9.0, 4.13.5, 5.0.2, 5.5.6, 5.10.5, 5.18.4, 6.0.2, 6.10.4, 6.20.4, 6.29.5, 7.0.4, 7.5.4, 7.10.2, 8.0.0, 8.2.1, 8.4.4, 8.6.7, 8.8.0.

From RocksDB 4.4 to 8.8
  • fillseq QPS decreased by 10%, mostly between RocksDB 6.10 and 7.10
  • read-only QPS increased by ~50%, mostly in late 5.x and from 5.18 to 6.0
  • read QPS in read-write workloads increased by ~40%, mostly in 5.17 and 5.18
  • random write-only (overwriteandwait) QPS increased by ~2X in RocksDB 5.x, then declined in 6.x and 7.x, and has been stable in 8.x at ~1.7X the QPS of RocksDB 4.4


The following charts are limited to one benchmark per chart. I switched from a column chart to a line chart to improve readability.
Results: 8.x

The charts use relative QPS which is: (QPS for a given version / QPS for RocksDB 8.0.0)

This has results for RocksDB versions: 8.0.0, 8.1.1, 8.2.1, 8.3.3, 8.4.4, 8.5.4, 8.6.7, 8.7.2, 8.8.0

Summary
  • the drop for overwriteandwait in 8.6 occurs because the db_bench client clobbers the changed default value for compaction_readahead_size
Results: 7.x

The charts use relative QPS which is: (QPS for a given version / QPS for RocksDB 7.0.4)

This has results for RocksDB versions: 7.0.4, 7.1.2, 7.2.2, 7.3.2, 7.4.5, 7.5.4, 7.6.0, 7.7.8, 7.8.3, 7.9.3, 7.10.2

Summary
  • I hope to explain the QPS drop for overwrite in late RocksDB 7.x versions, just not today
Results: 6.x

The charts use relative QPS which is: (QPS for a given version / QPS for RocksDB 6.0.2)

This has results for RocksDB versions: 6.0.2, 6.1.2, 6.2.4, 6.3.6, 6.4.6, 6.5.3, 6.6.4, 6.7.3, 6.8.1, 6.9.4, 6.10.4, 6.11.7, 6.12.8, 6.13.4, 6.14.6, 6.15.5, 6.16.5, 6.17.3, 6.18.1, 6.19.4, 6.20.4, 6.21.3, 6.22.3, 6.23.3, 6.24.2, 6.25.3, 6.26.1, 6.27.3, 6.28.2, 6.29.5

Summary
  • QPS is stable for most benchmarks. It drops by ~10% for overwriteandwait.
Results: 5.x

The charts use relative QPS which is: (QPS for a given version / QPS for RocksDB 5.0.2)

This has results for RocksDB versions: 5.0.2, 5.1.5, 5.2.1, 5.3.7, 5.4.12, 5.5.6, 5.6.2, 5.7.5, 5.8.8, 5.9.3, 5.10.5, 5.11.4, 5.12.5, 5.13.4, 5.14.3, 5.15.10, 5.16.6, 5.17.4, 5.18.4

Summary:
  • Read QPS increases by 30% to 40%, mostly from 5.15 to 5.18
  • Put QPS during overwriteandwait increases by ~60%, mostly in 5.18
Results: 4.x

The charts use relative QPS which is: (QPS for a given version / QPS for RocksDB 4.4.1)

This has results for RocksDB versions: 4.4.1, 4.5.1, 4.6.1, 4.7.1, 4.8.0, 4.9.0, 4.10.2, 4.11.2, 4.12.0, 4.13.5

Summary
  • Read QPS during read+write benchmarks drops a lot in late 4.x releases
  • Put QPS in overwriteandwait drops a lot in late 4.x releases
