This post has results for performance regressions in most versions of RocksDB 4.x and all versions of 5.x, 6.x, 7.x and 8.x using a small server and a cached workload. Posts for IO-bound workloads are here and here. In a previous post I shared results for RocksDB 7.x and 8.x on a larger server.
The workload here is mostly cached and has low concurrency, so performance is mostly a function of CPU overhead. When performance drops from one version to the next, the most likely culprit is new CPU overhead.
From RocksDB 4.4 to 8.8
- fillseq QPS decreased by ~5%; there is a big drop from 6.13 to 6.14
- read-only QPS decreased by ~20%, mostly in 6.x and especially from 6.1 to 6.4
- read QPS in read-write workloads increased by ~30%; there is a big jump from 7.7 to 7.8
- random write-only (overwriteandwait) QPS increased by ~40%, mostly in late 5.x
Other notes
- For RocksDB 8.7 and 8.8 you might need to reduce compaction_readahead_size as explained here
- A spreadsheet with all of the charts is here
Builds
I compiled most versions of RocksDB 4.x and all versions of 5.x, 6.x, 7.x and 8.x using gcc. I wasn't able to compile RocksDB 4.0 through 4.3. The build command line is:
make DISABLE_WARNING_AS_ERROR=1 DEBUG_LEVEL=0 static_lib db_bench
The versions tested were:
- 4.4.1, 4.5.1, 4.6.1, 4.7.1, 4.8.0, 4.9.0, 4.10.2, 4.11.2, 4.12.0, 4.13.5
- 5.0.2, 5.1.5, 5.2.1, 5.3.7, 5.4.12, 5.5.6, 5.6.2, 5.7.5, 5.8.8, 5.9.3, 5.10.5, 5.11.4, 5.12.5, 5.13.4, 5.14.3, 5.15.10, 5.16.6, 5.17.4, 5.18.4
- 6.0.2, 6.1.2, 6.2.4, 6.3.6, 6.4.6, 6.5.3, 6.6.4, 6.7.3, 6.8.1, 6.9.4, 6.10.4, 6.11.7, 6.12.8, 6.13.4, 6.14.6, 6.15.5, 6.16.5, 6.17.3, 6.18.1, 6.19.4, 6.20.4, 6.21.3, 6.22.3, 6.23.3, 6.24.2, 6.25.3, 6.26.1, 6.27.3, 6.28.2, 6.29.5
- 7.0.4, 7.1.2, 7.2.2, 7.3.2, 7.4.5, 7.5.4, 7.6.0, 7.7.8, 7.8.3, 7.9.3, 7.10.2
- 8.0.0, 8.1.1, 8.2.1, 8.3.3, 8.4.4, 8.5.4, 8.6.7, 8.7.2, 8.8.0
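To build one of the versions above from source, a minimal sketch, assuming a clone of the upstream RocksDB repo and its vX.Y.Z release tags (the tag below is just an example, substitute the version to test):

# a minimal sketch; the tag is an example
git clone https://github.com/facebook/rocksdb.git
cd rocksdb
git checkout v8.0.0
make DISABLE_WARNING_AS_ERROR=1 DEBUG_LEVEL=0 static_lib db_bench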
Benchmark
The benchmark used the Beelink server explained here: it has 8 cores, 16G of RAM and a 1TB NVMe SSD with XFS, running Ubuntu 22.04 with the 5.15.0-79-generic kernel. There is just one storage device and no RAID. The value of max_sectors_kb is 512. For RocksDB 8.7 and 8.8 I reduced the value of compaction_readahead_size from 2MB (the default) to 480KB. Everything used the LRU block cache.
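As a reference for the readahead and block cache settings, a minimal sketch of the relevant db_bench options (the benchmark name and cache size below are placeholders, not the exact values used here):

# a minimal sketch, not the exact command line used here:
# set compaction readahead to 480KB and a block cache large enough
# to keep the database cached (the cache size is a placeholder)
./db_bench --benchmarks=overwrite \
  --compaction_readahead_size=$((480*1024)) \
  --cache_size=$((10*1024*1024*1024))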
I used my fork of the RocksDB benchmark scripts, which are wrappers for db_bench. They run db_bench tests in a fixed sequence -- load in key order, read-only, some overwrites, read-write and then write-only -- as sketched below. The benchmark was run with 1 client thread. How I do benchmarks for RocksDB is explained here and here.
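The wrapper scripts do more than this, but a hedged sketch of that sequence using standard db_bench benchmark names (the --num, --duration and --db values are placeholders, not the values used here):

# a minimal sketch of the sequence, not the author's scripts
# load in key order
./db_bench --benchmarks=fillseq --num=10000000 --threads=1 --db=/data/rdb
# read-only
./db_bench --benchmarks=readrandom --use_existing_db=1 --duration=300 --threads=1 --db=/data/rdb
# some overwrites
./db_bench --benchmarks=overwrite --use_existing_db=1 --duration=300 --threads=1 --db=/data/rdb
# read-write
./db_bench --benchmarks=readwhilewriting --use_existing_db=1 --duration=300 --threads=1 --db=/data/rdb
# write-only
./db_bench --benchmarks=overwrite --use_existing_db=1 --duration=300 --threads=1 --db=/data/rdb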
The benchmark was repeated in three setups, but in this post I only share results for cached:
- cached - database fits in the RocksDB block cache
- iobuf - IO-bound, working set doesn't fit in memory, uses buffered IO
- iodir - IO-bound, working set doesn't fit in memory, uses O_DIRECT
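A hedged sketch of how the iodir setup maps to db_bench options (iobuf leaves these at their buffered-IO defaults; the benchmark name is a placeholder):

# a minimal sketch: O_DIRECT for user reads and for flush/compaction writes
./db_bench --benchmarks=readrandom --use_existing_db=1 \
  --use_direct_reads=true \
  --use_direct_io_for_flush_and_compaction=true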
Results: from 4.x to 8.x
The charts use relative QPS, which is: (QPS for a given version / QPS for RocksDB 4.4.1). A value less than 1.0 means that version is slower than RocksDB 4.4.1.
This has results for RocksDB versions: 4.4.1, 4.9.0, 4.13.5, 5.0.2, 5.5.6, 5.10.5, 5.18.4, 6.0.2, 6.10.4, 6.20.4, 6.29.5, 7.0.4, 7.5.4, 7.10.2, 8.0.0, 8.2.1, 8.4.4, 8.6.7, 8.8.0.
From RocksDB 4.4 to 8.8
- fillseq QPS decreased by ~5%; there is a big drop from 6.13 to 6.14
- read-only QPS decreased by ~20%, mostly in 6.x and especially from 6.1 to 6.4
- read QPS in read-write workloads increased by ~30%; there is a big jump from 7.7 to 7.8
- random write-only (overwriteandwait) QPS increased by ~40%, mostly in late 5.x
The following charts are limited to one benchmark per chart. I switched from a column chart to a line chart to improve readability.
Results: 8.x
The charts use relative QPS, which is: (QPS for a given version / QPS for RocksDB 8.0.0)
This has results for RocksDB versions: 8.0.0, 8.1.1, 8.2.1, 8.3.3, 8.4.4, 8.5.4, 8.6.7, 8.7.2, 8.8.0
Summary
- The drop for overwriteandwait in 8.6 occurs because the db_bench client clobbers the changed default value for compaction_readahead_size
Results: 7.x
The charts use relative QPS, which is: (QPS for a given version / QPS for RocksDB 7.0.4)
This has results for RocksDB versions: 7.0.4, 7.1.2, 7.2.2, 7.3.2, 7.4.5, 7.5.4, 7.6.0, 7.7.8, 7.8.3, 7.9.3, 7.10.2
Summary
- A large increase in read QPS for read+write benchmarks arrives in RocksDB 7.8
Results: 6.x
The charts use relative QPS, which is: (QPS for a given version / QPS for RocksDB 6.0.2)
This has results for RocksDB versions: 6.0.2, 6.1.2, 6.2.4, 6.3.6, 6.4.6, 6.5.3, 6.6.4, 6.7.3, 6.8.1, 6.9.4, 6.10.4, 6.11.7, 6.12.8, 6.13.4, 6.14.6, 6.15.5, 6.16.5, 6.17.3, 6.18.1, 6.19.4, 6.20.4, 6.21.3, 6.22.3, 6.23.3, 6.24.2, 6.25.3, 6.26.1, 6.27.3, 6.28.2, 6.29.5
Summary
- QPS for read-only benchmarks drops a lot in early 6.x releases
Results: 5.x
The charts use relative QPS, which is: (QPS for a given version / QPS for RocksDB 5.0.2)
This has results for RocksDB versions: 5.0.2, 5.1.5, 5.2.1, 5.3.7, 5.4.12, 5.5.6, 5.6.2, 5.7.5, 5.8.8, 5.9.3, 5.10.5, 5.11.4, 5.12.5, 5.13.4, 5.14.3, 5.15.10, 5.16.6, 5.17.4, 5.18.4
Results: 4.x
The charts use relative QPS, which is: (QPS for a given version / QPS for RocksDB 4.4.1)
This has results for RocksDB versions: 4.4.1, 4.5.1, 4.6.1, 4.7.1, 4.8.0, 4.9.0, 4.10.2, 4.11.2, 4.12.0, 4.13.5
Summary
- Read QPS during read+write benchmarks drops a lot in late 4.x releases
- Put QPS in overwriteandwait drops a lot in late 4.x releases