Thursday, August 17, 2023

Checking MyRocks 5.6 for regressions with the Insert Benchmark and a small server, part 2

This is part 2 in my attempt to document how performance changes from old to new releases of MyRocks using the Insert Benchmark. I use MyRocks 5.6 rather than 8.0 because the 5.6 releases go back further in time. A previous post is here but the results there were bogus because my builds were broken.

tl;dr - a hand-wavy summary of the results:

  • Modern MyRocks uses RocksDB 8.5.0 and classic MyRocks uses RocksDB 6.19
  • Modern MyRocks gets ~5% less throughput on read-intensive benchmark steps because there is new CPU overhead
  • Modern MyRocks gets ~3% less throughput on write-intensive benchmark steps because there is new CPU overhead
  • Modern MyRocks has ~10% less write-amplification

Builds

I started with the builds from my previous post, removed the fbmy5635_rel_jun23_7e40af677 build and then added 3 builds that use the MyRocks code from fbmy5635_rel_202305292102 but with RocksDB upgraded to 8.3.2, 8.4.3 and 8.5.0.

I used MyRocks from FB MySQL 5.6.35 using the rel build (CMAKE_BUILD_TYPE=Release, see here) with source from 2021 through 2023. The versions are:
  • fbmy5635_rel_202104072149 - from 20210407 at git hash (f896415fa0 MySQL, 0f8c041ea RocksDB), RocksDB 6.19
  • fbmy5635_rel_202203072101 - from 20220307 at git hash (e7d976ee MySQL, df4d3cf6fd RocksDB), RocksDB 6.28.2
  • fbmy5635_rel_202205192101 - from 20220519 at git hash (d503bd77 MySQL, f2f26b15 RocksDB), RocksDB 7.2.2
  • fbmy5635_rel_202208092101 - from 20220809 at git hash (877a0e585 MySQL, 8e0f4952 RocksDB), RocksDB 7.3.1
  • fbmy5635_rel_202210112144 - from 20221011 at git hash (c691c7160 MySQL, 8e0f4952 RocksDB), RocksDB 7.3.1
  • fbmy5635_rel_202302162102 - from 20230216 at git hash (21a2b0aa MySQL, e5dcebf7 RocksDB), RocksDB 7.10.0
  • fbmy5635_rel_202304122154 - from 20230412 at git hash (205c31dd MySQL, 3258b5c3 RocksDB), RocksDB 7.10.2
  • fbmy5635_rel_202305292102 - from 20230529 at git hash (b739eac1 MySQL, 03057204 RocksDB), RocksDB 8.2.1
  • fbmy5635_rel_20230529_832 - from 20230529 at git hash (b739eac1 MySQL) but with RocksDB at version 8.3.2
  • fbmy5635_rel_20230529_843 - from 20230529 at git hash (b739eac1 MySQL) but with RocksDB at version 8.4.3
  • fbmy5635_rel_20230529_850 - from 20230529 at git hash (b739eac1 MySQL) but with RocksDB at version 8.5.0
Benchmark

The insert benchmark was run in two setups:

  • cached by RocksDB - all tables fit in the RocksDB block cache
  • IO-bound - the database is larger than memory

This benchmark used the Beelink server explained here that has 8 cores, 16G RAM and 1TB of NVMe SSD with XFS and Ubuntu 22.04. 

The benchmark is run with 1 client and is a sequence of steps; a rough sketch of the steps follows the list below.

  • l.i0
    • insert X million rows across all tables without secondary indexes where X is 20 for cached and 800 for IO-bound
  • l.x
    • create 3 secondary indexes. I usually ignore performance from this step.
  • l.i1
    • insert and delete another 100 million rows per table with secondary index maintenance. The number of rows/table at the end of the benchmark step matches the number at the start with inserts done to the table head and the deletes done from the tail.
  • q100, q500, q1000
    • do queries as fast as possible with 100, 500 and 1000 inserts/s/client and the same rate for deletes/s done in the background. Run for 3600 seconds.
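To make the step sequence concrete, here is a minimal sketch of what each step does. This is not the real Insert Benchmark client: the table name, column names and SQL text are made up, and only the row counts, rates and durations come from the list above.

```python
# Hypothetical sketch of the benchmark steps, not the real Insert Benchmark
# client. It only generates the SQL each step would run; table t and columns
# c1..c3 are made up, row counts and rates are from the post.
import itertools

NROWS_LOAD  = 20_000_000    # l.i0: 20M rows for cached, 800M for IO-bound
NROWS_CHURN = 100_000_000   # l.i1: rows inserted (and deleted)

def l_i0(nrows):
    # load in PK order while no secondary indexes exist
    for pk in range(nrows):
        yield f"INSERT INTO t VALUES ({pk}, ...)"

def l_x():
    # create the 3 secondary indexes; performance of this step is usually ignored
    for col in ("c1", "c2", "c3"):
        yield f"CREATE INDEX x_{col} ON t({col})"

def l_i1(nrows, head, tail):
    # inserts go to the head of the table, deletes trim the tail, so the
    # row count at the end of the step matches the count at the start
    for i in range(nrows):
        yield f"INSERT INTO t VALUES ({head + i}, ...)"
        yield f"DELETE FROM t WHERE pk = {tail + i}"

def q_step(rate, seconds=3600):
    # queries run as fast as possible for 3600s while background threads do
    # `rate` inserts/s and `rate` deletes/s (rate is 100, 500 or 1000)
    pass

# peek at the first few statements from the churn step
print(list(itertools.islice(l_i1(NROWS_CHURN, head=NROWS_LOAD, tail=0), 4)))
```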

Configurations

The configuration (my.cnf) files are here and I use abbreviated names for them in this post. For each variant there are two files -- one with a 1G block cache, one with a larger block cache. The larger block cache size is 8G when LRU is used and 6G when hyper clock cache is used (see tl;dr).

  • a (see here) - base config
  • a5 (see here) - enables subcompactions via rocksdb_max_subcompactions=2
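The full my.cnf files are linked above. The sketch below only captures how the two variants differ based on what this post states, using the MyRocks system variables rocksdb_block_cache_size and rocksdb_max_subcompactions; everything else in the real files is omitted.

```python
# Sketch of how the config variants differ, based only on what this post says.
# The real my.cnf files (linked above) have many more settings. Sizes use
# my.cnf-style suffixes; the larger-cache variants use 8G (LRU) or 6G (hyper
# clock cache) instead of 1G.
BASE = {
    "rocksdb_block_cache_size": "1G",
}

CONFIGS = {
    "a":  dict(BASE),                                  # base config
    "a5": dict(BASE, rocksdb_max_subcompactions="2"),  # enables subcompactions
}

for name, options in CONFIGS.items():
    print(name, options)
```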
Results

Performance reports are here for Cached by RocksDB (base config and a5 config) and IO-bound (base config and a5 config).

Results: average throughput

This section explains the average throughput tables in the Summary section. I use relative throughput to save on typing, where relative throughput is (throughput for some version / throughput for the base case). When relative throughput is > 1 then some version is faster than the base case; a worked example follows the list below. Unless stated otherwise:

  • Base case is fbmy5635_rel_202104072149, the oldest build that uses RocksDB 6.19
  • Some version is fbmy5635_rel_20230529_850, the newest build that uses RocksDB 8.5.0
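For example, on the query steps of the Cached by RocksDB / base config result below, the newest build has a relative throughput of 0.96. The absolute throughput values in this sketch are made up; only the ratio matters.

```python
# Worked example of the relative throughput metric. The absolute throughput
# numbers are hypothetical; only the ratio is meaningful.
def relative_throughput(version_qps, base_qps):
    # > 1.0 means "some version" is faster than the base case
    return version_qps / base_qps

base_qps    = 10000   # fbmy5635_rel_202104072149 (base case, hypothetical)
version_qps = 9600    # fbmy5635_rel_20230529_850 (hypothetical)

print(round(relative_throughput(version_qps, base_qps), 2))  # 0.96 -> ~4% slower
```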

Cached by RocksDB, base config (see here)
  • Relative throughput for (l.i0, l.x, l.i1, q100, q500, q1000) is (0.94, 0.99, 0.98, 0.96, 0.96, 0.96)
  • Modern MyRocks gets ~4% less throughput on read-intensive steps, 1% less on create index and 2% to 6% less on write-intensive steps.
Cached by RocksDB, a5 config (see here)
  • Relative throughput for (l.i0, l.x, l.i1, q100, q500, q1000) is (0.94, 0.99, 0.97, 0.95, 0.94, 0.95)
  • Modern MyRocks gets ~5% less throughput on read-intensive steps, 1% less on create index and 3% to 6% less on write-intensive steps.
  • Using the HW perf metrics for l.i1 and for q100 the regressions are from new CPU overhead, see the cpupq column (CPU/operation) where it grows from 137 to 141 for the l.i1 step and from 375 to 396 for the q100 step.
IO-bound, base config (see here)
  • Relative throughput for (l.i0, l.x, l.i1, q100, q500, q1000) is (0.93, 0.99, 0.94, 0.94, 0.94, 0.95)
  • Modern MyRocks gets ~6% less throughput on read-intensive steps, 1% less on create index and 6% to 7% less on write-intensive steps.
IO-bound, a5 config (see here)
  • Relative throughput for (l.i0, l.x, l.i1, q100, q500, q1000) is (0.94, 1.00, 0.97, 0.96, 0.95, 0.96)
  • Modern MyRocks gets ~4% less throughput on read-intensive steps and 3% to 6% less on write-intensive steps.
  • Using the HW perf metrics for l.i1 and for q100, there is more CPU overhead in modern MyRocks. See the cpupq column (CPU/operation) where it grows from 166 to 170 for the l.i1 step and from 529 to 560 for the q100 step. On the bright side, write efficiency has improved based on the wkbpi column (KB written to storage per insert) for the l.i1 step where it drops from 2.877 to 2.558. Based on compaction IO statistics at the end of l.i1 and the end of q1000, the improvement comes from an increase in trivial moves and a decrease in compaction writes to most levels. This reduces write-amplification by ~10%; the arithmetic is sketched below.
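The arithmetic behind those claims is simple ratios. The sketch below reuses the cpupq and wkbpi values quoted above, assuming cpupq is CPU consumed per operation and wkbpi is KB written to storage per insert.

```python
# Worked arithmetic for the IO-bound / a5 numbers above. The cpupq and wkbpi
# values are copied from the post; the formula is just a percentage change.
def pct_change(new, old):
    return 100.0 * (new - old) / old

print(round(pct_change(170, 166), 1))      # l.i1 cpupq: ~ +2.4% more CPU/operation
print(round(pct_change(560, 529), 1))      # q100 cpupq: ~ +5.9% more CPU/operation
print(round(pct_change(2.558, 2.877), 1))  # l.i1 wkbpi: ~ -11%, i.e. ~10% less write-amp
```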
