Thursday, August 10, 2023

Checking MyRocks 5.6 for regressions with the Insert Benchmark and a medium server

I found performance regressions on a large server with the Insert Benchmark when I compared builds from 2022 with a current build. These builds were done using a complicated build script (special production compiler toolchains make the builds complex). In that previous post for the large server I claimed there was a ~15% regression for write-heavy benchmark steps and ~5% for read-heavy steps.

Then I shared results from a small server where I did not see regressions, but I made a mistake while building MyRocks, so those results are bogus.

Here I share results from a medium server and there are regressions. I confirmed that I did not repeat that mistake on the medium server by grepping the RocksDB LOG files to verify that each build used the expected version of RocksDB.
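As a sketch of that check, the following Python scans a RocksDB LOG file for the line that names the version. The LOG path is a placeholder and the exact wording of the version line can vary by release, so treat this as an illustration rather than the exact command I used.

    # Minimal sketch: report the RocksDB version recorded in a LOG file.
    # The path is a placeholder; point it at the LOG under the MyRocks datadir.
    log_path = "/path/to/.rocksdb/LOG"

    with open(log_path, errors="replace") as f:
        for line in f:
            if "RocksDB version" in line:
                print(line.rstrip())
                break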

tl;dr

  • I will repeat these tests using MyRocks with RocksDB 8.3, 8.4 and 8.5 releases
  • For cached workloads the worst-case regression is 5% for write-only and 7% for read+write benchmark steps.
  • For IO-bound workloads
    • The worst-case regression is 9% for write-only and 11% for read+write benchmark steps
    • Starting in RocksDB 7.10, trivial move is more common
    • Compaction CPU overhead has probably increased, but the compaction IO stats don't show that because the increase comes along with more frequent use of trivial move, which reduces the total amount of compaction that must be done

Builds

I used MyRocks from FB MySQL 5.6.35 with the rel build (CMAKE_BUILD_TYPE=Release, see here) and source from 2021 through 2023. The versions are:
  • fbmy5635_rel_202104072149 - from 20210407 at git hash (f896415fa0 MySQL, 0f8c041ea RocksDB), RocksDB 6.19
  • fbmy5635_rel_202203072101 - from 20220307 at git hash (e7d976ee MySQL, df4d3cf6fd RocksDB), RocksDB 6.28.2
  • fbmy5635_rel_202205192101 - from 20220519 at git hash (d503bd77 MySQL, f2f26b15 RocksDB), RocksDB 7.2.2
  • fbmy5635_rel_202208092101 - from 20220809 at git hash (877a0e585 MySQL, 8e0f4952 RocksDB), RocksDB 7.3.1
  • fbmy5635_rel_202210112144 - from 20221011 at git hash (c691c7160 MySQL, 8e0f4952 RocksDB), RocksDB 7.3.1
  • fbmy5635_rel_202302162102 - from 20230216 at git hash (21a2b0aa MySQL, e5dcebf7 RocksDB), RocksDB 7.10.0
  • fbmy5635_rel_202304122154 - from 20230412 at git hash (205c31dd MySQL, 3258b5c3 RocksDB), RocksDB 7.10.2
  • fbmy5635_rel_202305292102 - from 20230529 at git hash (b739eac1 MySQL, 03057204 RocksDB), RocksDB 8.2.1
  • fbmy5635_rel_jun23_7e40af677 - from 20230608 at git hash (7e40af67 MySQL, 03057204 RocksDB), RocksDB 8.2.1

Benchmark

The insert benchmark was run in two setups:

  • cached by RocksDB - all tables fit in the RocksDB block cache
  • IO-bound - the database is larger than memory

This benchmark used a c2-standard-30 server with 15 cores, hyperthreads disabled, 120G of RAM and 1.5T of local NVMe (4 devices, SW RAID0, XFS) running Ubuntu 22.04.

The benchmark is run with 8 clients and each client uses a separate table. It is a sequence of steps, listed below; a small sketch of the per-step background write rates follows the list.

  • l.i0
    • insert X million rows across all tables without secondary indexes where X is 20 for cached and 500 for IO-bound
  • l.x
    • create 3 secondary indexes. I usually ignore performance from this step.
  • l.i1
    • insert and delete another 100 million rows per table with secondary index maintenance. The number of rows per table at the end of this step matches the number at the start, with inserts done to the head of the table and deletes done from the tail.
  • q100
    • do queries as fast as possible with 100 inserts/s/client and the same rate for deletes/s done in the background. Run for 3600 seconds.
  • q500
    • do queries as fast as possible with 500 inserts/s/client and the same rate for deletes/s done in the background. Run for 3600 seconds.
  • q1000
    • do queries as fast as possible with 1000 inserts/s/client and the same rate for deletes/s done in the background. Run for 3600 seconds.
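
To make the background write rates in the read+write steps explicit, here is a minimal Python sketch of the total insert and delete rates implied by the step names, assuming the 8 clients described above. The numbers follow directly from the step definitions; nothing here is measured.

    # Minimal sketch: total background insert/delete rates for the q* steps.
    # Per-client rates come from the step names; the client count is 8.
    CLIENTS = 8

    def total_rates(inserts_per_sec_per_client, clients=CLIENTS):
        # Deletes run at the same rate as inserts, so table sizes stay constant.
        total_inserts = inserts_per_sec_per_client * clients
        return total_inserts, total_inserts

    for step, per_client in (("q100", 100), ("q500", 500), ("q1000", 1000)):
        inserts, deletes = total_rates(per_client)
        print(f"{step}: {inserts} inserts/s and {deletes} deletes/s in total")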

Configurations

I used two config (my.cnf) files: base and c5. The c5 config adds rocksdb_max_subcompactions=4.
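
For reference, the difference between the two configs is just the one option named above; the c5 my.cnf would contain everything in base plus a line like the following (the shared base settings are not repeated here):

    rocksdb_max_subcompactions=4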

Results

Performance reports are here for Cached by RocksDB (base and c5 configs) and IO-bound (base and c5 configs). The comparison is between the 20210407 build that uses RocksDB 6.19 and the latest build that uses RocksDB 8.2.1. I focus on the write-only (l.i1) and read+write (q100, q500, q1000) benchmark steps.

For Cached by RocksDB
  • the regression is 5% for write-only and 7% for read+write with the base config vs 4% for write-only and 3% for read+write with the c5 config. See the summaries for the base and c5 configs.
For IO-bound
  • the regression is 9% for write-only and 11% for read+write with the base config vs 8% for write-only and 8% for read+write with the c5 config. See the summaries for the base and c5 configs.
  • the CPU overhead from compaction (see CompMergeCPU(sec) in the compaction IO stats) has not changed, but the amount of trivial move has increased (see Moved(GB)) and write-amplification has decreased (see W-Amp) starting with the 202302162102 build that uses RocksDB 7.10.0. That means there is less work for compaction (see also the Write(GB) column), so I would expect compaction CPU seconds (CompMergeCPU) to drop starting with the 202302162102 build, but it has not. This means that compaction CPU overhead has increased in general, and the increase would be apparent if there were no trivial moves. A small sketch of normalizing for that is below.
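
One way to make that comparison concrete is to normalize compaction CPU by the amount of compaction writes, so that a build that does more trivial moves (and therefore writes fewer GB during compaction) can still be compared per unit of work. This is a minimal Python sketch, not the method used for the reports above; the inputs are the CompMergeCPU(sec) and Write(GB) totals copied from each build's compaction IO stats.

    # Minimal sketch: compaction CPU per GB written during compaction.
    # A build with more trivial moves writes fewer GB, so comparing this ratio
    # across builds can expose a per-unit CPU increase that the totals hide.
    def cpu_per_gb_written(comp_merge_cpu_sec, write_gb):
        return comp_merge_cpu_sec / write_gb

    # Example use: compute the ratio for two builds and compare them.
    # old_ratio = cpu_per_gb_written(old_cpu_sec, old_write_gb)
    # new_ratio = cpu_per_gb_written(new_cpu_sec, new_write_gb)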






