Sunday, December 31, 2023

Updates for the Insert Benchmark, December 2023

This describes the Insert Benchmark after another round of changes I recently made. Past descriptions are here and here. Source code for the benchmark is here. It is still written in Python and I hope that one day a Python JIT will arrive to reduce the overhead of the benchmark client.

The changes include:

  • switched from float to int for the price column
  • changed the marketsegment index from (price, customerid) to (productid, customerid). The other indexes are unchanged -- registersegment is on (cashregisterid, price, customerid) and pdc is on (price, dateandtime, customerid). A DDL sketch appears below.
  • added a mode for the read-write benchmark step to do point queries on the PK index
  • changed some of the benchmark steps to do a delete per insert to avoid growing the table size. I made this change a few months ago.
  • added the l.i2 benchmark step, which modifies fewer rows per transaction than l.i1
  • added code to reduce checkpoint (InnoDB) and compaction (RocksDB) debt that runs between the l.i2 and qr100 benchmark steps. The code for Postgres was already there.
Alas, I have yet to address coordinated omission.
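
To make the schema changes concrete, here is a minimal sketch of the DDL the benchmark client could issue. Only the int price column and the index column lists come from the list above; the table name, the other column names and types, the MySQL-style syntax, and the use of a generic Python DB-API connection are assumptions, not the benchmark's actual code.

# price is now an INT column; it was a FLOAT before this round of changes
DDL = [
    """CREATE TABLE pi1 (
         transactionid  BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
         dateandtime    DATETIME,
         cashregisterid INT,
         customerid     INT,
         productid      INT,
         price          INT,
         data           VARCHAR(4000)
       )""",
    # the 3 secondary indexes, created later by the l.x step
    "CREATE INDEX marketsegment   ON pi1 (productid, customerid)",
    "CREATE INDEX registersegment ON pi1 (cashregisterid, price, customerid)",
    "CREATE INDEX pdc             ON pi1 (price, dateandtime, customerid)",
]

def create_schema(conn, with_secondary_indexes=False):
    """Create the table, and optionally the secondary indexes (the l.x step)."""
    cur = conn.cursor()
    for stmt in (DDL if with_secondary_indexes else DDL[:1]):
        cur.execute(stmt)
    conn.commit()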

Benchmark steps

The benchmark is run with X clients and usually with a client per table.

The benchmark is a sequence of steps that are run in order:
  • l.i0
    • insert Y million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One does inserts as fast as possible and the other does deletes at the same rate as the inserts to avoid changing the number of rows in the table. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate. A sketch of this insert+delete pattern appears after this list.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions).
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow, where X is max(1200, 60 + #nrows/1M). While waiting, do work to reduce writeback debt (a sketch appears after this list):
      • Postgres (see here) - do a vacuum analyze for all tables concurrently. When that finishes, do a checkpoint.
      • InnoDB (see here) - change innodb_max_dirty_pages_pct[_lwm] to 1 and innodb_idle_flush_pct to 100. When done waiting, restore them to their previous values.
      • MyRocks (see here) - set rocksdb_force_flush_memtable_now to flush the memtable, wait 20 seconds and then set rocksdb_compact_lzero_now to compact L0. Note that rocksdb_compact_lzero_now wasn't supported until mid-2023.
  • qr100
    • use 3 connections/client. One does range queries as fast as possible and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for a fixed amount of time. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. Sketches of the query shapes and of the rate limiting used by the insert and delete connections appear after this list.
  • qp100
    • like qr100 except it uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
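
Below is a minimal sketch of the l.i1/l.i2 insert+delete pattern described above: one connection inserts as fast as possible while a second connection deletes the oldest rows at the matching rate, so the table size stays roughly flat. It assumes a generic Python DB-API driver, the table name pi1, a caller-supplied make_row() helper, and MySQL-style DELETE ... ORDER BY ... LIMIT; it is not the benchmark's actual code.

import threading
import time

def insert_delete_step(insert_conn, delete_conn, total_inserts, rows_per_txn,
                       make_row, table="pi1"):
    """Run inserts on one connection and matching deletes on another.

    rows_per_txn is 50 for l.i1 and 5 for l.i2. The step ends after a fixed
    number of inserts, so wall-clock time depends on the insert rate."""
    lock = threading.Lock()
    inserted = 0        # rows committed by the insert connection so far
    done = False

    def inserter():
        nonlocal inserted, done
        cur = insert_conn.cursor()
        sql = ("INSERT INTO " + table +
               " (dateandtime, cashregisterid, customerid, productid, price, data)"
               " VALUES (%s, %s, %s, %s, %s, %s)")
        n = 0
        while n < total_inserts:
            cur.executemany(sql, [make_row() for _ in range(rows_per_txn)])
            insert_conn.commit()
            n += rows_per_txn
            with lock:
                inserted = n
        with lock:
            done = True

    def deleter():
        cur = delete_conn.cursor()
        removed = 0     # never delete more rows than have been inserted
        while True:
            with lock:
                target, finished = inserted, done
            if removed < target:
                # remove the oldest rows (smallest PK values) to keep the row count flat
                cur.execute("DELETE FROM " + table +
                            " ORDER BY transactionid LIMIT " + str(rows_per_txn))
                delete_conn.commit()
                removed += rows_per_txn
            elif finished:
                break
            else:
                time.sleep(0.01)   # wait for the inserter to get ahead

    threads = [threading.Thread(target=inserter), threading.Thread(target=deleter)]
    for t in threads: t.start()
    for t in threads: t.join()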
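This is a sketch of the wait and writeback-debt reduction done between l.i2 and qr100. The wait formula and the per-engine statements come from the list above; for example, at 800M rows the wait is max(1200, 60 + 800) = 1200 seconds, while at 4B rows it is 4060 seconds. Running the Postgres vacuums concurrently and restoring the previous InnoDB settings are omitted to keep it short, and the function names are assumptions.

import time

def wait_seconds(nrows):
    # X = max(1200, 60 + #nrows/1M) from the l.i2 step above
    return max(1200, 60 + nrows // 1_000_000)

def reduce_writeback_debt(conn, engine, tables, nrows):
    """Work done between l.i2 and qr100. Assumes an autocommit DB-API
    connection with enough privileges to run these statements."""
    cur = conn.cursor()
    deadline = time.monotonic() + wait_seconds(nrows)   # total wait is X seconds

    if engine == "postgres":
        for t in tables:                      # the benchmark runs these concurrently
            cur.execute("VACUUM ANALYZE " + t)
        cur.execute("CHECKPOINT")
    elif engine == "innodb":
        cur.execute("SET GLOBAL innodb_max_dirty_pages_pct = 1")
        cur.execute("SET GLOBAL innodb_max_dirty_pages_pct_lwm = 1")
        cur.execute("SET GLOBAL innodb_idle_flush_pct = 100")
    elif engine == "myrocks":
        cur.execute("SET GLOBAL rocksdb_force_flush_memtable_now = 1")
        time.sleep(20)
        # rocksdb_compact_lzero_now needs a MyRocks build from mid-2023 or later
        cur.execute("SET GLOBAL rocksdb_compact_lzero_now = 1")

    time.sleep(max(0, deadline - time.monotonic()))      # finish the X second wait
    # for InnoDB, restore the previous values of the settings here (omitted)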
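The step descriptions only say that qr* uses range queries on covering secondary indexes and qp* uses point queries on the PK index, so the statements below are assumed query shapes rather than the benchmark's actual SQL.

# Assumed query shapes. The range query only selects columns in the pdc
# (price, dateandtime, customerid) index so that index can be covering;
# the point query fetches one row by the PK.
RANGE_QUERY = ("SELECT price, dateandtime, customerid FROM pi1"
               " WHERE price >= %s ORDER BY price LIMIT 10")
POINT_QUERY = "SELECT * FROM pi1 WHERE transactionid = %s"

def range_query(conn, min_price):
    cur = conn.cursor()
    cur.execute(RANGE_QUERY, (min_price,))
    return cur.fetchall()

def point_query(conn, txnid):
    cur = conn.cursor()
    cur.execute(POINT_QUERY, (txnid,))
    return cur.fetchone()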
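Finally, a sketch of the rate limiting the background insert and delete connections might use during the qr* and qp* steps. The target rate (100, 500 or 1000 operations/s) and the SLA check come from the descriptions above; the deadline-based pacing is an assumption, and as noted earlier it does not address coordinated omission.

import time

def rate_limited_writer(conn, do_one_op, target_rate, duration_secs):
    """Call do_one_op(conn) at target_rate operations/s for duration_secs.

    Returns the achieved rate so the caller can flag an SLA failure when it
    falls short of the target."""
    start = time.monotonic()
    ops = 0
    while True:
        now = time.monotonic()
        if now - start >= duration_secs:
            break
        deadline = start + ops / target_rate    # when this operation should start
        if now < deadline:
            time.sleep(deadline - now)
        do_one_op(conn)                         # one insert or one delete
        ops += 1
    return ops / (time.monotonic() - start)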
