Sunday, March 17, 2024

Trying to tune Postgres for the Insert Benchmark: small server

Last year I spent a lot of time trying to tune the Postgres configs I use to improve results for the Insert Benchmark. While this was a good education for me, I wasn't able to get significant improvements. After writing about another perf problem with Postgres (the optimizer spends too much time on DELETE statements in a special circumstance) I revisited the tuning, but didn't make things significantly better.

The results here are from Postgres 16.2 and a small server (8 CPU cores) with a low concurrency workload. Previous benchmark reports for Postgres on this setup are here for cached and IO-bound runs.

tl;dr

  • I have yet to fix this problem via tuning

The Problem

The performance problem is explained here and here. The issue is that the optimizer spends too much time on DELETE statements under special circumstances. In this case the optimizer reads from the index to determine the actual min or max value of the column referenced in the WHERE clause, and when there are many deleted index entries that have yet to be removed by vacuum it spends too much time skipping over them.

The problem shows up on the l.i2 benchmark step. The benchmark client sustains the same rate for inserts and deletes, so if deletes are too slow then the insert rate will also be too slow. The ratio of delete/s (and insert/s) for l.i2 relative to l.i1 is ~0.2 for the cached workload and ~0.05 for the IO-bound workload.

The l.i1 benchmark step deletes more rows per statement (50 vs 5 for l.i2), so the per-statement optimizer overhead is amortized over more rows there and is more significant on the l.i2 step. The ratios are much larger for InnoDB and MyRocks (they have perf problems, just not this perf problem).

The circumstances are:
  • the table has a queue pattern (insert to one end, delete from the other)
  • the DELETE statements have a WHERE clause of the form: pk_col > $low-const and pk_col < $high-const, where $low-const and $high-const are integer constants and there is a PK on pk_col
This workload creates much MVCC garbage that is co-located in the PK index and that is a much bigger problem for Postgres than for InnoDB or MyRocks. 
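
To make the pattern concrete, below is a minimal sketch. The table name, column name and constants are made up (this is not the schema or the exact SQL used by the benchmark client), but the shape is the same, and EXPLAIN (ANALYZE) reports Planning Time separately from Execution Time, which is one way to see the optimizer overhead.

    -- Hypothetical queue-pattern table; not the Insert Benchmark schema
    CREATE TABLE t (pk_col bigint PRIMARY KEY, val text);

    -- Inserts append at the high end of the PK ...
    INSERT INTO t (pk_col, val) VALUES (1000001, 'x');

    -- ... while deletes trim a range from the low end
    DELETE FROM t WHERE pk_col > 0 AND pk_col < 51;

    -- Planning Time grows when there are many deleted-but-not-vacuumed
    -- index entries at the low end of the PK index
    EXPLAIN (ANALYZE) DELETE FROM t WHERE pk_col > 50 AND pk_col < 101;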

I hope for a Postgres storage engine that provides MVCC without vacuum. In theory, more frequent vacuum might help, and the perf overhead from frequent vacuum might be OK for the heap table given the use of the visibility map. But when vacuum then has to do a full index scan (there is no visibility map for indexes) that is a huge cost which limits how frequently vacuum can run.
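
In the meantime, one way to watch the garbage build up is to query pg_stat_user_tables, which is a standard Postgres statistics view. This is a sketch that assumes the benchmark table is named t (a made-up name):

    -- How much MVCC garbage has accumulated and when autovacuum last ran
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, autovacuum_count
    FROM pg_stat_user_tables
    WHERE relname = 't';

    -- A manual vacuum; VERBOSE shows the index cleanup work, which is the
    -- expensive part here because indexes don't have a visibility map
    VACUUM (VERBOSE) t;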

Build + Configuration

See the previous report for more details. I used Postgres 16.2.

The configuration files for the SER4 server are in subdirectories from here. Using the suffixes that distinguish the config file names, they are:
  • cx9a2_bee - base config
  • cx9a2a_bee - adds autovacuum_vacuum_cost_delay=1ms
  • cx9a2b_bee - adds autovacuum_vacuum_cost_delay=0
  • cx9a2c_bee - adds autovacuum_naptime=1s
  • cx9a2e_bee - adds autovacuum_vacuum_scale_factor=0.01
  • cx9a2f_bee - adds autovacuum_vacuum_insert_scale_factor=0.01
  • cx9a2g_bee - adds autovacuum_vacuum_cost_limit=8000
  • cx9a2acef_bee - combines the cx9a2a, cx9a2c, cx9a2e, cx9a2f configs (sketched below)
  • cx9a2bcef_bee - combines the cx9a2b, cx9a2c, cx9a2e, cx9a2f configs
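
For reference, the autovacuum settings from the combined cx9a2acef_bee config could also be applied with ALTER SYSTEM instead of editing postgresql.conf. The benchmark uses the config files; this is just a sketch showing the values listed above:

    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '1ms';          -- from cx9a2a
    ALTER SYSTEM SET autovacuum_naptime = '1s';                     -- from cx9a2c
    ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.01;         -- from cx9a2e
    ALTER SYSTEM SET autovacuum_vacuum_insert_scale_factor = 0.01;  -- from cx9a2f
    SELECT pg_reload_conf();  -- these settings can be reloaded without a restart
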
The Benchmark

The benchmark is run with 1 client. It is explained here and was run in two setups:
  • cached - the database fits in memory (30M rows on the SER4, 60M on the SER7)
  • IO-bound - the database has 800M rows and is larger than memory
The test was run on two small servers that I have at home:
  • SER4 - Beelink SER4 with 8 cores, 16G RAM, Ubuntu 22.04 and XFS using 1 m.2 device
  • SER7 - Beelink SER7 with 8 cores, 32G RAM, Ubuntu 22.04 and XFS using 1 m.2 device. The CPU on the SER7 is a lot faster than the SER4.
The benchmark steps are:

  • l.i0
    • insert X rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client. For SER4, X is 30M for cached and 800M for IO-bound. For SER7, X is 60M for cached and 800M for IO-bound.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts Y rows and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate. Y is 80M for cached and 4M for IO-bound.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and Y is 20M for cached and 1M for IO-bound.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow.
  • qr100
    • use 3 connections/client. One does range queries for Z seconds and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes; example query shapes are sketched after this list. This step is run for a fixed amount of time. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. Z is 3600 for cached and 1800 for IO-bound.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
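
To show what the qr* and qp* steps measure, here are hypothetical query shapes. The table, index and column names are made up and this is not the exact SQL from the benchmark client:

    -- qr*: a short range scan answered from a covering secondary index
    -- (assumes a secondary index on (price) that covers the query)
    SELECT count(*), sum(price) FROM t WHERE price >= 100 AND price < 110;

    -- qp*: a point lookup on the PK index
    SELECT * FROM t WHERE pk_col = 123456;
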
Results: SER4 server

The performance reports are here for cached and for IO-bound.

The summary has 3 tables. The first shows absolute throughput for each config tested per benchmark step. The second has throughput relative to the config on the first row of the table, which makes it easy to see the impact of each config. The third shows the background insert rate for the benchmark steps that have background inserts; all configs sustained the target rates.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is a config being tested and $base is the base config. When relative QPS is > 1.0 then the config improves performance. When it is < 1.0 then it makes performance worse. The Q in relative QPS measures:
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.

From the summaries for cached and for IO-bound:
  • The base case uses the cx9a2_bee config
  • The different config files have no impact on performance for the l.i0 and l.x benchmark steps and only a small impact for the qr* and qp* (read+write) benchmark steps. Because the impact is non-existent to small I ignore those and focus on l.i1 and l.i2.
For l.i1 and l.i2 with a cached workload the different config files have some impact:
  • The relative QPS, where Q means delete (and insert), ranges from 0.76 to 1.34, meaning a few configs made things slower and the best improved the delete/s rate by ~1.34X
  • The delete/s ratio for l.i2 vs l.i1 is 0.221 for the base case and the best improvement might be from the cx9a2f_bee config, where the ratio increases to 0.265. But I was hoping to improve the ratio to 0.5 or larger, so I was disappointed.
For l.i1 and l.i2 with an IO-bound workload the different config files have no benefit:
  • Postgres 16.2 does ~2000 delete/s for the l.i1 step vs ~100/s for the l.i2 step
Results: SER7 server

The performance reports are here for cached and for IO-bound. Results from the SER7 match results from the SER4 described above so I won't explain them.
