This is my second attempt at IO-bound Insert Benchmark results with a small server. The first attempt is here; it has been deprecated because sloppy programming on my part meant the benchmark client was creating too many connections, which hurt results in some cases for Postgres 18 beta1.
There might be regressions from 17.5 to 18 beta1:
- QPS decreases by ~5% and CPU increases by ~5% on the l.i2 (write-only) step
- QPS decreases by <= 2% and CPU increases by ~2% on the qr* (range query) steps
And comparing 14.0 through 18 beta1, QPS decreases by ~6% and ~18% on the write-heavy steps (l.i1, l.i2).
The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04. More details on it are here.
The config files tested are:
- conf.diff.cx10b_c8r32
- uses io_method='sync' to match Postgres 17 behavior
- conf.diff.cx10c_c8r32
- uses io_method='worker' and io_workers=16 to do async IO via a pool of IO worker processes. I eventually learned that 16 is too large.
- conf.diff.cx10d_c8r32
- uses io_method='io_uring' to do async IO via io_uring
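As a sketch, the relevant lines in the 18 beta1 config files probably look like the fragment below. This is illustrative, not the actual file contents -- the linked config files are authoritative:

```ini
# cx10b_c8r32: synchronous IO, matches Postgres 17 behavior
io_method = 'sync'

# cx10c_c8r32: async IO via a pool of IO worker processes
io_method = 'worker'
io_workers = 16   # in hindsight, too large for this 8-core server

# cx10d_c8r32: async IO via io_uring (needs a build with --with-liburing)
io_method = 'io_uring'
```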
Two comparisons are reported below:
- one to compare Postgres 14.0 through 18 beta1, all using synchronous IO
- one to compare Postgres 17.5 with 18 beta1 using 3 configurations for 18 beta1 -- one for each of io_method = sync, worker and io_uring
The benchmark steps are:
- l.i0
- insert 20 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
- l.x
- create 3 secondary indexes per table. There is one connection per client.
- l.i1
- use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
- l.i2
- like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
- Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
- qr100
- use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
- qp100
- like qr100 except uses point queries on the PK index
- qr500
- like qr100 but the insert and delete rates are increased from 100/s to 500/s
- qp500
- like qp100 but the insert and delete rates are increased from 100/s to 500/s
- qr1000
- like qr100 but the insert and delete rates are increased from 100/s to 1000/s
- qp1000
- like qp100 but the insert and delete rates are increased from 100/s to 1000/s
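The write-heavy steps keep the table at a roughly constant size by pairing each insert transaction with a delete transaction of the same size. A minimal sketch of that pattern, using a simplified one-table schema and SQLite in place of Postgres (the real client, table layout, secondary indexes and rate matching are more involved):

```python
import sqlite3

# Sketch of the l.i1/l.i2 write pattern: inserts in PK order, deletes of
# the oldest rows at the same rate, N rows per transaction. Illustrative
# only -- the real benchmark uses Postgres and separate connections for
# the insert and delete streams.

def run_write_step(con, total_rows, rows_per_txn):
    cur = con.cursor()
    next_pk = cur.execute("SELECT COALESCE(MAX(pk), 0) FROM t").fetchone()[0] + 1
    inserted = 0
    while inserted < total_rows:
        # insert transaction: rows_per_txn rows in PK order
        rows = [(next_pk + i, "payload") for i in range(rows_per_txn)]
        cur.executemany("INSERT INTO t (pk, val) VALUES (?, ?)", rows)
        con.commit()
        next_pk += rows_per_txn
        inserted += rows_per_txn
        # matching delete transaction: remove the rows_per_txn oldest rows,
        # so the table size stays roughly constant
        cur.execute(
            "DELETE FROM t WHERE pk IN (SELECT pk FROM t ORDER BY pk LIMIT ?)",
            (rows_per_txn,),
        )
        con.commit()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (pk INTEGER PRIMARY KEY, val TEXT)")
con.executemany("INSERT INTO t (pk, val) VALUES (?, ?)",
                [(i, "seed") for i in range(1, 101)])
run_write_step(con, total_rows=1000, rows_per_txn=50)  # l.i1-style big transactions
```

With rows_per_txn=50 this mimics the big transactions of l.i1; l.i2 is the same loop with 5 rows per transaction.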
Results are expressed as relative QPS (rQPS): the QPS for a given version or config divided by the QPS for the base case. When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures:
- insert/s for l.i0, l.i1, l.i2
- indexed rows/s for l.x
- range queries/s for qr100, qr500, qr1000
- point queries/s for qp100, qp500, qp1000
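The rQPS arithmetic is just a ratio; a minimal helper of my own (not the benchmark's code) to make the convention concrete:

```python
def rqps(qps, base_qps):
    """Relative QPS: throughput of the tested version divided by the base.

    Values < 1.0 indicate a regression; 0.90 means a 10% regression.
    """
    return qps / base_qps

# Example: if the base version does 1000 inserts/s and the tested
# version does 820 inserts/s, that is an 18% regression.
print(round(rqps(820, 1000), 2))  # -> 0.82
```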
Results below compare Postgres 14.0 through 18 beta1, all using synchronous IO; the base case for rQPS is Postgres 14.0.
- the initial load (l.i0)
- Performance is stable across versions
- 18 beta1 and 17.5 have similar performance
- rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.99)
- create index (l.x)
- ~10% faster starting in 15.0
- 18 beta1 and 17.5 have similar performance
- rQPS for (17.5, 18 beta1 with io_method=sync) is (1.11, 1.12)
- first write-only step (l.i1)
- Performance decreases ~7% from version 16.9 to 17.0. CPU overhead (see cpupq here) increases by ~5% in 17.0.
- 18 beta1 and 17.5 have similar performance
- rQPS for (17.5, 18 beta1 with io_method=sync) is (0.93, 0.94)
- second write-only step (l.i2)
- Performance decreases ~6% in 15.0, ~8% in 17.0 and then ~5% in 18 beta1. CPU overhead (see cpupq here) increases ~5%, ~6% and ~5% in 15.0, 17.0 and 18 beta1. Of all benchmark steps, this one has the largest perf regression from 14.0 through 18 beta1, at ~20%.
- 18 beta1 is ~4% slower than 17.5
- rQPS for (17.5, 18 beta1 with io_method=sync) is (0.86, 0.82)
- range query steps (qr100, qr500, qr1000)
- 18 beta1 and 17.5 have similar performance, but 18 beta1 is slightly slower
- rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.99) for qr100, (0.97, 0.98) for qr500 and (0.97, 0.95) for qr1000. The issue is new CPU overhead, see cpupq here.
- point query steps (qp100, qp500, qp1000)
- 18 beta1 and 17.5 have similar performance, but 18 beta1 is slightly slower
- rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.98) for qp100, (0.99, 0.98) for qp500 and (0.97, 0.96) for qp1000. The issue is new CPU overhead, see cpupq here.
Results below compare Postgres 17.5 with 18 beta1 using three configurations for 18 beta1; the base case for rQPS is Postgres 17.5.
- x10b with io_method=sync
- x10c with io_method=worker and io_workers=16
- x10d with io_method=io_uring
- initial load step (l.i0)
- rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
- create index step (l.x)
- rQPS for (x10b, x10c, x10d) was (1.01, 1.02, 1.02)
- first write-heavy step (l.i1)
- for l.i1 the rQPS for (x10b, x10c, x10d) was (1.00, 0.99, 1.01)
- second write-heavy step (l.i2)
- for l.i2 the rQPS for (x10b, x10c, x10d) was (0.96, 0.93, 0.94)
- CPU overhead (see cpupq here) increases by ~5% in 18 beta1
- range query steps (qr100, qr500, qr1000)
- for qr100 the rQPS for (x10b, x10c, x10d) was (1.00, 0.99, 0.99)
- for qr500 the rQPS for (x10b, x10c, x10d) was (1.00, 0.97, 0.99)
- for qr1000 the rQPS for (x10b, x10c, x10d) was (0.99, 0.98, 0.97)
- CPU overhead (see cpupq here, here and here) increases by ~2% in 18 beta1
- point query steps (qp100, qp500, qp1000)
- for qp100 the rQPS for (x10b, x10c, x10d) was (0.98, 0.99, 0.99)
- for qp500 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
- for qp1000 the rQPS for (x10b, x10c, x10d) was (0.99, 0.99, 0.99)