Wednesday, June 11, 2025

Postgres 18 beta1: small server, IO-bound Insert Benchmark (v2)

This is my second attempt at IO-bound Insert Benchmark results with a small server. The first attempt is here and has been deprecated because sloppy programming by me meant the benchmark client was creating too many connections, and that hurt results in some cases for Postgres 18 beta1.

There might be regressions from 17.5 to 18 beta1

  • QPS decreases by ~5% and CPU increases by ~5% on the l.i2 (write-only) step
  • QPS decreases by <= 2% and CPU increases by ~2% on the qr* (range query) steps
There might be regressions from 14.0 to 18 beta1
  • QPS decreases by ~6% and ~18% on the write-heavy steps (l.i1, l.i2)

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions  14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04. More details on it are here.

For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 I used 3 variations, which are here (a quick way to confirm which io_method a server is running is sketched after the list):
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
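
For reference, a quick way to confirm which AIO settings a server is actually running is to query the GUCs directly. This is just a minimal sketch (it assumes the psycopg driver and a local server), not part of the benchmark client; on versions before 18 these settings do not exist and SHOW will raise an error.

  # Sketch: print the active io_method / io_workers settings (assumes psycopg 3).
  import psycopg

  with psycopg.connect("dbname=postgres") as conn:
      for guc in ("io_method", "io_workers"):
          # SHOW works for any setting; io_workers only matters when io_method='worker'.
          value = conn.execute(f"SHOW {guc}").fetchone()[0]
          print(f"{guc} = {value}")
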
The Benchmark

The benchmark is explained here and is run with 1 client and 1 table with 800M rows. I provide two performance reports:
  • one to compare Postgres 14.0 through 18 beta1, all using synchronous IO
  • one to compare Postgres 17.5 with 18 beta1 using 3 configurations for 18 beta1 -- one for each of io_method= sync, workers and io_uring.
The benchmark steps are:

  • l.i0
    • insert 800 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. A sketch of how a writer can be paced to a fixed target rate is shown after this list.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
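
The qr*/qp* steps above depend on the two background connections hitting a fixed target rate (100, 500 or 1000 operations/s); missing the target counts as an SLA failure. The snippet below is not the actual benchmark client, just a minimal sketch of how a writer can be paced to a fixed rate with a deadline-based sleep loop; the table name, row payload and driver (psycopg) are assumptions.

  # Sketch: pace one connection at a fixed insert rate (e.g. 100 inserts/s).
  # Not the real Insert Benchmark client; the table and payload are made up.
  import random
  import time

  def rate_limited_inserts(conn, target_per_sec=100, duration_sec=1800):
      interval = 1.0 / target_per_sec
      deadline = time.monotonic()
      end_time = deadline + duration_sec
      done = 0
      with conn.cursor() as cur:
          while time.monotonic() < end_time:
              cur.execute("INSERT INTO hypothetical_table (k, v) VALUES (%s, %s)",
                          (random.getrandbits(63), "x" * 100))
              conn.commit()
              done += 1
              # Advance the deadline by a fixed interval; if we are behind schedule
              # the computed delay is <= 0 and the sleep is skipped to catch up.
              deadline += interval
              delay = deadline - time.monotonic()
              if delay > 0:
                  time.sleep(delay)
      return done

Pacing against an accumulating deadline, rather than sleeping a fixed amount after each insert, keeps slow individual inserts from silently lowering the achieved rate.
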
Results: overview

The performance report is here for Postgres 14 through 18 and here for Postgres 18 configurations.

The summary sections (here, here and here) have 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for the benchmark steps. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 17.5.

When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.
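
As a worked example of the definitions above, here is a minimal sketch that computes rQPS and assigns the color buckets used in the summaries; the QPS numbers in it are made up.

  # Sketch: relative QPS (rQPS) and the color buckets used in the summaries.
  def rqps(qps_me, qps_base):
      return qps_me / qps_base

  def color(r):
      # red for <= 0.97, green for >= 1.03, grey for values in between
      if r <= 0.97:
          return "red"
      if r >= 1.03:
          return "green"
      return "grey"

  # Made-up example: the base version does 1000 inserts/s and the tested version
  # does 940 inserts/s -> rQPS = 0.94, a 6% regression, shown in red.
  r = rqps(940.0, 1000.0)
  print(round(r, 2), color(r))   # prints: 0.94 red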

Results: Postgres 14.0 through 18 beta1

The performance summary is here

See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 14.0 is the base version and that is compared with more recent Postgres versions. The results here are similar to what I reported prior to fixing the too many connections problem in the benchmark client.

For 14.0 through 18 beta1, QPS on ...
  • the initial load (l.i0)
    • Performance is stable across versions
    • 18 beta1 and 17.5 have similar performance
    • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.99)
  • create index (l.x)
    • ~10% faster starting in 15.0
    • 18 beta1 and 17.5 have similar performance
    • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.11, 1.12)
  • first write-only step (l.i1)
    • Performance decreases ~7% from version 16.9 to 17.0. CPU overhead (see cpupq here) increases by ~5% in 17.0.
    • 18 beta1 and 17.5 have similar performance
    • rQPS for (17.5, 18 beta1 with io_method=sync) is (0.93, 0.94)
  • second write-only step (l.i2)
    • Performance decreases ~6% in 15.0, ~8% in 17.0 and then ~5% in 18 beta1. CPU overhead (see cpupq here) increases ~5%, ~6% and ~5% in 15.0, 17.0 and 18 beta1. Of all benchmark steps, this has the largest perf regression from 14.0 through 18 beta1, at ~20%.
    • 18 beta1 is ~4% slower than 17.5
    • rQPS for (17.5, 18 beta1 with io_method=sync) is (0.86, 0.82)
  • range query steps (qr100, qr500, qr1000)
    • 18 beta1 and 17.5 have similar performance, but 18 beta1 is slightly slower
    • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.99) for qr100, (0.97, 0.98) for qr500 and (0.97, 0.95) for qr1000. The issue is new CPU overhead, see cpupq here.
  • point query steps (qp100, qp500, qp1000)
    • 18 beta1 and 17.5 have similar performance but 18 beta1 is slightly slower
    • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.98) for qp100, (0.99, 0.98) for qp500 and (0.97, 0.96) for qp1000. The issue is new CPU overhead, see cpupq here.
Results: Postgres 17.5 vs 18 beta1

The performance summary is here.

See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 17.5 is the base version and that is compared with results from 18 beta1 using the three configurations explained above:
  • x10b with io_method=sync
  • x10c with io_method=worker and io_workers=16
  • x10d with io_method=io_uring
The summary is:
  • initial load step (l.i0)
    • rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
  • create index step (l.x)
    • rQPS for (x10b, x10c, x10d) was (1.01, 1.02, 1.02)
  • first write-heavy step (l.i1)
    • for l.i1 the rQPS for (x10b, x10c, x10d) was (1.00, 0.99, 1.01)
  • second write-heavy step (l.i2)
    • for l.i2 the rQPS for (x10b, x10c, x10d) was (0.96, 0.93, 0.94)
    • CPU overhead (see cpupq here) increases by ~5% in 18 beta1
  • range query steps (qr100, qr500, qr1000)
    • for qr100 the rQPS for (x10b, x10c, x10d) was (1.00, 0.99, 0.99)
    • for qr500 the rQPS for (x10b, x10c, x10d) was (1.00, 0.97, 0.99)
    • for qr1000 the rQPS for (x10b, x10c, x10d) was (0.99, 0.98, 0.97)
    • CPU overhead (see cpupq here, here and here) increases by ~2% in 18 beta1
  • point query steps (qp100, qp500, qp1000)
    • for qp100 the rQPS for (x10b, x10c, x10d) was (0.98, 0.99, 0.99)
    • for qp500 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
    • for qp1000 the rQPS for (x10b, x10c, x10d) was (0.99, 0.99, 0.99)

Sunday, June 8, 2025

Postgres 18 beta1: small server, CPU-bound Insert Benchmark (v2)

This is my second attempt at CPU-bound Insert Benchmark results with a small server. The first attempt is here and has been deprecated because sloppy programming by me meant the benchmark client was creating too many connections, and that hurt results in some cases for Postgres 18 beta1.

tl;dr

  • Performance between 17.5 and 18 beta1 is mostly similar on read-heavy steps
  • 18 beta1 might have small regressions from new CPU overheads on write-heavy steps

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions  14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04 -- I used 22.04 prior to that. More details on it are here.

For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 I used 3 variations, which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
The Benchmark

The benchmark is explained here and is run with 1 client and 1 table with 20M rows. I provide two performance reports:
  • one to compare Postgres 14.0 through 18 beta1, all using synchronous IO
  • one to compare Postgres 17.5 with 18 beta1 using 3 configurations for 18 beta1 -- one for each of io_method= sync, workers and io_uring.
The benchmark steps are:

  • l.i0
    • insert 20 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 40M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate. A sketch of this insert/delete pairing is shown after the list.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 10M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
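
To make the l.i1 and l.i2 write pattern above concrete, here is a minimal sketch of the paired writers: one connection inserts N rows per transaction while the other deletes the N oldest rows per transaction, so deletes track inserts and the table size stays roughly constant. This is not the real benchmark client; the table, columns and SQL are simplified assumptions.

  # Sketch: paired insert/delete transactions as in l.i1 (50 rows/txn) or l.i2 (5 rows/txn).
  # Hypothetical table: t(id bigint primary key, v text) -- not the real benchmark schema.
  def insert_batch(conn, next_id, rows_per_txn=50):
      with conn.cursor() as cur:
          for i in range(rows_per_txn):
              cur.execute("INSERT INTO t (id, v) VALUES (%s, %s)", (next_id + i, "payload"))
      conn.commit()
      return next_id + rows_per_txn

  def delete_batch(conn, rows_per_txn=50):
      with conn.cursor() as cur:
          # Delete the oldest rows so the table does not grow while the step runs.
          cur.execute("DELETE FROM t WHERE id IN "
                      "(SELECT id FROM t ORDER BY id ASC LIMIT %s)", (rows_per_txn,))
      conn.commit()

  # One connection calls insert_batch in a loop and a second calls delete_batch at the
  # same rate; the step ends after a fixed number of inserts (40M for l.i1, 10M for l.i2).
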
Results: overview

The performance report is here for Postgres 14 through 18 and here for Postgres 18 configurations.

The summary sections (here and here) have 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 17.5.

When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.

Results: Postgres 14.0 through 18 beta1

The performance summary is here

See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 14.0 is the base version and that is compared with more recent Postgres versions.

For 14.0 through 18 beta1, QPS on ...
  • l.i0 (the initial load)
    • Slightly faster starting in 15.0
    • Throughput was ~4% faster starting in 15.0 and that drops to ~2% in 18 beta1
    • 18 beta1 and 17.5 have similar performance
  • l.x (create index) 
    • Faster starting in 15.0
    • Throughput is between 9% and 17% faster in 15.0 through 18 beta1
    • 18 beta1 and 17.5 have similar performance
  • l.i1 (write-only)
    • Slower starting in 15.0
    • It is ~3% slower in 15.0 and that increases to between 6% and 10% in 18 beta1
    • 18 beta1 and 17.5 have similar performance
  • l.i2 (write-only)
    • Slower starting in 15.13 with a big drop in 17.0
    • 18 beta1 with io_method= sync and io_uring is worse than 17.5. It isn't clear but one problem might be more CPU/operation (see cpupq here)
  • qr100, qr500, qr1000 (range query)
    • Stable from 14.0 through 18 beta1
  • qp100, qp500, qp1000 (point query) 
    • Stable from 14.0 through 18 beta1
Results: Postgres 17.5 vs 18 beta1

The performance summary is here

See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 17.5 is the base version and that is compared with results from 18 beta1 using the three configurations explained above:
  • x10b with io_method=sync
  • x10c with io_method=worker and io_workers=16
  • x10d with io_method=io_uring
The summary of the summary is:
  • initial load step (l.i0)
    • 18 beta1 is 1% to 3% slower than 17.5
    • This step is short running so I don't have a strong opinion on the change
  • create index step (l.x)
    • 18 beta1 is 0% to 2% faster than 17.5
    • This step is short running so I don't have a strong opinion on the change
  • write-heavy step (l.i1)
    • 18 beta1 with io_method= sync and workers has similar perf as 17.5
    • 18 beta1 with io_method=io_uring is ~4% slower than 17.5. The problem might be more CPU/operation, see cpupq here
  • write-heavy step (l.i2)
    • 18 beta1 with io_method=workers is ~2% faster than 17.5
    • 18 beta1 with io_method= sync and io_uring is 6% and 8% slower than 17.5. The problem might be more CPU/operation, see cpupq here
  • range query steps (qr100, qr500, qr1000)
    • 18 beta1 and 17.5 have similar performance
  • point query steps (qp100, qp500, qp1000)
    • 18 beta1 and 17.5 have similar performance
The summary is:
  • initial load step (l.i0)
    • rQPS for (x10b, x10c, x10d) was (0.98, 0.99, 0.97)
  • create index step (l.x)
    • rQPS for (x10b, x10c, x10d) was (1.00, 1.02, 1.00)
  • write-heavy steps (l.i1, l.i2)
    • for l.i1 the rQPS for (x10b, x10c, x10d) was (1.01, 1.00, 0.96)
    • for l.i2 the rQPS for (x10b, x10c, x10d) was (0.94, 1.02, 0.92)
  • range query steps (qr100, qr500, qr1000)
    • for qr100 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
    • for qr500 the rQPS for (x10b, x10c, x10d) was (0.99, 1.01, 1.00)
    • for qr1000 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
  • point query steps (qp100, qp500, qp1000)
    • for qp100 the rQPS for (x10b, x10c, x10d) was (1.00, 1.00, 1.00)
    • for qp500 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
    • for qp1000 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 0.98)

Friday, June 6, 2025

Postgres 18 beta1: large server, IO-bound Insert Benchmark

This has results for an IO-bound Insert Benchmark with Postgres on a large server. A blog post about a CPU-bound workload on the same server is here.

tl;dr

  • initial load step (l.i0)
    • 18 beta1 is 4% faster than 17.4
  • create index step (l.x)
    • 18 beta1 with io_method =sync and =workers has perf similar to 17.4, while 18 beta1 with =io_uring is 7% faster than 17.4
  • write-heavy steps (l.i1, l.i2)
    • 18 beta1 and 17.4 have similar performance except for l.i2 with 18 beta1 and io_method=workers where 18 beta1 is 40% faster. This is an odd result and I am repeating the benchmark.
  • range query steps (qr100, qr500, qr1000)
    • 18 beta1 is up to (3%, 2%, 3%) slower than 17.4 with io_method= (sync, workers, io_uring). The issue might be new CPU overhead.
  • point query steps (qp100, qp500, qp1000)
    • 18 beta1 is up to (3%, 5%, 2%) slower than 17.4 with io_method= (sync, workers, io_uring). The issue might be new CPU overhead.

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for version 18 beta1 and 17.4. I got the source for 18 beta1 from github using the REL_18_BETA1 tag because I started this benchmark effort a few days before the official release.

The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4. More details on it are here.

The config file for Postgres 17.4 is here and named conf.diff.cx10a_c32r128.

For 18 beta1 I tested 3 configuration files, and they are here:
  • conf.diff.cx10b_c32r128 (x10b) - uses io_method=sync
  • conf.diff.cx10cw4_c32r128 (x10cw4) - uses io_method=worker with io_workers=4
  • conf.diff.cx10d_c32r128 (x10d) - uses io_method=io_uring
The Benchmark

The benchmark is explained here and is run with 20 clients and 20 tables (one table per client) and 200M rows per table. The database is larger than memory. In some benchmark steps the working set is larger than memory (see the point query steps qp100, qp500, qp1000) while the working set is cached for other benchmark steps (see the range query steps qr100, qr500 and qr1000).
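
To see why the database is larger than memory here, a back-of-envelope with an assumed row size (the real on-disk row size is not given in this post, so treat the 100 bytes/row as a placeholder):

  # Back-of-envelope: database size vs RAM, with a made-up row size.
  tables, rows_per_table = 20, 200_000_000
  assumed_bytes_per_row = 100                  # assumption, not a measured value
  db_gb = tables * rows_per_table * assumed_bytes_per_row / 2**30
  print(round(db_gb))                          # ~373 GB, roughly 3x the 128G of RAM

The point is just the ratio: even a conservative row-size guess puts the database well past 128G of RAM.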

The benchmark steps are:

  • l.i0
    • insert 200 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance report is here.

The summary section has 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 17.4.

When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.

Results: details

The performance summary is here

See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 17.4 is the base version and that is compared with results from 18 beta1 using the three configurations explained above:
  • x10b with io_method=sync
  • x10cw4 with io_method=worker and io_workers=4
  • x10d with io_method=io_uring
The summary of the summary is:
  • initial load step (l.i0)
    • 18 beta1 is 4% faster than 17.4
    • From metrics, 18 beta1 has a lower context switch rate (cspq) and sustains a higher write rate to storage (wmbps).
  • create index step (l.x)
    • 18 beta1 with io_method =sync and =workers has perf similar to 17.4, while 18 beta1 with =io_uring is 7% faster than 17.4
    • From metrics, 18 beta1 with io_method=io_uring sustains a higher write rate (wmbps)
  • write-heavy steps (l.i1, l.i2)
    • 18 beta1 and 17.4 have similar performance except for l.i2 with 18 beta1 and io_method=workers where 18 beta1 is 40% faster. This is an odd result and I am repeating the benchmark.
    • From metrics for l.i1 and l.i2, in the case where 18 beta1 is 40% faster, there is much less CPU/operation (cpupq).
  • range query steps (qr100, qr500, qr1000)
    • 18 beta1 is up to (3%, 2%, 3%) slower than 17.4 with io_method= (sync, workers, io_uring)
    • From metrics for qr100, qr500 and qr1000 the problem might be more CPU/operation (cpupq)
    • Both 17.4 and 18 beta1 failed to sustain the target rate of 20,000 inserts and 20,000 deletes/s. They were close and did ~18,000/s for each. See the third table here.
  • point query steps (qp100, qp500, qp1000)
    • 18 beta1 is up to (3%, 5%, 2%) slower than 17.4 with io_method= (sync, workers, io_uring).
    • From metrics for qp100, qp500 and qp1000 the problem might be more CPU/operation (cpupq)
    • Both 17.4 and 18 beta1 failed to sustain the target rate of 20,000 inserts and 20,000 deletes/s. They were close and did ~18,000/s for each. See the third table here.
The summary is:
  • initial load step (l.i0)
    • rQPS for (x10b, x10cw4, x10d) was (1.04, 1.04, 1.04)
  • create index step (l.x)
    • rQPS for (x10b, x10cw4, x10d) was (0.99, 0.99, 1.07)
  • write-heavy steps (l.i1, l.i2)
    • for l.i1 the rQPS for (x10b, x10cw4, x10d) was (1.01, 0.99, 1.02)
    • for l.i2 the rQPS for (x10b, x10cw4, x10d) was (1.00, 1.40, 0.99)
  • range query steps (qr100, qr500, qr1000)
    • for qr100 the rQPS for (x10b, x10cw4, x10d) was (0.97, 0.98, 0.97)
    • for qr500 the rQPS for (x10b, x10cw4, x10d) was (0.98, 0.98, 0.97)
    • for qr1000 the rQPS for (x10b, x10cw4, x10d) was (1.00, 0.99, 0.98)
  • point query steps (qp100, qp500, qp1000)
    • for qp100 the rQPS for (x10b, x10cw4, x10d) was (1.00, 0.99, 0.98)
    • for qp500 the rQPS for (x10b, x10cw4, x10d) was (1.00, 0.95, 0.98)
    • for qp1000 the rQPS for (x10b, x10cw4, x10d) was (0.97, 0.95, 0.99)

Wednesday, June 4, 2025

Postgres 18 beta1: large server, CPU-bound Insert Benchmark

This has results for a CPU-bound Insert Benchmark with Postgres on a large server. A blog post about a similar workload on a small server is here.

This report was delayed because I had to debug a performance regression (see below) and then repeat tests after implementing a workaround in my benchmark client.

tl;dr

  • creating connections
    • this is ~2.3X slower in 18 beta1 with io_method=io_uring vs 17.4
  • initial load step (l.i0)
    • 18 beta1 is 1% to 3% faster than 17.4
    • This step is short running so I don't have a strong opinion on the change
  • create index step (l.x)
    • 18 beta1 is 1% to 3% slower than 17.4
    • This step is short running so I don't have a strong opinion on the change
  • write-heavy steps (l.i1, l.i2)
    • 18 beta1 is 0% to 4% faster
  • range query steps (qr100, qr500, qr1000)
    • 18 beta1 and 17.4 have similar performance
  • point query steps (qp100, qp500, qp1000)
    • 18 beta1 is 0% to 2% faster
Performance regression

Connection create is much slower in Postgres 18 beta1, at least with io_method=io_uring. On my large server it takes ~2.3X longer when the client runs on the same server as Postgres (no network latency) and the CPU overhead on the postmaster process is ~3.5X larger. When the benchmark client shares the server with Postgres it used to take ~3 milliseconds to get a connection and that increases to ~7 milliseconds with 18 beta1 when using io_method=io_uring.

More details on the regression are here. By postmaster I mean this process, because the docs claim that postmaster is deprecated:
/home/mdcallag/d/pg174_o2nofp/bin/postgres -D /data/m/pg
I stumbled on this bug by accident because my benchmark client was intermittently creating connections on a performance-critical path. I have since fixed the benchmark client to avoid that. But I suspect that this regression might be an issue in production for some workloads -- one risk is that the postmaster process can run out of CPU.

I assumed my benchmark wasn't creating many connections as the connections used for the inserts, deletes and queries are created at the start of a benchmark step and closed at the end. But I missed one place in the benchmark client where it ran an extra query once every 100 point queries during the point query benchmark steps (qp100, qp500, qp1000) and the new overhead from connection create in that workflow reduced QPS by ~20% for 18 beta1 with io_method=io_uring.

From some debugging it looks like there is just more time spent in the kernel dealing with virtual memory (page tables, etc.) when the postmaster calls fork/clone to start the new backend process that handles the new connection. And then there is also more time when that process exits, which explains why the CPU overhead is larger than the latency increase.

The new overhead is a function of max_connections. I usually run with it set to 100, but did an experiment just now on my small server to time a loop that creates 1000 connections (a sketch of such a timing loop follows the list):
  • 17.4, max_conns=100 -> ~2.5 seconds
  • 18beta1 with io_method=sync, max_conns=100 -> ~2.7 seconds
  • 18beta1 with io_method=io_uring, max_conns=100 -> ~3.7 seconds
  • 18beta1 with io_method=io_uring, max_conns=200 -> ~4.1 seconds
  • 18beta1 with io_method=io_uring, max_conns=1000 -> ~7.5 seconds
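
A minimal sketch of such a timing loop is below. It is not the exact script used for the numbers above; it assumes the psycopg driver, a local server and cheap (trust/peer) authentication so that connection setup, not auth, dominates.

  # Sketch: time N connect/disconnect cycles against a local server (assumes psycopg 3).
  import time
  import psycopg

  N = 1000
  start = time.monotonic()
  for _ in range(N):
      conn = psycopg.connect("dbname=postgres")
      conn.close()
  elapsed = time.monotonic() - start
  print(f"{N} connections in {elapsed:.1f}s, {1000.0 * elapsed / N:.2f} ms per connection")

At ~2.5 seconds for 1000 connections that is ~2.5 ms per connect/disconnect cycle, and at ~7.5 seconds it is ~7.5 ms.
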
Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for version 18 beta1 and 17.4. I got the source for 18 beta1 from github using the REL_18_BETA1 tag because I started this benchmark effort a few days before the official release.

The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4. More details on it are here.

The config file for Postgres 17.4 is here and named conf.diff.cx10a_c32r128.

For 18 beta1 I tested 3 configuration files, and they are here:
  • conf.diff.cx10b_c32r128 (x10b) - uses io_method=sync
  • conf.diff.cx10cw4_c32r128 (x10cw4) - uses io_method=worker with io_workers=4
  • conf.diff.cx10d_c32r128 (x10d) - uses io_method=io_uring
The Benchmark

The benchmark is explained here and is run with 20 clients and 20 tables (one table per client) and 10M rows per table.

The benchmark steps are:

  • l.i0
    • insert 10 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 16M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 4M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance report is here for a setup without the regression and here for a setup with the regression. The rest of this post will focus on the results without the regression.

The summary section has 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 17.4.

When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.

Results: details

The performance summary is here

See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 17.4 is the base version and that is compared with results from 18 beta1 using the three configurations explained above:
  • x10b with io_method=sync
  • x10cw4 with io_method=worker and io_workers=4
  • x10d with io_method=io_uring
The summary of the summary is:
  • initial load step (l.i0)
    • 18 beta1 is 1% to 3% faster than 17.4
    • This step is short running so I don't have a strong opinion on the change
  • create index step (l.x)
    • 18 beta1 is 1% to 3% slower than 17.4
    • This step is short running so I don't have a strong opinion on the change
  • write-heavy steps (l.i1, l.i2)
    • 18 beta1 is 0% to 4% faster
  • range query steps (qr100, qr500, qr1000)
    • 18 beta1 and 17.4 have similar performance
  • point query steps (qp100, qp500, qp1000)
    • 18 beta1 is 0% to 2% faster
The summary is:
  • initial load step (l.i0)
    • rQPS for (x10b, x10cw4, x10d) was (1.01, 1.03, 1.02)
  • create index step (l.x)
    • rQPS for (x10b, x10cw4, x10d) was (0.99, 0.97, 0.97)
  • write-heavy steps (l.i1, l.i2)
    • for l.i1 the rQPS for (x10b, x10cw4, x10d) was (1.02, 1.04, 1.03)
    • for l.i2 the rQPS for (x10b, x10cw4, x10d) was (1.00, 1.04, 1.01)
  • range query steps (qr100, qr500, qr1000)
    • for qr100 the rQPS for (x10b, x10cw4, x10d) was (1.00, 0.99, 1.01)
    • for qr500 the rQPS for (x10b, x10cw4, x10d) was (1.00, 1.00, 1.01)
    • for qr1000 the rQPS for (x10b, x10cw4, x10d) was (1.00, 0.99, 1.01)
  • point query steps (qp100, qp500, qp1000)
    • for qp100 the rQPS for (x10b, x10cw4, x10d) was (1.00, 1.00, 1.02)
    • for qp500 the rQPS for (x10b, x10cw4, x10d) was (1.00, 1.00, 1.02)
    • for qp1000 the rQPS for (x10b, x10cw4, x10d) was (1.00, 1.00, 1.01)
