Friday, August 1, 2025

Postgres 18 beta2: large server, Insert Benchmark, part 2

I repeated the benchmark for one of the workloads used in a recent blog post on Postgres 18 beta2 performance. The workload used 1 client and 1 table with 50M rows that fits in the Postgres buffer pool. In the result I explain here, one of the benchmark steps was run for ~10X more time. Figuring out how long to run the steps in the Insert Benchmark is always a work in progress -- I want to test more things, so I don't want to run steps for too long, but there will be odd results if the run times are too short.

tl;dr

  • up to 2% less throughput on range queries in the qr100 benchmark step. This is similar to what I saw in my previous report.
  • up to 12% more throughput on the l.i2 benchmark step in PG beta1 and beta2. This is much better than what I saw in my previous report.

Details

Details on the benchmark are in my previous post.

The benchmark is explained here and was run for one workload -- 1 client, cached.

  • run with 1 client, 1 table and a cached database
  • load 50M rows in step l.i0, do 160M writes in step l.i1 and 40M in l.i2. Note that here the l.i1 and l.i2 steps run for ~10X longer than in my previous post.
The benchmark steps are:

  • l.i0
    • insert X million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts Y million rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and Z million rows are inserted and deleted per table.
    • Wait for N seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of N is a function of the table size.
  • qr100
  • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s, so both are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
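The write steps (l.i1, l.i2) pair inserts at one end of the table with deletes at the other so the table size stays roughly constant. A minimal sketch of that pattern, using SQLite in place of Postgres so it is self-contained, and with made-up row counts (the real benchmark uses two concurrent connections, one for inserts and one for deletes):

```python
import sqlite3

ROWS_PER_TXN = 50   # "big transactions" as in the l.i1 step
NUM_TXNS = 100      # the real step runs for a fixed number of inserts
PRELOAD = 1000      # stands in for the rows loaded by l.i0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")

# preload rows in PK order, as l.i0 does
with conn:
    conn.executemany(
        "INSERT INTO t (id, payload) VALUES (?, ?)",
        [(i, "x" * 100) for i in range(PRELOAD)],
    )

next_id = PRELOAD
for _ in range(NUM_TXNS):
    # insert transaction: ROWS_PER_TXN new rows at the high end of the PK
    with conn:
        conn.executemany(
            "INSERT INTO t (id, payload) VALUES (?, ?)",
            [(next_id + i, "x" * 100) for i in range(ROWS_PER_TXN)],
        )
    next_id += ROWS_PER_TXN
    # delete transaction at the same rate: remove the oldest rows
    with conn:
        conn.execute(
            "DELETE FROM t WHERE id IN "
            "(SELECT id FROM t ORDER BY id LIMIT ?)",
            (ROWS_PER_TXN,),
        )

# inserts and deletes balance, so the table size is unchanged
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # → 1000
```

The l.i2 step follows the same shape with 5 rows per transaction instead of 50.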
Results: overview

The performance report is here.

The summary section has 3 tables. The first shows absolute throughput per DBMS tested per benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.
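The second table can be derived from the first by normalizing each column to the first row. A sketch with made-up numbers (the version names and throughput values below are for illustration only):

```python
# absolute throughput per version per benchmark step (made-up values)
absolute = {
    "pg17.4":    {"l.i0": 100000, "l.i2":  9000, "qr100": 12000},
    "pg18beta2": {"l.i0":  99000, "l.i2": 10000, "qr100": 11800},
}

base = "pg17.4"  # the version from the first row of the table

# relative throughput: each entry divided by the base version's entry
relative = {
    version: {step: qps / absolute[base][step] for step, qps in steps.items()}
    for version, steps in absolute.items()
}

print(round(relative["pg18beta2"]["l.i2"], 2))  # → 1.11
```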

Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 17.4.

When rQPS is > 1.0 then performance improved relative to the base version. When it is < 1.0 then there are regressions -- when it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures:
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.
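The rQPS computation and the color buckets can be sketched in a few lines. The thresholds match the text above (red <= 0.97, green >= 1.03, grey otherwise); how values in the small gaps between bands are bucketed is my assumption:

```python
def rqps(qps_me: float, qps_base: float) -> float:
    """Relative QPS: throughput of some version vs the base (PG 17.4)."""
    return qps_me / qps_base

def color(r: float) -> str:
    """Bucket an rQPS value the way the summary tables are colored."""
    if r <= 0.97:
        return "red"    # regression of ~3% or more
    if r >= 1.03:
        return "green"  # improvement of ~3% or more
    return "grey"       # within the noise band

# Example: a 10% regression -> rQPS of 0.90, flagged red
print(color(rqps(900.0, 1000.0)))  # → red
```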

Results: 1 client, cached

Normally I summarize the summary but I don't do that here to save space.

But the tl;dr is:
  • up to 2% less throughput on range queries in the qr100 benchmark step. This is similar to what I saw in my previous report.
  • up to 12% more throughput on the l.i2 benchmark step in PG beta1 and beta2. This is much better than what I saw in my previous report.
