Tuesday, September 2, 2025

Postgres 18 beta3, large server, sysbench

This has performance results for Postgres 18 beta3, beta2, beta1, 17.5 and 17.4 using the sysbench benchmark and a large server. The working set is cached and the benchmark is run with high concurrency (40 connections). The goal is to search for CPU and mutex regressions. This work was done by Small Datum LLC and not sponsored.

tl;dr

  • There might be small regressions (~2%) for several range queries that don't do aggregation. This is similar to what I reported for 18 beta3 on a small server, but here it only occurs for 3 of the 4 microbenchmarks and on the small server it occurs on all 4. I am still uncertain about whether this really is a regression.
Builds, configuration and hardware

I compiled Postgres versions 17.4, 17.5, 18 beta1, 18 beta2 and 18 beta3 from source.

The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4. More details on it are here.

The config file for Postgres 17.4 and 17.5 is x10a_c32r128.

The config files for Postgres 18 are:
  • x10b_c32r128 is functionally the same as x10a_c32r128 but adds io_method=sync
  • x10d_c32r128 starts with x10a_c32r128 and adds io_method=io_uring
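The io_method deltas above are one-line overrides; a minimal sketch of what such a config delta might contain, assuming standard postgresql.conf syntax (only io_method comes from this post, the comment layout is mine):

```ini
# Postgres 18 only: choose the AIO implementation.
# x10b_c32r128 -> io_method = sync      (match Postgres 17 behavior)
# x10d_c32r128 -> io_method = io_uring  (async IO via io_uring)
io_method = io_uring
```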

Benchmark

I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.

The tests are run using 8 tables with 10M rows per table. The read-heavy microbenchmarks run for 600 seconds and the write-heavy for 900 seconds.

Results

The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I provide charts below with relative QPS. The relative QPS is the following:
(QPS for some version) / (QPS for Postgres 17.5)
When the relative QPS is > 1 then some version is faster than PG 17.5. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided here. These can help to explain why something is faster or slower because they show how much HW is used per request.
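The two derived metrics above can be sketched in a few lines; the numbers and function names below are hypothetical, not from the benchmark scripts:

```python
def relative_qps(qps_version: float, qps_base: float) -> float:
    """>1 means the version is faster than the base, <1 a possible regression."""
    return qps_version / qps_base

def hw_per_request(hw_metric: float, qps: float) -> float:
    """Normalize an iostat/vmstat counter (e.g. CPU seconds/s) by QPS."""
    return hw_metric / qps

# hypothetical numbers for one microbenchmark
print(round(relative_qps(9800.0, 10000.0), 2))  # 0.98 -> possible regression
print(hw_per_request(32.0, 10000.0))            # HW consumed per query
```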

Relative to: pg174_o2nofp.x10a_c32r128
col-1 : pg175_o2nofp.x10a_c32r128
col-2 : pg18beta1_o2nofp.x10b_c32r128
col-3 : pg18beta1_o2nofp.x10d_c32r128
col-4 : pg18beta2_o2nofp.x10d_c32r128
col-5 : pg18beta3_o2nofp.x10d_c32r128

col-1   col-2   col-3   col-4   col-5
0.98    0.99    0.99    1.00    0.99    hot-points_range=100
1.01    1.01    1.00    1.01    1.01    point-query_range=100
1.00    1.00    0.99    1.00    1.00    points-covered-pk
1.00    1.01    1.00    1.02    1.00    points-covered-si
1.00    1.01    1.00    1.00    1.00    points-notcovered-pk
1.00    1.00    1.01    1.02    1.00    points-notcovered-si
1.00    1.00    1.00    1.00    1.00    random-points_range=1000
1.00    1.01    1.00    1.00    1.00    random-points_range=100
1.00    1.00    1.00    1.00    1.00    random-points_range=10
1.00    0.97    0.96    0.98    0.97    range-covered-pk
1.00    0.97    0.97    0.98    0.97    range-covered-si
0.99    0.99    0.99    0.99    0.98    range-notcovered-pk
1.00    1.01    1.01    1.00    1.01    range-notcovered-si
1.00    1.02    1.03    1.03    1.02    read-only-count
1.00    1.00    1.00    1.01    1.01    read-only-distinct
1.00    1.00    1.00    1.00    1.00    read-only-order
1.01    1.01    1.02    1.02    1.01    read-only_range=10000
1.00    0.99    0.99    0.99    1.00    read-only_range=100
1.01    0.99    0.99    1.00    0.99    read-only_range=10
1.00    1.01    1.01    1.01    1.01    read-only-simple
1.00    1.02    1.03    1.03    1.02    read-only-sum
1.00    1.13    1.14    1.02    0.91    scan_range=100
1.00    1.13    1.13    1.02    0.90    scan.warm_range=100
1.00    0.99    0.99    0.99    0.99    delete_range=100
0.99    1.00    1.02    0.99    1.00    insert_range=100
1.01    1.00    1.00    1.00    0.99    read-write_range=100
1.00    0.98    1.00    1.01    0.99    read-write_range=10
0.99    0.99    1.02    0.98    0.96    update-index
1.00    1.01    1.00    1.00    1.01    update-inlist
0.98    0.98    0.99    0.98    0.97    update-nonindex
0.95    0.95    0.94    0.93    0.95    update-one_range=100
0.97    0.98    0.98    0.97    0.95    update-zipf_range=100
0.98    0.99    0.99    0.98    0.98    write-only_range=10000

Monday, September 1, 2025

Postgres 18 beta3, small server, sysbench

This has performance results for Postgres 18 beta3, beta2, beta1 and 17.6 using the sysbench benchmark and a small server. The working set is cached and the benchmark is run with low concurrency (1 connection). The goal is to search for CPU regressions. This work was done by Small Datum LLC and not sponsored.

tl;dr

  • There might be small regressions (~2%) for several range queries that don't do aggregation. This is similar to what I reported for 18 beta1.
  • Vacuum continues to be a problem for me and I had to repeat the benchmark a few times to get a stable result. It appears to be a big source of non-deterministic behavior leading to false alarms for CPU regressions in read-heavy tests that run after vacuum. In some ways, RocksDB compaction causes similar problems. Fortunately, InnoDB MVCC GC (purge) does not cause such problems.
Builds, configuration and hardware

I compiled Postgres versions 17.6, 18 beta1, 18 beta2 and 18 beta3 from source.

The server is a Beelink SER7 with a Ryzen 7 7840HS CPU, 32G of RAM, 8 cores with AMD SMT disabled, Ubuntu 24.04 and an NVMe device with discard enabled and ext4 for the database.

The config file for Postgres 17.6 is x10a_c8r32.

The config files for Postgres 18 are:
  • x10b_c8r32 is functionally the same as x10a_c8r32 but adds io_method=sync
  • x10b1_c8r32 starts with x10b_c8r32 and adds vacuum_max_eager_freeze_failure_rate=0
  • x10b2_c8r32 starts with x10b_c8r32 and adds vacuum_max_eager_freeze_failure_rate=0.99

Benchmark

I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.

The tests are run using 1 table with 50M rows. The read-heavy microbenchmarks run for 600 seconds and the write-heavy for 900 seconds.

Results

The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I provide charts below with relative QPS. The relative QPS is the following:
(QPS for some version) / (QPS for Postgres 17.6)
When the relative QPS is > 1 then some version is faster than PG 17.6. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided here. These can help to explain why something is faster or slower because they show how much HW is used per request.

The numbers highlighted in yellow below might be from a small regression for range queries that don't do aggregation. But note that this does not reproduce for the full table scan microbenchmark (scan). I am not certain it is a regression as this might be from non-deterministic CPU overheads for read-heavy workloads that are run after vacuum. I hope to look at CPU flamegraphs soon.
  • the mapping from microbenchmark name to Lua script is here
  • the range query without aggregation microbenchmarks use oltp_range_covered.lua with various flags set and the SQL statements it uses are here. All of these return 100 rows.
  • the scan microbenchmark uses oltp_scan.lua which is a SELECT with a WHERE clause that filters all rows (empty result set)
Relative to: x.pg176_o2nofp.x10a_c8r32.pk1
col-1 : x.pg18beta1_o2nofp.x10b_c8r32.pk1
col-2 : x.pg18beta2_o2nofp.x10b_c8r32.pk1
col-3 : x.pg18beta3_o2nofp.x10b_c8r32.pk1
col-4 : x.pg18beta3_o2nofp.x10b1_c8r32.pk1
col-5 : x.pg18beta3_o2nofp.x10b2_c8r32.pk1

col-1   col-2   col-3   col-4   col-5 -> point queries
1.00    1.00    0.98    0.99    0.99    hot-points_range=100
1.00    1.01    1.00    1.00    0.99    point-query_range=100
1.00    1.02    1.01    1.01    1.01    points-covered-pk
1.00    1.00    1.00    1.00    1.00    points-covered-si
1.01    1.01    1.00    1.00    1.00    points-notcovered-pk
1.01    1.00    1.00    1.00    1.00    points-notcovered-si
0.99    1.00    0.99    1.00    1.00    random-points_range=1000
1.01    1.00    1.00    1.00    1.00    random-points_range=100
1.01    1.01    1.00    1.00    0.99    random-points_range=10

col-1   col-2   col-3   col-4   col-5 -> range queries w/o agg
0.98    0.99    0.97    0.98    0.96    range-covered-pk_range=100
0.98    0.99    0.96    0.98    0.97    range-covered-si_range=100
0.98    0.98    0.98    0.97    0.98    range-notcovered-pk
0.99    0.99    0.98    0.98    0.98    range-notcovered-si
1.01    1.02    1.00    1.00    1.00    scan

col-1   col-2   col-3   col-4   col-5 -> range queries with agg
1.02    1.01    1.02    1.01    0.98    read-only-count_range=1000
0.98    1.01    1.01    1.00    1.03    read-only-distinct
0.99    0.99    0.99    0.99    0.99    read-only-order_range=1000
1.00    1.00    1.01    1.00    1.01    read-only_range=10000
0.99    0.99    0.99    0.99    0.99    read-only_range=100
0.99    0.99    0.99    0.98    0.99    read-only_range=10
1.01    1.00    1.00    1.00    1.01    read-only-simple
1.01    1.00    1.01    1.00    1.00    read-only-sum_range=1000

col-1   col-2   col-3   col-4   col-5 -> writes
0.99    1.00    0.98    0.98    0.98    delete_range=100
0.99    0.98    0.98    1.00    0.98    insert_range=100
0.99    0.99    0.99    0.98    0.99    read-write_range=100
0.98    0.99    0.99    0.98    0.99    read-write_range=10
1.00    0.99    0.98    0.97    0.99    update-index_range=100
1.01    1.00    0.99    1.01    1.00    update-inlist_range=100
1.00    1.00    0.99    0.96    0.99    update-nonindex_range=100
1.01    1.01    0.99    0.97    0.99    update-one_range=100
1.00    1.00    0.99    0.98    0.99    update-zipf_range=100
1.00    0.99    0.98    0.98    1.00    write-only_range=10000

Monday, August 25, 2025

MySQL 5.6 thru 9.4: small server, Insert Benchmark

This has results for the Insert Benchmark on a small server with InnoDB from MySQL 5.6 through 9.4. The workload here uses low concurrency (1 client), a small server and a cached database. I run it this way to look for CPU regressions before moving on to IO-bound workloads with high concurrency.

tl;dr

  • good news - there are no large regressions after MySQL 8.0
  • bad news - there are large regressions from MySQL 5.6 to 5.7 to 8.0
    • load in 8.0, 8.4 and 9.4 gets about 60% of the throughput vs 5.6
    • queries in 8.0, 8.4 and 9.4 get between 60% and 70% of the throughput vs 5.6

Builds, configuration and hardware

I compiled MySQL 5.6.51, 5.7.44, 8.0.43, 8.4.6 and 9.4.0 from source.

The server is an ASUS PN53 with 8 cores, AMD SMT disabled and 32G of RAM. The OS is Ubuntu 24.04. Storage is 1 NVMe device with ext4. More details on it are here.

I used the cz12a_c8r32 config file (my.cnf) which is here for 5.6.51, 5.7.44, 8.0.43, 8.4.6 and 9.4.0.

The Benchmark

The benchmark is explained here. I recently updated the benchmark client to connect via socket rather than TCP so that I can get non-SSL connections for all versions tested. AFAIK, with TCP I can only get SSL connections for MySQL 8.4 and 9.4.

The workload uses 1 client, 1 table with 30M rows and a cached database.

The benchmark steps are:

  • l.i0
    • insert 30 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 40 million rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 10 million rows are inserted and deleted per table.
    • Wait for N seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of N is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
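Because l.i1 and l.i2 run for a fixed number of inserts, wall-clock time scales inversely with the insert rate; a trivial sketch (the rates below are hypothetical):

```python
def step_runtime_seconds(total_inserts: int, inserts_per_second: float) -> float:
    # fixed-work step: a slower version simply runs longer
    return total_inserts / inserts_per_second

# hypothetical: 40M inserts for l.i1 at 20k inserts/s -> 2000 seconds
print(step_runtime_seconds(40_000_000, 20_000.0))  # 2000.0
```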
Results: overview

The performance report is here.

The summary section has 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA. The summary section is here.

Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from MySQL 5.6.51.

When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with yellow for regressions and blue for improvements.

Results: details

This table is a copy of the second table in the summary section. It lists the relative QPS (rQPS) for each benchmark step where rQPS is explained above.

The benchmark steps are explained above, they are:
  • l.i0 - initial load in PK order
  • l.x - create 3 secondary indexes per table
  • l.i1, l.i2 - random inserts and random deletes
  • qr100, qr500, qr1000 - short range queries with background writes
  • qp100, qp500, qp1000 - point queries with background writes

dbms    l.i0    l.x     l.i1    l.i2    qr100   qp100   qr500   qp500   qr1000  qp1000
5.6.51  1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
5.7.44  0.89    1.52    1.14    1.08    0.83    0.84    0.83    0.84    0.84    0.84
8.0.43  0.60    2.50    1.04    0.86    0.69    0.62    0.69    0.63    0.70    0.62
8.4.6   0.60    2.53    1.03    0.86    0.68    0.61    0.67    0.61    0.68    0.61
9.4.0   0.60    2.53    1.03    0.87    0.70    0.63    0.70    0.63    0.70    0.62



The summary is:
  • l.i0
    • there are large regressions starting in 8.0 and modern MySQL only gets ~60% of the throughput relative to 5.6 because modern MySQL has more CPU overhead
  • l.x
    • I ignore this but there have been improvements
  • l.i1, l.i2
    • there was a large improvement in 5.7 but new CPU overhead since 8.0 reduces that
  • qr100, qr500, qr1000
    • there are large regressions from 5.6 to 5.7 and then again from 5.7 to 8.0
    • throughput in modern MySQL is ~60% to 70% of what it was in 5.6


    Thursday, August 21, 2025

    Sysbench for MySQL 5.6 thru 9.4 on a small server

    This has performance results for InnoDB from MySQL 5.6.51, 5.7.44, 8.0.43, 8.4.6 and 9.4.0 on a small server with sysbench microbenchmarks. The workload here is cached by InnoDB and my focus is on regressions from new CPU overheads. This work was done by Small Datum LLC and not sponsored. 

    tl;dr

    • Low concurrency (1 client) is the worst case for regressions in modern MySQL
    • MySQL 8.0, 8.4 and 9.4 are much slower than 5.6.51 in all but 2 of the 32 microbenchmarks
      • The bad news - performance regressions aren't getting fixed
      • The good news - regressions after MySQL 8.0 are small

    Builds, configuration and hardware

    I compiled MySQL from source for versions 5.6.51, 5.7.44, 8.0.43, 8.4.6 and 9.4.0.

    The server is an ASUS ExpertCenter PN53 with AMD Ryzen 7 7735HS, 32G RAM and an m.2 device for the database. More details on it are here. The OS is Ubuntu 24.04 and the database filesystem is ext4 with discard enabled.

    The my.cnf files are here.

    Benchmark

    I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by InnoDB.

    The tests are run using 1 table with 50M rows. The read-heavy microbenchmarks run for 600 seconds and the write-heavy for 900 seconds.

    Results

    All files I saved from the benchmark are here and the spreadsheet is here.

    The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

    I provide charts below with relative QPS. The relative QPS is the following:
    (QPS for some version) / (QPS for MySQL 5.6.51)
    When the relative QPS is > 1 then some version is faster than 5.6.51. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided here. These can help to explain why something is faster or slower because they show how much HW is used per request.

    Results: point queries

    Based on results from vmstat the regressions are from new CPU overheads.
    Results: range queries without aggregation

    Based on results from vmstat the regressions are from new CPU overheads.
    Results: range queries with aggregation

    Based on results from vmstat the regressions are from new CPU overheads.
    Results: writes

    Based on results from vmstat the regressions are from new CPU overheads.


    Friday, August 1, 2025

    Postgres 18 beta2: large server, Insert Benchmark, part 2

    I repeated the benchmark for one of the workloads used in a recent blog post on Postgres 18 beta2 performance. The workload used 1 client and 1 table with 50M rows that fits in the Postgres buffer pool. In the result I explain here, one of the benchmark steps was run for ~10X more time. Figuring out how long to run the steps in the Insert Benchmark is always a work in progress -- I want to test more things, so I don't want to run steps for too long, but there will be odd results if the run times are too short.

    tl;dr

    • up to 2% less throughput on range queries in the qr100 benchmark step. This is similar to what I saw in my previous report.
    • up to 12% more throughput on the l.i2 benchmark step in PG beta1 and beta2. This is much better than what I saw in my previous report.

    Details

    Details on the benchmark are in my previous post.

    The benchmark is explained here and was run for one workload -- 1 client, cached.

    • run with 1 client, 1 table and a cached database
    • load 50M rows in step l.i0, do 160M writes in step l.i1 and 40M in l.i2. Note that here the l.i1 and l.i2 steps run for ~10X longer than in my previous post.
    The benchmark steps are:

    • l.i0
      • insert X million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
    • l.x
      • create 3 secondary indexes per table. There is one connection per client.
    • l.i1
      • use 2 connections/client. One inserts Y million rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
    • l.i2
      • like l.i1 but each transaction modifies 5 rows (small transactions) and Z million rows are inserted and deleted per table.
      • Wait for N seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of N is a function of the table size.
    • qr100
      • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
    • qp100
      • like qr100 except uses point queries on the PK index
    • qr500
      • like qr100 but the insert and delete rates are increased from 100/s to 500/s
    • qp500
      • like qp100 but the insert and delete rates are increased from 100/s to 500/s
    • qr1000
      • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
    • qp1000
      • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
    Results: overview

    The performance report is here.

    The summary section has 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

    Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 17.4.

    When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
    • insert/s for l.i0, l.i1, l.i2
    • indexed rows/s for l.x
    • range queries/s for qr100, qr500, qr1000
    • point queries/s for qp100, qp500, qp1000
    Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.
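The color thresholds above map directly to a tiny classifier; a hypothetical sketch (the function name is mine):

```python
def rqps_color(rqps: float) -> str:
    """Map relative QPS to the highlight colors used in the report."""
    if rqps <= 0.97:
        return "red"    # regression
    if rqps >= 1.03:
        return "green"  # improvement
    return "grey"       # between 0.98 and 1.02: noise / no change

print(rqps_color(0.90), rqps_color(1.00), rqps_color(1.05))  # red grey green
```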

    Results: 1 client, cached

    Normally I summarize the summary but I don't do that here to save space.

    But the tl;dr is:
    • up to 2% less throughput on range queries in the qr100 benchmark step. This is similar to what I saw in my previous report.
    • up to 12% more throughput on the l.i2 benchmark step in PG beta1 and beta2. This is much better than what I saw in my previous report.

    Tuesday, July 29, 2025

    Postgres 18 beta2: large server, sysbench

    This has performance results for Postgres 17.4, 17.5, 18 beta1 and 18 beta2 on a large server with sysbench microbenchmarks. Results like this from me are usually boring because Postgres has done a great job at avoiding performance regressions over time. This work was done by Small Datum LLC and not sponsored. Previous work from me for Postgres 17.4 and 18 beta1 is here.

    The workload here is cached by Postgres and my focus is on regressions from new CPU overhead or mutex contention.

    tl;dr

    • there might be small regressions (~2%) for range queries on the benchmark with 1 client. One cause is more CPU in BuildCachedPlan. 
    • there might be small regressions (~2%) for range queries on the benchmark with 40 clients. One cause is more CPU in PortalRunSelect.
    • otherwise things look great

    Builds, configuration and hardware

    I compiled Postgres from source using -O2 -fno-omit-frame-pointer.

    The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4. More details on it are here.

    The config file for 17.4 and 17.5 is conf.diff.cx10a_c32r128.

    The config files for 18 beta 1 are:
    • conf.diff.cx10b_c8r32
      • uses io_method='sync' to match Postgres 17 behavior
    • conf.diff.cx10c_c8r32
      • uses io_method='worker' and io_workers=32 to do async IO via a thread pool. I eventually learned that 32 is too large but I don't think it matters much on this workload.
    • conf.diff.cx10d_c8r32
      • uses io_method='io_uring' to do async IO via io_uring
    Benchmark

    I used sysbench and my usage is explained here. To save time I only run 27 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.

    The tests are run using two workloads. For both the read-heavy microbenchmarks run for 300 seconds and write-heavy run for 600 seconds.
    • 1-client
      • run with 1 client and 1 table with 50M rows
    • 40-clients
      • run with 40 clients and 8 tables with 10M rows per table
    The command lines to run all tests with my helper scripts are:
    • bash r.sh 1 50000000 300 600 $deviceName 1 1 1
    • bash r.sh 8 10000000 300 600 $deviceName 1 1 40
    Results

    All files I saved from the benchmark are here.

    I don't provide graphs in this post to save time and because there are few to no regressions from Postgres 17.4 to 18 beta2. The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

    I provide tables below with relative QPS. The relative QPS is the following:
    (QPS for some version) / (QPS for PG 17.4)
    When the relative QPS is > 1 then some version is faster than 17.4. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided. These can help to explain why something is faster or slower because they show how much HW is used per request.

    Results: 1-client

    Tables with QPS per microbenchmark are here using absolute and relative QPS. All of the files I saved for this workload are here.

    For point queries
    • QPS is mostly ~2% better in PG 18 beta2 relative to 17.4 (see here), but the same is true for 17.5. Regardless, this is good news.
    For range queries without aggregation
    • full table scan is ~6% faster in PG 18 beta2 and ~4% faster in 17.5, both relative to 17.4
    • but for the other microbenchmarks, PG 18 beta2, 18 beta1 and 17.5 are 1% to 5% slower than 17.4. 
      • From vmstat and iostat metrics for range-[not]covered-pk and range-[not]covered-si this is explained by an increase in CPU/query (see the cpu/o column in the previous links). I also see a few cases where CPU/query is much larger but only for 18 beta2 with configs that use io_method=worker and io_method=io_uring.
      • I measured CPU using vmstat which includes all CPU on the host so perhaps something odd happens with other Postgres processes or some rogue process is burning CPU. I checked more results from vmstat and iostat and don't see storage IO during the tests.
      • Code that does the vacuum and checkpoint is here, output from the vacuum work is here, and the Postgres logfiles are here. This work is done prior to the range query tests.
    For range queries with aggregation
    • there are regressions (see here), but here they are smaller than what I see above for range queries without aggregation
    • the interesting result is for the same query, but run with different selectivity to go from a larger to a smaller range and the regression increases as the range gets smaller (see here). To me this implies the likely problem is the fixed cost -- either in the optimizer or query setup (allocating memory, etc).
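The fixed-cost hypothesis in the last bullet can be illustrated with a toy model (all costs below are hypothetical): if a new version adds overhead only to per-query setup, the relative slowdown grows as the range shrinks:

```python
def query_seconds(rows: int, fixed: float, per_row: float = 0.001) -> float:
    # toy model: total time = fixed setup cost + per-row scan cost
    return fixed + per_row * rows

for rows in (10, 100, 10_000):
    old = query_seconds(rows, fixed=0.10)  # baseline version
    new = query_seconds(rows, fixed=0.11)  # +10% fixed cost, same per-row cost
    # relative QPS drops to ~0.917 at 10 rows but only ~0.999 at 10k rows
    print(rows, round(old / new, 3))
```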
    For writes
    • there are small regressions, mostly from 1% to 3% (see here).
    • the regressions are largest for the 18beta configs that use io_method=io_uring, that might be expected given the benefits it provides
    Then I used Flamegraphs (see here) to try and explain the regressions. My benchmark helper scripts collect a Flamegraph about once per minute for each microbenchmark and the microbenchmarks were run for 5 minutes if read-mostly or 10 minutes if write-heavy. Then the ~5 or ~10 samples from perf (per microbenchmark) are combined to produce one Flamegraph per microbenchmark. My focus is on the distribution of time across thread stacks where there are stacks for parse, optimize, execute and network.
    • For range-covered-pk there is a small (2% to 3%) increase from PG 17.4 to 18 beta2 in BuildCachedPlan (see here for 17.4 and 18 beta2).
    • The increase in CPU for BuildCachedPlan also appears in Flamegraphs for other range query microbenchmarks
    Results: 40-clients

    Tables with QPS per microbenchmark are here using absolute and relative QPS. All of the files I saved for this workload are here, Postgres logfiles are here, output from vacuum is here and Flamegraphs are here.

    For point queries
    • QPS is similar from PG 17.4 through 18 beta1 (see here).
    For range queries without aggregation
    • full table scan is mostly ~2% faster after 17.4 (see here)
    • for the other microbenchmarks, 3 of the 4 have small regressions of ~2% (see here). The worst is range-covered-pk and the problem appears to be more CPU per query (see here). Unlike above where the new overhead was in BuildCachedPlan, here it is in the stack with PortalRunSelect.
    For range queries with aggregation
    • QPS is similar from PG 17.4 through 18 beta2 (see here)
    For writes
    • QPS drops by 1% to 5% for many microbenchmarks, but this problem starts in 17.5 (see here)
    • From vmstat and iostat metrics for update-one (which suffers the most, see here) the CPU per operation overhead does not increase (see the cpu/o column) and the number of context switches per operation also does not increase (see the cs/o column).
    • Also from iostat, the amount of data written to storage doesn't change much.

    Sunday, July 27, 2025

    Postgres 18 beta2: large server, Insert Benchmark

    This has results for the Insert Benchmark with Postgres on a large server. 

    There might be small regressions, but I have more work in progress to explain that:

    • for a workload with 1 client and a cached database I see a small increase in CPU/operation (~10%) during the l.i2 benchmark step. I am repeating that benchmark.
    • for a workload with 20 clients and an IO-bound database I see a small decrease in QPS (typically 2% to 4%) during read+write benchmark steps.

    Builds, configuration and hardware

    I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions 18 beta2, 18 beta1, 17.5 and 17.4.

    The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4. More details on it are here.

    The config file for Postgres 17.4 and 17.5 is here and named conf.diff.cx10a_c32r128.

    For 18 beta1 and beta2 I tested 3 configuration files, and they are here:
    • conf.diff.cx10b_c32r128 (x10b) - uses io_method=sync
    • conf.diff.cx10cw4_c32r128 (x10cw4) - uses io_method=worker with io_workers=4
    • conf.diff.cx10d_c32r128 (x10d) - uses io_method=io_uring
    The Benchmark

    The benchmark is explained here and was run for three workloads:
    • 1 client, cached
      • run with 1 client, 1 table and a cached database
      • load 50M rows in step l.i0, do 16M writes in step l.i1 and 4M in l.i2
    • 20 clients, cached
      • run with 20 clients, 20 tables (table per client) and a cached database
      • for each client/table - load 10M rows in step l.i0, do 16M writes in step l.i1 and 4M in l.i2
    • 20 clients, IO-bound
      • run with 20 clients, 20 tables (table per client) and a database larger than RAM
      • for each client/table - load 200M rows in step l.i0, do 4M writes in step l.i1 and 1M in l.i2
      • for the qr100, qr500 and qr1000 steps the working set is cached, otherwise it is not
    The benchmark steps are:

    • l.i0
      • insert X million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
    • l.x
      • create 3 secondary indexes per table. There is one connection per client.
    • l.i1
      • use 2 connections/client. One inserts Y million rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
    • l.i2
      • like l.i1 but each transaction modifies 5 rows (small transactions) and Z million rows are inserted and deleted per table.
      • Wait for N seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of N is a function of the table size.
    • qr100
      • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
    • qp100
      • like qr100 except uses point queries on the PK index
    • qr500
      • like qr100 but the insert and delete rates are increased from 100/s to 500/s
    • qp500
      • like qp100 but the insert and delete rates are increased from 100/s to 500/s
    • qr1000
      • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
    • qp1000
      • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
    Results: overview

    The performance reports are here:
    The summary section has 3 tables. The first shows absolute throughput per DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA. The summary sections are here:
    Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 17.4.

    When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
    • insert/s for l.i0, l.i1, l.i2
    • indexed rows/s for l.x
    • range queries/s for qr100, qr500, qr1000
    • point queries/s for qp100, qp500, qp1000
    Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.
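The rQPS arithmetic and the color thresholds can be sketched as follows; the function names are mine, not part of the report:

```python
# Sketch of the relative QPS (rQPS) calculation and the color coding
# used in the result tables.

def relative_qps(qps_me, qps_base):
    # rQPS = QPS for $me / QPS for $base
    return qps_me / qps_base

def color(rqps):
    # red for <= 0.97, green for >= 1.03, grey in between
    if rqps <= 0.97:
        return "red"
    if rqps >= 1.03:
        return "green"
    return "grey"
```

For example, an rQPS of 0.90 is a 10% regression and is shown in red.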

    Results: 1 client, cached

    Normally I summarize the summary but I don't do that here to save space.

    There might be regressions on the l.i2 benchmark step that does inserts+deletes with smaller transactions while l.i1 does the same but with larger transactions. These first arrive with Postgres 17.5 but I will ignore that because it sustained a higher rate on the preceding benchmark step (l.i1) and might suffer from more vacuum debt during l.i2.

    From the response time table, 18 beta2 does better than 18 beta1 based on the 256us and 1ms columns.

    From the vmstat and iostat metrics, there is a ~10% increase in CPU/operation starting in Postgres 17.5 -- the value in the cpupq column increases from 596 for PG 17.4 to ~660 starting in 17.5. With one client the l.i2 step finishes in ~500 seconds and that might be too short. I am repeating the benchmark to run that step for 4X longer.

    Results: 20 clients, cached

    Normally I summarize the summary but I don't do that here to save space. Regardless, this is easy to summarize - there are small improvements (~4%) on the l.i1 and l.i2 benchmark steps and no regressions elsewhere.

    Results: 20 clients, IO-bound

    Normally I summarize the summary but I don't do that here to save space.

    From the summary, Postgres did not sustain the target write rates during qp1000 and qr1000 but I won't claim that it should have been able to -- perhaps I need faster IO. The first table in the summary section uses a grey background to indicate that. Fortunately, all versions were able to sustain a similar write rate. This was also a problem for some versions on the qp500 step.

    For the l.i2 step there is an odd outlier with PG 18beta2 and the cx10cw4_c32r128 config (that uses io_method=worker). I will ignore that for now.

    For many of the read+write tests (qp100, qr100, qp500, qr500, qp1000, qr1000) throughput with PG 18 beta1 and beta2 is up to 5% less than for PG 17.4. The regression might be explained by a small increase in CPU/operation.


    Wednesday, June 11, 2025

    Postgres 18 beta1: small server, IO-bound Insert Benchmark (v2)

    This is my second attempt at IO-bound Insert Benchmark results with a small server. The first attempt is here and has been deprecated because sloppy programming by me meant the benchmark client was creating too many connections and that hurt results in some cases for Postgres 18 beta1.

    There might be regressions from 17.5 to 18 beta1

    • QPS decreases by ~5% and CPU increases by ~5% on the l.i2 (write-only) step
    • QPS decreases by <= 2% and CPU increases by ~2% on the qr* (range query) steps
    There might be regressions from 14.0 to 18 beta1
    • QPS decreases by ~6% and ~18% on the write-heavy steps (l.i1, l.i2)

    Builds, configuration and hardware

    I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions  14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

    The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04. More details on it are here.

    For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 I used 3 variations, which are here:
    • conf.diff.cx10b_c8r32
      • uses io_method='sync' to match Postgres 17 behavior
    • conf.diff.cx10c_c8r32
      • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
    • conf.diff.cx10d_c8r32
      • uses io_method='io_uring' to do async IO via io_uring
    The Benchmark

    The benchmark is explained here and is run with 1 client and 1 table with 800M rows. I provide two performance reports:
    • one to compare Postgres 14.0 through 18 beta1, all using synchronous IO
    • one to compare Postgres 17.5 with 18 beta1 using 3 configurations for 18 beta1 -- one for each of io_method= sync, workers and io_uring.
    The benchmark steps are:

    • l.i0
      • insert 20 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
    • l.x
      • create 3 secondary indexes per table. There is one connection per client.
    • l.i1
      • use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
    • l.i2
      • like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
      • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
    • qr100
      • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
    • qp100
      • like qr100 except uses point queries on the PK index
    • qr500
      • like qr100 but the insert and delete rates are increased from 100/s to 500/s
    • qp500
      • like qp100 but the insert and delete rates are increased from 100/s to 500/s
    • qr1000
      • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
    • qp1000
      • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
    Results: overview

    The performance report is here for Postgres 14 through 18 and here for Postgres 18 configurations.

    The summary sections (here, here and here) have 3 tables. The first shows absolute throughput per DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for the benchmark steps. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

    Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 17.5.

    When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
    • insert/s for l.i0, l.i1, l.i2
    • indexed rows/s for l.x
    • range queries/s for qr100, qr500, qr1000
    • point queries/s for qp100, qp500, qp1000
    Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.

    Results: Postgres 14.0 through 18 beta1

    The performance summary is here

    See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 14.0 is the base version and that is compared with more recent Postgres versions. The results here are similar to what I reported prior to fixing the too many connections problem in the benchmark client.

    For 14.0 through 18 beta1, QPS on ...
    • the initial load (l.i0)
      • Performance is stable across versions
      • 18 beta1 and 17.5 have similar performance
      • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.99)
    • create index (l.x)
      • ~10% faster starting in 15.0
      • 18 beta1 and 17.5 have similar performance
      • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.11, 1.12)
    • first write-only step (l.i1)
      • Performance decreases ~7% from version 16.9 to 17.0. CPU overhead (see cpupq here) increases by ~5% in 17.0.
      • 18 beta1 and 17.5 have similar performance
      • rQPS for (17.5, 18 beta1 with io_method=sync) is (0.93, 0.94)
    • second write-only step (l.i2)
      • Performance decreases ~6% in 15.0, ~8% in 17.0 and then ~5% in 18.0. CPU overhead (see cpupq here) increases ~5%, ~6% and ~5% in 15.0, 17.0 and 18 beta1. Of all benchmark steps, this has the largest perf regression from 14.0 through 18 beta1 which is ~20%.
      • 18 beta1 is ~4% slower than 17.5
      • rQPS for (17.5, 18 beta1 with io_method=sync) is (0.86, 0.82)
    • range query steps (qr100, qr500, qr1000)
      • 18 beta1 and 17.5 have similar performance, but 18 beta1 is slightly slower
      • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.99) for qr100, (0.97, 0.98) for qr500 and (0.97, 0.95) for qr1000. The issue is new CPU overhead, see cpupq here.
    • point query steps (qp100, qp500, qp1000)
      • 18 beta1 and 17.5 have similar performance but 18 beta1 is slightly slower
      • rQPS for (17.5, 18 beta1 with io_method=sync) is (1.00, 0.98) for qp100, (0.99, 0.98) for qp500 and (0.97, 0.96) for qp1000. The issue is new CPU overhead, see cpupq here.
    Results: Postgres 17.5 vs 18 beta1

    The performance summary is here.

    See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 17.5 is the base version and that is compared with results from 18 beta1 using the three configurations explained above:
    • x10b with io_method=sync
    • x10c with io_method=worker and io_workers=16
    • x10d with io_method=io_uring
    The summary is:
    • initial load step (l.i0)
      • rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
    • create index step (l.x)
      • rQPS for (x10b, x10c, x10d) was (1.01, 1.02, 1.02)
    • first write-heavy step (l.i1)
      • for l.i1 the rQPS for (x10b, x10c, x10d) was (1.00, 0.99, 1.01)
    • second write-heavy step (l.i2)
      • for l.i2 the rQPS for (x10b, x10c, x10d) was (0.96, 0.93, 0.94)
      • CPU overhead (see cpupq here) increases by ~5% in 18 beta1
    • range query steps (qr100, qr500, qr1000)
      • for qr100 the rQPS for (x10b, x10c, x10d) was (1.00, 0.99, 0.99)
      • for qr500 the rQPS for (x10b, x10c, x10d) was (1.00, 0.97, 0.99)
      • for qr1000 the rQPS for (x10b, x10c, x10d) was (0.99, 0.98, 0.97)
      • CPU overhead (see cpupq here, here and here) increases by ~2% in 18 beta1
    • point query steps (qp100, qp500, qp1000)
      • for qp100 the rQPS for (x10b, x10c, x10d) was (0.98, 0.99, 0.99)
      • for qp500 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
      • for qp1000 the rQPS for (x10b, x10c, x10d) was (0.99, 0.99, 0.99)

    Sunday, June 8, 2025

    Postgres 18 beta1: small server, CPU-bound Insert Benchmark (v2)

    This is my second attempt at CPU-bound Insert Benchmark results with a small server. The first attempt is here and has been deprecated because sloppy programming by me meant the benchmark client was creating too many connections and that hurt results in some cases for Postgres 18 beta1.

    tl;dr

    • Performance between 17.5 and 18 beta1 is mostly similar on read-heavy steps
    • 18 beta1 might have small regressions from new CPU overheads on write-heavy steps

    Builds, configuration and hardware

    I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions  14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

    The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04 -- I used 22.04 prior to that. More details on it are here.

    For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 I used 3 variations, which are here:
    • conf.diff.cx10b_c8r32
      • uses io_method='sync' to match Postgres 17 behavior
    • conf.diff.cx10c_c8r32
      • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
    • conf.diff.cx10d_c8r32
      • uses io_method='io_uring' to do async IO via io_uring
    The Benchmark

    The benchmark is explained here and is run with 1 client and 1 table with 20M rows. I provide two performance reports:
    • one to compare Postgres 14.0 through 18 beta1, all using synchronous IO
    • one to compare Postgres 17.5 with 18 beta1 using 3 configurations for 18 beta1 -- one for each of io_method= sync, workers and io_uring.
    The benchmark steps are:

    • l.i0
      • insert 20 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
    • l.x
      • create 3 secondary indexes per table. There is one connection per client.
    • l.i1
      • use 2 connections/client. One inserts 40M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
    • l.i2
      • like l.i1 but each transaction modifies 5 rows (small transactions) and 10M rows are inserted and deleted per table.
      • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
    • qr100
      • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
    • qp100
      • like qr100 except uses point queries on the PK index
    • qr500
      • like qr100 but the insert and delete rates are increased from 100/s to 500/s
    • qp500
      • like qp100 but the insert and delete rates are increased from 100/s to 500/s
    • qr1000
      • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
    • qp1000
      • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
    Results: overview

    The performance report is here for Postgres 14 through 18 and here for Postgres 18 configurations.

    The summary sections (here and here) have 3 tables. The first shows absolute throughput per DBMS tested and benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

    Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from Postgres 17.5.

    When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
    • insert/s for l.i0, l.i1, l.i2
    • indexed rows/s for l.x
    • range queries/s for qr100, qr500, qr1000
    • point queries/s for qp100, qp500, qp1000
    Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.

    Results: Postgres 14.0 through 18 beta1

    The performance summary is here

    See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 14.0 is the base version and that is compared with more recent Postgres versions.

    For 14.0 through 18 beta1, QPS on ...
    • l.i0 (the initial load)
      • Slightly faster starting in 15.0
      • Throughput was ~4% faster starting in 15.0 and that drops to ~2% in 18 beta1
      • 18 beta1 and 17.5 have similar performance
    • l.x (create index) 
      • Faster starting in 15.0
      • Throughput is between 9% and 17% faster in 15.0 through 18 beta1
      • 18 beta1 and 17.5 have similar performance
    • l.i1 (write-only)
      • Slower starting in 15.0
      • It is ~3% slower in 15.0 and that increases to between 6% and 10% in 18 beta1
      • 18 beta1 and 17.5 have similar performance
    • l.i2 (write-only)
      • Slower starting in 15.13 with a big drop in 17.0
      • 18 beta1 with io_method= sync and io_uring is worse than 17.5. It isn't clear but one problem might be more CPU/operation (see cpupq here)
    • qr100, qr500, qr1000 (range query)
      • Stable from 14.0 through 18 beta1
    • qp100, qp500, qp1000 (point query) 
      • Stable from 14.0 through 18 beta1
    Results: Postgres 17.5 vs 18 beta1

    The performance summary is here

    See the previous section for the definition of relative QPS (rQPS). For the rQPS formula, Postgres 17.5 is the base version and that is compared with results from 18 beta1 using the three configurations explained above:
    • x10b with io_method=sync
    • x10c with io_method=worker and io_workers=16
    • x10d with io_method=io_uring
    The summary of the summary is:
    • initial load step (l.i0)
      • 18 beta1 is 1% to 3% slower than 17.5
      • This step is short running so I don't have a strong opinion on the change
    • create index step (l.x)
      • 18 beta1 is 0% to 2% faster than 17.5
      • This step is short running so I don't have a strong opinion on the change
    • write-heavy step (l.i1)
      • 18 beta1 with io_method= sync and workers has similar perf as 17.5
      • 18 beta1 with io_method=io_uring is ~4% slower than 17.5. The problem might be more CPU/operation, see cpupq here
    • write-heavy step (l.i2)
      • 18 beta1 with io_method=workers is ~2% faster than 17.5
      • 18 beta1 with io_method= sync and io_uring is 6% and 8% slower than 17.5. The problem might be more CPU/operation, see cpupq here
    • range query steps (qr100, qr500, qr1000)
      • 18 beta1 and 17.5 have similar performance
    • point query steps (qp100, qp500, qp1000)
      • 18 beta1 and 17.5 have similar performance
    The summary is:
    • initial load step (l.i0)
      • rQPS for (x10b, x10c, x10d) was (0.98, 0.99, 0.97)
    • create index step (l.x)
      • rQPS for (x10b, x10c, x10d) was (1.00, 1.02, 1.00)
    • write-heavy steps (l.i1, l.i2)
      • for l.i1 the rQPS for (x10b, x10c, x10d) was (1.01, 1.00, 0.96)
      • for l.i2 the rQPS for (x10b, x10c, x10d) was (0.94, 1.02, 0.92)
    • range query steps (qr100, qr500, qr1000)
      • for qr100 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
      • for qr500 the rQPS for (x10b, x10c, x10d) was (0.99, 1.01, 1.00)
      • for qr1000 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
    • point query steps (qp100, qp500, qp1000)
      • for qp100 the rQPS for (x10b, x10c, x10d) was (1.00, 1.00, 1.00)
      • for qp500 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 1.00)
      • for qp1000 the rQPS for (x10b, x10c, x10d) was (0.99, 1.00, 0.98)
