Tuesday, January 2, 2024

Updated Insert benchmark: Postgres 9.x to 16.x, small server, cached database

This has results for the Insert Benchmark using Postgres versions 9.x through 16.x using a small server and cached workload. The benchmark code has been updated since my last blog post for PG vs the Insert Benchmark on small servers. I also included results for the latest point releases from Postgres versions 9.0, 9.1, 9.2, 9.3, 9.4, 9.5, 9.6 and 10. Because time is finite, I didn't include results from these versions in my post about CPU performance regressions.

tl;dr

  • Comparing Postgres 16.1 to 9.0.23 all benchmark steps are faster in 16.1 except for point queries which are ~2% slower on one small server and ~10% slower on the other. This regression arrived in 9.6 and perf has been stable since then.
  • For write-heavy workloads there were regressions in the 9.X releases, but since then perf has been improving with a few exceptions (like PG 13).
  • Perf for write-heavy workloads improved a lot starting in Postgres 9.5.

Build + Configuration

I compiled Postgres from source for all versions using this script. The config files are linked below for the SER4 server. The configs for SER7 are the same except shared_buffers is increased from 10G to 23G. I tried to make the configs as similar as possible across versions.
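The per-server difference is small. A minimal illustrative fragment (not the full config files, which set many more parameters):

```ini
# Illustrative fragment only; the real configs set many more parameters.
shared_buffers = 10GB    # SER4 (16G RAM); raised to 23GB on SER7 (32G RAM)
```
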
Benchmark

The benchmark was run with 1 client using my old and new small servers.
  • SER4 - The old small server is a Beelink SER 4700u described here that has 8 cores, hyperthreads disabled, 16G RAM, Ubuntu 22.04 and XFS using an NVMe SSD.
  • SER7 - The new small server is a Beelink SER7 7840HS described here that has 8 cores, hyperthreads disabled, 32G RAM, Ubuntu 22.04 and XFS using an NVMe SSD.
I used the updated Insert Benchmark so there are more benchmark steps described below. In order, the benchmark steps are:

  • l.i0
    • insert X million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client. X is 20M for SER4 and 40M for SER7.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 50M rows and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions).
  • qr100
    • use 3 connections/client. One does range queries for 1800 seconds and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for a fixed amount of time. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
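The SLA rule for the qr*/qp* steps above can be sketched in a few lines. This is a sketch of the logic as described, not the actual benchmark code; the function name and the 95% tolerance are my assumptions.

```python
# Sketch of the SLA check for the read+write steps (qr100 ... qp1000):
# the background writer has a target rate, and failing to sustain it
# is an SLA failure. The tolerance value is an assumption, not from
# the benchmark code.
def sustained_target(inserts_done, target_rate_per_sec, duration_sec,
                     tolerance=0.95):
    """True if the background writer kept up with its target insert rate."""
    return inserts_done >= target_rate_per_sec * duration_sec * tolerance

# qr100 runs for 1800 seconds with a 100 inserts/s target (180,000 inserts)
print(sustained_target(179_000, 100, 1800))  # True: 179000 >= 171000
print(sustained_target(150_000, 100, 1800))  # False: an SLA failure
```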

Results

The performance report is here for SER4 and for SER7. It has a lot more detail including charts, tables and metrics from iostat and vmstat to help explain the performance differences.

The summary has 3 tables. The first shows absolute throughput by DBMS tested × benchmark step. The second has throughput relative to the version on the first row of the table, which makes it easy to see how performance changes over time. The third shows the background insert rate for the benchmark steps that have background inserts; all systems sustained the target rates.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is my version and $base is the version of the base case. When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
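The arithmetic is trivial, but as a sketch (the helper name is mine, not from the benchmark code):

```python
# Relative QPS as defined above: (QPS for $me / QPS for $base).
# Values > 1.0 mean performance improved vs the base version,
# values < 1.0 mean a regression.
def relative_qps(qps_me, qps_base):
    """Throughput of my version relative to the base version."""
    return round(qps_me / qps_base, 2)

# Hypothetical numbers for illustration (not taken from the report)
print(relative_qps(1160.0, 1000.0))  # 1.16 -> improvement
print(relative_qps(980.0, 1000.0))   # 0.98 -> small regression
```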
From the summary for SER4
  • The base case is Postgres 9.0.23
  • There are no regressions in Postgres 14, 15 & 16 relative to Postgres 9
  • There are regressions for some write-heavy benchmark steps from Postgres 9.0 to 9.6
  • Postgres 13.13 isn't great for write-heavy (see l.i2)
  • For read-heavy, modern Postgres is better at range queries than at point queries relative to older Postgres
  • Throughput per benchmark step in Postgres 16.1 relative to 9.0.23
    • l.i0 - relative QPS is 1.23
    • l.x - relative QPS is 1.71
    • l.i1, l.i2 - relative QPS is 3.44, 2.18
    • qr100, qr500, qr1000 - relative QPS is 1.16, 1.21, 1.27
    • qp100, qp500, qp1000 - relative QPS is 1.11, 1.03, 0.98
From the summary for SER7
  • The base case is Postgres 9.0.23
  • There are small regressions for point queries in Postgres 14, 15 & 16 relative to Postgres 9
  • There are regressions for some write-heavy benchmark steps from Postgres 9.0 to 9.6
  • Postgres 13.13 isn't great for write-heavy (see l.i2)
  • For read-heavy, modern Postgres is better at range queries than at point queries relative to older Postgres
  • Throughput per benchmark step in Postgres 16.1 relative to 9.0.23
    • l.i0 - relative QPS is 1.53
    • l.x - relative QPS is 1.69
    • l.i1, l.i2 - relative QPS is 4.05, 3.52
    • qr100, qr500, qr1000 - relative QPS is 1.38, 1.61, 1.52
    • qp100, qp500, qp1000 - relative QPS is 1.01, 0.93, 0.88