Wednesday, May 28, 2025

Postgres 18 beta1: large server, Insert Benchmark, bad configurations

While testing Postgres 18 beta1 on a large server I used several configurations with io_workers set to values that are too large and performance suffered. The default value for io_workers is 3 and that appears to be a great default. Perhaps other people won't repeat my mistakes.

tl;dr

  • the default value for io_workers is 3 and that is a good value to use
  • be careful about using larger values for io_workers as the performance penalty ranges from 0% (no penalty) to 24% (too much penalty)
Update - I am deprecating this result because my benchmark client creates a few too many connections and there is a perf regression in some cases in 18 beta1 WRT create connection performance -- see here.

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for version 18 beta1. I got the source for 18 beta1 from github using the REL_18_BETA1 tag. I started this benchmark effort a few days before the official release.

The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4. More details on it are here.

The config files for 18 beta1 use names like conf.diff.cx10cw${Z}_c32r128 where $Z is the value for io_workers. All of these use io_method='worker'. The files are here. I repeated tests for io_workers set to 2, 4, 6, 8, 16 and 32.
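
For reference, the relevant lines in those config files look roughly like the snippet below. This is an illustrative sketch rather than a copy of the files linked above, and 8 is just one of the io_workers values I tested:

io_method = 'worker'
io_workers = 8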

The Benchmark

The benchmark is explained here and is run with 20 clients and 20 tables (one table per client) and 200M rows per table.

The benchmark steps are:

  • l.i0
    • insert 200 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate. A rough SQL sketch of this insert+delete pattern is after this list.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
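
To make the l.i1 and l.i2 pattern concrete, here is a rough SQL sketch of what the two connections per client do. The table name and columns are hypothetical placeholders -- the real schema and client code live in the benchmark repo linked above:

-- connection 1: one l.i1 transaction inserts 50 rows (5 rows for l.i2); 2 shown here
BEGIN;
INSERT INTO t1 (transactionid, price, data) VALUES (1000001, 3.50, 'ab'), (1000002, 7.25, 'cd');
COMMIT;

-- connection 2: a matching transaction deletes the oldest rows at the same rate
BEGIN;
DELETE FROM t1 WHERE transactionid BETWEEN 1 AND 50;
COMMIT;
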
Results: overview

The performance report is here.

The summary section has 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some value of io_workers and $base is the result with io_workers=2.

When relative QPS is > 1.0 then performance is better than the base config. When it is < 1.0 then there is a regression. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.
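
To make the arithmetic concrete: if the base config (io_workers=2) sustains 100,000 inserts/s during l.i0 and another config sustains 90,000 inserts/s, then the relative QPS for that config is 90000 / 100000 = 0.90 and I report a 10% regression. The numbers here are made up to illustrate the math and are not taken from the report.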

Results: details

The performance summary is here.

The summary of the summary is that larger values for io_workers ...
  • increase throughput by up to 4% for the initial load (l.i0) 
  • increase throughput by up to 12% for create index (l.x)
  • decrease throughput by up to 6% for write heavy (l.i1)
  • decrease throughput by up to 16% for write heavy (l.i2)
  • decrease throughput by up to 3% for range queries; note that these steps are CPU-bound
  • decrease throughput by up to 24% for point queries; note that these steps are IO-bound
The summary is:
  • the initial load step (l.i0)
    • rQPS for io_workers in (4, 6, 8, 16) was (1.03, 1.03, 1.03, 1.02, 1.04) so these were slightly faster than io_workers=2.
    • rQPS for io_workers=32 was 1.00
  • the create index step (l.x)
    • rQPS for io_workers in (4, 6, 8, 16, 32) was (1.06, 1.05, 1.07, 1.12, 1.11) so these were all faster than io_workers=2.
  • the write-heavy steps (l.i1, l.i2)
    • for l.i1 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.98, 0.99, 0.99, 0.96, 0.94)
    • for l.i2 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.84, 0.95, 0.90, 0.88, 0.88)
    • I am surprised that larger values for io_workers don't help here but did help during the previous steps (l.i0, l.x) which are also write heavy.
  • the range query steps (qr100, qr500, qr1000)
    • for qr100 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.99, 0.99, 0.99, 0.99, 0.99)
    • for qr500 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.98, 0.98, 0.98, 0.97, 0.97)
    • for qr1000 the rQPS for io_workers in (4, 6, 8, 16, 32) was (1.01, 1.00, 0.99, 0.98, 0.97)
    • note that this step is usually CPU-bound for Postgres because the indexes fit in memory
  • the point query steps (qp100, qp500, qp1000)
    • for qp100 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.98, 0.98, 0.97, 0.94, 0.90)
    • for qp500 the rQPS for io_workers in (4, 6, 8, 16, 32) was (1.00, 0.98, 0.97, 0.89, 0.81)
    • for qp1000 the rQPS for io_workers in (4, 6, 8, 16, 32) was (0.99, 0.95, 0.93, 0.86, 0.76)
    • these steps are IO-bound
For the regressions in one of the write-heavy steps (l.i2) I don't see an obvious problem in the vmstat and iostat metrics -- the amount of CPU, context switches and IO per operation have some variance but there isn't a difference that explains the change.

For the regressions in the point query steps (qp100, qp500, qp1000) the vmstat and iostat metrics for qp1000 help to explain the problem. Metrics that increase as io_workers increases include:
  • CPU/operation (see cpupq) has a large increase
  • context switches /operation (see cspq) has a small increase
  • iostat reads /operation (rpq) and KB read /operation (rkbpq) have small increases
Finally, average rates from iostat. These are not normalized by QPS. There aren't many differences, although rps (reads/s) is higher for io_workers=2 because throughput was higher in that case.

Legend:
* rps, wps - read /s and write /s
* rKBps, wKBps - KB read /s & KB written /s
* rawait, wawait - read & write latency
* rareqsz, wareqsz - read & write request size

-- from l.i2 benchmark step

rps     rKBps   rawait  rareqsz wps     wKBps   wawait  wareqsz io_workers
3468    34622   0.08    8.9     5374    85567   1.41    17.3     2
2959    24026   0.08    8.3     4866    74547   0.05    17.5    32

-- from qp1000 benchmark step

rps     rKBps   rawait  rareqsz wps     wKBps   wawait  wareqsz io_workers
81949   659030  0.13    8.0     39546   589789  168.21  16.5     2
68257   549016  0.12    8.0     36005   549028  130.44  16.2    32
 

Postgres 18 beta1: small server, IO-bound Insert Benchmark

I recently published results for Postgres 18 beta1 on a small server using the Insert Benchmark with a cached workload and low concurrency. Here I share results for it with an IO-bound workload.

tl;dr - for 17.5 vs 18 beta1

  • the write-heavy steps (l.i1, l.i2) are up to 5% slower in 18 beta1 vs 17.5
  • the range query steps (qr100, qr500, qr1000) are up to 3% slower in 18 beta1 vs 17.5
  • the point query steps (qp100, qp500, qp1000) are up to 2% slower in 18 beta1 vs 17.5
tl;dr for 14.0 through 18 beta1
  • the write-heavy steps (l.i1, l.i2) are up to 15% slower in 18 beta1 vs 14.0
  • the range query steps (qr100, qr500, qr1000) are up to 4% slower in 18 beta1 vs 14.0
  • the point query steps (qp100, qp500, qp1000) are up to 1% faster in 18 beta1 vs 14.0
Update - I am deprecating this result because my benchmark client creates a few too many connections and there is a perf regression in some cases in 18 beta1 WRT create connection performance -- see here.

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions  14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04 -- I used 22.04 prior to that. More details on it are here.

For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 I used 3 variations (sketched as postgresql.conf lines after this list), which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
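
The settings that differ between these three variants amount to a few lines of postgresql.conf, roughly as sketched below (the full config diffs are in the files linked above):

# cx10b_c8r32 - matches Postgres 17 behavior
io_method = 'sync'

# cx10c_c8r32 - async IO via a pool of IO workers
io_method = 'worker'
io_workers = 16

# cx10d_c8r32 - async IO via io_uring
io_method = 'io_uring'
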
The Benchmark

The benchmark is explained here and is run with 1 client and 1 table with 800M rows.

The benchmark steps are:

  • l.i0
    • insert 800 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 4M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 1M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance reports are here for:
  • All versions -- 14.0 through 18 beta1
    • See here, this uses the results from 14.0 as the base version
  • Only 17.5 and 18 beta1
    • See here, this uses the results from 17.5 as the base version and there are three results for 18 beta1, one for each of the configurations listed above.
The summary sections linked above from the performance report have 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from either 14.0 or 17.5.

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.

Results: 17.5 and 18 beta1

The performance summary is here.

Below I use relativeQPS (rQPS) to compare 18 beta1 with 17.5, when rQPS is > 1 then 18 beta1 is faster than 17.5, when rQPS is < 1 then 18 beta1 is slower, when it is 1.0 then they have the same throughput. When rQPS is 0.90 then I might say that 18 beta1 is 10% slower.

The summary of the summary is:
  • the write-heavy steps (l.i1, l.i2) are up to 5% slower in 18 beta1 vs 17.5
  • the range query steps (qr100, qr500, qr1000) are up to 3% slower in 18 beta1 vs 17.5
  • the point query steps (qp100, qp500, qp1000) are up to 2% slower in 18 beta1 vs 17.5
The summary is:
  • the initial load step (l.i0)
    • rQPS is (1.00, 0.99, 1.00) with io_method= (sync, worker, io_uring) vs 17.5
  • the create index step (l.x)
    • rQPS is (1.00, 1.02, 1.00) with io_method= (sync, worker, io_uring) vs 17.5
  • the write-heavy steps (l.i1, l.i2)
    • rQPS is (0.95, 0.98) in 18 beta1 with io_method=sync vs 17.5
    • rQPS is (0.98, 0.96) in 18 beta1 with io_method=worker vs 17.5
    • rQPS is (0.99, 0.98) in 18 beta1 with io_method=io_uring vs 17.5
  • the range query steps (qr100, qr500, qr1000)
    • rQPS is (0.98, 0.97, 0.98) in 18 beta1 with io_method=sync vs 17.5
    • rQPS is (0.99, 0.97, 0.97) in 18 beta1 with io_method=worker vs 17.5
    • rQPS is (0.99, 0.99, 0.99) in 18 beta1 with io_method=io_uring vs 17.5
  • the point query steps (qp100, qp500, qp1000)
    • rQPS is (1.00, 1.00, 0.99) in 18 beta1 with io_method=sync vs 17.5
    • rQPS is (0.99, 0.99, 0.98) in 18 beta1 with io_method=worker vs 17.5
    • rQPS is (1.00, 0.99, 0.98) in 18 beta1 with io_method=io_uring vs 17.5
The regressions in the write-heavy steps (l.i1, l.i2) might be explained by new CPU overhead. See the cpupq column here (cpupq is CPU/operation). Otherwise, the vmstat and iostat metrics, when divided by throughput, look similar. From the throughput vs time charts, the performance bottleneck was the response time for deletes.

The regressions in the range query steps might also be explained by new CPU overhead. See the cpupq column here (cpupq is CPU/operation) for qr100, qr500 and qr1000. Otherwise the iostat and vmstat metrics look similar.

Results: 14.0 through 18 beta1

The performance summary is here.

Below I use relativeQPS (rQPS) to compare 18 beta1 with 14.0, when rQPS is > 1 then 18 beta1 is faster than 14.0, when rQPS is < 1 then 18 beta1 is slower, when it is 1.0 then they have the same throughput. When rQPS is 0.90 then I might say that 18 beta1 is 10% slower.

The summary of the summary is:
  • the write-heavy steps (l.i1, l.i2) are up to 15% slower in 18 beta1 vs 14.0
  • the range query steps (qr100, qr500, qr1000) are up to 4% slower in 18 beta1 vs 14.0
  • the point query steps (qp100, qp500, qp1000) are up to 1% faster in 18 beta1 vs 14.0
Comparing 18 beta1 with io_method=sync vs 14.0
  • the initial load step (l.i0)
    • rQPS is 1.01 for 18 beta1 vs 14.0
  • the create index step (l.x)
    • rQPS is 1.14 for 18 beta1 vs 14.0
  • the write-heavy steps (l.i1, l.i2)
    • rQPS is (0.87, 0.85) for 18 beta1 vs 14.0
    • Regressions for these steps are not new; they started in the 14.x releases
  • the range query steps (qr100, qr500, qr1000)
    • rQPS is (0.99, 0.98, 0.96) for 18 beta1 vs 14.0
  • the point query steps (qp100, qp500, qp1000)
    • rQPS is (1.01, 1.00, 1.00) for 18 beta1 vs 14.0
The regressions in the write-heavy steps (l.i1, l.i2) might be explained by new CPU overhead. See the cpupq column here (cpupq is CPU/operation). Otherwise, the vmstat and iostat metrics, when divided by throughput, look similar. From the throughput vs time charts, the performance bottleneck was the response time for deletes.

The regressions in the range query steps might also be explained by new CPU overhead. See the cpupq column here (cpupq is CPU/operation) for qr100, qr500 and qr1000. Otherwise the iostat and vmstat metrics look similar.

Tuesday, May 27, 2025

Postgres 18 beta1: small server, cached Insert Benchmark

I recently published results for Postgres 18 beta1 on a small server using sysbench with a cached and IO-bound workload. This post has results for the Insert Benchmark on a small server with a cached workload and low concurrency.

tl;dr - for 17.5 vs 18 beta1

  • the l.i1 benchmark step (write-only with inserts and deletes) was ...
    • 5% slower in 18 beta1 with io_method=sync
    • ~10% slower in 18 beta1 with io_method= worker or io_uring
  • the point query benchmark steps (qp100, qp500, qp1000) were ...
    • 1% or 2% slower in 18 beta1 when using io_method= sync or worker
    • ~6% slower in 18 beta1 when using io_method=io_uring
tl;dr for 14.0 through 18 beta1
  • l.x (create index) is ~1.2X faster in 17.5 vs 14.0
  • l.i1, l.i2 (write-only) are ~5% slower in 17.5 vs 14.0
  • qp100, qp500, qp1000 (point query) are 1% to 3% slower in 17.5 vs 14.0
Update - I am deprecating this result because my benchmark client creates a few too many connections and there is a perf regression in some cases in 18 beta1 WRT create connection performance -- see here. A result without the regression is here.

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions  14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04 -- I used 22.04 prior to that. More details on it are here.

For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 I used 3 variations, which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
The Benchmark

The benchmark is explained here and is run with 1 client and 1 table with 20M rows.

The benchmark steps are:

  • l.i0
    • insert 20 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 40M rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 10M rows are inserted and deleted per table.
    • Wait for X seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of X is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
Results: overview

The performance reports are here for:
  • All versions -- 14.0 through 18 beta1
    • See here, this uses the results from 14.0 as the base version
  • Only 17.5 and 18 beta1
    • See here, this uses the results from 17.5 as the base version and there are three results for 18 beta1, one for each of the configurations listed above.
The summary sections linked above from the performance report have 3 tables. The first shows absolute throughput by DBMS tested X benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

Below I use relative QPS to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result from either 14.0 or 17.5.

When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with red for <= 0.95, green for >= 1.05 and grey for values between 0.95 and 1.05.

Results: 17.5 and 18 beta1

The performance summary is here.

The summary is:
  • the l.i0 (initial load) step was ...
    • 1% or 2% faster in 18 beta1 vs 17.5
  • the create index step (l.x) was ...
    • as fast with 18 beta1 as with 17.5 when using io_method=sync
    • 2% slower in 18 beta1 when using the new io_method= worker or io_uring
  • the l.i1 step was ...
    • 5% slower in 18 beta1 with io_method=sync
    • ~10% slower in 18 beta1 with io_method =worker or =io_uring
  • the range query steps (qr100, qr500, qr1000) were ...
    • 1% to 3% slower in 18 beta1
  • the point query steps (qp100, qp500, qp1000) were ...
    • 1% or 2% slower in 18 beta1 when using io_method =sync or =worker
    • ~6% slower in 18 beta1 when using io_method=io_uring
For regressions in the l.i1 step
  • This step does inserts and deletes as fast as possible with 50 rows per transaction. The regressions were smaller for the l.i2 step that only changes 5 rows per transaction.
  • From vmstat and iostat metrics 18 beta1 uses more CPU per operation (see cpupq here)
For regressions in the point query steps (qp100, qp500, qp1000)
  • The worst regression is from 18 beta1 with io_method=io_uring and the CPU /operation there is the largest. See cpupq for qp100, qp500 and qp1000.
Results: 14.0 through 18 beta1

The performance summary is here.

For 17.5 vs 18 beta1 see the previous section.

For 14.0 through 17.5, QPS on ...
  • l.i0 (the initial load) is stable
  • l.x (create index) is ~1.2X faster in 17.5 vs 14.0
  • l.i1, l.i2 (write-only) is ~5% slower in 17.5 vs 14.0
  • qr100, qr500, qr1000 (range query) is similar between 17.5 and 14.0
  • qp100, qp500, qp1000 (point query) is 1% to 3% slower in 17.5 vs 14.0

Sunday, May 25, 2025

Postgres 18 beta1: large server, sysbench

This has performance results for Postgres 18 beta1 and 17.4 from the sysbench benchmark and a large server. Results like this from me are usually boring because Postgres has done a great job at avoiding performance regressions over time. This work was done by Small Datum LLC and not sponsored.

The workload here is cached by Postgres and my focus is on regressions from new CPU overhead or mutex contention.

tl;dr

  • scan is faster for 18 beta1 
  • some range queries without aggregation are ~3% slower for 18 beta1
  • some writes are 2% to 5% slower for 18 beta1
I have work in progress to reproduce and/or explain this. There might be regressions or this might just be noise that is to be expected from a complex system. But when all three of the 18 beta1 results show a problem then I suspect there is a regression. The many cases where 18 beta1 is slower than 17.4 are accompanied by an increase in CPU and context switches so I hope to explain why 18 beta1 does that.

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions 17.4 and 18 beta1. I got the source for 18 beta1 from github using the REL_18_BETA1 tag. I started this benchmark effort a few days before the official release.

The server is an ax162-s from Hetzner with an AMD EPYC 9454P processor, 48 cores, AMD SMT disabled and 128G RAM. The OS is Ubuntu 22.04. Storage is 2 NVMe devices with SW RAID 1 and ext4. More details on it are here.

The config file for 17.4 is conf.diff.cx10a_c32r128.

The config files for 18 beta1 are:
  • conf.diff.cx10b_c32r128
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c32r128
    • uses io_method='worker' and io_workers=32 to do async IO via a thread pool. I eventually learned that 32 is too large but I don't think it matters much on this workload.
  • conf.diff.cx10d_c32r128
    • uses io_method='io_uring' to do async IO via io_uring
Benchmark

I used sysbench and my usage is explained here. To save time I only run 27 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.

The tests run with 40 clients and 8 tables with 10M rows per table. The read-heavy microbenchmarks run for 300 seconds and write-heavy run for 600 seconds.

The command line to run all tests is:  bash r.sh 8 10000000 300 600 $deviceName 1 1 40
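
Based on the description above, my reading of the positional arguments is sketched below; the two '1' arguments are options for the script that I don't describe here:

# bash r.sh <tables> <rows per table> <read-heavy secs> <write-heavy secs> <device> 1 1 <clients>
bash r.sh 8 10000000 300 600 $deviceName 1 1 40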

Results

I don't provide graphs in this post to save time and because there are few to no regressions from Postgres 17.4 to 18 beta1. The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I do provide tables below with relative QPS. The relative QPS is the following:
(QPS for PG 18beta1) / (QPS for PG 17.4)
When the relative QPS is > 1 then 18 beta1 is faster than 17.4. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided. These can help to explain why something is faster or slower because they show how much HW is used per request.

Results: Postgres 18 beta1 vs 17.4

For the results I compare throughput from Postgres 17.4 with 18 beta1 using the three configurations listed above: cx10b_c8r32, cx10c_c8r32 and cx10d_c8r32. 

The hardware efficiency metrics, counters from iostat and vmstat normalized by QPS, are here.

For these tests the database should be cached, the database directory (size on disk) is ~24G and shared_buffers is set to 96G.

The results are interesting and require more debugging to explain; there might be regressions or this might be natural jitter in my results. However, when the jitter makes all three of the 18 beta1 results worse then I don't think it is jitter.
  • scan is faster in pg18 beta1 than 17.4
  • there might be a small regression (about 3%) in 18 beta1 for range queries without aggregation
    • using range-covered-pk* as the example, CPU/query is larger in 18 beta1, see cpu/o here
  • there might be small regressions for writes in 18 beta1
    • for update-nonindex* the CPU, context switches and KB written to storage /query are larger in 18 beta1, see cpu/o, cs/o, and wKB/o here
    • While update-nonindex* is the outlier for an increase in wKB/o, the increases in CPU/query occur in most cases for 18 beta1

Relative to: PG 17.4 with cx10a_c32r128
col-1 : PG 18b1git with cx10b_c32r128
col-2 : PG 18b1git with cx10c_c32r128
col-3 : PG 18b1git with cx10d_c32r128

col-1   col-2   col-3   --> point queries
0.99    1.02    1.00    hot-points_range=100
0.99    0.98    1.01    point-query_range=100
1.00    1.01    0.99    points-covered-pk_range=100
1.03    1.00    1.03    points-covered-si_range=100
0.99    1.00    0.99    points-notcovered-pk_range=100
1.02    1.00    1.02    points-notcovered-si_range=100
1.00    1.01    1.00    random-points_range=1000
1.00    1.01    0.99    random-points_range=100
1.00    1.02    1.00    random-points_range=10

col-1   col-2   col-3   --> range queries without aggregation
0.96    0.96    0.97    range-covered-pk_range=100
0.96    0.96    0.96    range-covered-si_range=100
0.97    0.98    0.97    range-notcovered-pk_range=100
0.99    1.00    0.99    range-notcovered-si_range=100
1.13    1.02    1.13    scan_range=100

col-1   col-2   col-3   --> range queries with aggregation
1.01    1.00    1.01    read-only-count_range=1000
1.00    1.00    1.00    read-only-distinct_range=1000
0.99    1.00    0.99    read-only-order_range=1000
1.01    1.01    1.01    read-only_range=10000
0.99    1.00    0.99    read-only_range=100
0.97    0.98    0.98    read-only_range=10
0.99    1.00    1.00    read-only-simple_range=1000
1.00    1.00    1.00    read-only-sum_range=1000

col-1   col-2   col-3   --> writes
0.97    1.00    0.99    delete_range=100
0.97    0.98    0.97    insert_range=100
0.98    0.98    0.98    read-write_range=100
0.96    0.98    0.97    read-write_range=10
1.00    0.99    0.99    update-index_range=100
1.00    1.00    1.00    update-inlist_range=100
0.94    0.96    0.94    update-nonindex_range=100
0.90    1.03    0.98    update-one_range=100
0.96    0.96    0.95    update-zipf_range=100
1.00    1.00    0.99    write-only_range=10000

Saturday, May 24, 2025

Postgres 18 beta1: small server, sysbench, IO-bound

This has performance results for Postgres 18 beta1 and several older Postgres releases using the sysbench benchmark and a small server. The difference between this post and the previous post is that the working set here is larger than memory while it was cached by Postgres in the previous post.

I almost always run sysbench with a cached workload so there might be more noise in the results here.

Results like this from me are usually boring because Postgres has done a great job at avoiding performance regressions over time. This work was done by Small Datum LLC and not sponsored.

tl;dr - with io_method='worker' in Postgres 18 beta1

  • IO-bound scans are 1.53X faster
  • Context switches per query are up ~10X. I don't know if this is a problem.
  • But I had set io_workers to 16 which was probably too big
tl;dr - with io_method='io_uring' in Postgres 18 beta1
  • IO-bound scans are 1.29X faster
  • Context switches per query are up ~5X. I don't know if this is a problem.
tl;dr - with io_method='sync' in Postgres 18 beta1
  • Performance is similar to Postgres 17.5 which is good
Builds, configuration and hardware

I used a small server with 8 cores, 32G of RAM and 1 NVMe device. More details are in the previous post.

For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 the configuration files are here and I used 3 variations, which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large but I don't think it matters much on this workload.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
Benchmark

I used sysbench and my usage is explained here. To save time I only run 27 of the 42 microbenchmarks and most test only 1 type of SQL statement. For most of the microbenchmarks the working set is larger than memory.

The tests run with 1 client and 1 table with 500M rows. The read-heavy microbenchmarks run for 300 seconds and write-heavy run for 600 seconds.

The command line to run all tests is:  bash r.sh 1 500000000 300 600 $deviceName 1 1 1

Results

I don't provide graphs in this post to save time and because there are few to no regressions from Postgres 17.5 to 18 beta1. The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I do provide tables below with relative QPS. The relative QPS is the following:
(QPS for $some_version) / (QPS for $base_version)
And $base_version is either Postgres 17.5 or Postgres 14.0 as specified below. When the relative QPS is > 1 then $some_version is faster than $base_version.  When it is < 1 then there might be a regression.

Values from iostat and vmstat divided by QPS are also provided. These can help to explain why something is faster or slower because they show how much HW is used per request.

Results: Postgres 18 beta1 vs 17.5

For the results here $base_version is Postgres 17.5 and that is compared with Postgres 18 beta1 using the three configurations listed above: cx10b_c8r32, cx10c_c8r32 and cx10d_c8r32. 

The hardware efficiency metrics, counters from iostat and vmstat normalized by QPS, are here.

Results for the scan microbenchmark are much better with new configs I can use with 18 beta1:
  • for the cx10c_c8r32 config that uses io_method='worker' and io_workers=16 
    • throughput is 1.53X better than Postgres 17.5
    • the rate of context switches per query is almost 10X larger with this config, see cs/o here
    • the IO efficiency rates (reads/query & read KB /query) are similar to 17.5
    • I assume that 16 is too large for io_workers
  • for the cx10d_c8r32 config that uses  io_method='io_uring'  
    • throughput is 1.29X better than Postgres 17.5
    • the rate of context switches per query is about 5X larger with this config, see cs/o here
    • the IO efficiency rates (reads/query & read KB /query) are similar to 17.5
More iostat and vmstat metrics from the scan microbenchmark

Legend:
* r/s - iostat reads /s
* rMB/s - iostat read MB /s
* r/o - iostat reads /query
* rKB/o - iostat read KB /query
* o/s - queries /s
* cs/s - context switches /s
* cpu/s - cpu utilization (vmstat us + sy)
* cs/o - context switches /query
* cpu/o - cpu microseconds /query

r/s     rMB/s   r/o     rKB/o   o/s
3717.7  434.5   1.787   213.909 2080
6845.4  651.3   2.170   211.381 3155
5015.9  556.7   1.878   213.417 2671

cs/s    cpu/s   cs/o    cpu/o
1699    13.1    0.817   0.006282
23639   20.2    7.493   0.006395
12487   17.1    4.675   0.006417

Relative QPS per microbenchmark

Relative to: PG 17.5 with cx10a_c8r32
col-1 : PG 18 beta1 with cx10b_c8r32
col-2 : PG 18beta1 with cx10c_c8r32
col-3 : PG 18beta1 with cx10d_c8r32

col-1   col-2   col-3   --> point queries
1.00    1.00    1.01    hot-points_range=100
0.99    0.99    0.99    point-query_range=100
1.00    1.00    1.02    points-covered-pk_range=100
1.01    1.02    1.00    points-covered-si_range=100
1.00    0.99    0.99    points-notcovered-pk_range=100
0.99    0.98    0.99    points-notcovered-si_range=100
1.00    1.00    1.00    random-points_range=1000
1.00    0.99    1.00    random-points_range=100
0.99    0.95    1.02    random-points_range=10

col-1   col-2   col-3   --> range queries, part 1, no aggregation
0.97    0.97    0.99    range-covered-pk_range=100
0.98    0.98    1.00    range-covered-si_range=100
0.99    0.99    0.99    range-notcovered-pk_range=100
0.99    0.99    1.00    range-notcovered-si_range=100
1.01    1.53    1.29    scan_range=100

col-1   col-2   col-3   --> range queries, part 2, with aggregation
0.99    1.00    1.04    read-only-count_range=1000
0.99    0.99    0.99    read-only-distinct_range=1000
0.98    0.99    0.99    read-only-order_range=1000
0.99    0.98    0.98    read-only_range=10000
0.99    0.99    0.99    read-only_range=100
0.99    0.99    0.99    read-only_range=10
0.99    0.99    0.99    read-only-simple_range=1000
0.99    0.99    0.99    read-only-sum_range=1000

col-1   col-2   col-3   --> writes
0.99    1.00    1.00    delete_range=100
0.99    0.98    0.98    insert_range=100
0.99    0.99    0.99    read-write_range=100
1.00    0.99    0.99    read-write_range=10
1.00    1.00    0.99    update-index_range=100
0.99    0.97    1.00    update-inlist_range=100
1.02    1.02    0.99    update-nonindex_range=100
0.98    0.98    0.99    update-one_range=100
1.00    1.00    0.98    update-zipf_range=100
1.00    1.00    1.00    write-only_range=10000

Results: Postgres 14.0 through 18 beta1

For the results here $base_version is Postgres 14.0 and that is compared with more recent Postgres releases. The purpose for this is to see how performance changes over time. Postgres 18beta1 is between 7% slower and 13% faster than 14.0 and there are more improvements than regressions. This is yet another boring result from Postgres, but it is great to see that it continues to focus on CPU efficiency.

The hardware efficiency metrics, counters from iostat and vmstat normalized by QPS, are here.

The data below is also here which can be easier to read.

The results for 18 beta1 are similar to 17.5.

Relative to: PG 14.0 with cx10a_c8r32
col-1 : PG 14.18 with cx10a_c8r32
col-2 : PG 15.0 with cx10a_c8r32
col-3 : PG 15.13 with cx10a_c8r32
col-4 : PG 16.0 with cx10a_c8r32
col-5 : PG 16.9 with cx10a_c8r32.
col-6 : PG 17.0 with cx10a_c8r32
col-7 : PG 17.5 with cx10a_c8r32
col-8 : PG 18 beta1 with cx10b_c8r32

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> point queries
0.99    1.00    1.00    0.99    1.00    2.03    2.00    2.00    hot-points_range=100
0.99    0.99    0.99    0.99    0.99    0.99    1.00    0.98    point-query_range=100
0.98    1.01    1.00    1.02    1.01    0.99    1.00    1.01    points-covered-pk_range=100
1.00    0.98    0.97    1.00    1.00    0.97    0.97    0.98    points-covered-si_range=100
1.00    1.00    1.00    1.00    1.00    0.98    1.00    1.00    points-notcovered-pk_range=100
1.01    1.01    1.01    1.01    1.01    1.00    1.01    1.00    points-notcovered-si_range=100
1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00    random-points_range=1000
1.00    1.00    1.00    1.00    1.00    0.96    1.00    1.00    random-points_range=100
1.00    1.00    1.00    1.00    1.00    1.00    1.00    0.99    random-points_range=10

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> range queries, no aggregation
0.98    0.98    0.96    0.97    0.96    0.99    0.98    0.95    range-covered-pk_range=100
0.97    0.97    0.97    0.98    0.96    0.99    0.97    0.95    range-covered-si_range=100
1.00    0.99    1.00    0.99    0.99    0.98    0.99    0.98    range-notcovered-pk_range=100
1.01    1.00    1.01    1.01    1.01    1.00    1.01    1.00    range-notcovered-si_range=100
0.94    0.87    0.85    0.94    0.96    0.97    0.96    0.97    scan_range=100

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> range queries, with aggregation
1.00    1.00    1.00    1.00    0.99    0.99    1.00    0.99    read-only-count_range=1000
0.99    0.99    0.99    1.01    1.00    1.00    1.01    0.99    read-only-distinct_range=1000
0.99    1.00    1.01    1.02    1.00    1.01    1.02    1.00    read-only-order_range=1000
1.00    1.02    1.01    1.03    1.03    1.03    1.05    1.03    read-only_range=10000
1.00    0.99    0.99    1.00    1.00    1.00    1.00    0.99    read-only_range=100
0.99    0.99    0.99    1.00    0.99    1.00    1.00    0.99    read-only_range=10
0.99    0.98    0.99    0.99    0.99    0.99    1.00    0.98    read-only-simple_range=1000
1.00    0.99    1.00    1.00    0.99    1.00    1.00    0.99    read-only-sum_range=1000

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> writes
0.99    0.99    0.99    0.99    0.99    0.99    1.00    1.00    delete_range=100
1.01    0.99    0.99    0.99    1.00    1.01    1.01    1.00    insert_range=100
1.02    1.05    1.03    1.04    1.01    0.97    0.97    0.97    read-write_range=100
1.03    1.03    1.04    1.04    1.03    0.94    0.94    0.94    read-write_range=10
1.03    1.03    1.03    1.04    1.02    1.05    1.04    1.04    update-index_range=100
0.99    0.99    0.99    0.99    0.99    1.05    1.05    1.04    update-inlist_range=100
1.01    1.03    1.02    1.02    1.01    1.03    1.02    1.05    update-nonindex_range=100
0.98    0.99    0.99    0.99    0.99    1.11    1.11    1.08    update-one_range=100
1.00    0.99    1.00    1.00    1.00    0.98    0.98    0.98    update-zipf_range=100
1.00    1.00    1.00    1.00    1.00    0.89    0.89    0.89    write-only_range=10000


Friday, May 23, 2025

Postgres 18 beta1: small server, sysbench

This has performance results for Postgres 18 beta1 and several older Postgres releases using the sysbench benchmark and a small server. Results like this from me are usually boring because Postgres has done a great job at avoiding performance regressions over time. This work was done by Small Datum LLC and not sponsored.

tl;dr - from Postgres 17.5 to 18 beta1

  • there might be small regressions (1% to 4%) in a few range query microbenchmarks
  • there might be a few small improvements (<= 3%) in other microbenchmarks
  • overall there are neither big improvements nor big regressions which is great news
tl;dr - from Postgres 14.0 to 18 beta1
  • 18 beta1 ranges from 7% slower to 13% faster
  • there are more improvements than regressions
  • the largest regressions occur on range query microbenchmarks

Builds, configuration and hardware

I compiled Postgres from source using -O2 -fno-omit-frame-pointer for versions  14.0, 14.18, 15.0, 15.13, 16.0, 16.9, 17.0, 17.5 and 18 beta1.

The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM and one NVMe device for the database. The OS has been updated to Ubuntu 24.04 -- I used 22.04 prior to that. More details on it are here.

For Postgres versions 14.0 through 17.5 the configuration files are in the pg* subdirectories here with the name conf.diff.cx10a_c8r32. For Postgres 18 beta1 the configuration files are here and I used 3 variations, which are here:
  • conf.diff.cx10b_c8r32
    • uses io_method='sync' to match Postgres 17 behavior
  • conf.diff.cx10c_c8r32
    • uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large but I don't think it matters much on this workload.
  • conf.diff.cx10d_c8r32
    • uses io_method='io_uring' to do async IO via io_uring
Benchmark

I used sysbench and my usage is explained here. To save time I only run 27 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by Postgres.

The tests run with 1 client and 1 table with 50M rows. The read-heavy microbenchmarks run for 300 seconds and write-heavy run for 600 seconds.

The command line to run all tests is:  bash r.sh 1 50000000 300 600 $deviceName 1 1 1

Results

I don't provide graphs in this post to save time and because there are few to no regressions from Postgres 17.5 to 18 beta1. The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

I do provide tables below with relative QPS. The relative QPS is the following:
(QPS for $some_version) / (QPS for $base_version)
And $base_version is either Postgres 17.5 or Postgres 14.0 as specified below. When the relative QPS is > 1 then $some_version is faster than $base_version.  When it is < 1 then there might be a regression.

Values from iostat and vmstat divided by QPS are also provided. These can help to explain why something is faster or slower because they show how much HW is used per request.

Results: Postgres 18 beta1 vs 17.5

For the results here $base_version is Postgres 17.5 and that is compared with Postgres 18 beta1 using the three configurations listed above: cx10b_c8r32, cx10c_c8r32 and cx10d_c8r32. 

The hardware efficiency metrics, counters from iostat and vmstat normalized by QPS, are here.

From the relative QPS results below, there might be small regressions for the microbenchmarks highlighted in yellow -- range-covered-pk*, range-covered-si*, range-notcovered-pk*, read-only-distinct*. All of these do range queries and the last also does aggregation. If there is a regression it is small - between 1% and 4%. However, that requires more investigation because small regressions are not easy to detect with sysbench.

The HW efficiency metrics show there is more CPU per query. The results are here for range-covered-pk*, range-covered-si*, range-notcovered-pk* and read-only-distinct*.

I will revisit this.

The results also show possible small improvements for many of the microbenchmarks. I highlight them in green and again am not certain about the results. The changes (both good and bad) are small enough in most cases that they can be from normal variance.

Relative to: PG 17.5 with cx10a_c8r32
col-1 : PG 18beta1 with cx10b_c8r32
col-2 : PG 18beta1 with cx10c_c8r32
col-3 : PG 18beta1 with cx10d_c8r32

col-1   col-2   col-3   --> point queries
1.01    1.01    0.99    hot-points_range=100
1.01    1.00    1.01    point-query_range=100
1.00    0.99    0.98    points-covered-pk_range=100
0.99    0.99    0.99    points-covered-si_range=100
1.00    1.01    1.01    points-notcovered-pk_range=100
1.01    1.00    1.00    points-notcovered-si_range=100
1.00    1.01    1.01    random-points_range=1000
1.03    1.01    1.02    random-points_range=100
1.01    1.01    1.02    random-points_range=10

col-1   col-2   col-3   --> range queries, part 1, no aggregation
0.98    0.98    0.98    range-covered-pk_range=100
0.98    0.99    0.99    range-covered-si_range=100
0.99    0.98    0.98    range-notcovered-pk_range=100
0.99    0.99    1.00    range-notcovered-si_range=100
1.02    1.02    1.03    scan_range=100

col-1   col-2   col-3   --> range queries, part 2, with aggregation
1.04    1.03    1.02    read-only-count_range=1000
0.97    0.96    0.96    read-only-distinct_range=1000
1.00    1.00    0.99    read-only-order_range=1000
1.00    1.00    1.00    read-only_range=10000
1.00    0.99    0.99    read-only_range=100
1.00    0.99    1.00    read-only_range=10
1.01    1.00    1.00    read-only-simple_range=1000
1.02    1.02    1.02    read-only-sum_range=1000

col-1   col-2   col-3   --> writes
0.99    0.99    0.99    delete_range=100
0.99    0.99    0.99    insert_range=100
1.00    0.99    0.99    read-write_range=100
1.00    0.99    1.00    read-write_range=10
0.99    1.00    1.00    update-index_range=100
1.00    0.99    0.99    update-inlist_range=100
1.00    1.00    1.00    update-nonindex_range=100
1.00    1.00    1.00    update-one_range=100
1.00    0.99    1.00    update-zipf_range=100
0.99    0.99    1.00    write-only_range=10000

Results: Postgres 14.0 through 18 beta1

For the results here $base_version is Postgres 14.0 and that is compared with more recent Postgres releases. The purpose for this is to see how performance changes over time. Postgres 18beta1 is between 7% slower and 13% faster than 14.0 and there are more improvements than regressions. This is yet another boring result from Postgres, but it is great to see that it continues to focus on CPU efficiency.

The hardware efficiency metrics, counters from iostat and vmstat normalized by QPS, are here.

The data below is also here which can be easier to read.

Relative to: PG 14.0 cx10a_c8r32
col-1 : PG 14.18 cx10a_c8r32
col-2 : PG 15.0 cx10a_c8r32
col-3 : PG 15.13 cx10a_c8r32
col-4 : PG 16.0 cx10a_c8r32
col-5 : PG 16.9 cx10a_c8r32
col-6 : PG 17.0 cx10a_c8r32
col-7 : PG 17.5 cx10a_c8r32
col-8 : PG 18beta1 cx10b_c8r32

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> point queries
1.00    1.03    1.03    1.03    1.03    1.98    1.94    1.95    hot-points_range=100
1.04    1.03    1.02    1.03    1.04    1.05    1.04    1.05    point-query_range=100
1.00    1.02    1.02    1.03    1.02    1.00    1.01    1.01    points-covered-pk_range=100
0.99    0.99    1.00    1.01    0.99    0.99    0.99    0.98    points-covered-si_range=100
1.00    1.00    1.00    1.02    1.01    0.99    0.99    0.99    points-notcovered-pk_range=100
0.99    0.99    0.99    0.99    0.99    0.99    0.98    0.99    points-notcovered-si_range=100
0.99    1.01    1.00    1.02    1.02    0.99    0.99    0.98    random-points_range=1000
1.00    1.00    1.00    1.01    1.01    0.99    0.98    1.00    random-points_range=100
1.01    1.02    1.01    1.03    1.03    1.02    1.00    1.01    random-points_range=10

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> range queries, part 1, no aggregation
0.98    0.98    0.95    0.97    0.97    0.98    0.98    0.95    range-covered-pk_range=100
0.97    0.97    0.96    0.98    0.97    0.97    0.97    0.95    range-covered-si_range=100
1.01    1.00    1.00    0.98    0.98    0.98    0.98    0.97    range-notcovered-pk_range=100
0.99    0.99    0.98    0.99    0.98    1.00    0.97    0.96    range-notcovered-si_range=100
0.90    0.80    0.78    0.89    1.02    0.91    0.91    0.93    scan_range=100

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8   --> range queries, part 2, with aggregation
0.99    0.98    0.99    1.00    0.97    0.96    0.96    0.99    read-only-count_range=1000
1.01    1.03    1.02    1.03    1.02    1.01    1.02    0.99    read-only-distinct_range=1000
1.00    1.03    1.03    1.04    1.03    1.05    1.06    1.06    read-only-order_range=1000
1.00    1.03    1.03    1.05    1.04    1.04    1.04    1.04    read-only_range=10000
1.00    1.00    1.00    1.01    1.01    1.01    1.01    1.01    read-only_range=100
1.00    1.01    1.00    1.00    1.00    1.01    1.01    1.00    read-only_range=10
1.01    1.00    1.01    1.00    1.00    1.00    1.00    1.00    read-only-simple_range=1000
1.01    1.00    1.01    1.01    1.00    1.00    0.99    1.01    read-only-sum_range=1000

col-1   col-2   col-3   col-4   col-5   col-6   col-7   col-8 --> writes
1.00    1.02    0.98    0.98    0.98    1.09    1.10    1.09    delete_range=100
0.98    1.00    1.00    0.98    0.98    1.07    1.06    1.05    insert_range=100
1.00    1.00    1.00    1.01    1.01    1.02    1.03    1.02    read-write_range=100
1.00    1.01    1.00    1.01    1.01    1.01    1.01    1.01    read-write_range=10
0.99    1.01    0.99    1.00    1.00    1.07    1.07    1.06    update-index_range=100
1.01    1.02    1.01    1.01    1.02    0.99    1.02    1.02    update-inlist_range=100
1.01    1.03    1.00    1.01    1.02    1.07    1.09    1.09    update-nonindex_range=100
1.00    1.01    0.98    1.00    1.01    1.13    1.12    1.11    update-one_range=100
1.01    1.04    1.00    1.01    1.03    1.11    1.13    1.13    update-zipf_range=100
1.00    1.02    1.00    1.00    1.01    1.06    1.07    1.06    write-only_range=10000

Sunday, May 18, 2025

RocksDB 10.2 benchmarks: large & small servers with a cached workload

I previously shared benchmark results for RocksDB using the larger server that I have. In this post I share more results from two other large servers and one small server. This is arbitrary but I mean >= 20 cores for large, 10 to 19 cores for medium and less than 10 cores for small.

tl;dr

  • There are several big improvements
  • There might be a small regression in fillseq performance, I will revisit this
  • For the block cache hyperclock does much better than LRU on CPU-bound tests
  • I am curious about issue 13546 but not sure the builds I tested include it

Software

I used RocksDB versions 6.29.5, 7.10.2, 8.11.4, 9.0.1, 9.1.2, 9.2.2, 9.3.2, 9.4.1, 9.5.2, 9.6.2, 9.7.4, 9.8.4, 9.9.3, 9.10.0, 9.11.2, 10.0.1, 10.1.3 and 10.2.1. Everything was compiled with gcc 11.4.0.

For 8.x, 9.x and 10.x the benchmark was repeated using both the LRU block cache (older code) and hyperclock (newer code). That was done by setting the --cache_type argument (an example command is after this list):

  • lru_cache was used for versions 7.6 and earlier
  • hyper_clock_cache was used for versions 7.7 through 8.5
  • auto_hyper_clock_cache was used for versions 8.5+
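
As an example, and not the exact commands from my helper scripts, choosing the block cache implementation in db_bench looks something like the following. The --num, --threads and --duration values match the small server runs described below; the --cache_size value is a placeholder:

# LRU block cache (older code)
db_bench --benchmarks=fillseq,readwhilewriting --duration=1800 --threads=1 --num=20000000 \
  --cache_type=lru_cache --cache_size=8589934592

# Hyper Clock block cache (newer code)
db_bench --benchmarks=fillseq,readwhilewriting --duration=1800 --threads=1 --num=20000000 \
  --cache_type=auto_hyper_clock_cache --cache_size=8589934592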

Hardware

My servers are described here. From that list I used:

  • The small server is a Ryzen 7 (AMD) CPU with 8 cores and 32G of RAM. It is v5 in the blog post.
  • The first large server has 24 cores with 64G of RAM. It is v6 in the blog post.
  • The other large server has 32 cores and 128G of RAM. It is v7 in the blog post.

Benchmark

Overviews on how I use db_bench are here and here.

Tests were run for a workload with the database cached by RocksDB that I call byrx in my scripts.

The benchmark steps that I focus on are:
  • fillseq
    • load RocksDB in key order with 1 thread
  • revrangeww, fwdrangeww
    • do reverse or forward range queries with a rate-limited writer. Report performance for the range queries
  • readww
    • do point queries with a rate-limited writer. Report performance for the point queries.
  • overwrite
    • overwrite (via Put) random keys using many threads

Relative QPS

Many of the tables below (inlined and via URL) show the relative QPS which is:
    (QPS for my version / QPS for base version)

The base version varies and is listed below. When the relative QPS is > 1.0 then my version is faster than the base version. When it is < 1.0 then there might be a performance regression or there might just be noise.

Small server

The benchmark was run using 1 client thread and 20M KV pairs. Each benchmark step was run for 1800 seconds. Performance summaries are here

For the byrx (cached database) workload with the LRU block cache:

  • see relative and absolute performance summaries, the base version is RocksDB 6.29.5
  • fillseq is ~14% faster in 10.2 vs 6.29 with improvements in 7.x and 9.x
  • revrangeww and fwdrangeww are ~6% slower in 10.2 vs 6.29, I might revisit this
  • readww has similar perf from 6.29 through 10.2
  • overwrite is ~14% faster in 10.2 vs 6.29 with most of the improvement in 7.x

For the byrx (cached database) workload with the Hyper Clock block cache

  • see relative and absolute performance summaries, the base version is RocksDB 8.11.4
  • there might be a small regression (~3%) or there might be noise in the results
Results from RocksDB 10.2.1 that show relative QPS for 10.2 with the Hyper Clock block cache relative to 10.2 with the LRU block cache. Here the QPS for revrangeww, fwdrangeww and readww are ~10% better with Hyper Clock.

relQPS  test
0.99    fillseq.wal_disabled.v400
1.09    revrangewhilewriting.t1
1.13    fwdrangewhilewriting.t1
1.15    readwhilewriting.t1
0.96    overwriteandwait.t1.s0

Large server (24 cores)

The benchmark was run using 16 client threads and 40M KV pairs. Each benchmark step was run for 1800 seconds. Performance summaries are here.

For the byrx (cached database) workload with the LRU block cache

  • see relative and absolute performance summaries, the base version is RocksDB 6.29.5
  • fillseq might have a new regression of ~4% in 10.2.1 or that might be noise, I will revisit this
  • revrangeww, fwdrangeww, readww and overwrite are mostly unchanged since 8.x

For the byrx (cached database) workload with the Hyper Clock block cache

  • see relative and absolute performance summaries, the base version is RocksDB 8.11.4
  • fillseq might have a new regression of ~8% in 10.2.1 or that might be noise, I will revisit this
  • revrangeww, fwdrangeww, readww and overwrite are mostly unchanged since 8.x
Results from RocksDB 10.2.1 that show relative QPS for 10.2 with the Hyper Clock block cache relative to 10.2 with the LRU block cache.  Hyper Clock is much better for workloads that have frequent access to the block cache with multiple threads.

relQPS  test
0.97    fillseq.wal_disabled.v400
1.35    revrangewhilewriting.t16
1.43    fwdrangewhilewriting.t16
1.69    readwhilewriting.t16
0.97    overwriteandwait.t16.s0

Large server (32 cores)

The benchmark was run using 24 client threads and 50M KV pairs. Each benchmark step was run for 1800 seconds. Performance summaries are here.

For the byrx (cached database) workload with the LRU block cache

  • see relative and absolute performance summaries, the base version is RocksDB 6.29.5
  • fillseq might have a new regression of ~10% from 7.10 through 10.2, I will revisit this
  • revrangeww, fwdrangeww, readww and overwrite are mostly unchanged since 8.x

For the byrx (cached database) workload with the Hyper Clock block cache

  • see relative and absolute performance summaries, the base version is RocksDB 7.10.2
  • fillseq might have a new regression of ~10% from 7.10 through 10.2, I will revisit this
  • revrangeww, fwdrangeww, readww and overwrite are mostly unchanged since 8.x
Results from RocksDB 10.2.1 that show relative QPS for 10.2 with the Hyper Clock block cache relative to 10.2 with the LRU block cache. Hyper Clock is much better for workloads that have frequent access to the block cache with multiple threads.

relQPS  test
1.02    fillseq.wal_disabled.v400
1.39    revrangewhilewriting.t24
1.55    fwdrangewhilewriting.t24
1.77    readwhilewriting.t24
1.00    overwriteandwait.t24.s0

Postgres 18 beta1: small server, IO-bound Insert Benchmark (v2)

This is my second attempt at an IO-bound Insert Benchmark results with a small server. The first attempt  is here  and has been deprecated b...