Monday, August 25, 2025

MySQL 5.6 thru 9.4: small server, Insert Benchmark

This has results for the Insert Benchmark on a small server with InnoDB from MySQL 5.6 through 9.4. The workload here uses low concurrency (1 client), a small server and a cached database. I run it this way to look for CPU regressions before moving on to IO-bound workloads with high concurrency.

tl;dr

  • good news - there are no large regressions after MySQL 8.0
  • bad news - there are large regressions from MySQL 5.6 to 5.7 to 8.0
    • load in 8.0, 8.4 and 9.4 gets about 60% of the throughput vs 5.6
    • queries in 8.0, 8.4 and 9.4 get between 60% and 70% of the throughput vs 5.6

Builds, configuration and hardware

I compiled MySQL 5.6.51, 5.7.44, 8.0.43, 8.4.6 and 9.4.0 from source.

The server is an ASUS PN53 with 8 cores, AMD SMT disabled and 32G of RAM. The OS is Ubuntu 24.04. Storage is 1 NVMe device with ext4. More details on it are here.

I used the cz12a_c8r32 config file (my.cnf) which is here for 5.6.51, 5.7.44, 8.0.43, 8.4.6 and 9.4.0.

The Benchmark

The benchmark is explained here. I recently updated the benchmark client to connect via socket rather than TCP so that I can get non-SSL connections for all versions tested. AFAIK, with TCP I can only get SSL connections for MySQL 8.4 and 9.4.
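
To illustrate the client change, here is a minimal sketch of connecting over a Unix socket with SSL disabled, assuming mysql-connector-python. The socket path, credentials and database name are placeholders, not the values used by the benchmark client.

    # Connect over a Unix socket rather than TCP so the connection is not
    # wrapped in SSL. The socket path, user, password and database below
    # are placeholders for illustration.
    import mysql.connector

    conn = mysql.connector.connect(
        user="root",
        password="pw",
        database="test",
        unix_socket="/tmp/mysql.sock",  # socket instead of host/port
        ssl_disabled=True,              # make the non-SSL intent explicit
    )
    cur = conn.cursor()
    cur.execute("SELECT VERSION()")
    print(cur.fetchone()[0])
    conn.close()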

The workload uses 1 client, 1 table with 30M rows and a cached database.

The benchmark steps are:

  • l.i0
    • insert 30 million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
  • l.x
    • create 3 secondary indexes per table. There is one connection per client.
  • l.i1
    • use 2 connections/client. One inserts 40 million rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
  • l.i2
    • like l.i1 but each transaction modifies 5 rows (small transactions) and 10 million rows are inserted and deleted per table.
    • Wait for N seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of N is a function of the table size.
  • qr100
    • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s, so they are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. A sketch of one way to rate limit the insert and delete connections follows this list.
  • qp100
    • like qr100 except uses point queries on the PK index
  • qr500
    • like qr100 but the insert and delete rates are increased from 100/s to 500/s
  • qp500
    • like qp100 but the insert and delete rates are increased from 100/s to 500/s
  • qr1000
    • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
  • qp1000
    • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
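
Here is the rate-limiter sketch mentioned above for the qr100/qp100 steps: it paces a writer connection at a fixed target rate with simple sleep-based scheduling. The run_rate_limited helper and the do_one_write callback are hypothetical names for illustration, not part of the benchmark client.

    import time

    def run_rate_limited(do_one_write, target_per_sec, duration_sec):
        """Call do_one_write() about target_per_sec times per second for
        duration_sec seconds, sleeping to stay on schedule. A sketch of how
        the background insert and delete connections can be paced."""
        interval = 1.0 / target_per_sec
        start = time.monotonic()
        done = 0
        while time.monotonic() - start < duration_sec:
            do_one_write()
            done += 1
            # Sleep until the next write is due so the average rate holds.
            delay = (start + done * interval) - time.monotonic()
            if delay > 0:
                time.sleep(delay)
        return done

    # Example: pace a no-op writer at 100 ops/s for 1800 seconds, as in qr100.
    # run_rate_limited(lambda: None, target_per_sec=100, duration_sec=1800)
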
Results: overview

The performance report is here.

The summary section has 3 tables. The first shows absolute throughput for each DBMS tested and each benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA. The summary section is here.

Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result for MySQL 5.6.51.

When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
  • insert/s for l.i0, l.i1, l.i2
  • indexed rows/s for l.x
  • range queries/s for qr100, qr500, qr1000
  • point queries/s for qp100, qp500, qp1000
Below I use colors to highlight the relative QPS values with yellow for regressions and blue for improvements.
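
To make the arithmetic concrete, the sketch below computes rQPS per benchmark step from two sets of absolute throughput numbers. The values shown are placeholders for illustration, not results from this benchmark.

    # rQPS = QPS for $me / QPS for $base, computed per benchmark step.
    # The throughput numbers below are placeholders, not measured results.
    def relative_qps(me, base):
        return {step: me[step] / base[step] for step in base}

    base = {"l.i0": 1000.0, "qr100": 2000.0}  # e.g. MySQL 5.6.51 (placeholder)
    me   = {"l.i0":  600.0, "qr100": 1400.0}  # e.g. a newer version (placeholder)

    for step, r in relative_qps(me, base).items():
        # rQPS of 0.90 means a 10% regression; values > 1.00 are improvements.
        print(f"{step}: rQPS = {r:.2f}")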

Results: details

This table is a copy of the second table in the summary section. It lists the relative QPS (rQPS) for each benchmark step where rQPS is explained above.

The benchmark steps are explained above; they are:
  • l.i0 - initial load in PK order
  • l.x - create 3 secondary indexes per table
  • l.i1, l.i2 - random inserts and random deletes
  • qr100, qr500, qr1000 - short range queries with background writes
  • qp100, qp500, qp1000 - point queries with background writes

dbms    l.i0  l.x   l.i1  l.i2  qr100  qp100  qr500  qp500  qr1000  qp1000
5.6.51  1.00  1.00  1.00  1.00  1.00   1.00   1.00   1.00   1.00    1.00
5.7.44  0.89  1.52  1.14  1.08  0.83   0.84   0.83   0.84   0.84    0.84
8.0.43  0.60  2.50  1.04  0.86  0.69   0.62   0.69   0.63   0.70    0.62
8.4.6   0.60  2.53  1.03  0.86  0.68   0.61   0.67   0.61   0.68    0.61
9.4.0   0.60  2.53  1.03  0.87  0.70   0.63   0.70   0.63   0.70    0.62



The summary is:
  • l.i0
    • there are large regressions starting in 8.0 and modern MySQL gets only ~60% of the throughput relative to 5.6 because it has more CPU overhead
  • l.x
    • I ignore this but there have been improvements
  • l.i1, l.i2
    • there was a large improvement in 5.7 but new CPU overhead since 8.0 reduces that
  • qr100, qr500, qr1000
    • there are large regressions from 5.6 to 5.7 and then again from 5.7 to 8.0
    • throughput in modern MySQL is ~60% to 70% of what it was in 5.6


    Thursday, August 21, 2025

    Sysbench for MySQL 5.6 thru 9.4 on a small server

    This has performance results for InnoDB from MySQL 5.6.51, 5.7.44, 8.0.43, 8.4.6 and 9.4.0 on a small server with sysbench microbenchmarks. The workload here is cached by InnoDB and my focus is on regressions from new CPU overheads. This work was done by Small Datum LLC and not sponsored. 

    tl;dr

    • Low concurrency (1 client) is the worst case for regressions in modern MySQL
    • MySQL 8.0, 8.4 and 9.4 are much slower than 5.6.51 in all but 2 of the 32 microbenchmarks
      • The bad news - performance regressions aren't getting fixed
      • The good news - regressions after MySQL 8.0 are small

    Builds, configuration and hardware

    I compiled MySQL from source for versions 5.6.51, 5.7.44, 8.0.43, 8.4.6 and 9.4.0.

    The server is an ASUS ExpertCenter PN53 with AMD Ryzen 7 7735HS, 32G RAM and an m.2 device for the database. More details on it are here. The OS is Ubuntu 24.04 and the database filesystem is ext4 with discard enabled.

    The my.cnf files are here.

    Benchmark

    I used sysbench and my usage is explained here. To save time I only run 32 of the 42 microbenchmarks and most test only 1 type of SQL statement. Benchmarks are run with the database cached by InnoDB.

    The tests are run using 1 table with 50M rows. The read-heavy microbenchmarks run for 600 seconds and the write-heavy for 900 seconds.

    Results

    All files I saved from the benchmark are here and the spreadsheet is here.

    The microbenchmarks are split into 4 groups -- 1 for point queries, 2 for range queries, 1 for writes. For the range query microbenchmarks, part 1 has queries that don't do aggregation while part 2 has queries that do aggregation. 

    I provide charts below with relative QPS. The relative QPS is the following:
    (QPS for some version) / (QPS for MySQL 5.6.51)
    When the relative QPS is > 1 then some version is faster than 5.6.51. When it is < 1 then there might be a regression. Values from iostat and vmstat divided by QPS are also provided here. These can help to explain why something is faster or slower because they show how much HW is used per request.
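
    As a sketch of that normalization, the snippet below divides average vmstat CPU utilization (us + sy) and iostat reads/s by QPS to get per-query costs. The function name, field names and values are illustrative assumptions, not output from my benchmark scripts.

        # Normalize HW usage per request: divide average vmstat CPU (us + sy)
        # and iostat reads/s by QPS. Names and values here are illustrative
        # assumptions, not output from the benchmark scripts.
        def hw_per_request(qps, vmstat_us, vmstat_sy, iostat_rps):
            cpu_busy = vmstat_us + vmstat_sy      # percent of CPU busy
            return {
                "cpu/query": cpu_busy / qps,      # CPU percent-seconds per query
                "read/query": iostat_rps / qps,   # device reads per query
            }

        # If a newer version needs more cpu/query at a lower QPS, that points
        # at new CPU overhead rather than an IO problem.
        print(hw_per_request(qps=5000.0, vmstat_us=40.0, vmstat_sy=10.0, iostat_rps=0.0))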

    Results: point queries

    Based on results from vmstat the regressions are from new CPU overheads.

    Results: range queries without aggregation

    Based on results from vmstat the regressions are from new CPU overheads.

    Results: range queries with aggregation

    Based on results from vmstat the regressions are from new CPU overheads.

    Results: writes

    Based on results from vmstat the regressions are from new CPU overheads.


    Friday, August 1, 2025

    Postgres 18 beta2: large server, Insert Benchmark, part 2

    I repeated the benchmark for one of the workloads used in a recent blog post on Postgres 18 beta2 performance. The workload used 1 client and 1 table with 50M rows that fits in the Postgres buffer pool. In the result I explain here, one of the benchmark steps was run for ~10X more time. Figuring out how long to run the steps in the Insert Benchmark is always a work in progress -- I want to test more things, so I don't want to run steps for too long, but there will be odd results if the run times are too short.

    tl;dr

    • up to 2% less throughput on range queries in the qr100 benchmark step. This is similar to what I saw in my previous report.
    • up to 12% more throughput on the l.i2 benchmark step in PG beta1 and beta2. This is much better than what I saw in my previous report.

    Details

    Details on the benchmark are in my previous post.

    The benchmark is explained here and was run for one workload -- 1 client, cached.

    • run with 1 client, 1 table and a cached database
    • load 50M rows in step l.i0, do 160M writes in step l.i1 and 40M in l.i2. Note that here the l.i1 and l.i2 steps run for ~10X longer than in my previous post.
    The benchmark steps are:

    • l.i0
      • insert X million rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
    • l.x
      • create 3 secondary indexes per table. There is one connection per client.
    • l.i1
      • use 2 connections/client. One inserts Y million rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
    • l.i2
      • like l.i1 but each transaction modifies 5 rows (small transactions) and Z million rows are inserted and deleted per table.
      • Wait for N seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of N is a function of the table size.
    • qr100
      • use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. This step is run for 1800 seconds. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested.
    • qp100
      • like qr100 except uses point queries on the PK index
    • qr500
      • like qr100 but the insert and delete rates are increased from 100/s to 500/s
    • qp500
      • like qp100 but the insert and delete rates are increased from 100/s to 500/s
    • qr1000
      • like qr100 but the insert and delete rates are increased from 100/s to 1000/s
    • qp1000
      • like qp100 but the insert and delete rates are increased from 100/s to 1000/s
    Results: overview

    The performance report is here.

    The summary section has 3 tables. The first shows absolute throughput for each DBMS tested and each benchmark step. The second has throughput relative to the version from the first row of the table. The third shows the background insert rate for benchmark steps with background inserts. The second table makes it easy to see how performance changes over time. The third table makes it easy to see which DBMS+configs failed to meet the SLA.

    Below I use relative QPS (rQPS) to explain how performance changes. It is: (QPS for $me / QPS for $base) where $me is the result for some version and $base is the result for Postgres 17.4.

    When rQPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. When it is 0.90 then I claim there is a 10% regression. The Q in relative QPS measures: 
    • insert/s for l.i0, l.i1, l.i2
    • indexed rows/s for l.x
    • range queries/s for qr100, qr500, qr1000
    • point queries/s for qp100, qp500, qp1000
    Below I use colors to highlight the relative QPS values with red for <= 0.97, green for >= 1.03 and grey for values between 0.98 and 1.02.

    Results: 1 client, cached

    Normally I summarize the summary but I don't do that here to save space.

    But the tl;dr is:
    • up to 2% less throughput on range queries in the qr100 benchmark step. This is similar to what I saw in my previous report.
    • up to 12% more throughput on the l.i2 benchmark step in PG beta1 and beta2. This is much better than what I saw in my previous report.
