Friday, June 9, 2017

Linkbench, in-memory & Intel NUC for MySQL 5.6, 5.7 & 8

Update - the regression isn't as bad as I have been reporting. Read this post to understand why.

Next up in my series on low-concurrency performance is Linkbench on Intel NUC servers with an in-memory workload. The database fits in RAM, so there are no reads from storage but there are many writes to it. This is part of my work on bug 86215. One day I will stop running tests and start working on code improvements, but not yet. For more on Linkbench read the Facebook post and paper and my latest post.

tl;dr
  • MySQL 5.7 and 8 get about 80% of the insert rate and 60% of the transaction rate compared to MySQL 5.6 in the worst case at low concurrency. This is explained by an increase in the CPU cost per insert and per transaction. I assume more code and more features are the cause.
  • For the i3 NUC some of the regression is from MySQL 5.6 to 5.7 and some is from 5.7 to 8.
  • For the i5 NUC most of the regression is from MySQL 5.7 to 8.

Configuration

I tested MySQL using upstream 5.6.35, 5.7.17 and 8.0.1. For 8.0.1 I repeated the test: first with the default charset/collation and then with latin1/latin1_bin for the i3 NUC and latin1/latin1_swedish_ci for the i5 NUC.
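
A minimal sketch of how that charset/collation override can be set as a server-wide default in my.cnf, assuming it was done that way rather than per table (the linked my.cnf files are authoritative):

[mysqld]
# i3 NUC runs used latin1/latin1_bin, i5 NUC runs used latin1/latin1_swedish_ci
character-set-server = latin1
collation-server = latin1_bin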

My servers are described here. The charts below refer to the servers as i3 NUC and i5 NUC. Compared to the i3 NUC, the i5 NUC is newer, has a CPU that is 2X faster, 2X more RAM and storage that is more than 2X faster.

The my.cnf used for each database engine is described here. The my.cnf files I share are for the i3 NUC. For the i5 NUC the InnoDB buffer pool and IO capacity options were increased using these values. I use the same server for mysqld and the Linkbench clients. The binlog is enabled but sync-on-commit is disabled.
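
As a hedged illustration of the options mentioned above, a my.cnf fragment along these lines enables the binlog while disabling sync-on-commit; the values here are placeholders and the linked files have the real settings:

[mysqld]
innodb_buffer_pool_size = 4G        # increased for the i5 NUC
innodb_io_capacity = 2000           # increased for the i5 NUC
log_bin                             # binlog enabled
sync_binlog = 0                     # don't sync the binlog on commit
innodb_flush_log_at_trx_commit = 2  # don't sync the InnoDB redo log on commit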

For this test I used maxid1=2M in the Linkbench configuration, so about 2M nodes are loaded and the InnoDB database is about 3 GB. I use Linkbench from here and helper scripts from here. My usage of Linkbench is described here. A sample command line to run the tests is:
bash all.sh rx ~/bin/mysql /data/m/data 2000001 sdb 1 3600 mysql lb.sql.inno 24 127.0.0.1 1
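
The all.sh script wraps the LinkBench driver. A hedged sketch of roughly equivalent direct calls follows; the properties file name and the request-thread count are assumptions, and the real invocation is in the linked helper scripts:

# load phase: create and populate ~2M nodes
./bin/linkbench -c config/LinkConfigMysql.properties -l -Dmaxid1=2000001
# request (transaction) phase, assuming 1 client thread for the low-concurrency test
./bin/linkbench -c config/LinkConfigMysql.properties -r -Dmaxid1=2000001 -Drequesters=1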

Results

The first pair of charts shows the absolute and relative insert rates during the load test. Insert rates drop from MySQL 5.6 to 5.7 and 8. For the i3 NUC about half of the regression is from MySQL 5.6 to 5.7. For the i5 NUC about 2/3 of the regression is from MySQL 5.6 to 5.7.

The next pair of charts shows the absolute value for CPU per insert followed by the inverse of the relative CPU per insert. The absolute value comes from the Mcpu/t column in my performance data. More CPU is used in MySQL 5.7 and 8 compared to 5.6. For the i3 NUC about 1/3 of the CPU regression is from MySQL 5.6 to 5.7. For the i5 NUC about 2/3 of the CPU regression is from MySQL 5.6 to 5.7. The inverses of the relative CPU per insert values are similar to the relative insert rates above, and the CPU regression explains the drop in insert rates from MySQL 5.6 to 5.7 and 8.
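
To make that link explicit, a quick back-of-the-envelope check (the 1.25x figure below is illustrative, not a measured value): if a release needs 1.25x the CPU per insert and the workload is CPU-bound at low concurrency, the expected relative insert rate is the inverse of that.

# relative insert rate ~= inverse of relative CPU per insert
echo "scale=2; 1 / 1.25" | bc   # prints .80, i.e. ~80% of the MySQL 5.6 rate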

The next pair of charts shows the absolute transactions/second rates and the rates relative to the value for MySQL 5.6. These are from the 24th hour of the transaction test. For the i3 NUC about 1/2 of the regression is from MySQL 5.6 to 5.7. For the i5 NUC most of the regression is from MySQL 5.6 to 5.7.


The final pair of charts shows the absolute value for CPU per transaction followed by the inverse of the value relative to MySQL 5.6. These are from the 24th hour of the transaction test. The absolute value is from the Mcpu/t column in the performance data. The first chart shows that for the i3 NUC about 1/2 of the CPU regression is from MySQL 5.6 to 5.7 and for the i5 NUC most of the regression is from MySQL 5.6 to 5.7. The second chart shows that the inverse of the relative CPU per transaction predicts the relative transaction rate, and the CPU regression explains the transaction rate regression.

Metrics

All of the performance data is here and is explained in a previous post.
