Wednesday, August 30, 2017

In-memory sysbench and a large server

Today I share worst-case performance results for MyRocks -- in-memory sysbench and a small database. I like MyRocks because it reduces space and write amplification, but I don't show results for that here. Besides, there isn't much demand for better compression from such a small database. This is part 1 with results from a large server.

tl;dr
  • MyRocks does worse than InnoDB 5.6.35 at low and mid concurrency for all tests except update-index. It suffers more on the read-heavy tests.
  • MyRocks and TokuDB are more competitive at mid and high concurrency than at low concurrency.
  • InnoDB QPS at low concurrency tends to decrease after 5.6.
  • InnoDB QPS at mid and high concurrency tends to increase after 5.6.

Configuration

I use my sysbench fork and helper scripts, release-specific my.cnf files and a server with 48 HW threads, a fast SSD and 256gb of RAM. The binlog was enabled and sync-on-commit was disabled for both the binlog and the database log. I remembered to disable SSL.

I tested MyRocks, TokuDB and InnoDB. MyRocks and TokuDB used buffered IO and a 180gb database cache, while InnoDB used O_DIRECT and a 180gb buffer pool. The server is shared by the sysbench client and mysqld. For MyRocks I used a build from August 15 with git hash 0d76ae. For TokuDB I used Percona Server 5.7.17-12. For InnoDB I used upstream 5.6.35, 5.7.17 and 8.0.2. For InnoDB 8.0.2 I used the latin1 charset and latin1_swedish_ci collation. Compression was not used for any engine. More details are in the release-specific my.cnf files, and I used the same my.cnf for InnoDB 8.0.1 and 8.0.2. All mysqld binaries use jemalloc.
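
For readers who don't click through to the my.cnf files, a minimal sketch of the InnoDB settings described above looks something like the following. It is an approximation, not a copy of the release-specific files, and the flush values shown are just one way to get sync-on-commit disabled:

[mysqld]
# illustrative values -- see the release-specific my.cnf files for the real settings
# 180gb buffer pool with O_DIRECT for InnoDB
innodb_buffer_pool_size = 180G
innodb_flush_method = O_DIRECT
# binlog enabled, sync-on-commit disabled for the binlog and database log
server_id = 1
log_bin
sync_binlog = 0
innodb_flush_log_at_trx_commit = 2
# SSL disabled
skip_ssl
# MyRocks and TokuDB use buffered IO and a 180gb database cache instead,
# via rocksdb_block_cache_size and tokudb_cache_size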

The test used 8 tables with 1M rows per table. My use of sysbench is explained here. Tests are run in an interesting pattern -- load, write-heavy, read-only, insert-only. On the large server each test is run for 1, 2, 4, 8, 16, 24, 32, 40, 48 and 64 concurrent connections for either 3 or 5 minutes per concurrency level, so each test runs for either 30 or 50 minutes in total and I hope that is long enough to get the database into a steady state. An example command line to run the test with my helper scripts is:
bash all.sh 8 1000000 180 300 180 innodb 1 0 /orig5717/bin/mysql none /sysbench.new
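
For anyone without my helper scripts, a rough equivalent using stock sysbench 1.0 Lua scripts is below. The test names and options in my fork differ, so treat this only as a sketch of the pattern: load the tables once, then run each test at each concurrency level.

# approximation with stock sysbench 1.0, not my helper scripts
sysbench oltp_read_write --db-driver=mysql --mysql-user=root --mysql-db=test \
    --tables=8 --table-size=1000000 prepare
for test in oltp_update_index oltp_update_non_index oltp_read_only \
            oltp_point_select oltp_insert; do
  for nt in 1 2 4 8 16 24 32 40 48 64; do
    sysbench $test --db-driver=mysql --mysql-user=root --mysql-db=test \
        --tables=8 --table-size=1000000 --threads=$nt --time=300 run
  done
done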

Results without charts

All of the data is here. Below I share QPS ratios that compare the QPS for each engine with the QPS for InnoDB from MySQL 5.6.35. The engine is slower than InnoDB 5.6.35 when the ratio is less than 1.0. Things that I see include:
  • MyRocks does worse than InnoDB 5.6.35 at low and mid concurrency for all tests except update-index. It suffers more on the read-heavy tests.
  • MyRocks and TokuDB are more competitive at mid and high concurrency than at low concurrency.
  • InnoDB QPS at low concurrency tends to decrease after 5.6.
  • InnoDB QPS at mid and high concurrency tends to increase after 5.6.

QPS ratio:
* rocks = myrocks / inno5635
* inno = inno5717 / inno5635
* toku = toku5717 / inno5635
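
The ratios are just per-test division, and the last row in each table is the mean of the column. Assuming two files with one "test-name qps" line per test (the file names below are hypothetical), something like this reproduces a column:

# rocks.qps and inno5635.qps are hypothetical "test-name qps" files, not output from my scripts
paste rocks.qps inno5635.qps | awk '{r = $2 / $4; s += r; n++; printf "%.3f\t%s\n", r, $1}
    END {printf "%.3f\taverage of the above\n", s / n}'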

1 connection
rocks   inno    toku
0.476   0.959   0.201   update-inlist
0.828   0.923   0.278   update-one
1.146   1.090   0.458   update-index
0.648   0.901   0.243   update-nonindex
0.740   0.898   0.236   update-nonindex-special
0.836   0.917   0.347   delete-only
0.711   1.041   0.586   read-write.range100
0.809   1.671   1.038   read-write.range10000
0.664   1.096   0.753   read-only.range100
0.801   1.657   1.263   read-only.range10000
0.641   0.905   0.735   point-query
0.442   0.923   0.753   random-points
0.480   0.900   0.694   hot-points
0.742   0.855   0.259   insert-only
-----   -----   -----
0.711   1.052   0.560   average of the above

8 connections
rocks   inno    toku
0.966   1.611   0.308   update-inlist
0.871   1.029   0.276   update-one
1.201   1.467   0.417   update-index
0.858   1.093   0.315   update-nonindex
0.898   1.090   0.314   update-nonindex-special
0.949   1.058   0.338   delete-only
0.710   1.039   0.534   read-write.range100
0.811   1.621   1.128   read-write.range10000
0.675   1.098   0.851   read-only.range100
0.810   1.639   1.263   read-only.range10000
0.648   0.910   0.746   point-query
0.541   1.097   0.931   random-points
0.754   1.317   1.037   hot-points
0.776   1.028   0.286   insert-only
-----   -----   -----
0.819   1.221   0.625   average of the above

48 connections
rocks   inno    toku
1.649   3.127   0.922   update-inlist
0.760   1.193   0.372   update-one
1.316   2.236   0.700   update-index
1.360   1.982   0.937   update-nonindex
1.374   1.965   0.995   update-nonindex-special
1.126   1.845   0.566   delete-only
0.804   1.129   0.507   read-write.range100
0.838   1.310   0.956   read-write.range10000
0.711   1.098   0.866   read-only.range100
0.823   1.305   1.034   read-only.range10000
0.932   1.347   1.084   point-query
1.417   2.920   2.248   random-points
1.840   3.226   2.350   hot-points
1.096   1.927   0.567   insert-only
-----   -----   -----
1.146   1.901   1.007   average of the above

Results with charts

The charts below show the same data as the previous section -- the QPS for each engine relative to the QPS for InnoDB from MySQL 5.6.35.
