Tuesday, August 29, 2017

IO-bound sysbench on a smaller server

This is part 3 of my performance report on IO-bound sysbench. In part 1 and part 2 I used a large server with 48 HW threads. Here I use a core i5 NUC with 4 HW threads.

tl;dr
  • results are similar to the first post
  • MyRocks is competitive on the write-heavy tests
  • MyRocks is slower on the read-heavy tests
  • I don't attempt to show it here, but MyRocks wins on space and write efficiency, and that is why I like it

Configuration

I use my sysbench fork, helper scripts, and release-specific my.cnf files. The server is a core i5 NUC with 4 HW threads, 16gb of RAM and a fast SSD. The binlog was enabled and sync-on-commit was disabled for the binlog and database log. I remembered to disable SSL.

I tested MyRocks and InnoDB, with buffered IO and a 4gb block cache for MyRocks and O_DIRECT and a 12gb buffer pool for InnoDB. The server is shared by the sysbench client and mysqld. For MyRocks I used a build from August 15 with git hash 0d76ae. For InnoDB I used upstream 5.6.35, 5.7.17 and 8.0.2. For InnoDB 8.0.2 I used the latin1 charset and latin1_swedish_ci collation. Compression was not used for InnoDB or MyRocks. Where this description disagrees with the my.cnf files that I shared, this description is correct.
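The settings above map to my.cnf options roughly as follows. This is a minimal sketch assuming option names and values from the description -- the my.cnf files that I shared are authoritative:

# InnoDB (sketch, not the shared file)
innodb_buffer_pool_size = 12G
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2   # sync-on-commit disabled for the database log
sync_binlog = 0                      # binlog enabled, sync-on-commit disabled
log_bin
skip_ssl                             # assumption: how SSL was disabled

# MyRocks (sketch): buffered IO is the default, so only cache size and log sync need setting
rocksdb_block_cache_size = 4G
rocksdb_flush_log_at_trx_commit = 2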

The test used 4 tables with 80M rows/table. My use of sysbench is explained here. Tests are run in an interesting pattern -- load, write-heavy, read-only, insert-only. On the core i5 NUC each test is run with 1 and then 2 concurrent connections, for either 5 or 10 minutes per concurrency level. So each test runs for either 10 or 20 minutes in total, and I hope that is long enough to get the database into a steady state. An example command line to run the test with my helper scripts is:
bash all.sh 4 80000000 600 600 300 innodb 1 0 /orig5717/bin/mysql none /sysbench.new
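My best guess at the positional arguments, inferred from the values in the prose above -- the helper scripts in my sysbench fork are authoritative:

#   4                   -> number of tables
#   80000000            -> rows per table
#   600 600 300         -> run durations in seconds for the test steps
#   innodb              -> storage engine
#   1 0                 -> engine-specific flags (assumption; defined by the scripts)
#   /orig5717/bin/mysql -> mysql client binary to use
#   none                -> compression config (assumption; matches "myrocks.none" in the results)
#   /sysbench.new       -> sysbench install to use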

Results without charts

All of the data is here. The results here are mostly similar to the results from the large server; however, MyRocks does worse here on some of the read-heavy tests. The QPS ratio for point-query is 0.651 for MyRocks here versus 0.850 on the large server. Note that the hot-points workload is not IO-bound; regardless, we need to make MyRocks more efficient on it. MySQL did something to make InnoDB range scans more efficient starting in 5.7. I don't know whether the problem for MyRocks is CPU or IO overhead on the range-scan heavy tests (read-write.*, read-only.*).

QPS ratio:
* rocks = myrocks.none / inno5635
* inno = inno5717 / inno5635
* value less than 1.0 means that InnoDB 5.6 is faster
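For example, in the table below the rocks ratio for update-index is 2.909, meaning MyRocks sustained about 2.9x the QPS of InnoDB 5.6.35 on that test, while the hot-points ratio of 0.267 means MyRocks ran at about 27% of the InnoDB 5.6.35 rate.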

1 connection
rocks   inno
2.909   1.074   update-index
1.068   1.095   update-nonindex
1.006   0.935   update-nonindex-special
1.682   0.988   delete-only
1.053   0.961   read-write.range100
0.881   1.554   read-write.range10000
0.776   1.348   read-only.range100
0.898   1.584   read-only.range10000
0.651   1.197   point-query
1.000   1.285   random-points
0.267   0.943   hot-points
0.989   0.941   insert-only

Results with charts

Sorry, no charts this time. Charts from the previous post are close enough.
