Thursday, August 31, 2017

In-memory sysbench, a larger server and contention - part 1

Yesterday I shared results for in-memory sysbench on a large server. Today I have more results from a similar test, but with more contention. Yesterday's tests used 8 tables with 1M rows/table; the test here uses 1 table with 8M rows. In some ways the results are similar to yesterday's, but there are some interesting differences that I will explain in part 2 (another post).

tl;dr
  • MyRocks does worse than InnoDB 5.6.35 at low and mid concurrency for all tests except update-index. It suffers more on the read-heavy tests.
  • MyRocks and TokuDB are more competitive at mid and high concurrency than at low. MyRocks is faster than InnoDB 5.6.35 for many of the high concurrency tests.
  • InnoDB QPS at low concurrency tends to decrease after 5.6 for tests heavy on point queries but something in MySQL 5.7 made InnoDB range scans much faster.
  • InnoDB QPS at mid and high concurrency tends to increase after 5.6.

Configuration

Everything is the same as explained in the previous post except that here I used 1 table with 8M rows, while there I used 8 tables with 1M rows/table. Using 1 table instead of 8 means there can be more contention in the database engine on things like the InnoDB per-index mutex. An example command line is:
bash all.sh 1 8000000 180 300 180 innodb 1 0 /orig5717/bin/mysql none /sysbench.new
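
For readers who don't use the helper script, a minimal sketch of a comparable direct sysbench invocation follows. It assumes sysbench 1.0+ with its bundled oltp_* Lua scripts; the tests in this post use custom Lua scripts (update-inlist, hot-points and so on), so the real script names and options differ, and the host, user and database names below are placeholders.

# load 1 table with 8M rows, then run a point-query test for 180 seconds at 1 thread
sysbench oltp_point_select --mysql-host=127.0.0.1 --mysql-user=root --mysql-db=test \
    --tables=1 --table-size=8000000 prepare
sysbench oltp_point_select --mysql-host=127.0.0.1 --mysql-user=root --mysql-db=test \
    --tables=1 --table-size=8000000 --threads=1 --time=180 run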

Results without charts

All of the data is here. Below I share QPS ratios that compare the QPS for each engine with the QPS for InnoDB from MySQL 5.6.35. The engine is slower than InnoDB 5.6.35 when the ratio is less than 1.0.

QPS ratio:
* rocks = myrocks / inno5635
* inno = inno5717 / inno5635
* toku = toku5717 / inno5635
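
As a quick illustration of the arithmetic, with made-up QPS numbers (the real per-test values are in the linked spreadsheet):

# hypothetical QPS values, for illustration only
qps_rocks=7540
qps_inno5635=10000
awk -v e="$qps_rocks" -v b="$qps_inno5635" 'BEGIN { printf "rocks = %.3f\n", e / b }'
# prints 0.754 -- less than 1.0, so MyRocks is slower than InnoDB 5.6.35 on that test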

1 connection
rocks   inno    toku
0.460   0.946   0.195   update-inlist
0.862   0.906   0.278   update-one
1.496   1.239   0.569   update-index
0.674   0.906   0.248   update-nonindex
0.756   0.921   0.240   update-nonindex-special
0.793   0.866   0.319   delete-only
0.701   1.027   0.612   read-write.range100
0.812   1.657   1.033   read-write.range10000
0.701   1.089   0.737   read-only.range100
0.804   1.676   1.281   read-only.range10000
0.675   0.904   0.731   point-query
0.508   0.923   0.732   random-points
0.554   0.904   0.633   hot-points
0.760   0.857   0.257   insert-only
-----   -----   -----
0.754   1.059   0.562   average

8 connections
rocks   inno    toku
0.968   1.587   0.293   update-inlist
1.014   0.843   0.190   update-one
1.837   2.183   0.608   update-index
0.879   1.090   0.307   update-nonindex
0.928   1.094   0.312   update-nonindex-special
0.968   1.068   0.340   delete-only
0.722   1.045   0.560   read-write.range100
0.814   1.626   1.108   read-write.range10000
0.714   1.126   0.825   read-only.range100
0.811   1.639   1.255   read-only.range10000
0.690   0.914   0.727   point-query
0.718   1.156   0.840   random-points
0.966   1.354   0.832   hot-points
0.859   1.104   0.310   insert-only
-----   -----   -----
0.921   1.274   0.608   average

48 connections
rocks   inno    toku
1.679   3.087   0.788   update-inlist
0.982   0.979   0.231   update-one
1.222   1.986   0.606   update-index
1.379   1.947   0.886   update-nonindex
1.387   1.936   0.854   update-nonindex-special
1.189   1.876   0.578   delete-only
0.826   1.148   0.514   read-write.range100
0.840   1.316   0.953   read-write.range10000
0.743   1.112   0.740   read-only.range100
0.850   1.342   1.034   read-only.range10000
0.941   1.368   1.066   point-query
2.042   1.445   0.686   random-points
0.793   1.507   0.711   hot-points
1.820   1.605   0.692   insert-only
-----   -----   -----
1.192   1.618   0.739   average

Results with charts

These charts use the data from the previous section. For some of them I truncate the x-axis to make it easier to see differences between engines.




