Monday, December 11, 2017

Sysbench: in-memory and a fast server

In this post I share results for in-memory sysbench on a fast server using MyRocks, InnoDB and TokuDB. To save time I share throughput results at low, mid and high concurrency but skip the HW efficiency metrics that I derive from vmstat and iostat output.

tl;dr - for in-memory sysbench
  • MyRocks does worse than InnoDB for most tests, sometimes a lot worse
  • MyRocks gets up to 2X more QPS for write-heavy tests with the binlog disabled. The cost from the binlog is larger for MyRocks than for InnoDB. This is an opportunity to make MyRocks better.
  • InnoDB 5.7 and 8.0 tend to do better than InnoDB 5.6 at high concurrency and worse at low concurrency. 
  • For mid concurrency, InnoDB 5.7 and 8.0 tend to do better than InnoDB 5.6 for write-heavy tests but worse for read-heavy tests, except for range queries.
  • InnoDB 5.7 and 8.0 benefit from improvements to range scan efficiency and a reduction in mutex contention. But InnoDB 5.7/8.0 have more overhead from code above the storage engine and that costs up to 20% of QPS.

Configuration

My usage of sysbench is described here. The test server has 48 HW threads, a fast SSD and 256GB of RAM. The database block cache (buffer pool) was large enough to cache all tables. Sysbench was run with 8 tables and 1M rows/table. Tests were repeated for 1, 2, 4, 8, 16, 24, 32, 40, 48 and 64 concurrent clients. At each concurrency level the read-only tests ran for 180 seconds, the write-heavy tests for 300 seconds and the insert test for 180 seconds.
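As a rough sketch (not the exact driver script I use; the link above has the details), one step of a sysbench 1.x run at one concurrency level would look something like this, with the table count, row count, thread count and duration listed above:

  sysbench oltp_point_select --mysql-host=127.0.0.1 --mysql-user=root \
    --tables=8 --table-size=1000000 prepare
  sysbench oltp_point_select --mysql-host=127.0.0.1 --mysql-user=root \
    --tables=8 --table-size=1000000 --threads=48 --time=180 run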

Tests were run for MyRocks, InnoDB from upstream MySQL, InnoDB from FB MySQL and TokuDB. The binlog was enabled, but sync on commit was disabled for both the binlog and the database log. Tests were repeated for MyRocks with the binlog disabled. All engines used jemalloc. Mostly accurate my.cnf files are here.
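A sketch of the durability settings implied above (the linked my.cnf files are the authoritative source). The binlog and InnoDB option names are standard; the MyRocks line is my assumption of the equivalent:

  # binlog enabled, sync on commit disabled for the binlog and the engine log
  log_bin
  sync_binlog=0
  innodb_flush_log_at_trx_commit=2
  # MyRocks equivalent (assumed): rocksdb_flush_log_at_trx_commit=2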
  • MyRocks was compiled on October 16 with git hash 1d0132. Compression was not used.
  • Upstream 5.6.35, 5.7.17, 8.0.1, 8.0.2 and 8.0.3 were used with InnoDB. SSL was disabled and 8.x used the same charset/collation as previous releases.
  • InnoDB from FB MySQL 5.6.35 was compiled on June 16 with git hash 52e058.
  • TokuDB was from Percona Server 5.7.17. Compression was not used.
The performance schema was enabled for upstream InnoDB and TokuDB. It was disabled at compile time for MyRocks and InnoDB from FB MySQL because FB MySQL 5.6 has user & table statistics for monitoring.
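For builds where the performance schema is removed at compile time, the usual approach is the CMake option shown below. This is a sketch of the approach, not the exact build command used for these binaries:

  cmake . -DWITH_PERFSCHEMA_STORAGE_ENGINE=0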

Results

All of the data for the tests is on github. Graphs for each test are below. The graphs show the QPS for a test relative to the QPS for InnoDB 5.6.35; a value > 1 means the engine gets more QPS than InnoDB 5.6.35. The graphs have data for tests with 1, 8 and 48 concurrent clients, which I refer to as low, mid and high concurrency. The tests are explained here and the results are in the order in which the tests are run, except where noted below. The graphs exclude results for InnoDB from FB MySQL to improve readability.
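For example, if an engine gets 12,000 QPS on a test where InnoDB 5.6.35 gets 10,000 QPS, the graphed value is 12000 / 10000 = 1.2.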

For the write heavy tests I provide results for MyRocks with the binlog enabled (MyRocks) and with it disabled (MyRocks.nobl). MyRocks gets up to 2X more write QPS with the binlog disabled. It suffers much more than InnoDB when the binlog is enabled. Work is in progress to make that better.

update-inlist

Interesting results:
  • MyRocks is lousy at low and mid concurrency
  • MyRocks gets up to 2X more QPS with the binlog disabled
  • InnoDB 5.7/8.0 are worse than InnoDB 5.6 at low concurrency and better at mid & high concurrency


update-one

Interesting results:
  • MyRocks is worse than InnoDB
  • MyRocks gets up to 2X more QPS with the binlog disabled
  • InnoDB 5.7/8.0 are worse than InnoDB 5.6 at low concurrency and slightly better at high concurrency. But overhead from new code limits the difference.

update-index

Interesting results:
  • MyRocks is better here than on other write-heavy tests relative to InnoDB because non-unique secondary index maintenance is read-free (see the sketch after this list).
  • MyRocks gets up to 2X more QPS with the binlog disabled
  • InnoDB 5.7/8.0 are similar to InnoDB 5.6 at low concurrency and better at mid/high concurrency
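To illustrate the read-free point above, the update-index statements change a column covered by a non-unique secondary index, roughly like the generic sketch below that assumes the standard sysbench schema where column k is indexed. A B-Tree must read the old secondary index entry to change it, while MyRocks can write the delete marker and the new index entry without reading the index:

  UPDATE sbtest1 SET k = k + 1 WHERE id = 1000;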

update-nonindex

Interesting results:
  • MyRocks is worse than InnoDB 5.6 except at high concurrency
  • MyRocks gets up to 2X more QPS with the binlog disabled
  • InnoDB 5.7/8.0 are worse than InnoDB 5.6 at low concurrency, similar at mid and better at high concurrency. The difference between InnoDB 5.6 and 5.7/8.0 is smaller here than for update-inlist because this test spends a larger fraction of time in the optimizer and parser.

read-write range=100

Interesting results:
  • MyRocks is worse than InnoDB
  • InnoDB 5.7/8.0 are similar to InnoDB 5.6


read-write range=10000

Interesting results:
  • MyRocks is worse than InnoDB
  • InnoDB 5.7/8.0 are better than InnoDB 5.6 because range scans were improved

read-only range=100

Interesting results:
  • MyRocks is worse than InnoDB
  • InnoDB 5.7/8.0 are similar to InnoDB 5.6. Range scan improvements offset the cost of new code.

read-only.pre range=10000

This test is run before the write heavy tests and the InnoDB B-Tree might be less fragmented as a result. Interesting results:
  • MyRocks is worse than InnoDB
  • InnoDB 5.7/8.0 are better than InnoDB 5.6 because range scans were improved and the range scan here is longer than in the previous section.

read-only range=10000

This test is run after the write heavy tests. Interesting results:
  • MyRocks is worse than InnoDB
  • InnoDB 5.7/8.0 are better than InnoDB 5.6 because range scans were improved

point-query.pre

This test is run before the write heavy tests and the InnoDB B-Tree might be less fragmented as a result. Interesting results:
  • MyRocks is worse than InnoDB except at high concurrency
  • InnoDB 5.7/8.0 are worse than InnoDB 5.6 except at high concurrency

point-query

This test is run after the write heavy tests. Interesting results:
  • MyRocks is worse than InnoDB except at high concurrency
  • InnoDB 5.7/8.0 are worse than InnoDB 5.6 except at high concurrency

random-points.pre

This test is run before the write heavy tests and the InnoDB B-Tree might be less fragmented as a result. Interesting results:
  • MyRocks is worse than InnoDB except at high concurrency
  • InnoDB 5.7/8.0 are worse than InnoDB 5.6 except at high concurrency

random-points

This test is run after the write heavy tests. Interesting results:
  • MyRocks is worse than InnoDB except at high concurrency. The gap with InnoDB is larger here than for random-points.pre. I assume that RocksDB suffers more than a B-Tree from the LSM equivalent of a fragmented search tree.
  • InnoDB 5.7/8.0 are worse than InnoDB 5.6 except at high concurrency

hot-points

The results here are similar to the results for random-points. The hot-points test is similar to random-points except there is more data contention. But because that contention is split across 8 tables it isn't significant here. It will be significant for the test that uses 1 table.


insert

Interesting results:
  • MyRocks is worse than InnoDB except at high concurrency
  • MyRocks gets up to 2X more QPS with the binlog disabled
  • InnoDB 5.7/8.0 are worse than InnoDB 5.6 at low concurrency, similar at mid and better at high concurrency.
