Wednesday, December 6, 2017

Insert benchmark: in-memory, high-concurrency, fast server - part 2

This is similar to the previous insert benchmark result for in-memory and high-concurrency, except it uses 1 table rather than 16 to determine how a storage engine behaves with more contention. The 16-table vs 1-table comparison is more interesting for the IO-bound test, where the 1-table results have more stalls.

One example of performance lost to contention is the per-index mutex in InnoDB, which is locked during pessimistic changes to the B-Tree. This has been improved over the years but the problem has not been eliminated.
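To illustrate the kind of contention I mean, below is a minimal sketch, not InnoDB code, of how a per-index mutex can serialize structural B-Tree changes. The class, method names and split heuristic are all hypothetical; the point is that with 1 table every writer contends on the same mutex, while 16 tables spread that contention across 16 mutexes.

```python
import threading

class Index:
    """Toy model of an index protected by a per-index mutex (hypothetical,
    not InnoDB internals)."""

    def __init__(self):
        self.index_mutex = threading.Lock()  # one mutex per index
        self.rows = 0

    def optimistic_insert(self, key):
        # Common case: the row fits in a leaf page, no structural change,
        # so the per-index mutex is not taken.
        self.rows += 1

    def pessimistic_insert(self, key):
        # Rare case: a page split or other structure modification.
        # All writers to this index serialize on the mutex here.
        with self.index_mutex:
            self.rows += 1

def insert(index, key):
    # A real engine tries the optimistic path first and falls back to the
    # pessimistic path; this sketch just picks it for a fraction of inserts.
    if key % 100 == 0:
        index.pessimistic_insert(key)
    else:
        index.optimistic_insert(key)
```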

Configuration

Start by reading my previous post. The test still uses 500M rows, but there is only one table here whereas the previous test used 16 tables. The load test still uses 16 concurrent clients, and the read-write test still uses 16 read clients and 16 write clients. But the scan test uses 1 client here versus 16 clients in the previous test, so the scan test takes longer to finish.
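For readers who didn't follow the previous post, the sketch below shows the shape of the load step rather than the real ibench client. The driver function, table naming and connection details are hypothetical; only the row count, table count and client count match what is described above.

```python
from concurrent.futures import ThreadPoolExecutor

# Workload shape for this post (hypothetical driver, not the real ibench client).
NUM_ROWS = 500_000_000   # same as the 16-table test
NUM_TABLES = 1           # the previous post used 16
LOAD_CLIENTS = 16        # concurrent insert clients during the load

def load_client(client_id, rows_per_client):
    # Each client inserts into table (client_id % NUM_TABLES). With
    # NUM_TABLES=1 every client writes to the same table and indexes,
    # which is what creates the extra contention measured here.
    table = f"t{client_id % NUM_TABLES}"
    for i in range(rows_per_client):
        pass  # an INSERT INTO {table} ... statement would be issued here

with ThreadPoolExecutor(max_workers=LOAD_CLIENTS) as pool:
    per_client = NUM_ROWS // LOAD_CLIENTS
    for c in range(LOAD_CLIENTS):
        pool.submit(load_client, c, per_client)
```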

Results

All of the data for the tests is here. I adjusted the iostat bytes-written metrics for MyRocks because iostat currently counts bytes trimmed as bytes written, which is an issue for RocksDB, but my adjustment is not exact.
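The adjustment is just arithmetic. A minimal sketch of the idea is below, assuming an estimate of trimmed bytes is available from another source; the function and argument names are hypothetical and the result is approximate for the same reason the adjustment above is.

```python
def adjusted_wkb_per_insert(iostat_wkb, trimmed_kb, inserts):
    """Approximate KB written per insert for MyRocks.

    iostat counts bytes trimmed as bytes written on this host, so an
    estimate of the trimmed bytes is subtracted before normalizing by the
    insert count. The estimate itself is inexact, so the result is too.
    """
    return (iostat_wkb - trimmed_kb) / inserts
```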

For most of the results below I compare rates for this test with rates for the 16-table test and skip the graphs that show HW efficiency metrics.

Load

This is interesting:
  • Some engines get more inserts/second with 16 tables - 1.12X more for MyRocks, 1.20X more for InnoDB 5.7, 1.17X more for InnoDB 8.0 and 3.26X more for TokuDB
  • InnoDB 5.6 gets more inserts/second with 1 table - 1.04X more for FB MySQL and 1.14X more for upstream



Scan

Scan results for 1 table are similar to scan results for 16 tables. MyRocks scans are ~2X slower than InnoDB scans, and InnoDB scans got faster in 5.7.


Read-write with 1000 inserts/second

The QPS for 1 table is similar to the QPS for 16 tables. I didn't mention this in the previous post, but the 16 concurrent writers should sustain ~16,000 inserts/second; if they don't, the engine has a performance problem. For this test using 1 table, the October 16 build of MyRocks didn't sustain the target write rate: its average rate was 15677 while other engines got 15842 or better, and the data is in the ips.av column here. Note that the max my ibench client code will sustain is ~15845/second rather than 16,000, and I have yet to fix that. Regardless, I will look at this the next time I run the test to understand whether MyRocks has a problem.
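The gap between 16,000 and ~15,845 is what a simple sleep-based rate limiter tends to produce: per-iteration overhead and sleep granularity push the achieved rate slightly below the target. The sketch below, with hypothetical names and a stand-in do_insert callback, illustrates that effect; it is not the ibench code.

```python
import time

def rate_limited_inserts(target_per_second, duration_seconds, do_insert):
    """Issue inserts at roughly target_per_second using sleeps between calls.

    sleep() usually oversleeps a bit and the loop bookkeeping adds time per
    iteration, so the achieved rate lands slightly below the target, e.g.
    a 16,000/second aggregate target showing up as ~15,845/second.
    """
    interval = 1.0 / target_per_second
    deadline = time.time() + duration_seconds
    done = 0
    while time.time() < deadline:
        start = time.time()
        do_insert()
        done += 1
        elapsed = time.time() - start
        if elapsed < interval:
            time.sleep(interval - elapsed)
    return done / duration_seconds

# Example: one of 16 writers targeting 1,000 inserts/second for 5 seconds.
# achieved = rate_limited_inserts(1000, 5, lambda: None)
```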


Read-write with 100 inserts/second

The QPS for 1 table is similar to the QPS for 16 tables.
