Wednesday, July 8, 2015

WiredTiger improvements from MongoDB 3.0 to 3.1

There has been progress in the 3.1 branch on performance bugs I reported for the WiredTiger B-Tree, so I repeated the insert benchmark for it and the improvements are great. The average insert and query rates for WiredTiger are almost 2X better in 3.1 versus 3.0. RocksDB still does better than WiredTiger for inserts but worse for queries.

Results

I tested 3 binaries: WiredTiger via MongoDB 3.0.x, RocksDB via MongoDB 3.0.x and WiredTiger via MongoDB 3.1.5. I use "3.0.x" because the build is from a special branch used for MongoRocks development.

The test is run with 10 loader threads and 1 query thread. The test HW has 40 hyperthread cores, PCIe flash storage and 144G of RAM. The database is much larger than RAM at test end. The insert rate is faster for RocksDB because it is write-optimized and benefits from not doing read-before-write during secondary index maintenance. It also benefits from a better cache hit rate because its database is smaller than the WiredTiger database.
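The read-before-write difference can be sketched in a few lines. This is not MongoDB or RocksDB code, just a toy model: a B-Tree engine must fetch the index leaf before changing it, while an LSM engine can append the change blindly and resolve duplicates later during compaction.

```python
# Toy model of secondary index maintenance (not real engine code).
class BTreeIndex:
    def __init__(self):
        self.entries = {}   # key -> doc id
        self.reads = 0      # leaf reads done before writing

    def insert(self, key, doc_id):
        self.reads += 1     # read-before-write: fetch the leaf first
        self.entries[key] = doc_id

class LSMIndex:
    def __init__(self):
        self.memtable = []  # appended writes, merged during compaction
        self.reads = 0

    def insert(self, key, doc_id):
        # blind write: no read needed, duplicates resolved at compaction
        self.memtable.append((key, doc_id))

btree, lsm = BTreeIndex(), LSMIndex()
for i in range(1000):
    btree.insert(i % 100, i)
    lsm.insert(i % 100, i)

print(btree.reads)  # 1000 reads before writes
print(lsm.reads)    # 0
```

With a database much larger than RAM, many of those reads-before-writes miss the cache and become storage reads, which is why the gap shows up in the insert rate.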

The queries are short range scans that fetch 10 documents from a secondary index. WiredTiger does better than RocksDB because a write-optimized algorithm like RocksDB trades read performance for write performance, and range scans pay more of that read penalty than point queries do.
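The extra work on range scans comes from merging sorted runs. A minimal sketch, with made-up data: an LSM range scan must consult every level (memtable plus sorted runs) even for a 10-document result, while a B-Tree reads one leaf. `heapq.merge` stands in for the merging iterator, and tagging entries with their level number lets the newest version of a key win during deduplication.

```python
import heapq

# Each run is sorted by key; level 0 (the memtable) is newest.
levels = [
    [(1, 'a'), (5, 'b'), (9, 'c')],    # memtable (newest)
    [(2, 'd'), (5, 'old'), (8, 'e')],  # sorted run 1
    [(3, 'f'), (7, 'g')],              # sorted run 2 (oldest)
]

def range_scan(levels, lo, n):
    # Tag each entry with its level so that, for equal keys, the
    # newest level sorts first and wins during deduplication.
    runs = ([(k, lvl, v) for k, v in run] for lvl, run in enumerate(levels))
    out, seen = [], set()
    for key, lvl, val in heapq.merge(*runs):
        if key < lo or key in seen:
            continue            # skip out-of-range keys and stale versions
        seen.add(key)
        out.append((key, val))
        if len(out) == n:
            break
    return out

print(range_scan(levels, 2, 3))  # [(2, 'd'), (3, 'f'), (5, 'b')]
```

A point query can often stop at the first level whose bloom filter or index admits the key, which is why the read penalty is smaller there.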

The results below include:
  • irate - the average rate of inserts per second
  • Nslow - the number of write operations that take at least two seconds. This is computed from the slow operations reported in the log.
  • qrate - the average rate of queries per second
  • size - the database size in GB at test end
irate   Nslow   qrate   size    server
24187   430     136     312g    RocksDB 3.0.x
20273   2409    624     442g    WiredTiger 3.1.5
10823   3996    311     416g    WiredTiger 3.0.x
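The "almost 2X" claim is easy to check from the table above:

```python
# Averages from the results table: WiredTiger 3.0.x vs 3.1.5.
wt30 = {"irate": 10823, "qrate": 311}
wt31 = {"irate": 20273, "qrate": 624}

insert_speedup = wt31["irate"] / wt30["irate"]
query_speedup = wt31["qrate"] / wt30["qrate"]

print(round(insert_speedup, 2))  # 1.87
print(round(query_speedup, 2))   # 2.01
```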

The charts display the insert and query rates for each binary.

Configuration

The hardware and configuration I use are described in a previous post. I used my fork of iibench and am glad that Tim wrote and shared the code. This command line runs iibench to load 500M documents with 10 loader threads and 1 query thread:
java -cp mongo-java-driver-2.13.1.jar:src jmongoiibench iibench 10 500000000 10 -1 10 ib.tsv quicklz 16384 60000000000 4 100000 999999 SAFE localhost 27017 1 1000 3 50 Y 1 0 1

