Monday, January 18, 2016

Faster loads for MyRocks

In my previous post I evaluated Linkbench performance for MyRocks and InnoDB, and the insert rate during the load phase was faster for InnoDB. Here I show that, with tuning, MyRocks can load as fast as InnoDB on SSD.

Tuning was required for SSD. On disk arrays the load is already much faster with MyRocks than with InnoDB, and I will publish those results soon. The largest benefit comes from setting the rocksdb_bulk_load session variable to disable checks for unique index constraints. A smaller benefit comes from using a smaller value for write_buffer_size: 32MB rather than the 128MB used in the previous test. The benefit from a smaller memtable is fewer compares per insert.
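Here is a sketch of the two changes, assuming a MyRocks build where the session variable is spelled rocksdb_bulk_load and per-column-family options like write_buffer_size are set via rocksdb_default_cf_options in my.cnf; confirm both names against your build:

    # my.cnf: 32MB memtable rather than 128MB
    rocksdb_default_cf_options=write_buffer_size=33554432

    -- run in each connection that does the load; the setting is per-session
    SET SESSION rocksdb_bulk_load=1;
    -- ... bulk INSERT statements for the load ...
    SET SESSION rocksdb_bulk_load=0;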

The number of load threads is configurable for Linkbench via the loaders variable and I have been using loaders=20. But that only sets the number of threads that load the link table. The node table in Linkbench is also large and is always loaded by a single thread. I hope to make that step multi-threaded, but until then using too many threads for the link table makes the load slower for the node table. So I repeated the load with loaders=8, as in the command below.
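The load phase looks something like this, assuming the LinkBench driver from github.com/facebook/linkbench, where -c names the properties file and -l runs the load phase. I recall that -D overrides a property from the command line, though setting loaders=8 directly in the properties file also works:

    ./bin/linkbench -c config/LinkConfigMysql.properties -l -D loaders=8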


Load Performance for MyRocks


The results below show the average load rate for many configurations. Tuned MyRocks matches or beats the load rate for InnoDB on SSD.

Best memtable size


What is the best memtable size? One approach is to minimize the total number of comparisons done while inserting into the memtable and while merging level 0 files into level 1 during compaction. However, the comparisons done during insert are in the foreground while the comparisons done during compaction are in the background. It is possible that the background comparisons have no impact on latency as long as compaction can keep up with the insert rate. A small memtable reduces the number of compares done in the foreground, which is why the best insert rate occurs with the smaller memtables.
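Some rough arithmetic shows how much the foreground cost shrinks. Assuming entries of about 100 bytes in the memtable and about log2(N) compares per skiplist insert, where both numbers are my assumptions rather than measurements from this test, the compares per insert grow slowly with memtable size:

    # back-of-envelope only: entry size and cost model are assumptions
    for mb in 1 2 32 128; do
      awk -v mb="$mb" 'BEGIN {
        n = mb * 1024 * 1024 / 100;   # entries that fit in the memtable
        printf "%3dMB: ~%.0f entries, ~%.0f compares/insert\n", mb, n, log(n)/log(2)
      }'
    done

By this estimate an insert into a 1MB memtable does about 13 compares versus about 20 for a 128MB memtable, and all of those compares are on the foreground path.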

Below I show the insert rate as a function of the memtable size. I used db_bench, the RocksDB benchmark client, and configured RocksDB to maximize the impact of memtable insert latency: compaction disabled, WAL disabled, a write batch of size 8, and a single thread doing inserts. The insert rate declines as the memtable size is increased from 1MB to 128MB. The rate for a 2MB memtable is better than for 1MB because when the memtable is too small there are stalls while switching memtables. The data for the graph is here.
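That configuration can be approximated with db_bench flags like these. Flag spellings vary across RocksDB versions and the key count here is made up, so treat this as a sketch rather than the exact command:

    ./db_bench --benchmarks=fillrandom --num=10000000 \
      --threads=1 --batch_size=8 --disable_wal=1 \
      --disable_auto_compactions=1 \
      --write_buffer_size=$(( 1 * 1024 * 1024 ))   # repeat for 1MB ... 128MB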

