Wednesday, August 5, 2015

Insert benchmark & queries for MongoDB on fast storage

This extends the performance evaluation using the insert benchmark on a server with fast storage. After the load, extra tests were run using different numbers of insert and query threads. The load inserted 2B documents with a 256 byte pad field. The databases are at least 500G and the test server has 144G of RAM and 40 hyperthread cores. My conclusions are:
  • WiredTiger has the best range query performance, followed closely by RocksDB. While the LSM used by RocksDB can suffer a penalty on range reads, that wasn't visible for this workload.
  • mmapv1 did OK on range query performance when there were no concurrent inserts. Adding concurrent inserts reduces range query QPS by about 4X. Holding exclusive locks while doing disk reads for index maintenance is the problem; see SERVER-13225.
  • TokuMX 2.0.1 has a CPU bottleneck that hurts range query performance. I await advice from TokuMX experts to determine whether tuning can fix this. I have been using a configuration provided by experts.

Test details

More details on the configuration for mongod are in the previous post. The tests here were run after the database was loaded with 2B documents that used a 256 byte pad field. After the initial load the database was ~620G, ~630G, ~1400G and ~500G for TokuMX, WiredTiger, mmapv1 and RocksDB. I used my fork of iibench for MongoDB (thanks Tim) to support more than 1 query thread. My fork has an important diff to avoid reusing the same RNG seed between tests. The range query fetches at most 4 documents via the QUERY_LIMIT=4 command line option; a sketch of the workload shape follows the list below. Tests were run in this sequence:
  • 10i.1q - 10 insert threads, 1 query thread. Run to insert 10M documents for mmapv1 and 100M documents for the other engines
  • 1i.10q - 1 insert thread, 10 query threads. The insert thread was rate limited to 1000 documents/second. Run for at least 1 hour.
  • 0i.10q - 0 insert threads, 10 query threads. Run for at least 1 hour.
  • 10i.0q - 10 insert threads, 0 query threads. Run to insert at least 10M documents.
  • 1i.10q - this is the same as the previous 1i.10q test except I increased the value of internalQueryCacheWriteOpsBetweenFlush to avoid the IO overhead from re-planning queries too frequently as explained in a previous post. Alas, this cannot be done for TokuMX 2.0.1 because that option doesn't exist in the older version of MongoDB it uses.
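
Below is a minimal sketch of the workload shape in Python, not the actual iibench client: one function does batched inserts of documents with a 256 byte pad field, and one runs the short range query that returns at most 4 documents (like QUERY_LIMIT=4). The collection, field names and index are my assumptions for illustration, not the real iibench schema.

# Sketch of the iibench-style workload, assuming pymongo and a local mongod.
# Collection, field names and index are hypothetical.
import os
import random

from pymongo import ASCENDING, MongoClient

client = MongoClient("localhost", 27017)
coll = client.iibench.purchases
coll.create_index([("price", ASCENDING), ("customerid", ASCENDING)])

def insert_batch(n=1000):
    # each document carries a 256 byte pad field plus indexed fields
    docs = [{"price": random.randint(0, 1000000),
             "customerid": random.randint(0, 100000),
             "pad": os.urandom(128).hex()}      # 256 bytes of hex text
            for _ in range(n)]
    coll.insert_many(docs)

def range_query(limit=4):
    # short range scan on the secondary index, like QUERY_LIMIT=4
    start = random.randint(0, 1000000)
    cursor = coll.find({"price": {"$gte": start}}).sort("price", ASCENDING).limit(limit)
    return list(cursor)

insert_batch()
print(len(range_query()))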

Results

This is the QPS from the 10 query threads with and without a concurrent insert thread. The result for 1i.10q is from the run where I increased the value of internalQueryCacheWriteOpsBetweenFlush to get better performance, although the overhead from frequent re-planning is much more significant with a disk array than with PCIe flash. A sketch of how that parameter can be set follows the list below. There are a few things to note here:
  • WiredTiger and RocksDB have similar QPS
  • QPS for mmapv1 is much worse with a concurrent insert thread (locks are held when disk reads are done for index maintenance, not yielded)
  • QPS for TokuMX is low. There is a CPU bottleneck. More details at the end of this post.
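
For reference, this is a minimal sketch of changing that parameter, assuming pymongo and that the parameter is settable at runtime on this build; it can also be passed at mongod startup via --setParameter. The value shown is only an example, not a recommendation.

# Sketch: raise internalQueryCacheWriteOpsBetweenFlush so the plan cache is
# flushed (and queries re-planned) less often. Example value only.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
client.admin.command("setParameter", 1,
                     internalQueryCacheWriteOpsBetweenFlush=100000)

# Startup-time equivalent:
#   mongod --setParameter internalQueryCacheWriteOpsBetweenFlush=100000 ...
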
The tables that follow have results for each of the tests; a short sketch after this list shows one way the p95 rates can be computed. The results include:
  • ipsAvg - average rate for inserts/second
  • ipsP95 - rate at 95th percentile for inserts/second using 10-second intervals
  • qpsAvg - average rate for range queries/second
  • qpsP95 - rate at 95th percentile for queries/second using 10-second intervals
  • dbSize - database size in GB at test end
  • rssSize - RSS in GB for mongod at test end
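
This is a small sketch of one way to compute the p95 rate from the per-interval data, assuming "p95" means the rate that 95% of the 10-second intervals met or exceeded (consistent with p95 being below the average in the tables below):

# Average and p95 rates from per-interval operation counts. "p95" is taken to
# mean the rate that 95% of the 10-second intervals met or exceeded.
def rates(counts_per_interval, interval_secs=10):
    per_sec = sorted(c / interval_secs for c in counts_per_interval)
    avg = sum(per_sec) / len(per_sec)
    # value 5% of the way up from the slowest interval
    p95 = per_sec[int(0.05 * (len(per_sec) - 1))]
    return avg, p95

# made-up interval counts, just to show the calculation
print(rates([61750, 14880, 60000, 58000]))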

10i.1q

These are results for the 10i.1q test. WiredTiger has significant variance for the insert and query rates. Bugs have been opened for this problem and progress has been made but I think some bugs are still open. The query rate for mmapv1 is very low as described earlier in this blog post.

             ipsAvg   ipsP95   qpsAvg   qpsP95   dbSize   rssSize
tokumx        40775    36001    119.5    101.6    622gb      36gb
wiredtiger     6175     1488       50     14.8    632gb      37gb
mmapv1         1257     1094     1.15      0.7   14XXgb      70gb
rocksdb       29424    25931     45.1       30    507gb      54gb

1i.10q, first

These are results for the first 1i.10q test. The insert thread is rate limited to 1000 documents/second. WiredTiger and RocksDB sustain higher query rates. TokuMX (because of the CPU bottleneck) and mmapv1 (because of the larger database and RW lock) sustain lower query rates.

             ipsAvg   ipsP95   qpsAvg   qpsP95   dbSize   rssSize
tokumx          981      900      726      696    553gb      39gb
wiredtiger      980      908     7414     6505    632gb      32gb
mmapv1          951      874      766      650   14XXgb      66gb
rocksdb         981      969     6720     6189    501gb      53gb

0i.10q

These are results for the 0i.10q test. WiredTiger and RocksDB get about 10% more QPS compared to the 1i.10q result. The QPS for mmapv1 improves by ~4X because it isn't slowed by the RW lock from the insert thread. TokuMX continues to suffer from the CPU bottleneck.

             ipsAvg   ipsP95   qpsAvg   qpsP95   dbSize   rssSize
tokumx            0        0      850      819    552gb      39gb
wiredtiger        0        0     8750     8568    632gb      31gb
mmapv1            0        0     3603     3362   14XXgb      62gb
rocksdb           0        0     7723     7667    501gb      52gb

I was too lazy to copy this into a table. This has results from iostat and vmstat, both absolute and normalized by the operation (query or insert) rate; a small sketch after the tables below shows how the normalized values are derived. The values are:
  • qps, ips - operation rate (queries or inserts)
  • r/s - average rate for r/s (storage reads/second)
  • rmb/s - average rate for read MB/second 
  • wmb/s - average rate for write MB/second
  • r/o - storage reads per operation
  • rkb/o - KB of storage reads per operation
  • wkb/o - KB of storage writes per operation
  • cs - average context switch rate
  • us - average user CPU utilization
  • sy - average system CPU utilization
  • us+sy - sum of us and sy
  • cs/o - context switches per operation
  • (us+sy)/o - CPU utilization divided by operation rate
WiredTiger and RocksDB sustain the highest QPS because they do the fewest storage reads per query. The range read penalty from an LSM doesn't occur for this workload with RocksDB. The CPU/operation value is more than 25X larger for TokuMX (0.085000) than for the other engines (0.003104 for RocksDB). TokuMX also does more than 3X the storage reads per query compared to RocksDB. I hope we can figure this out. The storage reads per query rate is much higher for mmapv1 in part because the database is much larger and the cache hit rate is lower.

         qps     r/s    rmb/s   wmb/s    r/o    rkb/o   wkb/o
toku     850     2122    32.0   1.4      2.50    38.6   1.7
wired   8750     4756    47.9   0.6      0.54     5.6   0.1
mmap    3603    38748   577.2   0.6     10.75   164.0   0.2
rocks   7723     5755    47.2   0.5      0.75     6.3   0.1

         qps      cs    us      sy      us+sy   cs/o    (us+sy)/o
toku     850     18644  67.1    5.1      72.2    22      0.085000
wired   8750     92629  24.3    3.0      27.3    11      0.003121
mmap    3603    255160   6.8    4.9      11.8    71      0.003264
rocks   7723     75393  21.2    2.8      24.0    10      0.003104
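
As a sanity check, this sketch recomputes the normalized columns from the absolute 0i.10q numbers above; the results match the tables up to rounding.

# Recompute per-operation values for the 0i.10q test:
#   r/o = r/s / qps, rkb/o = rmb/s * 1024 / qps,
#   cs/o = cs / qps, (us+sy)/o = (us+sy) / qps
rows = {
    # engine: (qps, r/s, rmb/s, cs, us+sy)
    "toku":  ( 850,  2122,  32.0,  18644, 72.2),
    "wired": (8750,  4756,  47.9,  92629, 27.3),
    "mmap":  (3603, 38748, 577.2, 255160, 11.8),
    "rocks": (7723,  5755,  47.2,  75393, 24.0),
}
for name, (qps, rps, rmbps, cs, cpu) in rows.items():
    print(f"{name:6s} r/o={rps / qps:5.2f}  rkb/o={rmbps * 1024 / qps:6.1f}  "
          f"cs/o={cs / qps:3.0f}  (us+sy)/o={cpu / qps:.6f}")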

10i.0q

These are results for the 10i.0q test. The insert rate improvement versus 10i.1q is larger for TokuMX than for RocksDB. I assume that TokuMX suffers more than RocksDB from concurrent readers. WiredTiger still has too much variance in the insert rate.

             ipsAvg   ipsP95   qpsAvg   qpsP95   dbSize   rssSize
tokumx        43667    39152        0        0    653gb      36gb
wiredtiger     5357      584        0        0    658gb      40gb
mmapv1         1254     1163        0        0   14XXgb      62gb
rocksdb       29842    25762        0        0    533gb      54gb

The CPU utilization and storage IO per insert are lower for RocksDB & TokuMX. This is expected given they are write-optimized and benefit from not having to read secondary index pages during index maintenance; a toy sketch of that difference follows the tables below. mmapv1 suffers from doing more disk reads and using more CPU.

        ips       r/s   rmb/s   wmb/s   r/o     rkb/o   wkb/o
toku    43667     587    23.7    96.1   0.01      0.6    2.3
wt       5357    9794   109.9   187.2   1.83     21.0   35.8
mmap     1254   10550   235.7    31.7   8.41    192.5   25.9
rocks   29842    1308    31.1   281.9   0.04      1.1    9.7

        ips     cs      us      sy      us+sy   cs/i    (us+sy)/i
toku    43667   189501  31.7    4.6     36.3     4       0.000830
wt       5357   105640  16.3    4.3     20.6    20       0.003840
mmap     1254    93985   1.3    2.0      3.3    75       0.002665
rocks   29842   189706  22.3    3.6     25.9     6       0.000867
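
To make the point about index maintenance concrete, here is a toy model, not any engine's real code: an update-in-place B-tree must read the secondary index leaf page before modifying it, while a write-optimized engine can blind-write the index entry and defer the merge, so the insert itself needs no index reads.

# Toy model of secondary index maintenance cost, not real engine code.
import random

class BTreeSecondaryIndex:
    # update-in-place: the target leaf page must be read before it is changed
    def __init__(self, num_pages, cache_slots):
        self.num_pages = num_pages
        self.cache_slots = cache_slots
        self.cached = set()
        self.storage_reads = 0

    def insert(self, key):
        page = hash(key) % self.num_pages       # leaf page the key lands on
        if page not in self.cached:
            self.storage_reads += 1             # cache miss -> storage read
            if len(self.cached) >= self.cache_slots:
                self.cached.pop()               # evict an arbitrary page
            self.cached.add(page)

class WriteOptimizedSecondaryIndex:
    # LSM / fractal-tree style: blind write now, merge later, no read needed
    def __init__(self):
        self.buffered = []
        self.storage_reads = 0

    def insert(self, key):
        self.buffered.append(key)

btree = BTreeSecondaryIndex(num_pages=1000000, cache_slots=10000)
wo = WriteOptimizedSecondaryIndex()
for _ in range(100000):
    key = random.random()
    btree.insert(key)
    wo.insert(key)
print(btree.storage_reads, wo.storage_reads)    # many reads vs zero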

1i.10q, second

These are results for the second 1i.10q test. The QPS rates for RocksDB, WiredTiger and mmapv1 are better than in the first 1i.10q test because mongod was configured to cache query plans for a longer time via internalQueryCacheWriteOpsBetweenFlush.

             ipsAvg   ipsP95   qpsAvg   qpsP95   dbSize   rssSize
tokumx          987      907      858      829    527gb      37gb
wiredtiger      977      899     7830     6821    658gb      32gb
mmapv1          949      864      869      727   14XXgb      56gb
rocksdb         991      937     7018     6669    524gb      53gb

TokuMX CPU bottleneck, test 1

I want to understand why TokuMX uses more than 25X the CPU per query compared to other engines. The high CPU load comes from eviction (flushing dirty data so block cache memory can be used for disk reads) but it isn't clear to me why eviction should be so much more expensive in TokuMX than in WiredTiger, which also does eviction. RocksDB doesn't show this overhead but it does compaction instead of eviction.

I reloaded the TokuMX database with 2B documents, let it sit idle for 1 day and then started the 0i.10q test. From a flat profile with Linux perf the top-10 functions are listed below. Then I ran PMP and the stack traces show that eviction is in progress, which explains why QuickLZ compression uses a lot of CPU: data must be compressed during eviction. The stack traces also show that the query threads are stalled waiting for eviction to flush dirty pages before they can use memory for data read from storage; a toy sketch of that stall follows the profile below. This is a common place where a database engine can stall (InnoDB and WiredTiger have stalled there, maybe they still do) but the stalls are worse in TokuMX.

I will let the 0i.10q test run until writes stop and see if the query rate increases at that point.

45.46%  mongod   libtokufractaltree.so  [.] qlz_compress_core
19.10%  mongod   libtokufractaltree.so  [.] toku_decompress 
 7.19%  mongod   libc-2.12.so           [.] memcpy
 1.71%  mongod   libtokufractaltree.so  [.] toku_x1764_memory
 1.43%  mongod   [kernel.kallsyms]      [k] clear_page_c_e
 1.28%  mongod   [kernel.kallsyms]      [k] copy_user_enhanced_fast_string
 1.26%  mongod   mongod                 [.] free
 1.11%  mongod   libtokufractaltree.so  [.] toku_ftnode_pe_callback 
 0.96%  mongod   libtokufractaltree.so  [.] deserialize_ftnode_partition 
 0.83%  mongod   mongod                 [.] malloc
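
To illustrate the stall described above, here is a toy model, not TokuMX code: when the block cache is full of dirty pages, a query thread that misses the cache must first wait for a dirty page to be compressed and written back before its memory can be reused for the page read from storage.

# Toy model of the eviction stall, not TokuMX code. zlib stands in for QuickLZ.
import os
import random
import zlib

PAGE_BYTES = 64 * 1024

class BlockCache:
    def __init__(self, slots):
        # start with every slot dirty, as after a heavy insert load
        self.pages = [{"dirty": True, "data": os.urandom(PAGE_BYTES)}
                      for _ in range(slots)]

    def evict_one(self):
        slot = random.randrange(len(self.pages))
        page = self.pages[slot]
        if page["dirty"]:
            zlib.compress(page["data"])   # dirty data is compressed, then written
        return slot

    def read_into_cache(self, compressed_page):
        slot = self.evict_one()           # the query thread stalls here
        self.pages[slot] = {"dirty": False,
                            "data": zlib.decompress(compressed_page)}

cache = BlockCache(slots=64)
on_disk = zlib.compress(os.urandom(PAGE_BYTES))
for _ in range(100):
    cache.read_into_cache(on_disk)        # every miss may pay for an eviction first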

Update 1

I ran the 0i.10q test overnight and after 20 hours QPS had increased from 1074 to 1343. The CPU bottleneck from the partial eviction code remains, and partial eviction and waits for it are still frequent per PMP stack traces. The top-10 functions by CPU used are:

36.41%  mongod   libtokufractaltree.so  [.] toku_decompress
12.70%  mongod   libc-2.12.so           [.] memcpy
 5.58%  mongod   libtokufractaltree.so  [.] qlz_compress_core
 3.40%  mongod   [kernel.kallsyms]      [k] clear_page_c_e
 3.11%  mongod   libtokufractaltree.so  [.] toku_x1764_memory
 2.72%  mongod   [kernel.kallsyms]      [k] copy_user_enhanced_fast_string
 1.41%  mongod   libtokufractaltree.so  [.] bn_data::deserialize_from_rbuf
 1.40%  mongod   libtokufractaltree.so  [.] toku_ftnode_pe_callback
 1.28%  mongod   mongod                 [.] free
 0.87%  mongod   mongod                 [.] arena_chunk_dirty_remove
