The key result is that with MyRocks you need less SSD and it will last longer. Does the better read, write and space efficiency of MyRocks compared to InnoDB invalidate the RUM Conjecture?
tl;dr:
- MyRocks and InnoDB have similar load rates
- MyRocks has the best transaction rates
- InnoDB writes ~18X more to storage per transaction than MyRocks
- InnoDB uses ~3X more space than compressed MyRocks and ~1.5X more space than uncompressed MyRocks.
- p99 response times are more than 2X better for MyRocks than for InnoDB
Configuration
I used my Linkbench repo and helper scripts to run Linkbench with maxid1=1B, loaders=4 and requestors=16, so there were ~5 concurrent connections during the load and 16 concurrent connections during transactions. My Linkbench repo has a recent commit that changes the Linkbench workload, and this test included that commit. The test pattern is 1) load and 2) transactions. The transactions were run in 12 1-hour loops and I share results from the last hour. The test server has 48 HW threads, fast SSD and 256gb of RAM. Only 50gb of RAM was available for the database cache and OS page cache, and the database was much larger than 50gb. This post has more details on my usage of Linkbench.
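The two phases above can be sketched with stock LinkBench's command line. This is a rough recipe, not the author's helper scripts: the -c/-l/-r flags and the -D property overrides (maxid1, loaders, requesters, maxtime) come from the LinkBench README, and the config file name is a placeholder.

```shell
# Phase 1: load with maxid1=1B and 4 loader threads (sketch; the
# config file path is a placeholder, not from this post).
./bin/linkbench -c config/LinkConfigMysql.properties -l \
    -Dmaxid1=1000000001 -Dloaders=4

# Phase 2: 12 one-hour transaction loops with 16 request threads;
# results are reported from the last loop.
for i in $(seq 1 12); do
  ./bin/linkbench -c config/LinkConfigMysql.properties -r \
      -Drequesters=16 -Dmaxtime=3600
done
```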
Tests were run for MyRocks, InnoDB from upstream MySQL, InnoDB from FB MySQL and TokuDB. The binlog was enabled but sync on commit was disabled for the binlog and database log. All engines used jemalloc. Mostly accurate my.cnf files are here.
- MyRocks was compiled on October 16 with git hash 1d0132. Tests were done without and with compression. The test with compression used zstandard for the max level, none for L0/L1/L2 and then lz4 for the remaining levels.
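The per-level compression setup described above might look as follows in my.cnf. This is a sketch using the RocksDB compression_per_level / bottommost_compression option names; the exact string should be checked against the my.cnf files linked in this post.

```ini
# Assumed sketch: no compression for L0/L1/L2, lz4 for the middle
# levels, zstandard for the max level. Level count is illustrative.
rocksdb_default_cf_options="compression_per_level=kNoCompression:kNoCompression:kNoCompression:kLZ4Compression:kLZ4Compression:kLZ4Compression:kLZ4Compression;bottommost_compression=kZSTD"
```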
- Upstream 5.6.35, 5.7.17, 8.0.1, 8.0.2 and 8.0.3 were used with InnoDB. SSL was disabled and 8.x used the same charset/collation as previous releases.
- InnoDB from FB MySQL 5.6.35 was compiled on June 16 with git hash 52e058. The results for it aren't interesting here but will be interesting for IO-bound linkbench.
- TokuDB was from Percona Server 5.7.17. Tests were done without compression and then with zlib.
The performance schema was enabled for upstream InnoDB and TokuDB. It was disabled at compile time for MyRocks and InnoDB from FB MySQL because FB MySQL 5.6 has user & table statistics for monitoring.
Graphs
The next two graphs show the load and transaction rates relative to the rates for InnoDB from upstream MySQL 5.6.35.
- MyRocks has the best transaction rates
- MyRocks and InnoDB have similar load rates
- Transaction throughput for InnoDB increases from 5.6.35 to 8.0.3
- InnoDB from FB MySQL 5.6.35 does ~1.2X more transactions/second than InnoDB from upstream MySQL 5.6.35. I am not sure which change explains that.
InnoDB uses ~1.5X more space than uncompressed MyRocks and ~3X more space than MyRocks with zstandard compression.
Load Results
All of the data is here. I adjusted iostat metrics for MyRocks because iostat currently counts bytes trimmed as bytes written, which is an issue for RocksDB; my adjustment is not exact. The database sizes are rounded to the nearest 100gb for sizes above 1tb -- I need to fix my scripts. The table below has a subset of the results.
- Insert rates are similar for MyRocks and InnoDB
- Write efficiency (wkb/i) is similar for MyRocks and InnoDB
- CPU efficiency (Mcpu/i) is similar for MyRocks and InnoDB
- Space efficiency is better for MyRocks. InnoDB uses ~3X more space than compressed MyRocks and ~1.5X more space than uncompressed MyRocks.
ips wkb/i Mcpu/i size rss wMB/s cpu engine
135580 1.01 86 948 7.6 136.9 11.7 MyRocks.none
136946 1.28 95 440 5.0 175.5 13.0 MyRocks.zstd
133385 1.00 79 13xx 38.0 133.2 10.6 FbInno.5635
130206 1.04 79 15xx 42.7 135.5 10.2 Inno.5635
134997 1.04 84 15xx 39.4 140.6 11.4 Inno.5717
132452 1.04 86 15xx 39.5 138.1 11.4 Inno.801
121436 1.05 99 15xx 39.5 127.7 12.0 Inno.802
127339 1.05 96 15xx 39.5 133.8 12.2 Inno.803
37599 1.61 228 12xx 11.7 60.4 8.6 Toku.none
37413 1.17 241 443 12.3 43.7 9.0 Toku.zlib
legend:
* ips - inserts/second
* wkb/i - iostat KB written per insert
* Mcpu/i - normalized CPU time per insert
* size - database size in GB at test end
* rss - mysqld RSS at load end
* wMB/s - iostat write MB/s, average
* cpu - average value of vmstat us + sy columns
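The derived columns in the load table can be reproduced from the raw rates. A minimal sketch using the MyRocks.none row; the formulas are my reading of the legend (decimal KB/MB, Mcpu/i as cpu percent scaled by 1M over the insert rate), so treat them as assumptions:

```python
# Raw values from the MyRocks.none row of the load table.
ips = 135580      # inserts/second
wMBps = 136.9     # iostat write MB/s (decimal MB assumed)
cpu = 11.7        # vmstat us + sy, percent

# wkb/i: iostat KB written per insert (decimal KB assumed).
wkb_per_i = wMBps * 1000 / ips

# Mcpu/i: normalized CPU per insert, assumed to be
# (cpu percent * 1e6) / inserts per second.
Mcpu_per_i = cpu * 1_000_000 / ips

print(round(wkb_per_i, 2))   # ~1.01, matching the table
print(round(Mcpu_per_i))     # ~86, matching the table
```

The same arithmetic reproduces the other rows to within rounding, which suggests the wMB/s and cpu columns are whole-run averages.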
Transaction Results
These are results from the 12th 1-hour loop of the transaction test. All of the data is here. I adjusted iostat metrics for MyRocks because iostat currently counts bytes trimmed as bytes written, which is an issue for RocksDB; my adjustment is not exact. The database sizes are rounded to the nearest 100gb for sizes above 1tb -- I need to fix my scripts.
The data is split into two tables because there are too many columns. The first table has throughput, response time and CPU metrics. The second table has hardware efficiency metrics.
- MyRocks has the best transaction rates but InnoDB is close with 5.7 and 8.x
- Write efficiency (wkb/t) is better for MyRocks. InnoDB writes ~18X more to storage.
- CPU efficiency (Mcpu/t) is similar for MyRocks and InnoDB
- p99 response times are more than 2X better for MyRocks than for InnoDB
- Space efficiency is better for MyRocks. InnoDB uses ~3X more space than compressed MyRocks and ~1.5X more space than uncompressed MyRocks.
tps un gn ul gl Mcpu/t cpu engine
35380 0.6 0.5 1 0.9 641 22.7 MyRocks.none
35293 0.7 0.5 1 0.9 740 26.1 MyRocks.zstd
31119 2 2 6 3 572 17.8 FbInno.5635
25501 3 2 6 4 702 17.9 Inno.5635
34406 2 2 3 2 615 21.2 Inno.5717
34055 2 2 3 2 627 21.4 Inno.801
33672 2 2 3 2 648 21.8 Inno.802
33849 2 2 3 2 645 21.8 Inno.803
9055 4 3 13 11 2399 21.7 Toku.5717.none
12242 3 2 7 3 2442 29.9 Toku.5717.zlib
legend:
* tps - transactions/second
* un, gn, ul, gl - 99th percentile response time in millisecs for
UpdateNode, GetNode, UpdateLink and GetLinkedList transactions
* Mcpu/t - normalized CPU time per transaction
* cpu - average CPU utilization from vmstat sy and us columns
r/t rkb/t wkb/t size r/s rMB/s wMB/s engine
1.16 22.42 0.63 998 40848 793.1 22.1 MyRocks.none
1.08 13.22 0.65 469 38214 466.4 22.9 MyRocks.zstd
1.15 18.41 11.86 15xx 35798 572.8 369.2 FbInno.5635
1.16 18.52 12.06 16xx 29523 472.4 307.4 Inno.5635
1.15 18.39 11.90 17xx 39541 632.7 409.5 Inno.5717
1.14 18.21 11.92 17xx 38768 620.3 405.8 Inno.801
1.15 18.39 11.97 17xx 38703 619.2 403.2 Inno.802
1.14 18.23 11.96 17xx 38560 617.0 405.0 Inno.803
3.01 181.79 5.38 12xx 27273 1646.1 48.7 Toku.5717.none
1.20 17.59 2.81 469 14712 215.3 34.4 Toku.5717.zlib
legend:
* r/t - iostat reads/transaction
* rkb/t, wkb/t - iostat KB read and KB written per transaction
* size - database size in GB at test end
* r/s - average iostat reads/second
* rMB/s, wMB/s - iostat read and write MB/s
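The ~18X write-amplification claim in the summary can be checked directly from the wkb/t column. A quick sketch using the wkb/t values from the Inno.5717 and MyRocks.none rows above:

```python
# wkb/t (iostat KB written per transaction) copied from the
# efficiency table for uncompressed MyRocks and InnoDB 5.7.17.
myrocks_wkb_t = 0.63
innodb_wkb_t = 11.90

ratio = innodb_wkb_t / myrocks_wkb_t
print(round(ratio, 1))   # ~18.9, i.e. the ~18X in the summary
```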