Wednesday, November 23, 2016

MyRocks: use less IO on writes to have more IO for reads

The holiday is almost here and I wrote a long blog post on write efficiency yesterday, so this one will be short. A longer version is in progress because this is an interesting result for me to explain. We assume that an LSM is less efficient for reads because it is more efficient for writes, and that it is hard to be optimal for all of read, write and space efficiency.

For real workloads it is more complicated, and for now I include benchmarks under "real workloads". Here is one interesting result from my IO-bound tests of Linkbench. The summary is that when you spend less IO writing back changes, you have more IO for handling user queries. That benefit is more apparent on slower storage (disk array) than on faster storage (MLC NAND flash) because slower storage is more likely to be the bottleneck.

IO-bound Linkbench means that I used a server with 50G of RAM and ran Linkbench with maxid1=1B (1B nodes). The MyRocks database was ~400G and the InnoDB database was ~1.6T. Both MyRocks and InnoDB used MySQL 5.6.26. The workload is IO-heavy and the database working set is not cached.

The interesting result is that the difference between MyRocks and InnoDB becomes larger as storage gets slower. Another way to describe this is that InnoDB loses more performance than MyRocks when moving from faster to slower storage. I assume this is because MyRocks uses less IO capacity for writing back database changes so it has more IO capacity for handling user queries.


                Transactions per second
                MyRocks InnoDB  MyRocks/InnoDB
Disk array      2195    414     5.3
Slow SSD        23484   10143   2.3
Fast SSD        28965   21414   1.4

The random operations per second provided by the storage devices above are approximately 1k for the disk array, 10k for the slow SSD and more than 100k for the fast SSD.
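
To make that concrete, here is a small Python sketch (numbers copied from the table above) that computes the MyRocks/InnoDB ratio per device and the fraction of fast-SSD throughput each engine retains on the disk array.

    # TPS from the table above as (MyRocks, InnoDB) per storage device
    tps = {
        "disk array": (2195, 414),
        "slow SSD": (23484, 10143),
        "fast SSD": (28965, 21414),
    }

    for device, (myrocks, innodb) in tps.items():
        print("%-10s MyRocks/InnoDB = %.1f" % (device, myrocks / innodb))

    # Fraction of fast-SSD throughput retained on the disk array
    for i, engine in enumerate(("MyRocks", "InnoDB")):
        retained = 100.0 * tps["disk array"][i] / tps["fast SSD"][i]
        print("%s retains %.1f%% of its fast-SSD TPS on the disk array" % (engine, retained))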

Tuesday, November 22, 2016

Why is MyRocks more write-efficient than InnoDB?

This year I shared results where InnoDB wrote between 10X and 20X more data to storage than MyRocks for the same workload. I use KB written to storage per transaction as a measure of write efficiency and I usually compute this with data from the benchmark client and iostat. I get KB written/second from iostat, average transaction/second from the benchmark client and divide the former by the latter to compute KB written/transaction. When using SSD this excludes the writes done by SSD firmware and I previously reported that the overhead was worse for InnoDB than for RocksDB on one vendor's device.
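
As a sketch of that arithmetic (the numbers below are hypothetical, not from a benchmark): divide the average wKB/s reported by iostat over the measurement interval by the average transactions/second reported by the benchmark client.

    def kb_written_per_transaction(iostat_wkb_per_sec, transactions_per_sec):
        # Write efficiency: KB written to storage per transaction. Lower is better.
        # Excludes any extra writes done inside the SSD firmware.
        return iostat_wkb_per_sec / transactions_per_sec

    # Hypothetical values: 50,000 wKB/s from iostat, 2,500 TPS from the client
    print(kb_written_per_transaction(50000.0, 2500.0))  # 20.0 KB/transaction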

An engine that writes less to storage per transaction is more write efficient. It is a good thing if MyRocks writes 10X less to storage than InnoDB for the same workload. This might enable MyRocks to use lower-endurance SSD for workloads where InnoDB required higher-endurance SSD. This might enable MyRocks to use SSD for workloads in which the device would not last with InnoDB. This also means that MyRocks needs less overprovisioning on the SSD, which is another way of saying you get more capacity from the device.
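
As a back-of-the-envelope illustration of what that means for device lifetime, the sketch below uses standard rated-endurance (TBW) math with hypothetical numbers; it is not from my tests.

    # Back-of-the-envelope SSD lifetime from rated endurance and write rate.
    # All numbers here are hypothetical.
    rated_endurance_tb = 1000.0        # device endurance rating in TB written
    tps = 2500.0                       # sustained transactions/second
    kb_per_txn = {"InnoDB": 20.0, "MyRocks": 2.0}   # ~10X less for MyRocks

    for engine, kb in kb_per_txn.items():
        tb_per_day = kb * tps * 86400 / 1e9
        years = rated_endurance_tb / tb_per_day / 365
        print("%s: %.1f TB/day, device lasts ~%.1f years" % (engine, tb_per_day, years))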

This is an update on results I previously shared.

Why
There are a few reasons why MyRocks is more write-efficient than InnoDB:
  1. Doublewrite buffer
  2. Configuration
  3. Page size

Doublewrite
The InnoDB doublewrite buffer doubles the storage write rate. It does this for a good reason -- to protect against partial page writes. Perhaps one day SSD vendors will agree on an atomic-write solution that works across vendors and with popular file systems on Linux. Perhaps one day crash safe RAM will be a common thing in data centers. Perhaps one day we will have a copy-on-write filesystem that is widely used for InnoDB on Linux. Until then we are stuck with 2X write-amplification from the doublewrite buffer.

Configuration
I might be overstating this to make a point. If you have fast storage (NAND flash) and the database working set fits in RAM then you have too much RAM. If you have fast storage then configure the database to use it. Or keep the working set in RAM and use a disk array instead of NAND flash. But if you have an in-memory workload and a database engine that does random IO (an update-in-place b-tree) then you still need IOPS capacity, or you should switch to a proper in-memory database engine like Tarantool.

Most of the servers that I care about are set up so that the database working set isn't in RAM, and I configure benchmarks like Linkbench in the same manner. I am not promising that MyRocks will write 10X less to storage than InnoDB for all use cases -- workload and configuration matter. But it tends to be better.

Page size
InnoDB page size has a big impact on write efficiency when the working set isn't cached because dirty b-tree pages are evicted from the tail of the LRU earlier, and when evicted they must be written back to storage (twice with InnoDB thanks to the doublewrite buffer). In the worst case a page is written back with only one dirty row and the write-amplification in that case is sizeof(page) / sizeof(row). In the best case all rows on the page are dirty, but the best case isn't likely when the working set isn't cached.
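
A minimal sketch of that worst case, assuming a hypothetical 128-byte row; the doublewrite flag models the extra 2X from the doublewrite buffer described above.

    # Worst-case write-amplification for write-back of a b-tree page:
    # only one row on the page is dirty when the page is evicted.
    def worst_case_write_amp(page_size_bytes, row_size_bytes, doublewrite=True):
        wa = page_size_bytes / row_size_bytes
        return 2 * wa if doublewrite else wa

    for page_kb in (4, 8, 16):
        print("%2dkb page: %4.0fX" % (page_kb, worst_case_write_amp(page_kb * 1024, 128)))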

The obvious way to reduce write-amplification (and improve write efficiency) is to reduce the database page size. The default page size for InnoDB is 16kb, but it is possible to get 8kb or 4kb pages either by setting innodb_page_size at initialization or by using compressed tables. Both of these have a cost. First, InnoDB performance is greatly reduced when using compressed tables. Second, rows must fit in half of a page, excluding LOB columns, so a smaller page also means a smaller max row size. A smaller page also reduces the max size of an index key. See the manual for more details.

I repeated Linkbench with an IO-bound configuration - 50G of RAM, database is ~400G for MyRocks with zlib compression and ~1.6T for InnoDB without compression. The data below shows throughput (TPS is transactions/second) and the storage write rate per transaction (iostat wKB/t) for MyRocks versus InnoDB with different page sizes. When using a smaller page size for InnoDB the storage write rate per transaction is lower and TPS is higher. I assume TPS is better because when less IO capacity is used for writes then more IO capacity is available for reads.


          Page-size  TPS     iostat-wKB/t
MyRocks   16kb       28965    1.25
InnoDB    4kb        24845    6.13
InnoDB    8kb        24352   10.52
InnoDB    16kb       21414   19.70

And graphs for the same data.
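
To restate the table as ratios, here is a small Python sketch (numbers copied from above) that normalizes the per-transaction write rate against MyRocks.

    # (TPS, iostat wKB/t) from the table above
    results = {
        "MyRocks 16kb": (28965, 1.25),
        "InnoDB 4kb": (24845, 6.13),
        "InnoDB 8kb": (24352, 10.52),
        "InnoDB 16kb": (21414, 19.70),
    }

    base_wkb = results["MyRocks 16kb"][1]
    for config, (tps, wkb_per_txn) in results.items():
        print("%-12s writes %4.1fX as much per transaction as MyRocks"
              % (config, wkb_per_txn / base_wkb))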


Monday, November 21, 2016

Sysbench, InnoDB, transaction isolation and the performance schema

I used sysbench to understand the impact of transaction isolation and the performance schema for InnoDB from upstream MySQL 5.6.26.

The test server has 24 CPU cores, 48 HW threads with hyperthreading enabled, 256G of RAM and fast SSD. For sysbench I used the 1.0 version with support for Lua. Tests were run in two configurations -- cached and IO-bound. For the cached configuration I used 8 tables, 1M rows/table and the database cache was large enough to cache all data. For the IO-bound configuration I used 8 tables, 10M rows/table, a 2G database cache and buffered IO so that all data was in the OS page cache. The database was ~2G for the cached configuration and ~20G for the IO-bound configuration. InnoDB table compression was not used and jemalloc was used. The binlog was enabled but sync-on-commit was disabled for the binlog and InnoDB redo log.

With 8 tables and 1M rows per table the database is very small -- a few GB. I am wary of drawing too many conclusions from sysbench results for such a small database but other people will use it to evaluate MyRocks so I have been spending more time to understand sysbench performance.

Both the cached and IO-bound configurations require data to be written to storage. For the cached test all page reads are served from the database cache. For the IO-bound test some reads are served from the database cache and misses are served from the OS page cache. In both cases nothing is read from storage.

tl;dr
  1. For InnoDB, using repeatable-read can help performance.
  2. Disabling the performance schema can help performance.
  3. My tests are from MySQL 5.6.26. It will be interesting to understand what has changed in 5.7 and 8.0. I know that the overhead from #1 and #2 should be much less in modern MySQL.

Sysbench

I have a script that uses sysbench to run tests in an interesting pattern and used it for this test. The pattern is in the all.sh script and is explained below. The all.sh script uses the run.sh script to set the sysbench command line options. I run most of the read-write tests before the read-only tests to fragment the database before evaluating query performance.
  • load - load the 8 tables
  • update-index - uses oltp.lua and each transaction is an UPDATE statement that finds & changes 1 row by PK. Secondary index maintenance is required for the update statement.
  • update-nonindex - like update-index but secondary index maintenance is not required.
  • read-write - uses oltp.lua in read-write mode, a classic sysbench workload. Run with oltp-range-size=100.
  • read-only - uses oltp.lua in read-only mode, a classic sysbench workload. Run four times with oltp-range-size set 10, 100, 1000 and 10000.
  • point-query - uses oltp.lua to fetch one row by PK per query
  • select - uses select.lua to fetch one row per query
  • insert - uses insert.lua to insert one row per transaction. This grows the database and the growth amount depends on the insert rate, so a faster engine will grow the database more than a slower engine. It also means that when I run the test for a long time the database won't fit in the database cache or the OS page cache. For these reasons I run this test last.
The test was run for 1, 2, 4, 8, 16, 24, 32, 40, 48, 64, 80, 96 and 128 concurrent clients. This is currently hardwired in the all.sh script. For each level of concurrency I ran sysbench for 3 minutes for the read-only tests and 5 minutes for the read-write tests. Eventually I will run it for more time at each level of concurrency but I had a large number of tests to run and am trying to figure out which configurations are interesting.
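
To make the pattern concrete, here is a rough Python rendering of the loop. The real drivers are the all.sh and run.sh shell scripts; run_one and its arguments are hypothetical stand-ins for run.sh, and the load step plus the oltp-range-size variants are omitted for brevity.

    import subprocess

    # Concurrency levels from all.sh
    CONCURRENCY = [1, 2, 4, 8, 16, 24, 32, 40, 48, 64, 80, 96, 128]

    # (test name, seconds per run): 5 minutes for read-write tests,
    # 3 minutes for read-only tests, insert runs last.
    TESTS = [("update-index", 300), ("update-nonindex", 300), ("read-write", 300),
             ("read-only", 180), ("point-query", 180), ("select", 180),
             ("insert", 300)]

    def run_one(test, threads, seconds):
        # Hypothetical wrapper call; the real run.sh sets the sysbench options.
        subprocess.run(["bash", "run.sh", test, str(threads), str(seconds)], check=True)

    for test, seconds in TESTS:
        for threads in CONCURRENCY:
            run_one(test, threads, seconds)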

The my.cnf for these tests is here.

Results

Data for the results is here. The numbers provided are queries per second (QPS) not transactions per second (TPS). The configurations tested are described below:
  • innodb.8t.1m.rr.ps0 - 8 tables, 1M rows/table, repeatable-read, performance_schema=0
  • innodb.8t.1m.rr.ps1 - 8 tables, 1M rows/table, repeatable-read, performance_schema=1
  • innodb.8t.1m.rc.ps0 - 8 tables, 1M rows/table, read-committed, performance_schema=0
  • innodb.8t.1m.rc.ps1 - 8 tables, 1M rows/table, read-committed, performance_schema=1
My summary of performance is:
  • update-index - disabling the performance schema has a small impact on QPS (between 1% and 5% more QPS is common). Changing transaction isolation has no impact on QPS.
  • update-nonindex - same as update-index
  • read-write - disabling the performance schema frequently boosts QPS by 5% to 10% and the impact is greater at high concurrency. Using repeatable-read boosts performance because it reduces the mutex contention from getting a consistent read snapshot as that is done once per transaction rather than once per statement.
  • read-only - see the conclusions for read-write for shorter range scans (oltp-range-size set to 10 or 100 and maybe 1000). For longer range scans (oltp-range-size set to 10000) transaction isolation and the performance schema have little impact because the overhead is elsewhere.
  • point-query - disabling the performance schema has a big impact on performance (between 5% and 10% more QPS) and the benefit is larger at high concurrency. Transaction isolation has no impact on performance because transactions are single statement.
  • select - see point-query
  • insert - disabling the performance schema has a smaller impact on QPS. Transaction isolation doesn't have an impact on performance because transactions are single statement.
