Thursday, September 13, 2018

Review of SlimDB from VLDB 2018

SlimDB is a paper worth reading from VLDB 2018. The highlights from the paper are that it shows:
  1. How to use less memory for filters and indexes with an LSM
  2. How to reduce the CPU penalty for queries with tiered compaction
  3. The benefit of more diversity in LSM tree shapes
Overview

Cache amplification has become more important as database:RAM ratios increase. With SSD it is possible to attach many TB of usable data to a server for OLTP. By usable I mean that the SSD has enough IOPs to access the data. But it isn't possible to grow the amount of RAM per server at that rate. Many of the early RocksDB workloads used database:RAM ratios that were about 10:1 and everything but the max level (Lmax) of the LSM tree was in memory. As the ratio grows that won't be possible unless filters and block indexes use less memory. SlimDB does that via three-level block indexes and multi-level cuckoo-filters.
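To make cache-amp concrete, here is a back-of-envelope sketch in Python. The key count and the bits/key values are numbers I picked for illustration, not numbers from the paper:

```python
def metadata_gb(num_keys, bits_per_key):
    """GB of RAM for filters and block indexes at bits_per_key per key."""
    return num_keys * bits_per_key / 8 / 2**30

num_keys = 100 * 10**9  # ~10 TB of data at ~100 bytes/row (illustrative)
for bits in (14, 10, 2):
    print(f"{bits:>2} bits/key -> {metadata_gb(num_keys, bits):,.0f} GB")
```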

Tiered compaction uses more CPU and IO for point and range queries because there are more places to check for data when compared to level compaction. The multi-level cuckoo filter in SlimDB reduces the CPU overhead for point queries as there is only one filter to check per level rather than one per sorted run per level.

The SlimDB paper shows the value of hybrid LSM tree shapes, combinations of tiered and leveled, and then how to choose the best combination based on IO costs. Prior to this year, hybrid didn't get much discussion -- the choices were usually tiered or leveled. While RocksDB and LevelDB have always been hybrids of tiered (the L0) and leveled (L1 to Lmax), we rarely discuss that. But more diversity in LSM tree shape means more complexity in tuning, and the SlimDB solution is to make a cost-based decision (cost == IO overhead) subject to a constraint on the amount of memory to use.

This has been a great two years for storage engine efficiency. First there were several papers from the Harvard DASlab that began to explain cost-based algorithm design and engine configuration, and SlimDB continues in that tradition. I have much more reading to do, starting with The Periodic Table of Data Structures.

Below I review the paper. Included with that is some criticism. Papers can be great without being perfect. This paper is a major contribution and worth reading.

Semi-sorted

The paper starts by explaining the principle of semi-sorted data. When the primary key can be split into two parts -- prefix and suffix -- some workloads don't need data ordered over the entire primary key (prefix + suffix). Semi-sorted supports queries that fetch all data matching the prefix of the PK while still enforcing uniqueness for the entire PK. For example, the PK can be on (a,b,c,d) where (a,b) is the prefix and queries are like "a=X and b=Y" without predicates on (c,d) that would require index ordering. SlimDB takes advantage of this to use less space for the block index.
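A toy sketch of the semi-sorted idea, with a made-up schema rather than anything from SlimDB -- the prefix (a,b) is what queries match on while the full PK (a,b,c,d) only needs to stay unique:

```python
from collections import defaultdict

# Toy semi-sorted table: the PK is (a, b, c, d), the prefix is (a, b).
table = defaultdict(dict)  # prefix -> {suffix: row}

def put(a, b, c, d, row):
    table[(a, b)][(c, d)] = row  # the full PK stays unique

def get_prefix(a, b):
    # Fetch everything that matches the prefix; no order across prefixes
    # or within (c, d) is needed for this kind of query.
    return list(table[(a, b)].items())

put(1, 2, 3, 4, "row1")
put(1, 2, 9, 9, "row2")
print(get_prefix(1, 2))  # both rows for prefix (1, 2)
```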

There are many use cases for this, but the paper cites Linkbench which isn't correct. See the Linkbench and Tao papers for queries that do an exact match on the prefix but only want the top-N rows in the result. So ordering on the suffix is required to satisfy query response time goals when the total number of rows that match the prefix is much larger than N. I assume this issue with top-N is important for other social graph workloads because some graph nodes are popular. Alas, things have changed with the social graph workload since those papers were published and I hope the changes are explained one day.

Note that MyRocks can use a prefix bloom filter to support some range queries with composite indexes. Assume the index is on (a,b,c) and the query has a=X and b=Y order by c limit 10. A prefix bloom on (a,b) can be used for such a query.
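A toy sketch of that, where a Python set stands in for the bloom filter (so there are no false positives here) and the schema is invented for illustration:

```python
class PrefixFilter:
    """Toy filter on the (a, b) prefix of an index on (a, b, c)."""
    def __init__(self):
        self.prefixes = set()  # a real engine would use a bloom filter here

    def add(self, a, b, c):
        self.prefixes.add((a, b))

    def may_contain(self, a, b):
        return (a, b) in self.prefixes

f = PrefixFilter()
f.add("x", "y", 1)
# Query: a='x' and b='y' order by c limit 10 -- check the prefix filter first.
print(f.may_contain("x", "y"))  # True -> read the sorted run
print(f.may_contain("x", "z"))  # False -> skip it
```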

Stepped Merge

The paper implements tiered compaction but calls it stepped merge. I didn't know about the stepped merge paper prior to reading the SlimDB paper. I assume that people who chose the name tiered might also have missed that paper.

LSM compaction algorithms haven't been formally defined. I tried to advance the definitions in a previous post. One of the open issues for tiered is whether it requires only one sorted run at the max level or allows for N runs at the max level. With N runs at the max level the space-amplification is at least N which is too much for many workloads. With 1 run at the max level compaction into the max level is always leveled rather than tiered -- the max level is read/rewritten and the per-level write-amplification from that is larger than 1 (while the per-level write-amp from tiered == 1). With N runs at the max level many of the compaction steps into the max level can be tiered, but some will be leveled -- when the max level is full (has N runs) then something must be done to reduce the number of runs.
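The per-level write-amp claim can be made concrete with a simplified model that is a common approximation, not something from the paper: a leveled level contributes roughly the growth factor to write-amp while a tiered level contributes roughly 1:

```python
def total_write_amp(num_levels, growth_factor, leveled_levels):
    """Sum per-level write-amp: ~growth_factor for leveled, ~1 for tiered."""
    total = 0
    for level in range(1, num_levels + 1):
        total += growth_factor if level in leveled_levels else 1
    return total

print(total_write_amp(5, 10, leveled_levels={1, 2, 3, 4, 5}))  # all leveled: 50
print(total_write_amp(5, 10, leveled_levels={5}))              # tiered, leveled Lmax: 14
print(total_write_amp(5, 10, leveled_levels=set()))            # all tiered: 5
```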

3-level block index

Read the paper. It is complex and a summary by me here won't add value. It uses an Entropy Coded Trie (ECT) that builds on ideas from SILT -- another great paper from CMU.

ECT uses ~2 bits/key versus at least 8 bits/key for LevelDB for the workloads they considered. This is a great result. ECT also uses 5X to 7X more CPU per lookup than LevelDB, which means you might limit its use to the largest levels of the LSM tree -- because those use the most memory and are the place where we are willing to spend CPU to save memory.

Multi-level cuckoo filter

SlimDB can use a cuckoo filter for leveled levels of the LSM tree and a multi-level cuckoo filter for tiered levels. Note that leveled levels have one sorted run and tiered levels have N sorted runs. SlimDB and the Stepped Merge paper use the term sub-levels, but I prefer N sorted runs.

The cuckoo filter is used in place of a bloom filter to save space when the target false positive rate is less than 3%. The paper has examples where the cuckoo filter uses 13 bits/key (see Table 1), while a bloom filter with 10 bits/key (the RocksDB default) has a false positive rate of about 1%, which is much less than 3%. It is obvious that I need to read another interesting CMU paper cited by SlimDB -- Cuckoo Filter: Practically Better Than Bloom.

The multi-level cuckoo filter (MLCF) extends the cuckoo filter by using a few bits/entry to name the sub-level (sorted run) in the level that might contain the search key. With tiered and a bloom filter per sub-level (sorted run) a point query must search a bloom filter per sorted run. With the MLCF there is only one search per level (if I read the paper correctly).
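If I read the paper correctly, the per-level lookup path looks roughly like this toy model. It is not a real cuckoo filter -- a dict of key hashes stands in for the fingerprint table -- but it shows the one-probe-per-level idea:

```python
class ToyMultiLevelFilter:
    """One probe per level returns the sub-level that might hold the key."""
    def __init__(self):
        self.entries = {}  # key hash -> sub-level id (a few bits in SlimDB)

    def add(self, key, sub_level):
        self.entries[hash(key)] = sub_level

    def lookup(self, key):
        # Returns the sub-level to read, or None to skip the whole level.
        return self.entries.get(hash(key))

f = ToyMultiLevelFilter()
f.add("user:1", sub_level=2)
print(f.lookup("user:1"))  # 2 -> read only sorted run 2 in this level
print(f.lookup("user:9"))  # None -> skip the level
```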

The MLCF might go a long way to reduce the point-query CPU overhead when using many sub-levels, which is a big deal. While a filter can't be used for general range queries, SlimDB doesn't support general range queries anyway. Assuming the PK is on (a,b,c,d) and the prefix is (a,b), then SlimDB supports range queries like fetch all rows where a=X and b=Y. It wasn't clear to me whether the MLCF could be used in that case. But many sub-levels can create more work for range queries, as iterators must be positioned in each sub-level in the worst case.
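A sketch of that range-query cost, where the sorted runs are plain Python lists (my simplification, not SlimDB's format) -- a cursor must be positioned in every run and the results merged:

```python
import heapq

def range_query(sorted_runs, low, high):
    """Position a cursor in every run, then k-way merge keys in [low, high)."""
    cursors = []
    for run in sorted_runs:
        # a real engine would binary-search to position each iterator
        cursors.append([kv for kv in run if low <= kv[0] < high])
    return list(heapq.merge(*cursors))

runs = [[(1, "a"), (5, "b")], [(2, "c"), (9, "d")], [(3, "e")]]
print(range_query(runs, 1, 6))  # [(1, 'a'), (2, 'c'), (3, 'e'), (5, 'b')]
```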

This statement from the end of the paper is tricky:
In contrast, with the support of dynamic use of a stepped merge algorithm and optimized in-memory indexes, SlimDB minimizes write amplification without sacrificing read performance.
SlimDB allows an LSM tree to use leveled compaction on all levels, tiered on all levels or a hybrid. When all levels are leveled, performance should be similar to RocksDB with leveled. When all or some levels are tiered, write-amplification is reduced at the cost of read performance, and the paper shows that range queries are slower when some levels are tiered. Lunch isn't free, as the RUM Conjecture asserts.
The memory overhead for the MLCF is ~2 bits/key. I am not sure this was explained by the paper, but that might be to name the sub-level, in which case there can be at most 4 sub-levels per level and the cost would be larger with more sub-levels.

The paper didn't explain how the MLCF is maintained. With a bloom filter per sorted run, the bloom filter is created when SST files are created during compaction and memtable flush. This is an offline or batch computation. But the MLCF covers all the sub-levels (sorted runs) in a level, and the sub-levels in a level arrive and depart one at a time, not all at once. They arrive as output from compaction and depart when they are compaction input. The arrival or departure of a sub-level requires incremental changes to the MLCF.

LSM tree shapes

For too long there has not been much diversity in LSM tree shapes. The usual choice was all tiered or all leveled. RocksDB leveled is really a hybrid -- tiered for the L0, leveled for L1 to Lmax. But the SlimDB paper makes the case for more diversity. It explains that some levels (the smaller ones) can be tiered while the larger levels can be leveled. And the use of multi-level cuckoo filters, three-level block indexes and (single-level) cuckoo filters is also a decision to make per level.

Even more interesting is the use of a cost model to choose the best configuration subject to a constraint -- the memory budget. They enumerate a large number of LSM tree configurations, generate estimated IO costs per operation (write-amp, IO per point query that returns a row, IO per point query that doesn't return a row, memory overhead) and then compute the total IO cost for a workload -- where a workload specifies the frequency of each operation (for example, 30% writes, 40% point hits, 30% point misses).
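A sketch of that search, with made-up cost functions where the paper derives real ones -- enumerate per-level choices of tiered or leveled, weight per-operation costs by the workload mix, and keep the cheapest configuration that fits the memory budget:

```python
from itertools import product

GROWTH, RUNS_PER_TIER = 10, 4

def costs(config):
    """Per-operation estimates for one configuration -- all numbers invented."""
    write_amp = sum(GROWTH if c == "leveled" else 1 for c in config)
    runs = sum(1 if c == "leveled" else RUNS_PER_TIER for c in config)
    point_io = 0.05 * runs   # pretend 5% of runs cost an IO after the filters
    memory = sum(13 if c == "leveled" else 15 for c in config)  # pretend bits/key
    return write_amp, point_io, memory

def best_config(num_levels, workload, memory_budget):
    best, best_cost = None, float("inf")
    for config in product(["tiered", "leveled"], repeat=num_levels):
        write_amp, point_io, memory = costs(config)
        if memory > memory_budget:
            continue  # enforce the memory constraint
        total = workload["write"] * write_amp + workload["point"] * point_io
        if total < best_cost:
            best, best_cost = config, total
    return best, best_cost

print(best_config(num_levels=4, workload={"write": 0.3, "point": 0.7}, memory_budget=56))
```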

The Dostoevsky paper also makes the case for more diversity and uses rigorous models to show how to choose the best LSM tree shape.

I think this work is a big step in the right direction, although the cost models must be expanded to include CPU overheads and the constraints expanded to include the maximum write and space amplification that can be tolerated.

I disagree with a statement from the related work section:
Existing LSM-tree based key-value stores do not allow trading among read cost, write cost and main memory footprint.
We can already navigate some of the read, write and space amplification space, but I hope there is more flexibility in the future. RocksDB tuning is complex in part because it supports this via changing the number of levels (or the growth factor per level), enabling/disabling the bloom filter, using different compression (or none) on different levels, changing the max space amplification allowed, changing the max number of sorted runs in the L0 or the max number of write buffers, changing the L0:L1 size ratio, and changing the number of bloom filter bits/key. Of course I want more flexibility in the future while also making RocksDB easier to tune.
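For reference, the knobs listed above map roughly to these RocksDB option names. This is a plain dict for illustration rather than working configuration code, the values are arbitrary, and the bloom bits/key entry is a stand-in because that is really set via NewBloomFilterPolicy in the block-based table options:

```python
rocksdb_knobs = {
    "num_levels": 7,                               # number of levels
    "max_bytes_for_level_multiplier": 10,          # growth factor per level
    "bloom_filter_bits_per_key": 10,               # stand-in: set via NewBloomFilterPolicy
    "compression_per_level": ["none", "none", "lz4", "lz4", "lz4", "zstd", "zstd"],
    "compaction_options_universal.max_size_amplification_percent": 200,  # space-amp cap (universal)
    "level0_file_num_compaction_trigger": 4,       # max sorted runs in the L0
    "max_write_buffer_number": 4,                  # max number of write buffers
    "max_bytes_for_level_base": 256 * 2**20,       # L1 target size -> L0:L1 size ratio
}
```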

Performance Results


Figuring out why X was faster than Y in academic papers is not my favorite task. I realize that space constraints are a common reason for the lack of details, but I am wary of results that have not been explained and I know that mistakes can be made (note: don't use serializable with InnoDB). I make many mistakes myself. I am willing to provide advice for MyRocks, MySQL and RocksDB, but AFAIK most authors who hack on RocksDB or compare with it for research are not reaching out to us. We are happy to help in private.

SlimDB was faster than RocksDB in their evaluation except for range queries. There were few details about the configurations used, so I will guess. First, I assume that SlimDB used stepped merge with the MLCF for most levels. I am not sure why point queries were faster with SlimDB than RocksDB -- maybe RocksDB wasn't configured to use bloom filters. Writes were about 4X faster with SlimDB because stepped merge (tiered) compaction was used and write-amplification was 4X less; when IO is the bottleneck, the approach with less write-amp goes faster.


