This is my first attempt to summarize tiered compaction as implemented by RocksDB, Cassandra, ScyllaDB and HBase. I have limited experience with tiered compaction (that will change) and limited knowledge of Cassandra, ScyllaDB and HBase (that won't change soon), so I will make mistakes but enjoy the learning process. To keep this brief I ignore the variants optimized for time-series workloads and I don't explain compaction basics.
Different DBMS use different names for tiered compaction:
- RocksDB has universal compaction
- HBase has ExploringCompactionPolicy and RatioBasedCompactionPolicy, see here
- Cassandra has STCS (Size Tiered Compaction Strategy)
- ScyllaDB has STCS and ICS (Incremental Compaction Strategy), see here
I use these terms for workloads:
- Overwrite-mostly - all of my time in LSM-land has been focused on overwrite-mostly workloads. Compaction is useful for overwrites because it reclaims space by removing tombstones and old versions.
- Insert-mostly - for now I assume that something removes data via TTL or explicit deletes otherwise the database size grows forever. Insert-only can be further divided into:
- Insert-only in key order - this is the easiest pattern to support. Bloom filters are less useful because pruning based on min/max keys per sorted run can eliminate many runs from being accessed on reads. Excluding implementation artifacts there are few benefits from using compaction in this case. While the sorted runs created by memtable flush will be small, they can be concatenated into larger, logical sorted runs that are range-partitioned although not all implementations do that.
- Insert-only not in key order - compaction won't reclaim space (there is nothing to reclaim) but it still helps to reduce read-amplification.
Some terminology:
- By compaction picker I mean the algorithm that decides which sorted runs to compact.
- Major compaction merges all sorted runs, minor compaction does not.
- The metadata for a sorted run includes the min and max keys stored in the sorted run and the min and max commit timestamp used by a key in that sorted run.
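A minimal sketch of that metadata, with min/max key pruning for point queries (the struct and field names are my own, not from any of these engines):

```python
from dataclasses import dataclass

@dataclass
class SortedRunMeta:
    # Hypothetical per-run metadata: key range and commit timestamp range
    min_key: str
    max_key: str
    min_commit_ts: int
    max_commit_ts: int

def may_contain(run: SortedRunMeta, key: str) -> bool:
    # A point query can skip runs whose [min_key, max_key] range
    # excludes the key, before consulting any bloom filter
    return run.min_key <= key <= run.max_key

runs = [
    SortedRunMeta("a", "f", 1, 10),
    SortedRunMeta("g", "m", 11, 20),
]
```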
Questions to ask about a tiered compaction implementation:
- What triggers major compaction?
- Many implementations only do major compaction manually (when requested by an admin). The reasons for not doing it automatically include 1) it can take a long time and 2) there might be a transient 2X increase in space. Note that some implementations can merge the largest sorted runs without merging all sorted runs and this can cause the same problems as major compaction.
- How are sorted runs ordered as viewed by the compaction picker?
- They can be ordered by commit timestamp or size.
- Commit timestamp order frequently implies size order, but explicit deletes, TTL, or compaction replacing a larger/older value with a smaller/newer one can mean that older sorted runs are not larger.
- Does the compaction picker (almost) always consume a prefix of the sorted runs or can it consume a substring?
- This assumes there is an order imposed on the sorted runs (see the previous question).
- Can the compaction picker merge sorted runs that aren't adjacent in commit timestamp order?
- See the two previous questions. If this is done the sorted runs can overlap with respect to commit timestamp ranges. A side-effect of allowing sorted runs to overlap in commit timestamp order might be that merging iterators can't use a short-circuit optimization for point queries -- reading from iterators in commit timestamp order and stopping on the first matching key or tombstone. I don't know if that side-effect is a big deal.
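The short-circuit optimization can be sketched as follows, assuming sorted runs are kept newest-first and do not overlap in commit timestamp order (all names here are hypothetical):

```python
# Point lookup that reads sorted runs from newest to oldest and stops
# at the first matching key or tombstone. This short-circuit is only
# safe when runs do not overlap in commit timestamp order.
TOMBSTONE = object()

def point_get(runs_newest_first, key):
    # Each run is modeled as a dict mapping key -> value (or TOMBSTONE)
    for run in runs_newest_first:
        if key in run:
            v = run[key]
            return None if v is TOMBSTONE else v
    return None

runs = [
    {"a": TOMBSTONE},   # newest run: 'a' was deleted
    {"a": 1, "b": 2},   # older run
]
```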
- What read-amplification constraints trigger minor compaction?
- The trigger can be global or local. Global is triggered when there are too many sorted runs. Local is triggered when there are too many sorted runs within a size bucket.
- These are naive because they assume all sorted runs might have to be read for a query. For workloads where newly written keys are correlated with time that assumption might be wrong: many sorted runs can be pruned based on their min/max keys, and the number of sorted runs in the LSM tree is then a too-pessimistic estimate of the read-amp. Time-series workloads might break the naive assumption, but I am trying not to discuss LSM optimizations for time-series in this post.
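The two trigger styles might look like this sketch (the function shapes and parameter names are my own, loosely echoing option names that appear later):

```python
def global_trigger(num_sorted_runs, max_sorted_runs):
    # Global: compact when the total number of sorted runs is too high
    return num_sorted_runs >= max_sorted_runs

def local_trigger(runs_per_bucket, min_threshold):
    # Local: compact when some size bucket has too many sorted runs
    return any(n >= min_threshold for n in runs_per_bucket.values())
```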
- Is there a minimum number of sorted runs to merge for minor compaction?
- Is there read-triggered compaction?
- LevelDB has a feature to trigger compaction for regions of the LSM tree with leveled compaction. Some tiered compaction implementations have a read hotness feature that prioritizes compaction for sorted runs that get more reads.
- What space-amplification constraints trigger compaction?
- This assumes there is an estimate for space-amp and there might not be one. If there is an estimate it is usually naive because a less naive estimate is expensive. The naive estimate is sizeof(all-sorted-runs) / sizeof(largest-sorted-run). I think this can be improved.
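The naive estimate is cheap to compute from the sorted run sizes; a sketch:

```python
def naive_space_amp(run_sizes):
    # sizeof(all sorted runs) / sizeof(largest sorted run): assumes
    # everything outside the largest run is redundant, which is
    # pessimistic for insert-mostly workloads with little redundancy
    return sum(run_sizes) / max(run_sizes)
```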
- Do large sorted runs imply large filter and index blocks in memory?
- If index and filter blocks are per sorted run, stored as a contiguous sequence of bytes and cannot be paged then a large sorted run implies there are large index and filter blocks. These are inefficient to cache and to read and decompress from storage.
- One alternative is to page them so they can be read from storage and cached in pieces.
- Another alternative is to have large logical sorted runs that are range partitioned into N sorted runs. Each of the N sorted runs has its own and smaller index and filter blocks.
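A sketch of the second alternative, assuming a logical sorted run split into partitions indexed by the max key per partition (the class is hypothetical):

```python
import bisect

class PartitionedRun:
    # A logical sorted run split into N range partitions, each with its
    # own smaller index/filter; lookups binary-search the partition bounds
    def __init__(self, partitions):
        # partitions: list of (max_key, {key: value}) in key order
        self.max_keys = [p[0] for p in partitions]
        self.parts = [p[1] for p in partitions]

    def get(self, key):
        # Find the first partition whose max_key >= key
        i = bisect.bisect_left(self.max_keys, key)
        if i == len(self.parts):
            return None
        return self.parts[i].get(key)

run = PartitionedRun([("f", {"a": 1, "f": 6}), ("m", {"g": 7})])
```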
- Is compaction for large sorted runs incremental?
- By incremental I mean there isn't a transient 2X increase in disk space during compaction and that disk space used by compaction input is reclaimed before a large compaction is complete.
- Is compaction single-threaded?
- Leveled compaction is usually single-threaded because each compaction step is limited to a small amount of data, and background threads can do many small compaction steps concurrently. But a compaction step can be much larger with tiered compaction. Multi-threading can reduce this problem.
- One approach to multi-threading is to have N threads divide the compaction input into N key ranges and work on them independently.
- Another approach to multi-threading is to have one thread do the merge, 1+ threads read and decompress compaction input and 1+ threads post-process (collect stats, create bloom filters, compress) compaction output. The separate threads for reading and writing can also do async read-ahead and write-behind.
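The first approach might look like this sketch, where hypothetical boundaries split the key space into independently merged ranges:

```python
from concurrent.futures import ThreadPoolExecutor

def merge_range(runs_newest_first, lo, hi):
    # Merge one key range [lo, hi): iterate oldest to newest so that
    # newer values overwrite older ones for duplicate keys
    out = {}
    for run in reversed(runs_newest_first):
        for k, v in run.items():
            if lo <= k < hi:
                out[k] = v
    return out

def parallel_compact(runs_newest_first, boundaries):
    # boundaries partition the key space, e.g. ["a", "h", "~"]
    # yields the ranges [a, h) and [h, ~)
    ranges = list(zip(boundaries, boundaries[1:]))
    with ThreadPoolExecutor(max_workers=len(ranges)) as ex:
        parts = ex.map(lambda r: merge_range(runs_newest_first, *r), ranges)
    merged = {}
    for p in parts:
        merged.update(p)
    return merged
```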
- Are there optimizations for inserts in key order?
- What is the space-amp vs read-amp vs write-amp tradeoff?
- Implementations have different tradeoffs for read, write and space amplification. I am not aware of an attempt to quantify the differences. My answer is left for a future blog post.
RocksDB:
- The compaction picker is here and options are here
- Major compaction is triggered automatically when the space-amp constraint is exceeded. That is set by the option max_size_amplification_percent
- Sorted runs are ordered by commit timestamp (seqno)
- The compaction picker usually picks a prefix of the sorted runs, but it skips the first N if they are currently being compacted, so it doesn't always pick a prefix. The next sorted run is added to the prefix while sizeof(next run) <= sizeof(selected runs) X size_ratio. When the size_ratio check fails or max_merge_width is reached, the minor compaction input is known. The stop_style option can also be used but I won't describe that here.
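My reading of the size_ratio rule as a rough sketch, not RocksDB's actual code (it ignores stop_style and assumes no runs are already being compacted; size_ratio is a percentage, as in the real option):

```python
def pick_prefix(run_sizes, size_ratio, min_merge_width, max_merge_width):
    # run_sizes is ordered newest to oldest. Extend the candidate prefix
    # while sizeof(next run) <= sizeof(selected runs) * (100+size_ratio)/100
    picked = [run_sizes[0]]
    for size in run_sizes[1:]:
        if len(picked) >= max_merge_width:
            break
        if size <= sum(picked) * (100 + size_ratio) / 100:
            picked.append(size)
        else:
            break
    # Too few runs: no compaction is scheduled
    return picked if len(picked) >= min_merge_width else []
```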
- Compaction input sorted runs are always adjacent in commit timestamp order.
- The min and max number of sorted runs to merge are set by min_merge_width and max_merge_width
- The read-amp constraint that triggers minor compaction is set by the option level0_file_num_compaction_trigger which is borrowed from leveled compaction. This is global.
- There is no read-triggered compaction. While it was inherited from LevelDB it is currently disabled in RocksDB.
- The space-amp constraint triggers major compaction as mentioned above. The naive estimate is used for space-amp. The estimate is reasonable for overwrite-mostly workloads. I assume the space-amp constraint should be disabled for insert-mostly workloads.
- Large sorted runs can lead to large index and filter blocks but that is unlikely because the largest sorted runs are logical and range-partitioned with an index and filter per partition (see here). RocksDB also optionally supports partitioned index and filter blocks.
- Compaction for large sorted runs is not incremental (yet)
- Large compactions are multi-threaded via subcompactions that split compaction input into N ranges (see here)
- The allow_trivial_move option can use trivial moves in some cases when inserts are in key order to avoid re-writing sorted runs.
HBase:
- Compaction options are here. Search for hstore.compaction.
- The compaction policies (ExploringCompactionPolicy, RatioBasedCompactionPolicy) are explained here.
- The min and max number of sorted runs to merge are set by hbase.hstore.compaction.{min,max}
- Time-based major compaction can be enabled or disabled. Search for hbase.hregion.majorcompaction
- RatioBasedCompactionPolicy seems similar to universal compaction in RocksDB but ExploringCompactionPolicy might be closer to STCS. However, I didn't spend enough time to figure them out.
Cassandra:
- The compaction picker is here and options are here
- Major compaction is triggered manually
- Sorted runs are ordered by size and the compaction picker then groups them into size classes, then looks at size classes that have at least min_threshold sorted runs and chooses the one that gets the most reads (read hotness). The bucket_high and bucket_low options determine the range of sorted run sizes that can be in one size class (bucket). More detail is here and here.
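My understanding of the bucketing step, as a sketch that ignores min_sstable_size and read hotness (0.5 and 1.5 are Cassandra's documented defaults for bucket_low and bucket_high):

```python
def stcs_buckets(run_sizes, bucket_low=0.5, bucket_high=1.5):
    # Group sorted runs into size classes: a run joins a bucket when its
    # size is within [bucket_low, bucket_high] of the bucket's average size
    buckets = []  # list of lists of run sizes
    for size in sorted(run_sizes):
        for b in buckets:
            avg = sum(b) / len(b)
            if bucket_low * avg <= size <= bucket_high * avg:
                b.append(size)
                break
        else:
            buckets.append([size])
    return buckets
```

A bucket with at least min_threshold runs then becomes a minor compaction candidate.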
- Sorted runs that are not adjacent in commit timestamp order can be merged
- The min and max number of sorted runs to merge are determined by the options min_threshold and max_threshold
- The read-amp constraint is min_threshold and triggers minor compaction when there is a size class with at least that many sorted runs
- There isn't support for read-triggered minor compaction but read hotness is used to prioritize which size class to compact next.
- I am not sure whether large index and filter blocks are a problem
- Compaction is single-threaded. I don't know whether the sharding built into Cassandra makes it easier to avoid large shards and the problems related to large compactions.
- Does not do incremental compaction
- I am not aware of optimizations for key-order inserts, similar to the allow_trivial_move feature in RocksDB universal compaction.
- Cassandra might use HyperLogLog to estimate overlap between sorted runs. This can help estimate whether there would be a benefit from merging sorted runs by predicting how many redundant keys would be removed. But all I have today is a DataStax blog post.
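The estimate presumably relies on inclusion-exclusion over approximate cardinalities; a toy sketch with exact sets standing in for HLL sketches:

```python
def estimated_overlap(card_a, card_b, card_union):
    # Inclusion-exclusion: |A intersect B| = |A| + |B| - |A union B|.
    # With HyperLogLog each cardinality is approximate, so the overlap
    # (redundant keys a merge would remove) is also approximate.
    return card_a + card_b - card_union

# Exact sets stand in for HLL sketches in this toy example
a = {1, 2, 3, 4}
b = {3, 4, 5}
overlap = estimated_overlap(len(a), len(b), len(a | b))
```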
ScyllaDB:
- The compaction picker for STCS is here and options are here
- Major compaction is triggered manually for STCS. ICS has an option to trigger it based on a SAG (space amplification goal), which ScyllaDB explains is useful for overwrite workloads.
- The min and max number of sorted runs to merge are determined by the options min_threshold and max_threshold
- Compaction is single-threaded but ScyllaDB explains how the shard-per-core model reduces the problem from large/slow compactions.
- STCS does not do incremental compaction but ScyllaDB Enterprise includes ICS
Update - added a link to the use of HyperLogLog by DataStax Cassandra to estimate sorted run overlap