Monday, September 19, 2022

Understanding some jemalloc stats for RocksDB

I have been struggling to understand the root cause of RSS being larger than expected when I run db_bench. In this case db_bench used jemalloc. Statistics for jemalloc are written to LOG every stats_dump_period_sec seconds and this is an abbreviated example of that output -- I hacked db_bench to disable the mutex-related stats in the call to malloc_stats_print.
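
As an aside, here is a minimal sketch of what such a call can look like. It uses jemalloc's standard malloc_stats_print API; the write callback that appends output to a std::string and the use of the "x" option (which omits the mutex stats) are my guesses at the shape of the hack, not the exact db_bench change.

  // Minimal sketch: dump jemalloc stats while omitting mutex counters.
  // Assumes jemalloc is installed and the binary is linked with -ljemalloc.
  #include <jemalloc/jemalloc.h>
  #include <cstdio>
  #include <string>

  // Append each chunk of stats output to a std::string.
  static void append_cb(void* opaque, const char* msg) {
    static_cast<std::string*>(opaque)->append(msg);
  }

  int main() {
    std::string stats;
    // "x" omits the mutex-related stats from the output.
    malloc_stats_print(append_cb, &stats, "x");
    fputs(stats.c_str(), stdout);
    return 0;
  }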

To keep this post short, and because I am not a jemalloc expert, I will only try to explain a few things in this output that might help you, and me in a few years when I have to understand jemalloc again. It helps to know a lot about RocksDB internals to understand which of the jemalloc bin sizes map to which allocations in RocksDB. When you don't know that (I don't know all of it), a heap profiler like Massif can be used.

The first is the number of arenas, listed on the opt.arenas line. In this case there were 320 and I assume that jemalloc creates 4 arenas per CPU (the server had 80 CPUs). There is also a line with Arenas: 321, which is confusing, but the difference isn't a big deal to me. Arenas are a way to shard the allocator to reduce mutex contention, with threads (hopefully) assigned to different arenas. In my usage with db_bench the threads are long-lived and I assume the mapping of thread to arena doesn't change. I don't know whether that is a potential source of perf problems.
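
The arena counts can also be read programmatically. This is a minimal sketch using jemalloc's mallctl interface; the names opt.narenas and arenas.narenas are from the jemalloc man page, while the surrounding code is my own illustration and not from db_bench.

  // Minimal sketch: query the number of arenas via mallctl.
  #include <jemalloc/jemalloc.h>
  #include <cstdio>

  int main() {
    unsigned opt_narenas = 0;  // the opt.narenas option (arenas requested)
    unsigned cur_narenas = 0;  // arenas that currently exist
    size_t sz = sizeof(unsigned);
    mallctl("opt.narenas", &opt_narenas, &sz, nullptr, 0);
    sz = sizeof(unsigned);
    mallctl("arenas.narenas", &cur_narenas, &sz, nullptr, 0);
    printf("opt.narenas=%u arenas.narenas=%u\n", opt_narenas, cur_narenas);
    return 0;
  }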

The next interesting line starts with Allocated: and displays a few counters for currently allocated memory. The value next to Allocated: is ~51GB, which is reasonable given that I configured RocksDB to use 50GB for the block cache.

This is followed by an interesting section that starts with Merged arena stats. This has various counters aggregated over all 320 arenas. Start with the assigned threads: 10 line, which means that 10 threads are currently assigned to arenas. Per-arena stats might also be useful, but there are many arenas, printing them all would require a lot of output to read, and I have not been able to get malloc_stats_print to return results for each of the 320 arenas (it stops after 4). So I prefer using malloc_stats_print(..., opts="a") to disable per-arena stats. If you use opts="xa" then per-arena and mutex counters are not displayed, which makes the output easier to read.

Next are lines that start with small: and large: to show the amount of memory currently allocated for small and large allocation sizes. Most of the allocations in this case are small and <= 8kb.

The next section has counters for active, mapped, retained and more. These are frequently interesting but not so much in this case.
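
These counters can also be read without parsing the printed output. A minimal sketch, assuming the standard mallctl names (stats.allocated, stats.active, stats.mapped, stats.resident, stats.retained) and remembering that the epoch must be advanced to refresh the stats:

  // Minimal sketch: read a few jemalloc counters via mallctl.
  #include <jemalloc/jemalloc.h>
  #include <cstdint>
  #include <cstdio>

  // Read one size_t-valued stat, returning 0 on error.
  static size_t read_stat(const char* name) {
    size_t val = 0;
    size_t sz = sizeof(val);
    if (mallctl(name, &val, &sz, nullptr, 0) != 0) return 0;
    return val;
  }

  int main() {
    // Advance the epoch so the stats.* values are refreshed.
    uint64_t epoch = 1;
    size_t sz = sizeof(epoch);
    mallctl("epoch", &epoch, &sz, &epoch, sz);

    printf("allocated=%zu active=%zu mapped=%zu resident=%zu retained=%zu\n",
           read_stat("stats.allocated"), read_stat("stats.active"),
           read_stat("stats.mapped"), read_stat("stats.resident"),
           read_stat("stats.retained"));
    return 0;
  }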

Given that most allocations are small, the following section for bins: is the most interesting. The bin with size 8 is used for all allocations of <= 8 bytes. The bin with size 16 is used for all allocations between 9 and 16 bytes. This section has long lines and clicking on Raw improves readability, but I can't link to lines in the Raw view.

I wanted to know the bins that accounted for the largest amount of currently allocated memory. Sorting these lines by curslabs * pgs would provide that. Alas, I can't sort them in the gist so I just look for the lines with the largest value in curslabs. The curslabs column is the number of slabs (a slab is better explained by an expert, not me) and pgs is the size of a slab in pages. A page is 4kb in this case, so curslabs * pgs * 4096 is the number of bytes currently mapped for a given bin.
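
To make the arithmetic concrete, here is a tiny sketch. The bin values are hypothetical numbers chosen for illustration, not values from the gist:

  // Tiny sketch: bytes currently mapped for a bin = curslabs * pgs * page size.
  #include <cstdint>
  #include <cstdio>

  int main() {
    // Hypothetical values for one bin line, not from the actual output.
    uint64_t curslabs = 100000;  // number of slabs for this bin
    uint64_t pgs = 2;            // pages per slab for this bin
    uint64_t page_size = 4096;   // page size on this host
    uint64_t mapped = curslabs * pgs * page_size;
    printf("mapped bytes for this bin: %llu (~%.1f GB)\n",
           (unsigned long long)mapped, mapped / 1e9);
    return 0;
  }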

In this case the bins using the most memory have sizes 32, 96, 448 and 8192. Note that this is from db_bench configured to use Integrated BlobDB with 400-byte blob values, and blob values share the block cache:

  • bins of size 32, 80 (not in this case though) and 96 are commonly used. Their usage was described to me by an expert: the 32-byte bucket corresponds to the blob-owning object, the 80-byte bucket to the block-owning object, and the 96-byte bucket to the cache metadata (an overhead that applies to both blocks and blobs)
  • the bin of size 448 is used by the 400-byte blob values stored in the RocksDB block cache
  • the bin of size 8192 is used by the uncompressed data blocks that are approximately 8192 bytes each because the block_size option was set to 8192.
The util column is also interesting. This lists the utilization fraction, a value between 0 and 1. Multiply by 100 to get the utilization percentage. I read the source code to learn how it is computed. It is curregs / (regs * curslabs) where curregs is the number of currently used regions, regs is the number of regions per slab and curslabs is the number of slabs.
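
The same kind of sketch for util, again with made-up numbers (they produce a util of about 0.70, similar to the low-util bins discussed next):

  // Tiny sketch: util = curregs / (regs * curslabs).
  #include <cstdio>

  int main() {
    // Hypothetical values for one bin line, not from the actual output.
    double curregs = 700000;  // regions currently in use for this bin
    double regs = 128;        // regions per slab for this bin
    double curslabs = 7800;   // slabs currently allocated for this bin
    double util = curregs / (regs * curslabs);
    printf("util=%.2f (%.0f%%)\n", util, util * 100.0);
    return 0;
  }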

In this example the util value is low (~0.7) for the bins of size 32, 96 and 448 but high (1) for the bin of size 8192. To me this indicates there is fragmentation in the bins of size 32, 96 and 448, and removing that fragmentation would reduce RSS and improve memory efficiency. Alas, I have yet to figure out how to make that happen. The root cause with respect to RocksDB is that at test start the blob values get a larger fraction of the block cache and over time that fraction shrinks. The difference between the peak memory usage near startup and the usage over time became significant (by my standards), as RSS was ~1.15X the size of the RocksDB block cache.

Many of the columns are explained in the man page; however, some of the column names used in the malloc_stats_print output differ from what is used in the source code and explained in the man page. One example is slab_size in the man page vs pgs in the malloc_stats_print output, where pgs * 4096 = slab_size. The source for printing per-arena metrics is in stats_arena_print, but you can find curslabs in the man page.
