
Showing posts from January, 2018

Meltdown vs storage

tl;dr - sysbench fileio throughput for ext4 drops by more than 20% from Linux 4.8 to 4.13

I shared results from sysbench with a cached database to show a small impact from the Meltdown patch in Ubuntu 16.04. Then I repeated the test for an IO-bound configuration using a 200mb buffer pool for InnoDB and a database that is ~1.5gb. The results for the read-only tests looked similar to what I saw previously, so I won't share them. The results for the write-heavy tests were odd: QPS for the kernel without the patch (4.8.0-36) were much better than for the kernel with the patch (4.13.0-26). The next step was to use sysbench fileio to determine whether storage performance was OK. It was similar for 4.8 and 4.13 on the read-only and write-only tests, but throughput with 4.8 was better than with 4.13 on a mixed test that does both reads and writes.

Configuration

I used a NUC7i5bnh server with a Samsung 960 EVO SSD that uses NVMe. The OS is Ubuntu 16.04 with the HWE kernels -- either 4.13.0-26 ...
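A minimal sketch of a sysbench fileio sequence like the one described above; the file size, duration and thread count here are assumptions, not the exact options from the test:

    # create the test files (size is an assumption)
    sysbench fileio --file-total-size=4G prepare

    # run read-only (rndrd), write-only (rndwr) and mixed (rndrw) tests
    for mode in rndrd rndwr rndrw; do
        sysbench fileio --file-total-size=4G --file-test-mode=$mode \
            --time=60 --threads=2 run
    done

    # remove the test files
    sysbench fileio --file-total-size=4G cleanup

The mixed rndrw mode is the case where 4.8 beat 4.13 here, so comparing its throughput across the two kernels reproduces the interesting result.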

Meltdown vs MySQL part 2: in-memory sysbench and a core i5 NUC

This is my second performance report for the Meltdown patch using in-memory sysbench and a small server. In this test I used a core i5 NUC with the 4.13 and 4.8 kernels. In the previous test I used a core i3 NUC with the 4.4 kernel.

- results for 4.13 are mixed -- sometimes there is more QPS with the fix enabled, sometimes there is more with the fix disabled. The typical difference is small, about 2%.
- QPS for 4.8, which doesn't have the Meltdown fix, are usually better than with 4.13. The largest difference is ~10%, and the differences tend to be larger at 1 client than at 2 or 8.

Configuration

My usage of sysbench is described here. The servers are described here. For this test I used the core i5 NUC (NUC7i5bnh) with Ubuntu 16.04. I have 3 such servers and ran tests with the fix enabled (kernel 4.13.0-26), the fix disabled via pti=off (kernel 4.13.0-26) and the old kernel (4.8.0-36) that doesn't have the fix. From cat /proc/cpuinfo I see pcid. This server ...
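A hedged sketch of how the fix can be toggled and verified on a setup like this; the GRUB file path is the Ubuntu default, and the exact dmesg text varies by kernel version:

    # confirm the CPU advertises pcid, which lowers the cost of the fix
    grep -w pcid /proc/cpuinfo

    # disable the Meltdown fix by adding pti=off to the kernel command line:
    # edit GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
    sudo update-grub && sudo reboot

    # after reboot, check whether page-table isolation is active
    # (the message text differs between kernels and backports)
    dmesg | grep -i 'page table'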

XFS, nobarrier and the 4.13 Linux kernel

tl;dr

- nobarrier isn't supported as a mount option for XFS in kernel 4.13.0-26 with Ubuntu 16.04. I assume this isn't limited to Ubuntu. Read this for more detail on the change.
- write throughput is much worse on my SSD without nobarrier
- there is no error on the command line when mounting a device that uses the nobarrier option
- there is an error message in dmesg output for this

There might be two workarounds:

- switch from XFS to ext4
- echo "write through" > /sys/block/$device/queue/write_cache

The Story

I have a NUC cluster at home for performance tests with 3 NUC5i3ryh and 3 NUC7i5bnh. I recently replaced the SSD devices in all of them because previous testing wore them out. I use Ubuntu 16.04 LTS and recently upgraded the kernel on some of them to get the fix for Meltdown. The NUC7i5bnh server has a Samsung 960 EVO SSD that uses NVMe. I use the HWE kernel to make wireless work. The old kernel without the Meltdown fix is 4.8.0-36 and ...
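A minimal sketch of the symptom and the second workaround, assuming an NVMe device named nvme0n1 with an XFS filesystem mounted at /data:

    # the mount succeeds silently even though nobarrier is ignored on 4.13
    sudo mount -o nobarrier /dev/nvme0n1 /data

    # the complaint about the unsupported option only shows up here
    dmesg | tail

    # workaround: mark the device cache as write-through so the block
    # layer stops issuing cache flushes, similar to what nobarrier did
    echo "write through" | sudo tee /sys/block/nvme0n1/queue/write_cache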

Meltdown vs MySQL part 1: in-memory sysbench and a core i3 NUC

This is my first performance report for the Meltdown patch using in-memory sysbench and a small server.

- the worst case overhead was ~5.5%
- a typical overhead was ~2%
- QPS was similar between the kernel with the Meltdown fix disabled and the old kernel
- the overhead with too much concurrency (8 clients) wasn't worse than the overhead without too much concurrency (1 or 2 clients)

Configuration

My usage of sysbench is described here. The servers are described here. For this test I used the core i3 NUC (NUC5i3ryh) with Ubuntu 16.04. I have 3 such servers and ran tests with the fix enabled (kernel 4.4.0-109), the fix disabled via pti=off (kernel 4.4.0-109) and the old kernel (4.4.0-38) that doesn't have the fix. From cat /proc/cpuinfo I see pcid. The servers have 2 cores and 4 HW threads. I normally use them for low-concurrency benchmarks with 1 or 2 concurrent database clients. For this test I used 1, 2 and 8 concurrent clients to determine whether more ...
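A sketch of an in-memory sysbench run at the three client counts used here; the workload script, table size and MySQL credentials are assumptions, since the exact sysbench usage is in the linked description:

    # load one small table that fits in the InnoDB buffer pool
    # (workload name, table size and credentials are assumptions)
    sysbench oltp_point_select --mysql-user=sbtest --mysql-password=sbtest \
        --mysql-db=sbtest --tables=1 --table-size=1000000 prepare

    # repeat the test at 1, 2 and 8 concurrent clients
    for t in 1 2 8; do
        sysbench oltp_point_select --mysql-user=sbtest --mysql-password=sbtest \
            --mysql-db=sbtest --tables=1 --table-size=1000000 \
            --threads=$t --time=60 run
    done

With 2 cores and 4 HW threads, the 8-client run oversubscribes the CPU, which is what the last bullet above probes.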