I use Intel NUC servers at home to test open source databases. I like them because they are small, quiet and don't use much power. For about 2 years I have been using NUC5i3ryh servers with a 5th-gen Core i3 CPU, 8GB of RAM, a 2.5" SATA disk for the OS and a 120GB Samsung 850 EVO m.2 SSD for the database. I used these so much that I replaced the SSD devices last year after one reached its endurance limit.
I am upgrading to a new setup using the NUC7i5bnh. It has a 7th-gen Core i5, 16GB of RAM, a 2.5" SATA SSD (Samsung 850 EVO) for the OS and an m.2 SSD (Samsung 960 EVO) for the database. That is twice the RAM, twice the CPU and more than twice the IOPS of my old setup. The old and new setups both run Ubuntu 16.04 server.
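I didn't publish a formal storage benchmark here, but something like this fio job is how I'd compare random-read IOPS on the two m.2 devices (a sketch; the /data file path and the job parameters are my assumptions, not what I actually ran):

# random 4KB reads with O_DIRECT against the perf test device
fio --name=randread --filename=/data/fio.tmp --size=4G \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting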
The first performance comparison is "make -j4" for MySQL 8.0.1: 1307 seconds on the old NUC versus 684 seconds on the new one.
BIOS
I disabled turbo mode in the BIOS on the NUC7i5bnh; there is no turbo mode on the NUC5i3ryh. I did this to avoid variance in performance -- the CPU enters turbo mode, gets too hot, drops out of it, and repeats. Perhaps this would not be a problem if I kept these devices in a cold room.
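Turbo can also be checked and toggled from Linux when the intel_pstate driver is loaded; a sketch (this does not persist across reboots, which is one reason to prefer the BIOS setting):

cat /sys/devices/system/cpu/intel_pstate/no_turbo   # 1 means turbo is disabled
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo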
Storage
I use two storage devices: one handles the OS install while the other is used for perf tests. The perf test SSD wears out and gets replaced, and I don't want to lose my OS install when that happens. I prefer XFS to ext4 for database workloads. The perf test device is mounted at /data and the entry in /etc/fstab is listed below. I use noauto so that a failed mount during boot (because the device is worn out) doesn't cause problems, which means I must mount it manually after each reboot. Sometimes I forget to do that.
UUID=... /data xfs noatime,nodiratime,discard,noauto 0 1
This post explains how to get device endurance stats from the SSD.
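A quick way to check endurance from the command line (a sketch; the device names are assumptions for this setup, and you need the smartmontools and nvme-cli packages):

sudo smartctl -A /dev/sda        # SATA 850 EVO: see Wear_Leveling_Count, Total_LBAs_Written
sudo nvme smart-log /dev/nvme0   # NVMe 960 EVO: see percentage_used, data_units_written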
Debugging
Ubuntu's default security options get in the way of PMP (poor man's profiler). This fixes that:
sudo sh -c "echo -1 > /proc/sys/kernel/perf_event_paranoid"
sudo sh -c "echo 0 > /proc/sys/kernel/yama/ptrace_scope"
sudo sh -c "echo 0 > /proc/sys/kernel/kptr_restrict"
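Those commands don't survive a reboot. A sketch of making them persistent via sysctl (the file name is my choice, not a requirement):

# /etc/sysctl.d/60-perf-debug.conf (hypothetical file name)
kernel.perf_event_paranoid = -1
kernel.yama.ptrace_scope = 0
kernel.kptr_restrict = 0

Load it with sudo sysctl --system.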
Huge Pages
For some engines I disable transparent huge pages and this script makes that easy:
#!/bin/sh
# Usage: thp.sh [always|madvise|never]
echo $1 > /sys/kernel/mm/transparent_hugepage/defrag
echo $1 > /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag
cat /sys/kernel/mm/transparent_hugepage/enabled
Can also try this in /etc/default/grub and then run update-grub:
GRUB_CMDLINE_LINUX_DEFAULT="transparent_hugepage=never"
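The full sequence for the GRUB approach, assuming a default Ubuntu 16.04 install:

sudo vi /etc/default/grub   # add transparent_hugepage=never to GRUB_CMDLINE_LINUX_DEFAULT
sudo update-grub
sudo reboot
# after reboot, expect: always madvise [never]
cat /sys/kernel/mm/transparent_hugepage/enabled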
NUC7 networking
The install was easy with one exception. The old setup used wired networking; this time I enabled wireless after the install finished, and that took a few hours to figure out. The important steps are:
- Install the HWE-enabled kernel to get the drivers that support the Intel wireless HW in this server. I didn't do this at first and dmesg | grep iwl showed nothing even though the firmware for that Intel HW was installed. With the HWE kernel I see this in the dmesg output: Detected Intel(R) Dual Band Wireless AC 8265. The HWE kernel can be selected at the GRUB menu during the install. I assume this step won't be needed once the NUC7i5bnh HW becomes less new.
- After the install finishes, install wireless-tools via sudo apt-get install wireless-tools. Without this, ifup -v wlan0 failed.
- Edit /etc/network/interfaces. This assumes you are using an unsecured network. See below.
I changed /etc/network/interfaces to enable wireless and disable wired networking using the contents below. After editing the file I tested my changes via sudo ifup -v wlp58s0. If you get it wrong this will take a few minutes to fail. Note that $name is the name of your wireless network and that this works only when you are running an open network.
# The loopback network interface
auto lo
iface lo inet loopback
# Wired networking is not started automatically
auto eno1
iface eno1 inet manual
#iface eno1 inet dhcp
# Wireless networking is started automatically
auto wlp58s0
iface wlp58s0 inet dhcp
wireless-essid $name
wireless-mode Managed
Wireless is then stopped and started via sudo ifdown wlp58s0; sudo ifup -v wlp58s0. There has been a bug that causes wireless to stop working; fixing that requires me to connect directly via a console and restart the interface. Maybe I could have added a cron job to check for that, but I never got around to it -- a sketch of one is below.
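This is roughly what such a check could look like (hypothetical; the interface name comes from the config above, the script path and cron schedule are my choices, and it would need to run from root's crontab):

#!/bin/sh
# Hypothetical wifi watchdog; run from root's crontab, e.g.
#   */5 * * * * /usr/local/bin/wifi-watchdog.sh
# Restarts wlp58s0 when the default gateway stops answering pings.
GW=$(ip route | awk '/^default/ {print $3; exit}')
if [ -n "$GW" ] && ping -c 3 -W 2 "$GW" > /dev/null 2>&1; then
    exit 0
fi
ifdown wlp58s0
ifup wlp58s0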
NUC5 networking
This is similar to the NUC7 networking above, but the interface name is wlan0 rather than wlp58s0 and wireless is stopped and started via sudo ifdown wlan0; sudo ifup -v wlan0. Below are the contents of /etc/network/interfaces:
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet manual
#iface eth0 inet dhcp
# wireless
auto wlan0
iface wlan0 inet dhcp
wireless-essid $name
wireless-mode Managed
Intel NUCs are a real joy to use. I have 2 of them running 24-hour runs of our test suite for my own development tasks whenever I need to, which is quite often. The only time I hear them is when they compile MySQL; running the test suite they spin along nicely and produce nice results. I have a feeling I will also exhaust the SSDs eventually, but so far so good. One of them is even equipped with 2 SSDs. I had some issues installing Linux Mint on them, but some Googling eventually solved that.
Received 2 new NUC7i5bnh boxes to repair broken SATA cables. After putting the old storage devices into them they wouldn't boot. Reinstalling grub fixed that:
1) boot via USB
2) enable HWE kernel
3) choose 'repair", answer a few questions
4) mount /dev/sda2 as /
5) get a shell and then "mount /dev/sda1 /boot/efi; grub-install /dev/sda1"
Doing "reinstall grub" via the menu didn't work
One more thing - when doing the install, make sure the NUC is wired to the internet, as a wifi-only setup never worked for me.
I also disabled turbo boost in the BIOS for the i5 NUC. With it enabled they ran fast, then got too hot and ran slow, repeat. That introduced too much variance in my results. I wonder if anyone publishes CPU benchmarks with turbo disabled.