Wednesday, March 8, 2017

How to build MongoRocks for MongoDB 3.4

This post explains how to build MongoRocks for MongoDB 3.4 and is derived from my notes for building MongoRocks for MongoDB 3.2. My build server runs Ubuntu 16.04.

# Install many of the dependencies for MongoRocks. These are the
# yum commands from my MongoDB 3.2 notes; I assume the package
# list is still valid.
sudo yum install snappy-devel zlib-devel bzip2-devel lz4-devel
sudo yum install scons gcc-c++ git
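
Since my server runs Ubuntu 16.04, I use apt-get instead. The package names below are my best guess at the Ubuntu equivalents of the yum packages above:

# assumed Ubuntu 16.04 equivalents of the packages above
sudo apt-get install libsnappy-dev zlib1g-dev libbz2-dev liblz4-dev
sudo apt-get install scons g++ git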

# Unpack MongoDB 3.4 source in $MONGOSRC
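
For example, this is one way to fetch and unpack the source, assuming the 3.4.2 tarball from mongodb.org (adjust the version as needed):

# fetch and unpack the MongoDB source (3.4.2 is an example version)
cd ~
wget https://fastdl.mongodb.org/src/mongodb-src-r3.4.2.tar.gz
tar xzf mongodb-src-r3.4.2.tar.gz
export MONGOSRC=~/mongodb-src-r3.4.2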

# Directory in which git repos are created
mkdir ~/git

# Get MongoRocks engine
cd ~/git
git clone https://github.com/mongodb-partners/mongo-rocks.git
cd mongo-rocks
git checkout --track origin/v3.4 -b v34

# figure out which version of gcc & g++ is installed
# for Ubuntu 16.04 that is 5.4
g++ --version

# get and build RocksDB libraries
# disable the use of jemalloc features

git clone https://github.com/facebook/rocksdb.git
cd rocksdb
git checkout --track origin/5.2.fb -b 52fb
EXTRA_CFLAGS=-fPIC EXTRA_CXXFLAGS=-fPIC DISABLE_JEMALLOC=1 make static_lib
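
The build takes a while; adding -j$(nproc) to the make command speeds it up. When it finishes, the static library should exist in the rocksdb directory:

# confirm the static library was built
ls -lh librocksdb.a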

# prepare source build with support for RocksDB
cd $MONGOSRC
mkdir -p src/mongo/db/modules/
ln -sf ~/git/mongo-rocks src/mongo/db/modules/rocks
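
A quick sanity check that the symlink resolves and the module source is where the MongoDB build expects it:

# should list the mongo-rocks source tree
ls -l src/mongo/db/modules/rocks/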

# Build the mongod & mongo binaries
# You can edit LIBS="..." depending on the compression libs
# installed on your build server and enabled for RocksDB.
# To debug and see the full command lines add --debug=presub
# To use glibc malloc rather than tcmalloc add --allocator=system

scons CPPPATH=/home/mdcallag/git/rocksdb/include \
      LIBPATH=/home/mdcallag/git/rocksdb \
      LIBS="lz4 zstd bz2" mongod mongo

# install mongod; I used ~/b/m342 but you can use something else
mkdir -p ~/b/m342
cd ~/b/m342
mkdir data
mkdir bin
cp $MONGOSRC/build/opt/mongo/mongod bin
cp $MONGOSRC/build/opt/mongo/mongo bin
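
A quick check that these are the binaries you just built; the output should include a "modules: rocks" line like the startup log shown below:

# confirm the version and that the rocks module is compiled in
bin/mongod --version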

# create mongo.conf with the text that follows. You must change
# $HOME to your real home directory (mongod does not expand shell
# variables in the config file) and consider changing the value
# for cacheSizeGB.
---
processManagement:
  fork: true
systemLog:
  destination: file
  path: $HOME/b/m342/log
  logAppend: true
storage:
  syncPeriodSecs: 600
  dbPath: $HOME/b/m342/data
  journal:
    enabled: true
operationProfiling.slowOpThresholdMs: 2000
replication.oplogSizeMB: 4000
storage.rocksdb.cacheSizeGB: 1
---
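
Since mongod won't expand $HOME for you, one way to substitute it (a sketch, assuming the file is named mongo.conf in the current directory):

# substitute the literal $HOME placeholder with the real path
sed -i "s|\$HOME|$HOME|g" mongo.conf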

# start mongod, consider using numactl --interleave=all
bin/mongod --config mongo.conf --master --storageEngine rocksdb
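
With NUMA interleaving, assuming numactl is installed, that becomes:

numactl --interleave=all bin/mongod --config mongo.conf --master --storageEngine rocksdb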

# confirm RocksDB is there
ls data/db
> 000007.sst  CURRENT  IDENTITY  journal  LOCK  LOG  MANIFEST-000008  OPTIONS-000005

# confirm the rocks module is in the mongod log (systemLog.path),
# not to be confused with the RocksDB-internal data/db/LOG file
$ head -5 log
2017-03-08T09:38:33.747-0800 I CONTROL  [initandlisten] MongoDB starting : pid=19869 port=27017 dbpath=/home/mdcallag/b/m342/data master=1 64-bit host=nuc2
2017-03-08T09:38:33.747-0800 I CONTROL  [initandlisten] db version v3.4.2
2017-03-08T09:38:33.747-0800 I CONTROL  [initandlisten] git version: 3f76e40c105fc223b3e5aac3e20dcd026b83b38b
2017-03-08T09:38:33.747-0800 I CONTROL  [initandlisten] allocator: tcmalloc
2017-03-08T09:38:33.747-0800 I CONTROL  [initandlisten] modules: rocks
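
Another way to confirm the engine from the shell; db.serverStatus().storageEngine reports the engine name in 3.4:

# should print rocksdb
bin/mongo --quiet --eval "db.serverStatus().storageEngine.name"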

5 comments:

  1. Hey Mark -- I was looking through your slides from Percona 2017. Do you know if the videos will be posted somewhere eventually? I didn't get a chance to attend but I'd love to watch the session.

     Reply: I don't know. I plan to redo the material as blog posts, which gives me more time to go into depth.

  2. Percona is slowly putting them up, so that's great :)

     https://www.percona.com/live/17/resources/videos

  3. I just watched the talk -- awesome! I have a couple of questions, and I would love to see this data included in the upcoming blog posts. I love the latency histogram, very useful.

     Do you recall which versions of the DBs you used?
     Was compression enabled for the WT insert test on the second (larger) server? Looking at the db size, it was 1.8x the size on disk.

     Reply: I used Percona Server for MongoDB 3.4.2.

     The WT insert result was with snappy compression, but for the insert-only part of the test compression doesn't change throughput: snappy, zlib and no compression had similar insert rates on that server.
