This repository contains instructions and source code for compiling the artifact and reproducing the results in the Calibrated Interrupts (cinterrupts) paper, to appear in OSDI '21 (see the paper PDF at the top level of this repository).
At the end of this document, we describe how to compile and install the evaluation environment, should the evaluator choose to do so. However, because the evaluation requires the custom cinterrupts kernel, we have set up a ready-to-use environment for evaluators on our machine.
Our system works closely with real hardware, and reproducing our results requires a low-latency Intel Optane NVMe SSD (or similar). In addition, our scripts assume that the underlying SSD is attached to NUMA node #1, which hosts cores 1, 3, 5, and 7; a different configuration will require updating the scripts accordingly.
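You can check your machine's topology before running the scripts (a sketch; nvme0n1 is a placeholder for your device name):

$> cat /sys/block/nvme0n1/device/numa_node   # NUMA node the SSD is attached to
$> lscpu | grep "NUMA node"                  # cores hosted by each NUMA node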
Furthermore, in our experience, different CPUs and machine setups require different macrobenchmark (application) configurations to saturate the CPU. This is why we provide evaluators with access to our setup, which has an Intel Optane NVMe SSD installed and a preconfigured build environment. Please contact the authors for instructions on accessing this setup remotely. (Account usernames/passwords and machine IPs are privileged information that we prefer to send out-of-band.)
- `linux-kernel`: directory with Linux kernel sources and the cinterrupts patch
- `linux-kernel/cinterrupts-01-basis.patch`: device emulation and nvme driver
- `linux-kernel/cinterrupts-02-rocks-addon.patch`: addition of multi-queue support for RocksDB and other macrobenchmarks
- `linux-kernel/linux-kernel-5.0.0-16.17.tgz-part-a[abcd]`: split archive of the vanilla Linux kernel ver. 5.0.0-16.17
- `linux-kernel/config-file`: config file used for our kernel compilation
- `build-kernel.sh`: script to extract the Linux kernel source, apply the cinterrupts patch, and compile the kernel
- `fio`: directory with fio 3.12 sources and the cinterrupts patch for fio
- `fio/fio-3.12.tgz`: sources of the original fio version 3.12
- `fio/fio-3.12-barrier.patch`: patch with cinterrupts support in fio, plus additional statistics that we used in our results analysis
- `build-fio.sh`: script to extract the fio source, apply the cinterrupts patch, and compile fio
- `utils`: directory with scripts we use in our project
- `fig5`: scripts to reproduce Figure 5 in the paper; cd to `fig5` and run `make-all.sh`, see `fig5.pdf`
- `fig6`: scripts to reproduce Figure 6 in the paper; cd to `fig6` and run `make-all.sh`, see `fig6.pdf`
- `fig7`: scripts to reproduce Figure 7 in the paper; cd to `fig7` and run `make-all.sh`, see `fig7.pdf`
- `fig10`: scripts to reproduce Figure 10 in the paper; cd to `fig10` and run `make-all.sh`, see `fig10.pdf`
- `fig14`: scripts to reproduce Figure 14 in the paper; cd to `fig14` and run `make-all.sh`, see `fig14.pdf`
- `fig15`: scripts to reproduce Figure 15 in the paper; cd to `fig15` and refer to `README`
- `fig16`: scripts to reproduce Figure 16 in the paper; cd to `fig16` and refer to `README`
- `rocksdb`: directory with `cint.patch` and RocksDB v6.4.6 sources
- `kvell`: directory with KVell sources
- `tab3`: scripts to reproduce Table 3 in the paper; cd to `tab3` and refer to `README`
- `tab5+fig17`: scripts to reproduce Table 5 and Figure 17 in the paper; cd to `tab5+fig17` and refer to `README`
We highly recommend that you build on Ubuntu 16.04. To build the custom cint kernel, you will need the usual dependencies for building the Linux kernel, including `libssl-dev`, `bison`, `flex`, and optionally `dh-exec`. If you hit a compilation error, it is likely because one of these packages is missing.
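One way to install these on Ubuntu (a sketch; depending on your base installation, you may need additional packages such as libelf-dev or bc):

$> sudo apt-get install build-essential libssl-dev bison flex dh-exec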
Run build-kernel.sh in the top-level directory of this repository.
This builds and installs our custom kernels for the micro- and
macrobenchmarks; you only need to run this script once. To simplify artifact
testing, we have already run it, which extracted, compiled, and installed
our kernels into the linux-kernel/linux-kernel-5.0.0-16.17-nvmecint and
linux-kernel/linux-kernel-5.0.0-16.17-nvmecint-rocks directories.
We install two kernels:
- `5.0.8-nvmecint` is used to test microbenchmarks (fig5, fig6, fig7, fig10, fig14); it emulates a single SQ/CQ pair.
- `5.0.8-nvmecint-rocks` is used to test multithreaded macrobenchmarks (fig15, fig16, tab3, tab5+fig17); same as above, with the addition of multiple SQ/CQ pair emulation.
To boot into the 5.0.8-nvmecint kernel, run:
$> sudo grub-reboot "Ubuntu, with Linux 5.0.8-nvmecint"
$> sudo reboot
To boot into the 5.0.8-nvmecint-rocks kernel, run:
$> sudo grub-reboot "Ubuntu, with Linux 5.0.8-nvmecint-rocks"
$> sudo reboot
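After rebooting, you can confirm that the expected kernel booted (the output should match the kernel name passed to grub-reboot):

$> uname -r
5.0.8-nvmecint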
Once the kernel is loaded, the driver is ready. If you modify the driver and need to recompile it, run:
$> cd linux-kernel/linux-kernel-5.0.0-16.17-nvmecint
$> sh nvme-make.sh
After that, to switch between the different NVMe interrupt emulations and the original driver, simply unload and load the appropriate nvme driver with the relevant parameters:
$> cd linux-kernel/linux-kernel-5.0.0-16.17-nvmecint
$> sh ./nvme-reload.sh our-sol
$>
$> sh ./nvme-reload.sh
Usage: ./nvme-reload.sh {orig|emul|our-sol}
orig -- original nvme driver, for-bare-metal tests
emul -- emulation of the original nvme driver on a side core
emul-100-32 -- emulation of the original nvme driver with 100 usec and 32 thr aggregation params
our-sol -- side-core emulation of our nvme prototype with URGENT and BARRIER flags
alpha -- side-core emulation of our nvme prototype, only adaptive coalescing
alpha0 -- side-core emulation of our nvme prototype, without any thresholds (new baseline0)
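For example, to switch to the emulated baseline driver and verify that the nvme module reloaded (a sketch; lsmod simply lists the loaded kernel modules):

$> sh ./nvme-reload.sh emul
$> lsmod | grep nvme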
To change the parameters, edit the config files of the form nvme-$(hostname)-$(mode).conf, for example:
$> cd linux-kernel/linux-kernel-5.0.0-16.17-nvmecint
$> vim nvme-$(hostname).conf # params for the cinterrupts driver
$> vim nvme-$(hostname)-clean.conf # params for the original nvme driver
$> vim nvme-$(hostname)-emul.conf # params for the emulated nvme device driver
After booting into this custom kernel, compile the fio benchmark.
Run build-fio.sh in the top-level directory of this repository.
Path to fio from the top-level directory: fio/fio-3.12/fio
If you can successfully run fio, you are ready!
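As a quick sanity check (a sketch; /dev/nvme0n1 is a placeholder for your Optane device, and the job parameters are illustrative), a short read-only run against the raw device should complete without errors:

$> sudo fio/fio-3.12/fio --name=sanity --filename=/dev/nvme0n1 --rw=randread --bs=4k --iodepth=1 --direct=1 --time_based --runtime=5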
You should compile the following applications, which we modified for cinterrupts:
- fio (just run the `build-fio.sh` script in the top-level directory)
- RocksDB (just run `build-rocksdb.sh` in the top-level directory)
- KVell (just run `build-kvell.sh` in the top-level directory)
In the figX/ subdirectories, we provide scripts and instructions for
reproducing the key figures in our paper; e.g., the fig5 directory
contains all scripts needed to reproduce Figure 5 in the paper.
Enter a figX directory and run make-all.sh.
The test results appear in figX.pdf, but please check the README in
each directory to confirm the output. For example, results for the tables
in the paper are stored directly in *.out files.
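If you want to reproduce all of the microbenchmark figures in one go (a sketch; this simply chains the per-directory steps described above and assumes you have booted the 5.0.8-nvmecint kernel):

$> for d in fig5 fig6 fig7 fig10 fig14; do (cd $d && sh make-all.sh); done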
Note that for microbenchmarks, our scripts run each benchmark 10 times, for 60 seconds per run. Since there are multiple flavours of each test, the total runtime can be very long. To reduce the total runtime, evaluators can change the `runtime` and `runs` variables in the test scripts. For macrobenchmarks, experiments can also take a while to run (up to 45 min), as each benchmark runs 5 times. Consider using tmux to make sure a benchmark keeps running even if the SSH connection drops.
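For example, to shorten the fig5 microbenchmarks (a sketch that assumes the scripts assign these variables as runtime=60 and runs=10; check the actual assignments in each script before editing):

$> sed -i 's/^runtime=60/runtime=20/' fig5/*.sh   # shorter runs
$> sed -i 's/^runs=10/runs=3/' fig5/*.sh          # fewer repetitions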