Purpose
Describe the steps needed to build and test a complete Lustre system (MGS, MDS/MDT, OSS/OST, client) from the master branch on an x86_64 RHEL/Rocky Linux 8.7 machine.
Prerequisite
- A freshly installed RHEL/Rocky Linux 8 x86_64 machine connected to the internet.
- NOTE: It is suggested that you have at least 1GB of memory on the machine you are using for the build.
- NOTE: Verify that SELinux is disabled.
- This manual uses the Rocky Linux 8 distribution available at the link.
- It is assumed that Rocky Linux 8 was installed as-is, with nothing else installed on it.
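The SELinux prerequisite can be checked with getenforce and disabled persistently by setting SELINUX=disabled in /etc/selinux/config (followed by a reboot). The snippet below is a minimal sketch of that edit, performed on a throwaway copy of the config file so it is safe to run anywhere:

```shell
# On the real machine:
#   getenforce                 # prints Enforcing, Permissive, or Disabled
#   edit /etc/selinux/config   # set SELINUX=disabled, then reboot
# Demonstrated here on a temporary copy of the config file:
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"    # SELINUX=disabled
rm -f "$cfg"
```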
Overview
This document walks through the steps of patching the kernel, building Lustre and running a basic test of the complete system.
Process
Provision the machine and install dependencies.
Once RHEL/Rocky 8.7 is newly installed on an x86_64 machine, log in as user root.
Install the kernel development tools:
yum -y groupinstall "Development Tools"
Install additional dependencies:
yum config-manager --set-enabled powertools
dnf install -y gcc autoconf libtool which make patch diffutils file binutils-devel python38 python3-devel elfutils-devel libselinux-devel libaio-devel dnf-plugins-core bc bison flex git libyaml-devel libnl3-devel libmount-devel json-c-devel redhat-lsb libssh-devel libattr-devel libtirpc-devel libblkid-devel openssl-devel libuuid-devel texinfo texinfo-tex
yum -y install audit-libs-devel binutils-devel elfutils-devel kabi-dw ncurses-devel newt-devel numactl-devel openssl-devel pciutils-devel perl perl-devel python2 python3-docutils xmlto xz-devel elfutils-libelf-devel libcap-devel libcap-ng-devel llvm-toolset libyaml libyaml-devel kernel-rpm-macros kernel-abi-whitelists
dnf install -y epel-release
dnf install -y ccache
Install tools for kernel RPM build:
dnf install -y bpftool dwarves java-devel libbabeltrace-devel libbpf-devel libmnl-devel net-tools rsync
# May only be needed on RHEL9 derivatives:
dnf install -y python3-devel
Install e2fsprogs packages:
git clone "https://review.whamcloud.com/tools/e2fsprogs" e2fsprogs && cd e2fsprogs
git checkout v1.47.0-wc1
./configure --with-root-prefix=/usr --enable-elf-shlibs --disable-uuidd --disable-fsck --disable-e2initrd-helper --disable-libblkid --disable-libuuid --enable-quota --disable-fuse2fs
make -j8
sudo make install
cd ..
Alternatively, download and install prebuilt e2fsprogs packages from https://build.whamcloud.com/
Preparing the Lustre source.
Create a user build with the home directory /home/build, and switch to this user:
useradd -m build
su build
cd $HOME
Prepare Lustre src
git clone "https://review.whamcloud.com/fs/lustre-release" && cd lustre-release && sh ./autogen.sh
Prepare a patched kernel for Lustre
In this walk-through, the kernel is built using rpmbuild, a tool specific to RPM-based distributions.
First create the directory structure, then get the source from the RPM. Create a .rpmmacros file so the kernel source is installed in our user directory:
cd $HOME && mkdir -p kernel/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
cd kernel && echo '%_topdir %(echo $HOME)/kernel/rpmbuild' > ~/.rpmmacros
Install the kernel source:
rpm -ivh https://download.rockylinux.org/pub/rocky/8/BaseOS/source/tree/Packages/k/kernel-4.18.0-425.19.2.el8_7.src.rpm
Prepare the source using rpmbuild:
cd ~/kernel/rpmbuild && rpmbuild -bp --target=`uname -m` ./SPECS/kernel.spec
Copy the kernel config file into lustre tree:
cp ~/kernel/rpmbuild/BUILD/kernel-4.18.0-425.19.2.el8_7/linux-4.18.0-425.19.2.el8.x86_64/configs/kernel-4.18.0-x86_64.config ~/lustre-release/lustre/kernel_patches/kernel_configs/kernel-4.18.0-4.18-rhel8.7-x86_64.config
Edit the kernel config file ~/lustre-release/lustre/kernel_patches/kernel_configs/kernel-4.18.0-4.18-rhel8.7-x86_64.config:
Find the line with '# IO Schedulers' and insert the following two lines below it:
CONFIG_IOSCHED_DEADLINE=y
CONFIG_DEFAULT_IOSCHED="deadline"
Or use the command:
sed -i '/# IO Schedulers/a CONFIG_IOSCHED_DEADLINE=y\nCONFIG_DEFAULT_IOSCHED="deadline"' ~/lustre-release/lustre/kernel_patches/kernel_configs/kernel-4.18.0-4.18-rhel8.7-x86_64.config
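Before touching the real config, the sed command can be sanity-checked against a small mock file containing the anchor comment; the file name below is illustrative:

```shell
# Mock fragment of a kernel config containing the '# IO Schedulers' anchor:
cfg=$(mktemp)
printf '# IO Schedulers\nCONFIG_IOSCHED_BFQ=m\n' > "$cfg"
# Same append command as above, pointed at the mock file (GNU sed treats
# \n in the appended text as a newline):
sed -i '/# IO Schedulers/a CONFIG_IOSCHED_DEADLINE=y\nCONFIG_DEFAULT_IOSCHED="deadline"' "$cfg"
grep -A2 '# IO Schedulers' "$cfg"
rm -f "$cfg"
```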
Gather all the patches from the lustre tree into a single file:
cd ~/lustre-release/lustre/kernel_patches/series && \
for patch in $(<"4.18-rhel8.7.series"); do \
    patch_file="$HOME/lustre-release/lustre/kernel_patches/patches/${patch}"; \
    cat "${patch_file}" >> "$HOME/lustre-kernel-x86_64-lustre.patch"; \
done
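The loop above simply concatenates, in series order, every patch named in the series file into one combined patch. The same pattern is sketched below with throwaway file names so it can be run standalone:

```shell
work=$(mktemp -d)
# A mock series file listing two mock patches, in order:
printf 'first.patch\nsecond.patch\n' > "$work/demo.series"
echo '--- first'  > "$work/first.patch"
echo '--- second' > "$work/second.patch"
# Same pattern as above: append each listed patch to one combined file.
for patch in $(<"$work/demo.series"); do
    cat "$work/${patch}" >> "$work/combined.patch"
done
cat "$work/combined.patch"
rm -rf "$work"
```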
Copy the kernel patch into RPM build tree:
cp ~/lustre-kernel-x86_64-lustre.patch ~/kernel/rpmbuild/SOURCES/patch-4.18.0-lustre.patch
Edit the kernel spec file ~/kernel/rpmbuild/SPECS/kernel.spec:
sed -i.inst -e '/^ find $RPM_BUILD_ROOT\/lib\/modules\/$KernelVer/a\
 cp -a fs/ext4/* $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/fs/ext4\
 rm -f $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/fs/ext4/ext4-inode-test*' \
    -e '/^# empty final patch to facilitate testing of kernel patches/i\
Patch99995: patch-%{version}-lustre.patch' \
    -e '/^ApplyOptionalPatch linux-kernel-test.patch/i\
ApplyOptionalPatch patch-%{version}-lustre.patch' \
    ~/kernel/rpmbuild/SPECS/kernel.spec
Overwrite the kernel config file with ~/lustre-release/lustre/kernel_patches/kernel_configs/kernel-4.18.0-4.18-rhel8.7-x86_64.config:
echo '# x86_64' > ~/kernel/rpmbuild/SOURCES/kernel-4.18.0-x86_64.config
cat ~/lustre-release/lustre/kernel_patches/kernel_configs/kernel-4.18.0-4.18-rhel8.7-x86_64.config >> ~/kernel/rpmbuild/SOURCES/kernel-4.18.0-x86_64.config
Start building the kernel with rpmbuild:
cd ~/kernel/rpmbuild && buildid="_lustre" && \
rpmbuild -ba --with firmware --target x86_64 --with baseonly \
    --without kabichk --define "buildid ${buildid}" \
    ~/kernel/rpmbuild/SPECS/kernel.spec
Install the kernel packages:
cd ~/kernel/rpmbuild/RPMS/x86_64/
sudo rpm -Uvh --replacepkgs --force kernel-*.rpm
sudo reboot
Log in after the reboot, run uname -r, and verify:
# uname -r
4.18.0-425.19.2.el8_lustre.x86_64
Configure and build Lustre
Configure Lustre source:
cd lustre-release
./configure --with-linux=/home/build/kernel/rpmbuild/BUILD/kernel-4.18.0-425.19.2.el8_7/linux-4.18.0-425.19.2.el8_lustre.x86_64/ --disable-gss --disable-shared --disable-crypto
Build and install Lustre:
make -j8
sudo make install
sudo depmod -a
Run /usr/lib64/lustre/tests/llmount.sh to test Lustre:
/usr/lib64/lustre/tests/llmount.sh
Stopping clients: sr050 /mnt/lustre (opts:-f)
Stopping clients: sr050 /mnt/lustre2 (opts:-f)
sr050: executing set_hostid
Loading modules from /usr/lib64/lustre/tests/..
detected 8 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
Formatting mgs, mds, osts
Format mds1: /tmp/lustre-mdt1
Format ost1: /tmp/lustre-ost1
Format ost2: /tmp/lustre-ost2
Checking servers environments
Checking clients sr050 environments
Loading modules from /usr/lib64/lustre/tests/..
detected 8 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /tmp/lustre-mdt1
Started lustre-MDT0000
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /tmp/lustre-ost1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Commit the device label on /tmp/lustre-ost2
Started lustre-OST0001
Starting client: sr050: -o user_xattr,flock sr050@tcp:/lustre /mnt/lustre
UUID 1K-blocks Used Available Use% Mounted on
lustre-MDT0000_UUID 125056 1644 112176 2% /mnt/lustre[MDT:0]
lustre-OST0000_UUID 313104 1244 284700 1% /mnt/lustre[OST:0]
lustre-OST0001_UUID 313104 1244 284700 1% /mnt/lustre[OST:1]
filesystem_summary: 626208 2488 569400 1% /mnt/lustre
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffffce9052651800.idle_timeout=debug
osc.lustre-OST0001-osc-ffffce9052651800.idle_timeout=debug
setting jobstats to procname_uid
Setting lustre.sys.jobid_var from disable to procname_uid
Waiting 90s for 'procname_uid'
Updated after 4s: want 'procname_uid' got 'procname_uid'
disable quota as required
lod.lustre-MDT0000-mdtlov.mdt_hash=crush
- You will now have a Lustre filesystem available at /mnt/lustre
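As a quick sanity check once llmount.sh succeeds, write a small file on the new mount and confirm its size. MNT below defaults to /mnt/lustre but is a plain variable, so the same commands can be dry-run against any writable directory:

```shell
# Hypothetical post-mount sanity check; MNT is an assumption, not part of llmount.sh.
MNT=${MNT:-/mnt/lustre}
# Write a 4 MiB test file, confirm its size, then clean up:
dd if=/dev/zero of="$MNT/testfile" bs=1M count=4 2>/dev/null
stat -c %s "$MNT/testfile"    # 4194304
rm -f "$MNT/testfile"
```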
End