
...

This document walks through the steps of patching the kernel, building Lustre, and running a basic test of the complete system.

If you prefer to skip the kernel build steps and get straight to building Lustre, you can download and install pre-patched server kernel RPMs from a recent lustre-master build, then jump ahead to the Configure and build Lustre section below.  Navigate to the el8.x server results (green check-marked circle), then Build Artifacts → Artifacts → RPMs → x86_64, and download at least the kernel, kernel-devel, and kernel-modules RPMs to install.
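As a sketch of that shortcut, assuming the three RPMs above have been downloaded into the current directory (the wildcard filenames below are illustrative and should match your actual downloads):

```shell
# Install the pre-patched server kernel RPMs downloaded from the
# lustre-master build (filenames are examples; match your download).
dnf install -y ./kernel-*.rpm ./kernel-devel-*.rpm ./kernel-modules-*.rpm

# Reboot into the newly installed Lustre-patched kernel.
reboot
```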


...

Process

Provision the machine and install dependencies.

...

  1. Install the kernel development tools: 

    Code Block
    languagebash
    themeEclipse
    yum -y groupinstall "Development Tools"
  2. Install additional dependencies:

    Code Block
    languagebash
    themeEclipse
    yum config-manager --set-enabled powertools
    dnf install -y gcc autoconf libtool which make patch diffutils file binutils-devel python38 python3-devel elfutils-devel libselinux-devel libaio-devel dnf-plugins-core bc bison flex git libyaml-devel libnl3-devel libmount-devel json-c-devel redhat-lsb libssh-devel libattr-devel libtirpc-devel libblkid-devel openssl-devel libuuid-devel texinfo texinfo-tex
    yum -y install audit-libs-devel binutils-devel elfutils-devel kabi-dw ncurses-devel newt-devel numactl-devel openssl-devel pciutils-devel perl perl-devel python2 python3-docutils xmlto xz-devel elfutils-libelf-devel libcap-devel libcap-ng-devel llvm-toolset libyaml libyaml-devel kernel-rpm-macros kernel-abi-whitelists opencsd-devel
    dnf install -y epel-release
    dnf install -y ccache pdsh
    dnf --enablerepo=ha install resource-agents
  3. Install tools for kernel RPM build:

    Code Block
    languagebash
    themeEclipse
    dnf install -y bpftool dwarves java-devel libbabeltrace-devel libbpf-devel libmnl-devel net-tools rsync
    # May only be needed on RHEL9 derivatives:
    dnf install -y python3-devel
  4. Install e2fsprogs packages:

    Code Block
    languagebash
    themeEclipse
    git clone "https://review.whamcloud.com/tools/e2fsprogs" e2fsprogs
    cd e2fsprogs
    git checkout v1.47.01-wc1
    ./configure --with-root-prefix=/usr --enable-elf-shlibs --disable-uuidd --disable-fsck --disable-e2initrd-helper --disable-libblkid --disable-libuuid --enable-quota --disable-fuse2fs
    make -j8
    sudo make install
    cd ..

    Or download and install the following e2fsprogs packages from https://build.whamcloud.com/
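Either way, a quick sanity check (a sketch; the exact version string depends on the tag you built or the packages you installed) is to confirm the Whamcloud build is the one on your path:

```shell
# Confirm the installed e2fsprogs is the Whamcloud build; for the
# v1.47.01-wc1 tag above, the reported version should contain "1.47.01-wc1".
mke2fs -V
debugfs -V
```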

...

  1. Configure Lustre source:

    Code Block
    languagebash
    themeEclipse
    cd lustre-release
    ./configure --with-linux=/home/build/kernel/rpmbuild/BUILD/kernel-4.18.0-425.19.2.el8_7/linux-4.18.0-425.19.2.el8_lustre.`uname -m`/ --disable-gss --disable-shared --disable-crypto
  2. Build and install Lustre:

    Code Block
    languagebash
    themeEclipse
    make -j8
    sudo make install
    sudo depmod -a
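After installation it is worth confirming that the kernel modules landed where depmod can find them (a sketch; module paths depend on the running kernel):

```shell
# Confirm the Lustre module is visible to modinfo for the running kernel.
modinfo lustre | head -n 3

# List the installed Lustre module files under the current kernel's tree.
find /lib/modules/$(uname -r) -name 'lustre*.ko*'
```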
  3. Run /usr/lib64/lustre/tests/llmount.sh to mount and test Lustre:

    Code Block
    languagebash
    themeEclipse
    /usr/lib64/lustre/tests/llmount.sh

    Stopping clients: sr050 /mnt/lustre (opts:-f)
    Stopping clients: sr050 /mnt/lustre2 (opts:-f)
    sr050: executing set_hostid
    Loading modules from /usr/lib64/lustre/tests/..
    detected 8 online CPUs by sysfs
    libcfs will create CPU partition based on online CPUs
    Formatting mgs, mds, osts
    Format mds1: /tmp/lustre-mdt1
    Format ost1: /tmp/lustre-ost1
    Format ost2: /tmp/lustre-ost2
    Checking servers environments
    Checking clients sr050 environments
    Loading modules from /usr/lib64/lustre/tests/..
    detected 8 online CPUs by sysfs
    libcfs will create CPU partition based on online CPUs
    Setup mgs, mdt, osts
    Starting mds1: -o localrecov  /dev/mapper/mds1_flakey /mnt/lustre-mds1
    Commit the device label on /tmp/lustre-mdt1
    Started lustre-MDT0000
    Starting ost1: -o localrecov  /dev/mapper/ost1_flakey /mnt/lustre-ost1
    Commit the device label on /tmp/lustre-ost1
    Started lustre-OST0000
    Starting ost2: -o localrecov  /dev/mapper/ost2_flakey /mnt/lustre-ost2
    Commit the device label on /tmp/lustre-ost2
    Started lustre-OST0001
    Starting client: sr050:  -o user_xattr,flock sr050@tcp:/lustre /mnt/lustre
    UUID                   1K-blocks        Used   Available Use% Mounted on
    lustre-MDT0000_UUID       125056        1644      112176   2% /mnt/lustre[MDT:0]
    lustre-OST0000_UUID       313104        1244      284700   1% /mnt/lustre[OST:0]
    lustre-OST0001_UUID       313104        1244      284700   1% /mnt/lustre[OST:1]
     
    filesystem_summary:       626208        2488      569400   1% /mnt/lustre
     
    Using TIMEOUT=20
    osc.lustre-OST0000-osc-ffffce9052651800.idle_timeout=debug
    osc.lustre-OST0001-osc-ffffce9052651800.idle_timeout=debug
    setting jobstats to procname_uid
    Setting lustre.sys.jobid_var from disable to procname_uid
    Waiting 90s for 'procname_uid'
    Updated after 4s: want 'procname_uid' got 'procname_uid'
    disable quota as required
    lod.lustre-MDT0000-mdtlov.mdt_hash=crush

  4. You will now have a Lustre filesystem mounted at /mnt/lustre.
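To convince yourself the filesystem is working, a minimal smoke test (a sketch; the file name and size are illustrative) is to write a file, inspect its layout and the per-target space usage, and then tear the test setup down:

```shell
# Write a small file into the Lustre mount and check its striping layout.
dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=4
lfs getstripe /mnt/lustre/testfile

# Show per-target space usage for the mounted filesystem.
lfs df -h /mnt/lustre

# When finished, unmount and clean up the llmount.sh test setup.
/usr/lib64/lustre/tests/llmountcleanup.sh
```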

...