...

This document walks through the steps of patching the kernel, building Lustre, and running a basic test of the complete system.

If you prefer to skip the kernel build steps and get straight to building Lustre, you can download and install pre-patched server kernel RPMs from a recent lustre-master build, then jump ahead to the Configure and build Lustre section below. Navigate to the el8.x server results (the build with the green check-marked circle), then Build Artifacts → Artifacts → RPMs → x86_64, and download at least the kernel, kernel-devel, and kernel-modules RPMs to install.
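Once the RPMs have been downloaded into the current directory, installation might look like the following. This is a minimal sketch: the exact RPM filenames depend on the kernel version of the build you picked, and you must reboot into the new kernel before building Lustre against it.

Code Block
languagebash
themeEclipse
# install the pre-patched Lustre server kernel RPMs (filenames vary by build)
sudo dnf install ./kernel-*.rpm
# reboot into the newly installed kernel before configuring Lustre
sudo reboot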


...

Process

Provision a machine and install dependencies.

...

  1. Configure Lustre source:

    Code Block
    languagebash
    themeEclipse
    cd lustre-release
    # point configure at the patched kernel source tree built earlier
    ./configure --with-linux=/home/build/kernel/rpmbuild/BUILD/kernel-4.18.0-425.19.2.el8_7/linux-4.18.0-425.19.2.el8_lustre.`uname -m`/ --disable-gss --disable-shared --disable-crypto
  2. Build and install Lustre:

    Code Block
    languagebash
    themeEclipse
    # build with 8 parallel jobs
    make -j8
    sudo make install
    # regenerate module dependencies so the new Lustre modules can be loaded
    sudo depmod -a
  3. Run /usr/lib64/lustre/tests/llmount.sh to set up and test a local Lustre filesystem:

    Code Block
    languagebash
    themeEclipse
    /usr/lib64/lustre/tests/llmount.sh

    Stopping clients: sr050 /mnt/lustre (opts:-f)
    Stopping clients: sr050 /mnt/lustre2 (opts:-f)
    sr050: executing set_hostid
    Loading modules from /usr/lib64/lustre/tests/..
    detected 8 online CPUs by sysfs
    libcfs will create CPU partition based on online CPUs
    Formatting mgs, mds, osts
    Format mds1: /tmp/lustre-mdt1
    Format ost1: /tmp/lustre-ost1
    Format ost2: /tmp/lustre-ost2
    Checking servers environments
    Checking clients sr050 environments
    Loading modules from /usr/lib64/lustre/tests/..
    detected 8 online CPUs by sysfs
    libcfs will create CPU partition based on online CPUs
    Setup mgs, mdt, osts
    Starting mds1: -o localrecov  /dev/mapper/mds1_flakey /mnt/lustre-mds1
    Commit the device label on /tmp/lustre-mdt1
    Started lustre-MDT0000
    Starting ost1: -o localrecov  /dev/mapper/ost1_flakey /mnt/lustre-ost1
    Commit the device label on /tmp/lustre-ost1
    Started lustre-OST0000
    Starting ost2: -o localrecov  /dev/mapper/ost2_flakey /mnt/lustre-ost2
    Commit the device label on /tmp/lustre-ost2
    Started lustre-OST0001
    Starting client: sr050:  -o user_xattr,flock sr050@tcp:/lustre /mnt/lustre
    UUID                   1K-blocks        Used   Available Use% Mounted on
    lustre-MDT0000_UUID       125056        1644      112176   2% /mnt/lustre[MDT:0]
    lustre-OST0000_UUID       313104        1244      284700   1% /mnt/lustre[OST:0]
    lustre-OST0001_UUID       313104        1244      284700   1% /mnt/lustre[OST:1]
     
    filesystem_summary:       626208        2488      569400   1% /mnt/lustre
     
    Using TIMEOUT=20
    osc.lustre-OST0000-osc-ffffce9052651800.idle_timeout=debug
    osc.lustre-OST0001-osc-ffffce9052651800.idle_timeout=debug
    setting jobstats to procname_uid
    Setting lustre.sys.jobid_var from disable to procname_uid
    Waiting 90s for 'procname_uid'
    Updated after 4s: want 'procname_uid' got 'procname_uid'
    disable quota as required
    lod.lustre-MDT0000-mdtlov.mdt_hash=crush

  4. You will now have a Lustre filesystem available at /mnt/lustre; see the sketch below for a quick smoke test and cleanup.
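As a quick sanity check, you can write a file into the new filesystem and confirm which OSTs back it; when you are done, llmountcleanup.sh (in the same tests directory) unmounts and tears everything down. A minimal sketch, assuming the client is mounted at /mnt/lustre as above:

Code Block
languagebash
themeEclipse
# confirm the filesystem is mounted and report per-target usage
lfs df -h /mnt/lustre
# write a small test file and show which OST objects back it
dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=16
lfs getstripe /mnt/lustre/testfile
# tear the test filesystem down again when finished
/usr/lib64/lustre/tests/llmountcleanup.sh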

...