...
Install the kernel development tools:
Code Block
# yum -y groupinstall "Development Tools"
Install additional dependencies:
Code Block
# yum -y install audit-libs-devel binutils-devel elfutils-devel kabi-dw ncurses-devel \
    newt-devel numactl-devel openssl-devel pciutils-devel perl perl-devel python2 \
    python3-docutils xmlto xz-devel elfutils-libelf-devel libcap-devel libcap-ng-devel \
    llvm-toolset libyaml libyaml-devel kernel-rpm-macros kernel-abi-whitelists
Install e2fsprogs packages:
Download the following packages from https://build.whamcloud.com/job/e2fsprogs-master/arch=aarch64,distro=el8/:
Code Block
e2fsprogs e2fsprogs-devel e2fsprogs-libs libcom_err libcom_err-devel libss libss-devel
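Once the packages are downloaded, installing them in a single yum transaction lets the dependencies among them resolve together. A minimal sketch, assuming the RPMs were saved to the current directory (the wildcarded filenames are placeholders for whatever versions you downloaded):

```shell
# Sketch: install the downloaded e2fsprogs RPMs together so yum can resolve
# the dependencies among them. Filenames/versions are placeholders; run from
# the directory the RPMs were downloaded into.
yum -y localinstall e2fsprogs-*.rpm e2fsprogs-devel-*.rpm e2fsprogs-libs-*.rpm \
    libcom_err-*.rpm libcom_err-devel-*.rpm libss-*.rpm libss-devel-*.rpm
```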
...
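Before configuring, it can save time to confirm that the patched kernel source tree from the earlier kernel build step actually exists, since `configure` otherwise fails late with a less obvious error. A small pre-flight sketch (the path is the one used in this guide; adjust it to your build area):

```shell
# Pre-flight check (sketch): the directory passed to --with-linux must be a
# prepared kernel source tree. The path below matches this guide's kernel
# build step; adjust to your own build area.
KSRC=/home/build/kernel/rpmbuild/BUILD/kernel-4.18.0-80.11.2.el8_0/linux-4.18.0-80.11.2.el8_lustre.aarch64
if [ -f "$KSRC/Makefile" ]; then
    echo "kernel source tree found"
else
    echo "kernel source tree not found at $KSRC"
fi
```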
Configure Lustre source:
Code Block
$ cd ~/lustre-release/
$ ./configure --with-linux=/home/build/kernel/rpmbuild/BUILD/kernel-4.18.0-80.11.2.el8_0/linux-4.18.0-80.11.2.el8_lustre.aarch64/
...
...
config.status: executing libtool commands

CC:            gcc
LD:            /usr/bin/ld
CPPFLAGS:      -include /home/build/lustre-release/undef.h -include /home/build/lustre-release/config.h -I/home/build/lustre-release/lnet/include/uapi -I/home/build/lustre-release/lustre/include/uapi -I/home/build/lustre-release/libcfs/include -I/home/build/lustre-release/lnet/utils -I/home/build/lustre-release/lustre/include
CFLAGS:        -g -O2
EXTRA_KCFLAGS: -include /home/build/lustre-release/undef.h -include /home/build/lustre-release/config.h -g -I/home/build/lustre-release/libcfs/include -I/home/build/lustre-release/libcfs/include/libcfs -I/home/build/lustre-release/lnet/include/uapi -I/home/build/lustre-release/lnet/include -I/home/build/lustre-release/lustre/include/uapi -I/home/build/lustre-release/lustre/include -Wno-format-truncation -Wno-stringop-truncation -Wno-stringop-overflow

Type 'make' to build Lustre.
Make RPMs:
Code Block
$ make rpms
...
...
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/lustre-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/kmod-lustre-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/kmod-lustre-osd-ldiskfs-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/lustre-osd-ldiskfs-mount-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/lustre-resource-agents-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/lustre-devel-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/lustre-tests-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/kmod-lustre-tests-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/lustre-iokit-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/lustre-debugsource-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/lustre-debuginfo-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/kmod-lustre-debuginfo-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/kmod-lustre-osd-ldiskfs-debuginfo-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/lustre-osd-ldiskfs-mount-debuginfo-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/lustre-tests-debuginfo-2.13.57_47_g475173d-1.el8.aarch64.rpm
Wrote: /tmp/rpmbuild-lustre-build-UPMee2HJ/RPMS/aarch64/kmod-lustre-tests-debuginfo-2.13.57_47_g475173d-1.el8.aarch64.rpm
Executing(%clean): /bin/sh -e /tmp/rpmbuild-lustre-build-UPMee2HJ/TMP/rpm-tmp.CovhjA
+ umask 022
+ cd /tmp/rpmbuild-lustre-build-UPMee2HJ/BUILD
+ cd lustre-2.13.57_47_g475173d
+ rm -rf /tmp/rpmbuild-lustre-build-UPMee2HJ/BUILDROOT/lustre-2.13.57_47_g475173d-1.el8.aarch64
+ rm -rf /tmp/rpmbuild-lustre-build-UPMee2HJ/TMP/kmp
+ exit 0
Executing(--clean): /bin/sh -e /tmp/rpmbuild-lustre-build-UPMee2HJ/TMP/rpm-tmp.P4Pr7b
+ umask 022
+ cd /tmp/rpmbuild-lustre-build-UPMee2HJ/BUILD
+ rm -rf lustre-2.13.57_47_g475173d
+ exit 0
You should now have the Lustre RPMs.
Installing Lustre
Change to root and change directory into /home/build/lustre-release/:
Code Block
# yum localinstall *.aarch64.rpm
Disable SELinux (Lustre Servers)
SELinux, which is on by default in RHEL/CentOS, will prevent the format commands for the various Lustre targets from completing. Therefore you must either disable it or adjust the settings. These instructions explain how to disable it.
- Run getenforce to see if SELinux is enabled. It should return 'Enforcing' or 'Disabled'.
- To disable it, edit /etc/selinux/config and change the line 'SELINUX=enforcing' to 'SELINUX=disabled'. Finally, reboot your system.
Code Block
# vi /etc/selinux/config
----
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted
----
# shutdown -r now
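The same change can be made non-interactively with sed instead of editing the file in vi. A minimal sketch, demonstrated here on a temporary copy so it is safe to try anywhere; on a real server, run the sed line against /etc/selinux/config itself and then reboot:

```shell
# Sketch: flip SELINUX=enforcing to SELINUX=disabled with sed.
# Demonstrated on a temporary stand-in file, not the real config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"
state=$(grep '^SELINUX=' "$cfg")   # the edited line
echo "$state"
```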
Testing
Run /usr/lib64/lustre/tests/llmount.sh:
Code Block
# /usr/lib64/lustre/tests/llmount.sh
Stopping clients: sr050 /mnt/lustre (opts:-f)
Stopping clients: sr050 /mnt/lustre2 (opts:-f)
sr050: executing set_hostid
Loading modules from /usr/lib64/lustre/tests/..
detected 160 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
quota/lquota options: 'hash_lqs_cur_bits=3'
Formatting mgs, mds, osts
Format mds1: /tmp/lustre-mdt1
Format ost1: /tmp/lustre-ost1
Format ost2: /tmp/lustre-ost2
Checking servers environments
Checking clients sr050 environments
Loading modules from /usr/lib64/lustre/tests/..
detected 160 online CPUs by sysfs
libcfs will create CPU partition based on online CPUs
Setup mgs, mdt, osts
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
Commit the device label on /tmp/lustre-mdt1
Started lustre-MDT0000
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
Commit the device label on /tmp/lustre-ost1
Started lustre-OST0000
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
Commit the device label on /tmp/lustre-ost2
Started lustre-OST0001
Starting client: sr050: -o user_xattr,flock sr050@tcp:/lustre /mnt/lustre
UUID                 1K-blocks     Used  Available Use% Mounted on
lustre-MDT0000_UUID     125056     1644     112176   2% /mnt/lustre[MDT:0]
lustre-OST0000_UUID     313104     1244     284700   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID     313104     1244     284700   1% /mnt/lustre[OST:1]
filesystem_summary:     626208     2488     569400   1% /mnt/lustre
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffffdcdebb822800ffffce9052651800.idle_timeout=debug
osc.lustre-OST0001-osc-ffffdcdebb822800ffffce9052651800.idle_timeout=debug
setting jobstats to procname_uid
Setting lustre.sys.jobid_var from disable to procname_uid
Waiting 90s for 'procname_uid'
Updated after 9s4s: want 'procname_uid' got 'procname_uid'
disable quota as required
lod.lustre-MDT0000-mdtlov.mdt_hash=crush
- You will now have a Lustre filesystem available at /mnt/lustre
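As a quick sanity check of the mounted test filesystem, you can inspect space usage and create a file on it. A sketch (`lfs` is installed by the lustre RPMs above; `testfile` is just an arbitrary name):

```shell
# Sketch: basic sanity checks against the filesystem mounted by llmount.sh
lfs df -h /mnt/lustre                 # per-MDT/OST space usage
touch /mnt/lustre/testfile            # create a file on the filesystem
lfs getstripe /mnt/lustre/testfile    # show the file's striping layout
```

When you are done, the companion script /usr/lib64/lustre/tests/llmountcleanup.sh tears the test setup back down.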
...