Before deploying Terraform code for Microsoft Azure, you need to authenticate with the Microsoft account you use to log in to the Microsoft Azure Portal. Terraform uses this account and its credentials to deploy resources.
DDN EXAScaler Cloud images in the Azure Marketplace have additional license and purchase terms that you must accept before you can deploy them programmatically. To deploy an environment from one of these images, you must accept the image's terms the first time you use it, once per subscription.
Steps to authenticate via Microsoft account
This command obtains access credentials for your user account via a web-based authorization flow. When it completes successfully, it sets the active account in the current configuration to the specified account. Learn more about Azure authentication.
Product          Version  Base OS                        Image
---------------  -------  -----------------------------  --------------------------
EXAScaler Cloud  6.2.0    Red Hat Enterprise Linux 8.7   exascaler_cloud_6_2_redhat
EXAScaler Cloud  6.2.0    Rocky Linux 8.7                exascaler_cloud_6_2_rocky
EXAScaler Cloud  6.3.2    Red Hat Enterprise Linux 8.10  exascaler_cloud_6_3_redhat
EXAScaler Cloud  6.3.2    Rocky Linux 8.10               exascaler_cloud_6_3_rocky
Client packages
EXAScaler Cloud deployment provides support for installing and configuring third-party clients. EXAScaler Cloud client software comprises a set of kernel modules which must be compatible with the running kernel, as well as userspace tools for interacting with the filesystem.
Vendor     Product  Version    Arch     Kernel Version for binary package  Kernel Version for DKMS package
---------  -------  ---------  -------  ---------------------------------  -------------------------------
Red Hat    RHEL     7.6        x86_64   3.10.0-957.99.1.el7.x86_64         3.10.0
Red Hat    RHEL     7.7        x86_64   3.10.0-1062.77.1.el7.x86_64        3.10.0
Red Hat    RHEL     7.8        x86_64   3.10.0-1127.19.1.el7.x86_64        3.10.0
Red Hat    RHEL     7.9        x86_64   3.10.0-1160.119.1.el7.x86_64       3.10.0
Red Hat    RHEL     8.0        x86_64   4.18.0-80.31.1.el8_0.x86_64        4.18.0
Red Hat    RHEL     8.1        x86_64   4.18.0-147.94.1.el8_1.x86_64       4.18.0
Red Hat    RHEL     8.2        x86_64   4.18.0-193.141.1.el8_2.x86_64      4.18.0
Red Hat    RHEL     8.3        x86_64   4.18.0-240.22.1.el8_3.x86_64       4.18.0
Red Hat    RHEL     8.4        x86_64   4.18.0-305.148.1.el8_4.x86_64      4.18.0
Red Hat    RHEL     8.5        x86_64   4.18.0-348.23.1.el8_5.x86_64       4.18.0
Red Hat    RHEL     8.6        aarch64  4.18.0-372.105.1.el8_6.aarch64     4.18.0
Red Hat    RHEL     8.6        x86_64   4.18.0-372.134.1.el8_6.x86_64      4.18.0
Red Hat    RHEL     8.7        aarch64  4.18.0-425.19.2.el8_7.aarch64      4.18.0
Red Hat    RHEL     8.7        x86_64   4.18.0-425.19.2.el8_7.x86_64       4.18.0
Red Hat    RHEL     8.8        aarch64  4.18.0-477.86.1.el8_8.aarch64      4.18.0
Red Hat    RHEL     8.8        x86_64   4.18.0-477.86.1.el8_8.x86_64       4.18.0
Red Hat    RHEL     8.9        aarch64  4.18.0-513.24.1.el8_9.aarch64      4.18.0
Red Hat    RHEL     8.9        x86_64   4.18.0-513.24.1.el8_9.x86_64       4.18.0
Red Hat    RHEL     8.10       aarch64  4.18.0-553.40.1.el8_10.aarch64     4.18.0
Red Hat    RHEL     8.10       x86_64   4.18.0-553.40.1.el8_10.x86_64      4.18.0
Red Hat    RHEL     9.0        aarch64  5.14.0-70.101.1.el9_0.aarch64      5.14.0
Red Hat    RHEL     9.0        x86_64   5.14.0-70.122.1.el9_0.x86_64       5.14.0
Red Hat    RHEL     9.1        aarch64  5.14.0-162.23.1.el9_1.aarch64      5.14.0
Red Hat    RHEL     9.1        x86_64   5.14.0-162.23.1.el9_1.x86_64       5.14.0
Red Hat    RHEL     9.2        aarch64  5.14.0-284.99.1.el9_2.aarch64      5.14.0
Red Hat    RHEL     9.2        x86_64   5.14.0-284.99.1.el9_2.x86_64       5.14.0
Red Hat    RHEL     9.3        aarch64  5.14.0-362.24.1.el9_3.aarch64      5.14.0
Red Hat    RHEL     9.3        x86_64   5.14.0-362.24.1.el9_3.x86_64       5.14.0
Red Hat    RHEL     9.4        aarch64  5.14.0-427.50.1.el9_4.aarch64      5.14.0
Red Hat    RHEL     9.4        x86_64   5.14.0-427.50.1.el9_4.x86_64       5.14.0
Red Hat    RHEL     9.5        aarch64  5.14.0-503.26.1.el9_5.aarch64      5.14.0
Red Hat    RHEL     9.5        x86_64   5.14.0-503.26.1.el9_5.x86_64       5.14.0
Canonical  Ubuntu   16.04 LTS  amd64    —                                  4.4 - 4.15
Canonical  Ubuntu   18.04 LTS  amd64    —                                  4.15 - 5.4
Canonical  Ubuntu   20.04 LTS  amd64    —                                  5.4 - 5.15
Canonical  Ubuntu   20.04 LTS  arm64    —                                  5.4 - 5.15
Canonical  Ubuntu   22.04 LTS  amd64    —                                  5.15 - 6.2
Canonical  Ubuntu   22.04 LTS  arm64    —                                  5.15 - 6.2
Canonical  Ubuntu   24.04 LTS  amd64    —                                  6.8 - TBD
Canonical  Ubuntu   24.04 LTS  arm64    —                                  6.8 - TBD
Notes:
Client packages for aarch64 and arm64 architectures are available only for EXAScaler Cloud 6.3
Client packages for Canonical Ubuntu 16.04 LTS are not available for EXAScaler Cloud 6.3
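The table distinguishes prebuilt binary packages, which target one exact kernel build, from DKMS packages, which only need a kernel in the matching base series. A minimal sketch of that selection logic (the pick_package helper is hypothetical, not part of the EXAScaler Cloud tooling):

```python
# Sketch: choose a client package type for a running kernel.
# The version strings mirror the table above; the helper itself
# is a hypothetical illustration, not shipped tooling.

def pick_package(running_kernel: str, binary_kernel: str, dkms_base: str) -> str:
    """Return 'binary' when the running kernel exactly matches the build
    the binary package targets, 'dkms' when only the base series matches,
    and 'unsupported' otherwise."""
    if running_kernel == binary_kernel:
        return "binary"
    if running_kernel.startswith(dkms_base):
        return "dkms"
    return "unsupported"

# RHEL 8.10 x86_64 row from the table:
print(pick_package("4.18.0-553.40.1.el8_10.x86_64",
                   "4.18.0-553.40.1.el8_10.x86_64", "4.18.0"))  # binary
print(pick_package("4.18.0-553.5.1.el8_10.x86_64",
                   "4.18.0-553.40.1.el8_10.x86_64", "4.18.0"))  # dkms
```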
Availability type: none - no infrastructure redundancy required; set - create an availability set and automatically distribute resources across multiple fault domains; zone - physically separate resources within an Azure region. Learn more about Azure availability options.
Variable           Type     Default  Description
-----------------  -------  -------  -----------
availability.zone  integer  1        Availability zone - unique physical locations within an Azure region. Use 1, 2 or 3 to explicitly specify the availability zone. Learn more about Azure availability zones.
Resource group options

Variable            Type  Default  Description
------------------  ----  -------  -----------
resource_group.new  bool  true     Create a new resource group, or use an existing one: true or false.
Initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times:
...
$ terraform apply
...
Enter a value: yes
...
Apply complete! Resources: 103 added, 0 changed, 0 destroyed.
Outputs:
azure_dashboard = "https://portal.azure.com/#@00000000-0000-0000-0000-000000000000/dashboard/arm/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/exascaler-cloud-a108-resource-group/providers/Microsoft.Portal/dashboards/exascaler-cloud-a108-dashboard"
client_config = <<EOT
#!/bin/sh
# install new EXAScaler Cloud clients:
# all instances must be in the same location westus
# and connected to the network exascaler-cloud-a108-virtual-network
# and subnet exascaler-cloud-a108-subnet
# to set up EXAScaler Cloud filesystem on a new client instance,
# run the following commands on the client with root privileges:
cat >/etc/esc-client.conf<<EOF
{
"Version": "2.0.0",
"MountConfig": {
"ClientDevice": "10.0.0.10@tcp:/exacloud",
"Mountpoint": "/mnt/exacloud",
"PackageSource": "http://10.0.0.10/client-packages"
}
}
EOF
curl -fsSL http://10.0.0.10/client-setup-tool -o /usr/sbin/esc-client
chmod +x /usr/sbin/esc-client
esc-client auto setup --config /etc/esc-client.conf
EOT
http_console = "http://exascaler-cloud-a108-mgs0.westus.cloudapp.azure.com"
mount_command = "mount -t lustre 10.0.0.10@tcp:/exacloud /mnt/exacloud"
private_addresses = {
"exascaler-cloud-a108-cls0" = "10.0.0.8"
"exascaler-cloud-a108-cls1" = "10.0.0.7"
"exascaler-cloud-a108-cls2" = "10.0.0.11"
"exascaler-cloud-a108-cls3" = "10.0.0.12"
"exascaler-cloud-a108-mds0" = "10.0.0.13"
"exascaler-cloud-a108-mgs0" = "10.0.0.10"
"exascaler-cloud-a108-oss0" = "10.0.0.9"
"exascaler-cloud-a108-oss1" = "10.0.0.4"
"exascaler-cloud-a108-oss2" = "10.0.0.5"
"exascaler-cloud-a108-oss3" = "10.0.0.6"
}
ssh_console = {
"exascaler-cloud-a108-mgs0" = "ssh -A stack@exascaler-cloud-a108-mgs0.westus.cloud
app.azure.com"
}
Access the EXAScaler Cloud environment
Now you can access the EXAScaler Cloud environment:
Add storage capacity in an existing EXAScaler Cloud environment
Storage capacity can be added by increasing the number of storage servers. To add storage capacity in an existing EXAScaler Cloud environment, modify the terraform.tfvars file and increase the number of storage servers (the value of the oss.node_count variable) as required:
= false
accelerated_network = true
}
Then run the terraform apply command to increase the storage capacity. The available storage capacity (in GB) can be calculated by multiplying the three configuration parameters:
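The calculation can be sketched with hypothetical values. oss.node_count is the variable mentioned above; ost_disk_count and ost_disk_size are illustrative names for the per-server target count and per-target size, not confirmed variable names:

```python
# Available capacity (GB) = number of storage servers
#                         x targets (disks) per storage server
#                         x size of each target in GB.
# The example values below are made up for illustration.
oss_node_count = 4      # oss.node_count
ost_disk_count = 6      # targets per storage server (illustrative name)
ost_disk_size  = 512    # size of each target in GB (illustrative name)

capacity_gb = oss_node_count * ost_disk_count * ost_disk_size
print(capacity_gb)  # 12288
```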
A software upgrade for an existing EXAScaler Cloud environment is possible by recreating the running VM instances using a new version of the OS image. This requires some manual steps.

Create a backup copy of the existing Terraform directory (*.tf, terraform.tfvars and terraform.tfstate files):

$ cd /path/to/exascaler-cloud-terraform-scripts-x.y.z/az
$ tar pcfz backup.tgz *.tf terraform.tfvars terraform.tfstate

Update the Terraform scripts using the latest available EXAScaler Cloud Terraform scripts:

$ cd /path/to
$ curl -sL https://github.com/DDNStorage/exascaler-cloud-terraform/archive/refs/tags/scripts/2.2.2.tar.gz | tar xz
$ cd exascaler-cloud-terraform-scripts-2.2.2/az

Copy the terraform.tfstate file from the existing Terraform directory.

Review and update the terraform.tfvars file using the configuration options of the existing environment:

$ diff -u /path/to/exascaler-cloud-terraform-scripts-x.y.z/az/terraform.tfvars terraform.tfvars
$ vi terraform.tfvars

Review the execution plan to make sure all changes are expected:

$ terraform plan

Unmount the existing EXAScaler Cloud filesystem using the provided esc-ctl script. This step is required to ensure data consistency during the upgrade:

$ ./scripts/esc-ctl
Usage:
List resource groups : ./scripts/esc-ctl list
List deployments     : ./scripts/esc-ctl <resource_group> list
List instances       : ./scripts/esc-ctl <resource_group> <deployment> list
Stop instances       : ./scripts/esc-ctl <resource_group> <deployment> stop
Start instances      : ./scripts/esc-ctl <resource_group> <deployment> start
Unmount filesystem   : ./scripts/esc-ctl <resource_group> <deployment> umount

$ ./scripts/esc-ctl list
Name                                Location   Status
----------------------------------- ---------- ---------
exascaler-cloud-f7cd-resource-group eastus     Succeeded
NetworkWatcherRG                    westus     Succeeded

$ ./scripts/esc-ctl exascaler-cloud-f7cd-resource-group list
Name                           Created                   Status
------------------------------ ------------------------- ---------
exascaler-cloud-f7cd           2021-08-21T01:19:36+00:00 Succeeded

$ ./scripts/esc-ctl exascaler-cloud-f7cd-resource-group exascaler-cloud-f7cd umount
Umount compute client exascaler-cloud-f7cd-cls0
Umount compute client exascaler-cloud-f7cd-cls1
Umount storage server exascaler-cloud-f7cd-oss0
Umount storage server exascaler-cloud-f7cd-oss1
Umount storage server exascaler-cloud-f7cd-oss2
Umount storage server exascaler-cloud-f7cd-oss3
Umount metadata server exascaler-cloud-f7cd-mds0
Umount management server exascaler-cloud-f7cd-mgs0
Apply the changes required to upgrade the existing EXAScaler Cloud environment by recreating all instances using the latest version of EXAScaler Cloud:
$ terraform destroy
...
Enter a value: yes
...
Destroy complete! Resources: 103 destroyed.
How to access a deployment
To remotely access the EXAScaler Cloud deployment (management, metadata, and storage servers, and compute clients), connect through the management server's public IP address.
Then, using the management server console as a jump host, you can open SSH sessions to all other servers.
New EXAScaler Cloud client instances must be in the same location and connected to the same virtual network and subnet. The process of installing and configuring new clients can be performed automatically; all required information is contained in the Terraform output. To configure the EXAScaler Cloud filesystem on a new client instance, create a configuration file /etc/esc-client.conf using the actual IP address of the management server:
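The configuration file shown in the Terraform output can also be generated programmatically. A short sketch, assuming the management server address 10.0.0.10 and the filesystem name exacloud from the sample output above:

```python
import json

# Build the client configuration shown in the Terraform output.
# The address 10.0.0.10 and the filesystem name "exacloud" come from
# the sample output above; adjust both for a real deployment.
mgs_ip = "10.0.0.10"
fsname = "exacloud"

config = {
    "Version": "2.0.0",
    "MountConfig": {
        "ClientDevice": f"{mgs_ip}@tcp:/{fsname}",
        "Mountpoint": f"/mnt/{fsname}",
        "PackageSource": f"http://{mgs_ip}/client-packages",
    },
}

# Print the JSON to be written to /etc/esc-client.conf on the client.
print(json.dumps(config, indent=2))
```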
The purpose of client-side encryption is to provide each user with a special directory in which to safely store sensitive files. The goals are to protect data in transit between clients and servers, and to protect data at rest.
This feature is implemented directly at the Lustre client level. Lustre client-side encryption relies on kernel fscrypt, a library that filesystems can hook into to support transparent encryption of files and directories. As a consequence, the key points described below are extracted from the fscrypt documentation.
The client-side encryption feature is available natively on Lustre clients running a supported Linux distribution, including RHEL/CentOS 8.1 and later and Ubuntu 18.04 and later.
Client-side encryption supports encryption of file data as well as of file and directory names. Name encryption is governed by the enable_filename_encryption parameter, which is set to 0 by default. When this parameter is 0, new empty directories configured as encrypted use content encryption only, not name encryption; this mode is inherited by all subdirectories and files. When enable_filename_encryption is set to 1, new empty directories configured as encrypted use full encryption capabilities, encrypting file content as well as file and directory names; this mode is likewise inherited by all subdirectories and files. The enable_filename_encryption parameter can be set globally for all clients from the management server.
Steps to run the IOR benchmark on the EXAScaler Cloud deployment:
...
localhost$ eval $(ssh-agent)
Agent pid 5095
localhost$ ssh-add
Identity added: /Users/deiter/.ssh/id_rsa (/Users/deiter/.ssh/id_rsa)
localhost$ ssh -A stack@20.62.171.73
[stack@exascaler-cloud-65f1-mgs0 ~]$ esc-ior
IOR-3.3.0: MPI Coordinated Test of Parallel I/O
Began : Wed Aug 25 14:43:01 2021
Command line : /usr/bin/ior -C -F -e -r -w -a POSIX -b 16777216 -t 1048576 -s 251 -o /mnt/exacloud/0b21199cf9682b9d/0b21199cf9682b9d -s 512
Machine : Linux exascaler-cloud-65f1-cls0
TestID : 0
StartTime : Wed Aug 25 14:43:01 2021
Path : /mnt/exacloud/0b21199cf9682b9d
FS : 11.9 TiB Used FS: 0.0% Inodes: 96.0 Mi Used Inodes: 0.0%
Options:
api : POSIX
apiVersion :
test filename : /mnt/exacloud/0b21199cf9682b9d/0b21199cf9682b9d
access : file-per-process
type : independent
segments : 512
ordering in a file : sequential
ordering inter file : constant task offset
task offset : 1
nodes : 4
tasks : 64
clients per node : 16
repetitions : 1
xfersize : 1 MiB
blocksize : 16 MiB
aggregate filesize : 512 GiB
Results:
access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
write 1482.70 1482.70 21.52 16384 1024.00 0.014559 353.60 19.92 353.60 0
read 1480.89 1480.91 21.99 16384 1024.00 0.299514 354.03 43.40 354.04 0
remove - - - - - - - - 3.36 0
Max Write: 1482.70 MiB/sec (1554.72 MB/sec)
Max Read: 1480.89 MiB/sec (1552.83 MB/sec)
Finished : Wed Aug 25 14:54:52 2021
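As a sanity check, the aggregate file size IOR reports follows directly from the run parameters above (64 tasks, 16 MiB block size, 512 segments per file):

```python
# Reproduce IOR's "aggregate filesize" from the run parameters
# reported in the output above.
tasks = 64               # 4 nodes x 16 clients per node
blocksize = 16 * 2**20   # 16 MiB written per segment and task
segments = 512           # segments per file

aggregate = tasks * blocksize * segments
print(aggregate // 2**30, "GiB")  # 512 GiB
```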
...
localhost$ eval $(ssh-agent)
Agent pid 5095
localhost$ ssh-add
Identity added: /Users/deiter/.ssh/id_rsa (/Users/deiter/.ssh/id_rsa)
localhost$ ssh -A stack@20.62.171.71
[stack@exascaler-cloud-2ed3-mgs0 ~]$ loci hosts -c
10.0.0.17 exascaler-cloud-2ed3-cls0
10.0.0.8 exascaler-cloud-2ed3-cls1
10.0.0.18 exascaler-cloud-2ed3-cls2
10.0.0.12 exascaler-cloud-2ed3-cls3
10.0.0.14 exascaler-cloud-2ed3-cls4
10.0.0.15 exascaler-cloud-2ed3-cls5
10.0.0.11 exascaler-cloud-2ed3-cls6
10.0.0.9 exascaler-cloud-2ed3-cls7
10.0.0.7 exascaler-cloud-2ed3-cls8
10.0.0.16 exascaler-cloud-2ed3-cls9
[stack@exascaler-cloud-2ed3-mgs0 ~]$ ssh -A exascaler-cloud-2ed3-cls0
[stack@exascaler-cloud-2ed3-cls0 ~]$ esc-io500
Start IO500 benchmark with options:
data directory: /mnt/exacloud/071dfa36e6b20ca7/workload
hosts list: 10.0.0.17,10.0.0.9,10.0.0.14,10.0.0.8,10.0.0.12,10.0.0.5,10.0.0.11,10.0.0.19,10.0.0.18,10.0.0.7
processes per host: 16
files per process: 39637
number of tasks: 160
number of segments: 31500
block size: 4227858432
transfer size: 1048576
IO500 version io500-sc20_v3
[RESULT] ior-easy-write 1.445976 GiB/s : time 364.894 seconds
[RESULT] mdtest-easy-write 15.411987 kIOPS : time 304.382 seconds
[RESULT] ior-hard-write 0.461174 GiB/s : time 410.219 seconds
[RESULT] mdtest-hard-write 2.538281 kIOPS : time 449.131 seconds
[RESULT] find 583.795841 kIOPS : time 9.842 seconds
[RESULT] ior-easy-read 1.450889 GiB/s : time 363.624 seconds
[RESULT] mdtest-easy-stat 61.106840 kIOPS : time 75.517 seconds
[RESULT] ior-hard-read 0.543306 GiB/s : time 348.233 seconds
[RESULT] mdtest-hard-stat 20.753560 kIOPS : time 54.080 seconds
[RESULT] mdtest-easy-delete 5.836530 kIOPS : time 789.832 seconds
[RESULT] mdtest-hard-read 10.320768 kIOPS : time 108.658 seconds
[RESULT] mdtest-hard-delete 4.647816 kIOPS : time 241.181 seconds
[SCORE] Bandwidth 0.851483 GiB/s : IOPS 17.322863 kiops : TOTAL 3.840589
/mnt/exacloud/b44731b2e4ac4fc2/sources/results
2021.06.17-20.26.19 io500-exascaler-cloud-2ed3-cls0-2021.06.17-20.26.19.tgz
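The IO500 TOTAL is the geometric mean of the bandwidth and IOPS scores, which can be checked against the [SCORE] line above:

```python
import math

# Scores taken from the [SCORE] line in the output above.
bandwidth = 0.851483   # GiB/s
iops = 17.322863       # kIOPS

# TOTAL is the geometric mean of the two sub-scores.
total = math.sqrt(bandwidth * iops)
print(round(total, 6))  # 3.840589, matching the reported TOTAL
```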
How to collect inventory and support bundle
Steps to collect a support bundle on the EXAScaler Cloud deployment:
...
localhost$ eval $(ssh-agent)
Agent pid 5095
localhost$ ssh-add
Identity added: /Users/deiter/.ssh/id_rsa (/Users/deiter/.ssh/id_rsa)
localhost$ ssh -A stack@20.62.171.33
[stack@exascaler-cloud-55d2-mgs0 ~]$ esc-collector
The following is a list of nodes to collect from:
exascaler-cloud-55d2-cls0
exascaler-cloud-55d2-cls1
exascaler-cloud-55d2-cls2
exascaler-cloud-55d2-cls3
exascaler-cloud-55d2-cls4
exascaler-cloud-55d2-cls5
exascaler-cloud-55d2-cls6
exascaler-cloud-55d2-cls7
exascaler-cloud-55d2-cls8
exascaler-cloud-55d2-cls9
exascaler-cloud-55d2-mds0
exascaler-cloud-55d2-mgs0
exascaler-cloud-55d2-oss0
exascaler-cloud-55d2-oss1
exascaler-cloud-55d2-oss2
exascaler-cloud-55d2-oss3
Connecting to nodes...
Beginning collection of sosreports from 16 nodes, collecting a maximum of 4 concurrently
Successfully captured 16 of 16 sosreports
Creating archive of sosreports...
The following archive has been created. Please provide it to your support team.
/var/tmp/sos-collector-2021-06-18-nzsnm.tar.gz
Before using the Microsoft Azure CLI, you need to authenticate with the Microsoft account you use to log in to the Microsoft Azure Portal. The shell script uses this account and its credentials to start and stop the EXAScaler Cloud servers.
Steps to authenticate via Microsoft account
This command obtains access credentials for your user account via a web-based authorization flow. When it completes successfully, it sets the active account in the current configuration to the specified account. Learn more.
...
$ az account set --subscription XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
List of existing resource groups
$ ./scripts/esc-ctl list
Name Location Status
---------------- ---------- ---------
EXAScaler-Cloud eastus Succeeded
List of existing deployments for the given resource group
$ ./scripts/esc-ctl EXAScaler-Cloud list
Name Created Status
------------------------------ ------------------------- ---------
exascaler-cloud-65f1 2021-08-24T18:56:27+00:00 Succeeded
List of EXAScaler Cloud servers for the given deployment
$ ./scripts/esc-ctl EXAScaler-Cloud exascaler-cloud-65f1 list
Name Size Role Version PrivateIP PublicIP Status
------------------------- ---------------- ------ --------- ----------- ------------ ----------
exascaler-cloud-65f1-cls0 Standard_D16s_v3 clt 5.2.3 10.0.0.7 VM running
exascaler-cloud-65f1-cls1 Standard_D16s_v3 clt 5.2.3 10.0.0.5 VM running
exascaler-cloud-65f1-cls2 Standard_D16s_v3 clt 5.2.3 10.0.0.8 VM running
exascaler-cloud-65f1-cls3 Standard_D16s_v3 clt 5.2.3 10.0.0.13 VM running
exascaler-cloud-65f1-mds0 Standard_E8s_v3 mdt 5.2.3 10.0.0.12 VM running
exascaler-cloud-65f1-mgs0 Standard_F4s mgt 5.2.3 10.0.0.11 20.62.171.73 VM running
exascaler-cloud-65f1-oss0 Standard_D16s_v3 ost 5.2.3 10.0.0.10 VM running
exascaler-cloud-65f1-oss1 Standard_D16s_v3 ost 5.2.3 10.0.0.4 VM running
exascaler-cloud-65f1-oss2 Standard_D16s_v3 ost 5.2.3 10.0.0.6 VM running
exascaler-cloud-65f1-oss3 Standard_D16s_v3 ost 5.2.3 10.0.0.9 VM running
Stop the EXAScaler Cloud servers
$ ./scripts/esc-ctl EXAScaler-Cloud exascaler-cloud-65f1 stop
Stop compute client exascaler-cloud-65f1-cls0
Stop compute client exascaler-cloud-65f1-cls1
Stop compute client exascaler-cloud-65f1-cls2
Stop compute client exascaler-cloud-65f1-cls3
Stop storage server exascaler-cloud-65f1-oss0
Stop storage server exascaler-cloud-65f1-oss1
Stop storage server exascaler-cloud-65f1-oss2
Stop storage server exascaler-cloud-65f1-oss3
Stop metadata server exascaler-cloud-65f1-mds0
Stop management server exascaler-cloud-65f1-mgs0
Start the EXAScaler Cloud servers
$ ./scripts/esc-ctl EXAScaler-Cloud exascaler-cloud-65f1 start
Start management server exascaler-cloud-65f1-mgs0
Start metadata server exascaler-cloud-65f1-mds0
Start storage server exascaler-cloud-65f1-oss0
Start storage server exascaler-cloud-65f1-oss1
Start storage server exascaler-cloud-65f1-oss2
Start storage server exascaler-cloud-65f1-oss3
Start compute client exascaler-cloud-65f1-cls0
Start compute client exascaler-cloud-65f1-cls1
Start compute client exascaler-cloud-65f1-cls2
Start compute client exascaler-cloud-65f1-cls3
All required services will automatically start and the filesystem will be mounted and available on all compute clients.
How to upgrade an existing deployment
A software upgrade for an existing EXAScaler Cloud deployment is possible by creating a new deployment using a copy of the existing file system.
...
$ ./scripts/esc-ctl list
Name Location Status
---------------- ---------- ---------
EXAScaler-Cloud eastus Succeeded
$ ./scripts/esc-ctl EXAScaler-Cloud list
Name Created Status
------------------------------ ------------------------- ---------
exascaler-cloud-65f1 2021-08-24T18:56:27+00:00 Succeeded
$ ./scripts/esc-ctl EXAScaler-Cloud exascaler-cloud-65f1 stop
Stop compute client exascaler-cloud-65f1-cls0
Stop compute client exascaler-cloud-65f1-cls1
Stop compute client exascaler-cloud-65f1-cls2
Stop compute client exascaler-cloud-65f1-cls3
Stop storage server exascaler-cloud-65f1-oss0
Stop storage server exascaler-cloud-65f1-oss1
Stop storage server exascaler-cloud-65f1-oss2
Stop storage server exascaler-cloud-65f1-oss3
Stop metadata server exascaler-cloud-65f1-mds0
Stop management server exascaler-cloud-65f1-mgs0