The EXAScaler Cloud application can be launched in the Microsoft Azure Portal by navigating to the link below and clicking the Create button.
Step-by-step deployment video
https://www.youtube.com/watch?v=hGBujIKe3H8
Step-by-step deployment guide
The steps below show how to create an EXAScaler Cloud cluster on the Microsoft Azure platform.
...
Azure region for our deployment. For example, we can select the location closest to us. We must also have enough resources in this location for the deployment (number of available CPU cores).
To remotely access the EXAScaler Cloud deployment (management, metadata and storage servers, and compute clients), we must go through the management server's public IP address, using the command shown below. Then, using the management server as a jump host, we can open SSH sessions to all other servers.
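For example, with SSH agent forwarding (the address and hostnames are placeholders; the stack user matches the sample sessions later in this guide):
Code Block
language
bash
theme
Midnight
# Load the key into the local agent and log in to the management server
localhost$ eval $(ssh-agent)
localhost$ ssh-add
localhost$ ssh -A stack@<management-server-public-ip>
# From the management server, hop to any other node by its private hostname
[stack@exascaler-cloud-mgs0 ~]$ ssh exascaler-cloud-mds0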
The steps below show how to create an EXAScaler Cloud environment on Microsoft Azure using Terraform.
Supported products
Product          Version  Base OS                        Stock Keeping Unit (SKU)
---------------  -------  -----------------------------  --------------------------
EXAScaler Cloud  5.2.6    Red Hat Enterprise Linux 7.9   exascaler_cloud_5_2_redhat
EXAScaler Cloud  5.2.6    CentOS Linux 7.9               exascaler_cloud_5_2_centos
EXAScaler Cloud  6.0.1    Red Hat Enterprise Linux 7.9   exascaler_cloud_6_0_redhat
EXAScaler Cloud  6.0.1    CentOS Linux 7.9               exascaler_cloud_6_0_centos
EXAScaler Cloud  6.1.0    Red Hat Enterprise Linux 7.9   exascaler_cloud_6_1_redhat
EXAScaler Cloud  6.1.0    CentOS Linux 7.9               exascaler_cloud_6_1_centos
EXAScaler Cloud  6.2.0    Red Hat Enterprise Linux 8.7   exascaler_cloud_6_2_redhat
EXAScaler Cloud  6.2.0    Rocky Linux 8.7                exascaler_cloud_6_2_rocky
EXAScaler Cloud  6.3.2    Red Hat Enterprise Linux 8.10  exascaler_cloud_6_3_redhat
EXAScaler Cloud  6.3.2    Rocky Linux 8.10               exascaler_cloud_6_3_rocky
Client packages
The EXAScaler Cloud deployment provides support for installing and configuring third-party clients. The EXAScaler Cloud client software comprises a set of kernel modules that must be compatible with the running kernel, as well as userspace tools for interacting with the filesystem.
Vendor     Product  Version    Arch     Kernel Version for binary package  Kernel Version for DKMS package
---------  -------  ---------  -------  ---------------------------------  -------------------------------
Red Hat    RHEL     7.6        x86_64   3.10.0-957.99.1.el7.x86_64         3.10.0
Red Hat    RHEL     7.7        x86_64   3.10.0-1062.77.1.el7.x86_64        3.10.0
Red Hat    RHEL     7.8        x86_64   3.10.0-1127.19.1.el7.x86_64        3.10.0
Red Hat    RHEL     7.9        x86_64   3.10.0-1160.119.1.el7.x86_64       3.10.0
Red Hat    RHEL     8.0        x86_64   4.18.0-80.31.1.el8_0.x86_64        4.18.0
Red Hat    RHEL     8.1        x86_64   4.18.0-147.94.1.el8_1.x86_64       4.18.0
Red Hat    RHEL     8.2        x86_64   4.18.0-193.141.1.el8_2.x86_64      4.18.0
Red Hat    RHEL     8.3        x86_64   4.18.0-240.22.1.el8_3.x86_64       4.18.0
Red Hat    RHEL     8.4        x86_64   4.18.0-305.148.1.el8_4.x86_64      4.18.0
Red Hat    RHEL     8.5        x86_64   4.18.0-348.23.1.el8_5.x86_64       4.18.0
Red Hat    RHEL     8.6        aarch64  4.18.0-372.105.1.el8_6.aarch64     4.18.0
Red Hat    RHEL     8.6        x86_64   4.18.0-372.134.1.el8_6.x86_64      4.18.0
Red Hat    RHEL     8.7        aarch64  4.18.0-425.19.2.el8_7.aarch64      4.18.0
Red Hat    RHEL     8.7        x86_64   4.18.0-425.19.2.el8_7.x86_64       4.18.0
Red Hat    RHEL     8.8        aarch64  4.18.0-477.86.1.el8_8.aarch64      4.18.0
Red Hat    RHEL     8.8        x86_64   4.18.0-477.86.1.el8_8.x86_64       4.18.0
Red Hat    RHEL     8.9        aarch64  4.18.0-513.24.1.el8_9.aarch64      4.18.0
Red Hat    RHEL     8.9        x86_64   4.18.0-513.24.1.el8_9.x86_64       4.18.0
Red Hat    RHEL     8.10       aarch64  4.18.0-553.40.1.el8_10.aarch64     4.18.0
Red Hat    RHEL     8.10       x86_64   4.18.0-553.40.1.el8_10.x86_64      4.18.0
Red Hat    RHEL     9.0        aarch64  5.14.0-70.101.1.el9_0.aarch64      5.14.0
Red Hat    RHEL     9.0        x86_64   5.14.0-70.122.1.el9_0.x86_64       5.14.0
Red Hat    RHEL     9.1        aarch64  5.14.0-162.23.1.el9_1.aarch64      5.14.0
Red Hat    RHEL     9.1        x86_64   5.14.0-162.23.1.el9_1.x86_64       5.14.0
Red Hat    RHEL     9.2        aarch64  5.14.0-284.99.1.el9_2.aarch64      5.14.0
Red Hat    RHEL     9.2        x86_64   5.14.0-284.99.1.el9_2.x86_64       5.14.0
Red Hat    RHEL     9.3        aarch64  5.14.0-362.24.1.el9_3.aarch64      5.14.0
Red Hat    RHEL     9.3        x86_64   5.14.0-362.24.1.el9_3.x86_64       5.14.0
Red Hat    RHEL     9.4        aarch64  5.14.0-427.50.1.el9_4.aarch64      5.14.0
Red Hat    RHEL     9.4        x86_64   5.14.0-427.50.1.el9_4.x86_64       5.14.0
Red Hat    RHEL     9.5        aarch64  5.14.0-503.26.1.el9_5.aarch64      5.14.0
Red Hat    RHEL     9.5        x86_64   5.14.0-503.26.1.el9_5.x86_64       5.14.0
Canonical  Ubuntu   16.04 LTS  amd64    —                                  4.4 - 4.15
Canonical  Ubuntu   18.04 LTS  amd64    —                                  4.15 - 5.4
Canonical  Ubuntu   20.04 LTS  amd64    —                                  5.4 - 5.15
Canonical  Ubuntu   20.04 LTS  arm64    —                                  5.4 - 5.15
Canonical  Ubuntu   22.04 LTS  amd64    —                                  5.15 - 6.2
Canonical  Ubuntu   22.04 LTS  arm64    —                                  5.15 - 6.2
Canonical  Ubuntu   24.04 LTS  amd64    —                                  6.8 - TBD
Canonical  Ubuntu   24.04 LTS  arm64    —                                  6.8 - TBD
Notes:
Client packages for the aarch64 and arm64 architectures are available only for EXAScaler Cloud 6.3.
Client packages for Canonical Ubuntu 16.04 LTS are not available for EXAScaler Cloud 6.3.
Before deploying the Terraform code for Microsoft Azure, you will need to authenticate under the Microsoft account you used to log in to the Microsoft Azure Portal. Terraform will use this Microsoft account and its credentials to deploy resources.
DDN EXAScaler Cloud images in the Azure Marketplace have additional license and purchase terms that you must accept before you can deploy them programmatically. To deploy an environment from an image, you will need to accept the image's terms the first time you use it, once per subscription.
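For example, the terms can be accepted with the Azure CLI; the publisher and offer values below are placeholders, and the plan is one of the SKUs listed above:
Code Block
language
bash
theme
Midnight
# Accept the Azure Marketplace terms for the chosen image (once per subscription)
$ az vm image terms accept --publisher <publisher> --offer <offer> --plan exascaler_cloud_6_3_rocky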
Steps to authenticate via Microsoft account
This step obtains access credentials for your user account via a web-based authorization flow. When the command completes successfully, it sets the active account in the current configuration to the account specified. Learn more about Azure authentication.
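Assuming the Azure CLI is installed, the web-based authorization flow is started with:
Code Block
language
bash
theme
Midnight
$ az login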
Availability type: none - no infrastructure redundancy required; set - to create an availability set and automatically distribute resources across multiple fault domains; zone - to physically separate resources within an Azure region. Learn more about Azure availability options.

Variable | Type | Default | Description
availability.zone | integer | 1 | Availability zone - unique physical locations within an Azure region. Use 1, 2 or 3 to explicitly specify the availability zone. Learn more about Azure availability zones.
Resource group options

Variable | Type | Default | Description
resource_group.new | bool | true | Create a new resource group, or use an existing one: true or false.
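As an illustration, the options above might be set in terraform.tfvars as follows. This is a minimal sketch: the exact variable layout depends on the version of the Terraform scripts, and the availability.type name is assumed by analogy with availability.zone:
Code Block
language
bash
theme
Midnight
$ cat terraform.tfvars
# Hypothetical fragment: zone-level availability in zone 1,
# deployed into a newly created resource group
availability = {
  type = "zone"
  zone = 1
}
resource_group = {
  new = true
}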
Initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times:
Code Block
language
bash
theme
Midnight
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/azurerm versions matching ">= 3.10.0"...
- Finding latest version of hashicorp/random...
- Finding latest version of hashicorp/template...
- Installing hashicorp/azurerm v3.13.0...
- Installed hashicorp/azurerm v3.13.0 (signed by HashiCorp)
- Installing hashicorp/random v3.3.2...
- Installed hashicorp/random v3.3.2 (signed by HashiCorp)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
Validate configuration options:
Code Block
language
bash
theme
Midnight
$ terraform validate
Success! The configuration is valid.
Create an execution plan with a preview of the changes that Terraform will make to the environment:
Code Block
language
bash
theme
Midnight
$ terraform plan
Apply the changes required to reach the desired state of the configuration:
Code Block
language
bash
theme
Midnight
$ terraform apply
...
Enter a value: yes
...
Apply complete! Resources: 103 added, 0 changed, 0 destroyed.
Outputs:
azure_dashboard = "https://portal.azure.com/#@00000000-0000-0000-0000-000000000000/dashboard/arm/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/exascaler-cloud-a108-resource-group/providers/Microsoft.Portal/dashboards/exascaler-cloud-a108-dashboard"
client_config = <<EOT
#!/bin/sh
# install new EXAScaler Cloud clients:
# all instances must be in the same location westus
# and connected to the network exascaler-cloud-a108-virtual-network
# and subnet exascaler-cloud-a108-subnet
# to set up EXAScaler Cloud filesystem on a new client instance,
# run the following commands on the client with root privileges:
cat >/etc/esc-client.conf<<EOF
{
"Version": "2.0.0",
"MountConfig": {
"ClientDevice": "10.0.0.10@tcp:/exacloud",
"Mountpoint": "/mnt/exacloud",
"PackageSource": "http://10.0.0.10/client-packages"
}
}
EOF
curl -fsSL http://10.0.0.10/client-setup-tool -o /usr/sbin/esc-client
chmod +x /usr/sbin/esc-client
esc-client auto setup --config /etc/esc-client.conf
EOT
http_console = "http://exascaler-cloud-a108-mgs0.westus.cloudapp.azure.com"
mount_command = "mount -t lustre 10.0.0.10@tcp:/exacloud /mnt/exacloud"
private_addresses = {
"exascaler-cloud-a108-cls0" = "10.0.0.8"
"exascaler-cloud-a108-cls1" = "10.0.0.7"
"exascaler-cloud-a108-cls2" = "10.0.0.11"
"exascaler-cloud-a108-cls3" = "10.0.0.12"
"exascaler-cloud-a108-mds0" = "10.0.0.13"
"exascaler-cloud-a108-mgs0" = "10.0.0.10"
"exascaler-cloud-a108-oss0" = "10.0.0.9"
"exascaler-cloud-a108-oss1" = "10.0.0.4"
"exascaler-cloud-a108-oss2" = "10.0.0.5"
"exascaler-cloud-a108-oss3" = "10.0.0.6"
}
ssh_console = {
"exascaler-cloud-a108-mgs0" = "ssh -A stack@exascaler-cloud-a108-mgs0.westus.cloud
app.azure.com"
}
Access the EXAScaler Cloud environment
Now you can access the EXAScaler Cloud environment:
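For example, using the ssh_console value from the Terraform output above:
Code Block
language
bash
theme
Midnight
$ ssh -A stack@exascaler-cloud-a108-mgs0.westus.cloudapp.azure.com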
Add storage capacity in an existing EXAScaler Cloud environment
The storage capacity can be added by increasing the number of storage servers. To add storage capacity in an existing EXAScaler Cloud environment, modify the terraform.tfvars file and increase the number of storage servers (the value of the oss.node_count variable) as required:
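A minimal sketch of the workflow (the new value is illustrative):
Code Block
language
bash
theme
Midnight
$ vi terraform.tfvars    # e.g. raise oss.node_count from 4 to 6
$ terraform apply        # adds the new storage servers and their targets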
Then run the terraform apply command to increase the storage capacity. The available storage capacity (in GB) can be calculated by multiplying the three configuration parameters:
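For example, assuming the three parameters are the number of storage servers, the number of disks per server, and the size of each disk (all values hypothetical): 6 servers × 6 disks × 512 GB = 18,432 GB of available capacity.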
A software upgrade for an existing EXAScaler Cloud environment is possible by recreating the running VM instances using a new version of the OS image, and it requires some manual steps.
Create a backup copy of the existing Terraform directory (*.tf, terraform.tfvars and terraform.tfstate files):
Code Block
language
bash
theme
Midnight
$ cd /path/to/exascaler-cloud-terraform-scripts-x.y.z/az
$ tar pcfz backup.tgz *.tf terraform.tfvars terraform.tfstate
Update Terraform scripts using the latest available EXAScaler Cloud Terraform scripts:
Code Block
language
bash
theme
Midnight
$ cd /path/to
$ curl -sL https://github.com/DDNStorage/exascaler-cloud-terraform/archive/refs/tags/scripts/2.2.2.tar.gz | tar xz
$ cd exascaler-cloud-terraform-scripts-2.2.2/az
Copy the terraform.tfstate file from the existing Terraform directory:
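For example, following the directory layout used above:
Code Block
language
bash
theme
Midnight
$ cp -iv /path/to/exascaler-cloud-terraform-scripts-x.y.z/az/terraform.tfstate .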
Review and update the terraform.tfvars file using configuration options for the existing environment:
Code Block
language
bash
theme
Midnight
$ diff -u /path/to/exascaler-cloud-terraform-scripts-x.y.z/az/terraform.tfvars terraform.tfvars
$ vi terraform.tfvars
Review the execution plan to make sure all changes are expected:
Code Block
language
bash
theme
Midnight
$ terraform plan
Unmount the existing EXAScaler Cloud filesystem using the provided exascaler-cloud-ctl script. This step is required to ensure data consistency during the upgrade:
Code Block
language
bash
theme
Midnight
$ scripts/exascaler-cloud-ctl
Usage:
List resource groups : ./scripts/exascaler-cloud-ctl list
List deployments : ./scripts/exascaler-cloud-ctl <resource_group> list
List instances : ./scripts/exascaler-cloud-ctl <resource_group> <deployment> list
Stop instances : ./scripts/exascaler-cloud-ctl <resource_group> <deployment> stop
Start instances : ./scripts/exascaler-cloud-ctl <resource_group> <deployment> start
Unmount filesystem : ./scripts/exascaler-cloud-ctl <resource_group> <deployment> umount
$ scripts/exascaler-cloud-ctl list
Name Location Status
----------------------------------- ---------- ---------
exascaler-cloud-f7cd-resource-group eastus Succeeded
NetworkWatcherRG westus Succeeded
$ scripts/exascaler-cloud-ctl exascaler-cloud-f7cd-resource-group list
Name Created Status
------------------------------ ------------------------- ---------
exascaler-cloud-f7cd 2021-08-21T01:19:36+00:00 Succeeded
$ scripts/exascaler-cloud-ctl exascaler-cloud-f7cd-resource-group exascaler-cloud-f7cd umount
Umount compute client exascaler-cloud-f7cd-cls0
Umount compute client exascaler-cloud-f7cd-cls1
Umount storage server exascaler-cloud-f7cd-oss0
Umount storage server exascaler-cloud-f7cd-oss1
Umount storage server exascaler-cloud-f7cd-oss2
Umount storage server exascaler-cloud-f7cd-oss3
Umount metadata server exascaler-cloud-f7cd-mds0
Umount management server exascaler-cloud-f7cd-mgs0
Apply the changes required to upgrade the existing EXAScaler Cloud environment by recreating all instances using the latest version of EXAScaler Cloud, running the terraform apply command as shown earlier.
To destroy the EXAScaler Cloud environment when it is no longer needed:
Code Block
language
bash
theme
Midnight
$ terraform destroy
...
Enter a value: yes
...
Destroy complete! Resources: 103 destroyed.
How to access a deployment
To remotely access the EXAScaler Cloud deployment (management, metadata and storage servers, and compute clients), we must go through the management server's public IP address, using the SSH command shown earlier. Then, using the management server as a jump host, we can open SSH sessions to all other servers.
New EXAScaler Cloud client instances must be in the same location and connected to the same virtual network and subnet. The process of installing and configuring new clients can be performed automatically, and all required information is contained in the Terraform output. To set up the EXAScaler Cloud filesystem on a new client instance, create a configuration file /etc/exascaler-cloud-client.conf using the actual IP address of the management server:
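For example, using the management server address from the sample deployment output above:
Code Block
language
bash
theme
Midnight
# cat >/etc/exascaler-cloud-client.conf<<EOF
{
  "Version": "2.0.0",
  "MountConfig": {
    "ClientDevice": "10.0.0.10@tcp:/exacloud",
    "Mountpoint": "/mnt/exacloud",
    "PackageSource": "http://10.0.0.10/client-packages"
  }
}
EOF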
Ubuntu Linux
Code Block
language
bash
theme
Midnight
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04 LTS
Release: 22.04
Codename: jammy
# exascaler-cloud-client auto setup --config /etc/exascaler-cloud-client.conf
Discovering platform ... Done.
Configuring firewall rules for Lustre ... Done.
Configuring Lustre client package source ... Done.
Installing Lustre client packages and building DKMS modules ... Done.
Mounting 10.0.0.10@tcp0:/exacloud at /mnt/exacloud ... Done.
# mount -t lustre
10.0.0.10@tcp:/exacloud on /mnt/exacloud type lustre (rw,flock,user_xattr,lazystatfs,encrypt)
RHEL, CentOS, Alma and Rocky Linux
Code Block
language
bash
theme
Midnight
# cat /etc/redhat-release
AlmaLinux release 8.6 (Sky Tiger)
# exascaler-cloud-client auto setup --config /etc/exascaler-cloud-client.conf
Discovering platform ... Done.
Configuring firewall rules for Lustre ... Done.
Configuring Lustre client package source ... Done.
Installing Lustre client packages ... Done.
Mounting 10.0.0.10@tcp0:/exacloud at /mnt/exacloud ... Done.
# mount -t lustre
10.0.0.10@tcp:/exacloud on /mnt/exacloud type lustre (rw,seclabel,flock,user_xattr,lazystatfs,encrypt)
How to use client-side encryption
The purpose of client-side encryption is to provide a special directory for each user in which to safely store sensitive files. The goals are to protect data in transit between clients and servers, and to protect data at rest.
This feature is implemented directly at the Lustre client level. Lustre client-side encryption relies on kernel fscrypt. fscrypt is a library which filesystems can hook into to support transparent encryption of files and directories. As a consequence, the key points described below are extracted from fscrypt documentation.
The client-side encryption feature is available natively on Lustre clients running Linux distributions, including RHEL/CentOS 8.1 and later, and Ubuntu 18.04 and later.
Client-side encryption supports encryption of data as well as of file and directory names. The ability to encrypt file and directory names is governed by a parameter named enable_filename_encryption, which is set to 0 by default. When this parameter is 0, new empty directories configured as encrypted use content encryption only, not name encryption; this mode is inherited by all subdirectories and files. When enable_filename_encryption is set to 1, new empty directories configured as encrypted use the full encryption capabilities, encrypting file content as well as file and directory names; this mode is likewise inherited by all subdirectories and files. To set the enable_filename_encryption parameter globally for all clients, run the following on the management server:
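A minimal sketch, assuming the standard Lustre llite tunable name:
Code Block
language
bash
theme
Midnight
# Persistently enable filename encryption for all clients
$ sudo lctl set_param -P llite.*.enable_filename_encryption=1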
Set up the fscrypt tool, first globally and then on the EXAScaler Cloud filesystem:
Code Block
language
bash
theme
Midnight
$ sudo fscrypt setup
Defaulting to policy_version 2 because kernel supports it.
Customizing passphrase hashing difficulty for this system...
Created global config file at "/etc/fscrypt.conf".
Allow users other than root to create fscrypt metadata on the root filesystem? (See
https://github.com/google/fscrypt#setting-up-fscrypt-on-a-filesystem) [y/N]
Metadata directories created at "/.fscrypt", writable by root only.
$ sudo fscrypt setup /mnt/exacloud
Allow users other than root to create fscrypt metadata on this filesystem? (See
https://github.com/google/fscrypt#setting-up-fscrypt-on-a-filesystem) [y/N] y
Metadata directories created at "/mnt/exacloud/.fscrypt", writable by everyone.
Steps to encrypt a directory:
Code Block
language
bash
theme
Midnight
$ sudo install -v -d -m 0755 -o stack -g stack /mnt/exacloud/stack
install: creating directory '/mnt/exacloud/stack'
$ fscrypt encrypt /mnt/exacloud/stack
The following protector sources are available:
1 - Your login passphrase (pam_passphrase)
2 - A custom passphrase (custom_passphrase)
3 - A raw 256-bit key (raw_key)
Enter the source number for the new protector [2 - custom_passphrase]:
Enter a name for the new protector: test
Enter custom passphrase for protector "test":
Confirm passphrase:
"/mnt/exacloud/stack" is now encrypted, unlocked, and ready for use.
$ cp -v /etc/passwd /mnt/exacloud/stack/
'/etc/passwd' -> '/mnt/exacloud/stack/passwd'
$ ls -l /mnt/exacloud/stack/
total 1
-rw-r--r--. 1 stack stack 1610 Jul 13 20:34 passwd
$ md5sum /mnt/exacloud/stack/passwd
867541523c51f8cfd4af91988e9f8794 /mnt/exacloud/stack/passwd
Lock the directory:
Code Block
language
bash
theme
Midnight
$ fscrypt lock /mnt/exacloud/stack
"/mnt/exacloud/stack" is now locked.
$ ls -l /mnt/exacloud/stack
total 4
-rw-r--r--. 1 stack stack 4096 Jul 13 20:34 ydpdwRP7MiXzsTkYhg0mW3DNacDlsUJdJa2e9l6AQKL
$ md5sum /mnt/exacloud/stack/ydpdwRP7MiXzsTkYhg0mW3DNacDlsUJdJa2e9l6AQKL
md5sum: /mnt/exacloud/stack/ydpdwRP7MiXzsTkYhg0mW3DNacDlsUJdJa2e9l6AQKL: Required key not available
Unlock the directory:
Code Block
language
bash
theme
Midnight
$ fscrypt unlock /mnt/exacloud/stack
Enter custom passphrase for protector "test":
"/mnt/exacloud/stack" is now unlocked and ready for use.
$ ls -l /mnt/exacloud/stack
total 4
-rw-r--r--. 1 stack stack 1610 Jul 13 20:34 passwd
$ md5sum /mnt/exacloud/stack/passwd
867541523c51f8cfd4af91988e9f8794 /mnt/exacloud/stack/passwd
Open an SSH session to the EXAScaler Cloud management server and collect a support bundle using the exascaler-cloud-collector tool:
Code Block
language
bash
theme
Midnight
localhost$ eval $(ssh-agent)
Agent pid 5095
localhost$ ssh-add
Identity added: /Users/deiter/.ssh/id_rsa (/Users/deiter/.ssh/id_rsa)
localhost$ ssh -A stack@20.62.171.33
[stack@exascaler-cloud-55d2-mgs0 ~]$ exascaler-cloud-collector
The following is a list of nodes to collect from:
exascaler-cloud-55d2-cls0
exascaler-cloud-55d2-cls1
exascaler-cloud-55d2-cls2
exascaler-cloud-55d2-cls3
exascaler-cloud-55d2-cls4
exascaler-cloud-55d2-cls5
exascaler-cloud-55d2-cls6
exascaler-cloud-55d2-cls7
exascaler-cloud-55d2-cls8
exascaler-cloud-55d2-cls9
exascaler-cloud-55d2-mds0
exascaler-cloud-55d2-mgs0
exascaler-cloud-55d2-oss0
exascaler-cloud-55d2-oss1
exascaler-cloud-55d2-oss2
exascaler-cloud-55d2-oss3
Connecting to nodes...
Beginning collection of sosreports from 16 nodes, collecting a maximum of 4 concurrently
Successfully captured 16 of 16 sosreports
Creating archive of sosreports...
The following archive has been created. Please provide it to your support team.
/var/tmp/sos-collector-2021-06-18-nzsnm.tar.gz
Generate the inventory report by running the about_this_deployment command:
Before using the Microsoft Azure CLI, you will need to authenticate under the Microsoft account you used to log in to the Microsoft Azure Portal. The shell script will use this Microsoft account and its credentials to start and stop the EXAScaler Cloud servers.
Steps to authenticate via Microsoft account
This step obtains access credentials for your user account via a web-based authorization flow. When the command completes successfully, it sets the active account in the current configuration to the account specified. Learn more.
...
Code Block
language
bash
theme
Midnight
$ az account set --subscription XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
List of existing resource groups
Code Block
language
bash
theme
Midnight
$ ./scripts/exascaler-cloud-ctl list
Name Location Status
---------------- ---------- ---------
EXAScaler-Cloud eastus Succeeded
List of existing deployments for the given resource group
Code Block
language
bash
theme
Midnight
$ ./scripts/exascaler-cloud-ctl EXAScaler-Cloud list
Name Created Status
------------------------------ ------------------------- ---------
exascaler-cloud-65f1 2021-08-24T18:56:27+00:00 Succeeded
List of EXAScaler Cloud servers for the given deployment
Code Block
language
bash
theme
Midnight
$ ./scripts/exascaler-cloud-ctl EXAScaler-Cloud exascaler-cloud-65f1 list
Name Size Role Version PrivateIP PublicIP Status
------------------------- ---------------- ------ --------- ----------- ------------ ----------
exascaler-cloud-65f1-cls0 Standard_D16s_v3 clt 5.2.3 10.0.0.7 VM running
exascaler-cloud-65f1-cls1 Standard_D16s_v3 clt 5.2.3 10.0.0.5 VM running
exascaler-cloud-65f1-cls2 Standard_D16s_v3 clt 5.2.3 10.0.0.8 VM running
exascaler-cloud-65f1-cls3 Standard_D16s_v3 clt 5.2.3 10.0.0.13 VM running
exascaler-cloud-65f1-mds0 Standard_E8s_v3 mdt 5.2.3 10.0.0.12 VM running
exascaler-cloud-65f1-mgs0 Standard_F4s mgt 5.2.3 10.0.0.11 20.62.171.73 VM running
exascaler-cloud-65f1-oss0 Standard_D16s_v3 ost 5.2.3 10.0.0.10 VM running
exascaler-cloud-65f1-oss1 Standard_D16s_v3 ost 5.2.3 10.0.0.4 VM running
exascaler-cloud-65f1-oss2 Standard_D16s_v3 ost 5.2.3 10.0.0.6 VM running
exascaler-cloud-65f1-oss3 Standard_D16s_v3 ost 5.2.3 10.0.0.9 VM running
Stop the EXAScaler Cloud servers
Code Block
language
bash
theme
Midnight
$ ./scripts/exascaler-cloud-ctl EXAScaler-Cloud exascaler-cloud-65f1 stop
Stop compute client exascaler-cloud-65f1-cls0
Stop compute client exascaler-cloud-65f1-cls1
Stop compute client exascaler-cloud-65f1-cls2
Stop compute client exascaler-cloud-65f1-cls3
Stop storage server exascaler-cloud-65f1-oss0
Stop storage server exascaler-cloud-65f1-oss1
Stop storage server exascaler-cloud-65f1-oss2
Stop storage server exascaler-cloud-65f1-oss3
Stop metadata server exascaler-cloud-65f1-mds0
Stop management server exascaler-cloud-65f1-mgs0
Start the EXAScaler Cloud servers
Code Block
language
bash
theme
Midnight
$ ./scripts/exascaler-cloud-ctl EXAScaler-Cloud exascaler-cloud-65f1 start
Start management server exascaler-cloud-65f1-mgs0
Start metadata server exascaler-cloud-65f1-mds0
Start storage server exascaler-cloud-65f1-oss0
Start storage server exascaler-cloud-65f1-oss1
Start storage server exascaler-cloud-65f1-oss2
Start storage server exascaler-cloud-65f1-oss3
Start compute client exascaler-cloud-65f1-cls0
Start compute client exascaler-cloud-65f1-cls1
Start compute client exascaler-cloud-65f1-cls2
Start compute client exascaler-cloud-65f1-cls3
All required services will automatically start and the filesystem will be mounted and available on all compute clients.
How to upgrade an existing deployment
A software upgrade for an existing EXAScaler Cloud deployment is possible by creating a new deployment using a copy of the existing file system.
To upgrade the existing EXAScaler Cloud deployment you can use the standard EXAScaler Cloud Azure application by selecting the "Upgrade" value for the "Deployment type" option.
All running instances of the existing EXAScaler Cloud deployment must be shut down before performing the upgrade; this is required to ensure data consistency while creating a copy of the existing data:
Code Block
language
bash
theme
Midnight
$ ./scripts/exascaler-cloud-ctl list
Name Location Status
---------------- ---------- ---------
EXAScaler-Cloud eastus Succeeded
$ ./scripts/exascaler-cloud-ctl EXAScaler-Cloud list
Name Created Status
------------------------------ ------------------------- ---------
exascaler-cloud-65f1 2021-08-24T18:56:27+00:00 Succeeded
$ ./scripts/exascaler-cloud-ctl EXAScaler-Cloud exascaler-cloud-65f1 stop
Stop compute client exascaler-cloud-65f1-cls0
Stop compute client exascaler-cloud-65f1-cls1
Stop compute client exascaler-cloud-65f1-cls2
Stop compute client exascaler-cloud-65f1-cls3
Stop storage server exascaler-cloud-65f1-oss0
Stop storage server exascaler-cloud-65f1-oss1
Stop storage server exascaler-cloud-65f1-oss2
Stop storage server exascaler-cloud-65f1-oss3
Stop metadata server exascaler-cloud-65f1-mds0
Stop management server exascaler-cloud-65f1-mgs0
...
And then press a "Create" button to upgrade the existing environmentdeployment. A new EXAScaler Cloud environment will deployment will be created in accordance with the selected parameters, and all new targets will be created as copies of targets in the existing EXAScaler Cloud environmentdeployment.