In the previous topics of this series, we deployed the RDS farm in Azure. Now we need a highly available file service to manage User Profile Disks (UPD). To provide this high availability, I leverage Storage Spaces Direct (S2D) and Scale-Out File Server (SOFS). For more information about deploying S2D, you can read this topic (based on the hyperconverged model). For Remote Desktop usage, I’ll deploy a disaggregated S2D model. In this topic, I’ll configure the file servers for User Profile Disks. This series consists of the following topics:
- Deploy a Windows Server 2016 RDS Farm in Microsoft Azure
- Create Microsoft Azure networks, storage and Windows image
- Deploy the Microsoft Azure Virtual Machines
- Configure Domain Controllers
- Deploy the RDS farm
- Configure File Servers for User Profile Disk (UPD)
- RDS final configuration
I’ll deploy this file service by using only PowerShell. Before following this topic, make sure that your Azure VMs have joined the Active Directory domain and that they have two network adapters in two different subnets (one for the cluster and the other for management). I have also set static IP addresses from the Azure portal.
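If you want to double-check these prerequisites before going further, here is a minimal sketch run from the management VM (the node names AZFLS0 and AZFLS1 come from this series; adapt them to your environment):

# Sketch: verify domain membership and the network adapters on each file server node
Invoke-Command -ComputerName "AZFLS0","AZFLS1" -ScriptBlock {
    [PSCustomObject]@{
        Computer = $env:COMPUTERNAME
        Domain   = (Get-CimInstance -ClassName Win32_ComputerSystem).Domain
        NICs     = (Get-NetIPConfiguration | Select-Object -ExpandProperty InterfaceAlias) -join ', '
    }
}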
Deploy the cluster
First of all, I install these features on both file server nodes:
Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools
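If you prefer to run everything from the management VM, the same installation can be pushed to both nodes through PowerShell remoting; a small sketch:

# Sketch: install the features on both nodes remotely
Invoke-Command -ComputerName "AZFLS0","AZFLS1" -ScriptBlock {
    Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools
}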
Then I install the Failover Clustering RSAT tools on the management VM:
Install-WindowsFeature RSAT-Clustering
Next, I test whether the cluster nodes can run Storage Spaces Direct:
Test-Cluster -Node "AZFLS0","AZFLS1" -Include "Storage Spaces Direct", Inventory,Network,"System Configuration"
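Test-Cluster produces an HTML validation report. As a sketch, you can capture the returned file object and open the report to review any warnings before creating the cluster:

# Sketch: keep the validation report and open it for review
$report = Test-Cluster -Node "AZFLS0","AZFLS1" -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"
Invoke-Item $report.FullName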
If the test passes, you can run the following cmdlet to deploy the cluster with the name UPD-Sto and the IP address 10.11.0.29:
New-Cluster -Node "AZFLS0","AZFLS1" -Name UPD-Sto -StaticAddress 10.11.0.29 -NoStorage
Once the cluster is created, grant the Cluster Name Object (UPD-Sto) the right to create computer objects in the OU where it is located. This permission is required so the cluster can create the computer object for the SOFS role.
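You can grant this permission from Active Directory Users and Computers, or with dsacls. Here is a minimal sketch; the OU distinguished name (OU=Servers) and the HOMECLOUD NetBIOS domain name are assumptions to adapt to your directory:

# Sketch: allow the CNO (UPD-Sto$) to create computer objects in its OU
# The OU path and the domain NetBIOS name below are assumptions
dsacls "OU=Servers,DC=homecloud,DC=net" /G "HOMECLOUD\UPD-Sto$:CC;computer"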
Enable and configure S2D and SOFS
Now that the cluster is created, you can enable S2D (I run the following PowerShell on a file server node through PowerShell remoting):
Enable-ClusterS2D
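Enable-ClusterS2D claims the eligible, unattached data disks on both nodes and builds a storage pool from them (this is where the extra data disks attached to the file servers come into play). A quick sketch to verify the pool afterwards:

# Sketch: check that S2D claimed the data disks into the pool
Get-StoragePool -FriendlyName S2D* | Get-PhysicalDisk | Select-Object FriendlyName, MediaType, Size, HealthStatus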
Then I create a new 100GB volume formatted with ReFS. This volume uses two-way mirror resiliency:
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName UPD01 -FileSystem CSVFS_REFS -Size 100GB
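With only two nodes, two-way mirror is the resiliency that S2D can offer. A sketch to confirm the setting on the new virtual disk:

# Sketch: confirm the resiliency and health of the UPD01 virtual disk
Get-VirtualDisk -FriendlyName UPD01 | Select-Object FriendlyName, ResiliencySettingName, Size, HealthStatus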
Now I rename the Volume1 folder in C:\ClusterStorage to UPD-01:
Rename-Item C:\ClusterStorage\Volume1 UPD-01
Then I add the Scale-Out File Server role to the cluster and call it SOFS:
Add-ClusterScaleOutFileServerRole -Name SOFS
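As a quick sanity check, the SOFS role should report an Online state:

# Sketch: verify the SOFS cluster group is online
Get-ClusterGroup -Name SOFS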
To finish, I create a folder called Profiles in the volume and share it as UPD$ with full access for Everyone (not recommended in production):
New-Item -Path C:\ClusterStorage\UPD-01\Profiles -ItemType Directory
New-SmbShare -Name 'UPD$' -Path C:\ClusterStorage\UPD-01\Profiles -FullAccess Everyone
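On a cluster node, you can check that the share is scoped to the SOFS name and continuously available (which should be the default for a share created on a CSV of a clustered file server); a sketch:

# Sketch: check the share scope and continuous availability
Get-SmbShare -Name 'UPD$' | Select-Object Name, Path, ScopeName, ContinuouslyAvailable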
Now my storage is ready and I am able to reach \\SOFS.homecloud.net\UPD$.
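A quick sketch from the management VM to confirm the path answers:

# Sketch: the SOFS share should be reachable over the network
Test-Path '\\SOFS.homecloud.net\UPD$'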
Next topic
In the next topic, I will deploy a session collection and configure it. Then I will add the certificates for each Remote Desktop component.
How do you intend to backup the user profile disks?
You can back them up by using System Center DPM, Veeam, or any other backup product.
I feel like I am missing something here. When do the extra data disks added to the file servers come into play? I am trying to set up something similar but using fewer VMs that share roles, and currently those disks are just sitting uninitialised in Windows. Should they be assigned to the cluster?
Hello, the shares will be used to store the user profile disks. I set them up in the next part.