StarWind VSAN free is a free Software-Defined Storage (SDS) solution for two nodes. With it, you can deliver highly available storage based on Direct-Attached Storage devices. On top of StarWind VSAN free, you can deploy a Microsoft failover cluster with Scale-Out File Server (SOFS). In this way, you can build a converged SDS solution with Windows Server 2016 Standard Edition and StarWind VSAN free, which makes an affordable solution for your Hyper-V VM storage.
In this topic, we’ll see how to deploy StarWind VSAN free on two nodes running Windows Server 2016 Standard Core edition. Then we’ll deploy a failover cluster with SOFS to deliver storage to Hyper-V nodes.
Architecture overview
This solution should be deployed on physical servers with physical disks (NVMe, SSD, HDD, etc.). For the demonstration, I have used two virtual machines. Each virtual machine has:
- 4 vCPU
- 4GB of memory
- 1x OS disk (60GB dynamic) – Windows Server 2016 Standard Core edition
- 1x Data disk (127GB dynamic)
- 3x vNIC (1x Management / iSCSI, 1x Heartbeat, 1x Synchronization)
Both nodes are deployed and joined to the domain.
Node preparation
On both nodes, I run the following cmdlet to install the features and prepare a volume for StarWind:
# Install File Server, Failover Clustering and MPIO
Install-WindowsFeature FS-FileServer, Failover-Clustering, MPIO -IncludeManagementTools -Restart
# Set the iSCSI service startup type to automatic
Get-Service MSiSCSI | Set-Service -StartupType Automatic
# Start the iSCSI service
Start-Service MSiSCSI
# Create a volume on the data disk
New-Volume -DiskNumber 1 -FriendlyName Data -FileSystem NTFS -DriveLetter E
# Enable automatic claiming of iSCSI devices by MPIO
Enable-MSDSMAutomaticClaim -BusType iSCSI
StarWind installation
Because I have installed the nodes in Core edition, I install and configure the components from PowerShell and the command line. You can download StarWind VSAN free from this link. To install StarWind from the command line, you can use the following parameters:
Starwind-v8.exe /SILENT /COMPONENTS="comma separated list of component names" /LICENSEKEY="path to license file"
Current list of components:
- Service: the StarWind iSCSI SAN server.
- service\haprocdriver: HA Processor Driver, used to support devices created with older versions of the software.
- service\starflb: Loopback Accelerator, used with Windows Server 2012 and later to accelerate iSCSI operations when the client resides on the same machine as the server.
- service\starportdriver: StarPort driver, required for the operation of mirror devices.
- Gui: Management Console.
- StarWindXDll: StarWindX COM object.
- StarWindXDll\powerShellEx: StarWindX PowerShell module.
To install StarWind, I have run the following command:
C:\temp\Starwind-v8.exe /SILENT /COMPONENTS="Service,service\starflb,service\starportdriver,StarWindXDll,StarWindXDll\powerShellEx" /LICENSEKEY="C:\temp\StarWind_Virtual_SAN_Free_License_Key.swk"
I run this command on both nodes. After this command is run, StarWind is installed and ready to be configured.
StarWind configuration
StarWind VSAN free provides a 30-day trial of the management console. After the 30 days, you have to manage the solution from PowerShell, so I decided to configure the solution from PowerShell from the start:
Import-Module StarWindX

try
{
    $server = New-SWServer -host 10.10.0.54 -port 3261 -user root -password starwind
    $server.Connect()

    $firstNode = New-Object Node
    $firstNode.ImagePath = "My computer\E"
    $firstNode.ImageName = "VMSTO01"
    $firstNode.Size = 65535
    $firstNode.CreateImage = $true
    $firstNode.TargetAlias = "vmsan01"
    $firstNode.AutoSynch = $true
    $firstNode.SyncInterface = "#p2=10.10.100.55:3260"
    $firstNode.HBInterface = "#p2=10.10.100.55:3260"
    $firstNode.CacheSize = 64
    $firstNode.CacheMode = "wb"
    $firstNode.PoolName = "pool1"
    $firstNode.SyncSessionCount = 1
    $firstNode.ALUAOptimized = $true

    #
    # Device sector size. Possible values: 512 or 4096 (may be incompatible with some clients!) bytes.
    #
    $firstNode.SectorSize = 512

    #
    # 'SerialID' should be between 16 and 31 symbols. If it is not specified, the StarWind service will generate it.
    # Note: the second node always has the same serial ID. You do not need to specify it for the second node.
    #
    $firstNode.SerialID = "050176c0b535403ba3ce02102e33eab"

    $secondNode = New-Object Node
    $secondNode.HostName = "10.10.0.55"
    $secondNode.HostPort = "3261"
    $secondNode.Login = "root"
    $secondNode.Password = "starwind"
    $secondNode.ImagePath = "My computer\E"
    $secondNode.ImageName = "VMSTO01"
    $secondNode.Size = 65535
    $secondNode.CreateImage = $true
    $secondNode.TargetAlias = "vmsan02"
    $secondNode.AutoSynch = $true
    $secondNode.SyncInterface = "#p1=10.10.100.54:3260"
    $secondNode.HBInterface = "#p1=10.10.100.54:3260"
    $secondNode.ALUAOptimized = $true

    $device = Add-HADevice -server $server -firstNode $firstNode -secondNode $secondNode -initMethod "Clear"

    $syncState = $device.GetPropertyValue("ha_synch_status")
    while ($syncState -ne "1")
    {
        #
        # Refresh device info
        #
        $device.Refresh()
        $syncState = $device.GetPropertyValue("ha_synch_status")
        $syncPercent = $device.GetPropertyValue("ha_synch_percent")
        Start-Sleep -m 2000
        Write-Host "Synchronizing: $($syncPercent)%" -foreground yellow
    }
}
catch
{
    Write-Host "Exception $($_.Exception.Message)" -foreground red
}
$server.Disconnect()
Once this script is run, two HA images are created and they are synchronized. Now we have to connect to this device through iSCSI.
iSCSI connection
To connect to the StarWind devices, I use iSCSI. I chose to set up iSCSI from PowerShell to automate the deployment. On the first node, I run the following cmdlets:
New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.55 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.54
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $True
On the second node, I run the following cmdlets:
New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1 -TargetPortalPortNumber 3260
New-IscsiTargetPortal -TargetPortalAddress 10.10.0.54 -TargetPortalPortNumber 3260 -InitiatorPortalAddress 10.10.0.55
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $True
You can run the command iscsicpl on a Core server to open the iSCSI Initiator GUI. You should have something like this:
PS: If you have a 1Gb/s network, set the load-balancing policy to Failover Only and leave the 127.0.0.1 path active. If you have a 10Gb/s network, choose the Round Robin policy.
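If you prefer to set the default MPIO policy from PowerShell rather than from the GUI, the MPIO module provides a cmdlet for it. A minimal sketch, to be run on both nodes (pick the policy matching your network):

```powershell
# Set the default MSDSM load-balancing policy applied to newly claimed iSCSI devices.
# FOO = Fail Over Only (1Gb/s networks), RR = Round Robin (10Gb/s networks).
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO

# Check the current default policy
Get-MSDSMGlobalDefaultLoadBalancePolicy
```

Note that this sets the default for newly claimed devices; paths already connected keep the policy they were given.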
Configure Failover Clustering
Now that a shared volume is available on both nodes, you can validate the cluster configuration:
Test-Cluster -node VMSAN01, VMSAN02
Review the report and, if all is OK, create the cluster:
New-Cluster -Node VMSAN01, VMSAN02 -Name Cluster-STO01 -StaticAddress 10.10.0.65 -NoStorage
Navigate to Active Directory (dsa.msc) and locate the OU where the Cluster Name Object (CNO) is located. Edit the permissions on this OU to allow the CNO to create computer objects:
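On a Core server without the GUI tools, the same permission can be granted from the command line with dsacls. A sketch, where the OU distinguished name and the domain name are assumptions to replace with your own:

```powershell
# Grant the Cluster Name Object (the cluster computer account, note the trailing $)
# the right to create child computer objects in the OU.
# "OU=Clusters,DC=contoso,DC=com" and "CONTOSO" are placeholders for your environment.
dsacls "OU=Clusters,DC=contoso,DC=com" /G "CONTOSO\Cluster-STO01$:CC;computer"
```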
Now we can create the Scale-Out File Server role:
Add-ClusterScaleOutFileServerRole -Name VMStorage01
Then we can initialize the StarWind disk and convert it to a Cluster Shared Volume (CSV). Finally, we create an SMB share:
# Initialize the offline disk
Get-Disk | ? OperationalStatus -like Offline | Initialize-Disk
# Create a CSVFS NTFS volume
New-Volume -DiskNumber 3 -FriendlyName VMSto01 -FileSystem CSVFS_NTFS
# Rename the link in C:\ClusterStorage
Rename-Item C:\ClusterStorage\Volume1 VMSto01
# Create a folder
New-Item -Type Directory -Path C:\ClusterStorage\VMSto01 -Name VMs
# Create a share
New-SmbShare -Name 'VMs' -Path C:\ClusterStorage\VMSto01\VMs -FullAccess everyone
The cluster looks like this:
Now, from Hyper-V, I am able to store VMs in this cluster like this:
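To place a VM on the SOFS share from a Hyper-V host, something like the following should work. A sketch only: the VM name, memory size and disk size are assumptions, and the share path assumes the SOFS role name and share created above:

```powershell
# Create a generation 2 VM whose configuration and VHDX live on the SOFS share
New-VM -Name "VM01" `
       -MemoryStartupBytes 2GB `
       -Generation 2 `
       -Path "\\VMStorage01\VMs" `
       -NewVHDPath "\\VMStorage01\VMs\VM01\VM01.vhdx" `
       -NewVHDSizeBytes 60GB
```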
Conclusion
StarWind VSAN free and Windows Server 2016 Standard Edition provide an affordable SDS solution. Thanks to this solution, you can deploy a 2-node storage cluster that provides SMB 3.1.1 shares. So Hyper-V can use these shares to host virtual machines.
Nice article. I have two HP ProLiant DL380 G9 servers with a converged 2x10Gb NIC team. I guess for iSCSI it is best to use other, dedicated NICs. Do you think 2x1Gb NICs for iSCSI are enough? I want to run Exchange, SQL and many other VMs.
It is better to use 10Gb/s for iSCSI and 1Gb/s for VMs. You can also converge all traffic onto the two 10Gb/s NICs by using, for example, Switch Embedded Teaming if you use Hyper-V.
Hi Romain,
Great article, though I need some clarification around the IPs you’ve used in the configurations. In the diagram at the beginning you list each host as having:
Node 1
10.10.0.46 – Management/Production
10.10.200.46 – Heartbeat
10.10.100.46 – Sync
Node 2
10.10.0.47 – Management/Production
10.10.200.47 – Heartbeat
10.10.100.47 – Sync
Then in your StarWind PS setup you have the following:
$firstNode.SyncInterface = "#p2=10.10.100.55:3260"
$firstNode.HBInterface = "#p2=10.10.100.55:3260"
$secondNode.SyncInterface = "#p1=10.10.100.54:3260"
$secondNode.HBInterface = "#p1=10.10.100.54:3260"
I’m guessing the PS setup should (1) be using different subnets for sync vs. heartbeat, and (2) the node IPs are just a little different to the diagram? .54 and .55 vs. .46 and .47?
Also, I haven’t been able to find anywhere what the prefix #p1 or #p2 means. Would you be able to explain a little?
Many thanks.
Hi,
You’re right, this is an issue in my script. It should be a different subnet for Sync and HB. P1 means first node and P2 means second node 🙂
Thank you 🙂