Hey Checkyourlogs Fans,

Quick post here: a customer asked me to expand a Virtual Disk inside a 4-node S2D Cluster running Server 2019 today. I thought I had this documented online, but I couldn’t find it, so here is a fresh post on how to do this with PowerShell.

The screenshot below is from Failover Cluster Manager, showing that we have a 1.96 TB CSV with approximately 465 GB free. The team asked us to add 1 TB of additional space to the CSV.

Now that we know the CSV’s name is CSV05, we will check the storage pool to ensure we have enough space to do this.
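A quick way to check this from PowerShell is to query the pool directly. Here is a minimal sketch, assuming the S2D pool is the only non-primordial pool on the cluster (its friendly name varies per cluster, so we filter rather than name it):

Get-StoragePool -IsPrimordial $false | Format-Table FriendlyName, Size, AllocatedSize

#Free space in the pool = Size - AllocatedSize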

It appears we have enough space in the pool, but we need to ensure that we don’t deplete our reserve capacity. If you are unsure how to check this, just run the following on one of the cluster nodes to get the size and count of the disks.


Get-PhysicalDisk #Ensure all disks show an OperationalStatus of OK

Get-PhysicalDisk | Measure-Object #Count the disks; the total includes the OS drive, so the capacity drive count is at least N-1
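As an aside, if you want a count that excludes non-capacity drives (such as the OS drive, or the USB disk we are about to see), a hedged alternative is to count only the disks that actually belong to the S2D pool:

Get-StoragePool -IsPrimordial $false | Get-PhysicalDisk | Measure-Object #Counts only pooled capacity drives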

So, in this case, we can see the size of our disks is 3.49 TB. I can also see that someone has plugged an external USB drive into this cluster node; see the Seagate Expansion HDD. It doesn’t count as a capacity drive, but it is interesting, and I’ll follow up with the team to see why this has happened. Between the OS drive and this USB drive, the count from the following command becomes N-2.

As you can see below, we count 22 drives in the cluster. Subtracting the Intel OS drive (N-1) and the USB drive (N-2) leaves a total drive count of 20 x 3.49 TB NVMe SSD drives.

We need to do some math here, because the Failover Cluster Manager UI and Storage Spaces Direct don’t directly show us the required reserve space unless we view it with Windows Admin Center. Since our aim is to do this with PowerShell and Failover Cluster Manager, we will work it out manually.

20 x 3.49 = 69.8 TB

This matches what we are seeing in Failover Cluster Manager.

Now, I would like to take a minute and chat about something that I have seen customers do to get themselves into trouble with Virtual Disk expansion. Behind the scenes is a pool reserve, which equals the size of one capacity drive per node, up to 4 drives total (1 per node for up to 4 nodes). Because this cluster runs a single tier of all NVMe SSD drives, that works out to 4 x 3.49 TB = 13.96 TB. This reserve won’t show up in Failover Cluster Manager, and the disk subsystem won’t raise a warning until the pool is depleted beyond this required amount. In short, what I’m saying is never go beyond the pool reserve. This space is used for critical rebuild operations, and the concept of hot spares doesn’t exist with Microsoft HCI and Storage Spaces Direct (AZHCI / S2D).

So, when we look at the free space available, it is NOT 24.4 TB. It is 24.4 – 13.96 = 10.44 TB. Lastly, this is RAW space and doesn’t account for any resiliency settings that you have in the cluster. In our case, we are configured to use a 3-Way Mirror, so when we add 1 TB of space, 3 TB of space is depleted from the pool. After performing this expansion, I expect to see 21.4 TB of free space in the pool; minus the pool reserve mentioned above, that leaves us with 7.44 TB.
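To make that arithmetic explicit, here is a quick back-of-the-napkin PowerShell sketch. The drive size, node count, pool free space, and mirror factor are hard-coded assumptions taken from this cluster, not values queried live:

$CapacityDriveSize = 3.49TB
$NodeCount = 4

#Pool reserve = one capacity drive per node, up to 4 drives total
$PoolReserve = [Math]::Min($NodeCount, 4) * $CapacityDriveSize #13.96 TB

$PoolFree = 24.4TB #Free space reported by the pool
$UsableRaw = $PoolFree - $PoolReserve #10.44 TB of RAW space we can safely use

#3-Way Mirror: every 1 TB of volume consumes 3 TB of pool capacity
$ExpansionFootprint = 1TB * 3

"Usable RAW before expansion: {0:N2} TB" -f ($UsableRaw / 1TB)
"Usable RAW after expansion: {0:N2} TB" -f (($UsableRaw - $ExpansionFootprint) / 1TB)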

Here is a good read with more detail about volumes with AZHCI and S2D: Plan volumes on Azure Stack HCI and Windows Server clusters – Azure Stack HCI | Microsoft Learn. From the article: For example, if you have two servers using 1 TB capacity drives, set aside 2 x 1 = 2 TB of the pool as reserve. If you have 3 servers and 1 TB capacity drives, set aside 3 x 1 = 3 TB as reserve. If you have 4 or more servers and 1 TB capacity drives, set aside 4 x 1 = 4 TB as reserve. Leaving some capacity in the storage pool unallocated gives volumes space to repair “in-place” after drives fail, improving data safety and performance. If sufficient capacity exists, an immediate, in-place, parallel repair can restore volumes to full resiliency even before the failed drives are replaced. This happens automatically.

If you do deplete the pool beyond the reserve, you can run the following to see the fault:


Get-HealthFault

Severity: Warning

Reason: "The storage pool does not have the minimum recommended reserve capacity. This may limit your ability to restore data resiliency during drive failure(s)."

RecommendedAction: "Add additional capacity to the storage pool, or free up capacity. The minimum recommended reserve varies by deployment but is approximately 2 drives' worth of capacity."

It can also be helpful to use Get-HealthFault to check for any other faults on the cluster. Our support request was for CSV05, and we can see that CSV02 also shows a low disk space condition.
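If you want to see everything the Health Service is reporting, a simple option is to dump every property of each fault (the wildcard avoids guessing at property names):

Get-HealthFault | Format-List * #Shows the Severity, Reason, and RecommendedAction for each fault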

The process to expand CSV05 is also the same for CSV02.

Finally, to double-check your math on Pool Reserve configurations, you can use the Storage Spaces Direct Calculator (windows.net). Here is another excellent write-up that takes a deep dive into the storage pool: Deep Dive: The Storage Pool in Storage Spaces Direct – Microsoft Community Hub.

Okay, with all this being said, we have 10.44 TB of usable RAW space to work with. We want to expand our CSV by 1 TB, which will consume 3 TB of pool capacity with our 3-Way Mirror, leaving us with 7.44 TB of usable RAW space afterwards.


#Expand the Virtual Disk

#Get a list of the virtual disks by running the following command:

Get-VirtualDisk

#Use the list to find the name of the virtual disk that needs to be expanded, then resize it with the following command.
#Note that -Size sets the new total size of the virtual disk (1.96 TB + 1 TB = ~3 TB), not the amount being added:

Get-VirtualDisk CSV05 | Resize-VirtualDisk -Size 3TB
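#Note (an assumption worth flagging): if a virtual disk was created with storage
#tiers, Resize-VirtualDisk won't expand it; per Microsoft's volume-resizing docs,
#you resize the tiers instead. The tier name below is hypothetical - check yours
#with Get-StorageTier first:
#Get-StorageTier
#Resize-StorageTier -FriendlyName "CSV05_Capacity" -Size 3TB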

#Expand the volume by running the following three commands:

$VirtualDisk = Get-VirtualDisk CSV05

$Partition = $VirtualDisk | Get-Disk | Get-Partition | Where-Object PartitionNumber -Eq 2 #Partition 2 is the data partition; partition 1 is reserved

$Partition | Resize-Partition -Size ($Partition | Get-PartitionSupportedSize).SizeMax

#Run the following commands and verify that the volume has expanded:

Get-Volume

Get-VirtualDisk
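To double-check the math from earlier, you can also confirm the pool footprint of the expansion. A small sketch, again assuming the S2D pool is the only non-primordial pool:

#With a 3-Way Mirror, the 1 TB expansion should consume ~3 TB of pool capacity,
#taking free space from roughly 24.4 TB down to ~21.4 TB
Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, @{Name='FreeTB';Expression={[Math]::Round(($_.Size - $_.AllocatedSize) / 1TB, 2)}}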

I hope you have found this post helpful.

Thanks,

Dave