Hey Checkyourlogs Fans,

Today, I was called in to work on a case with a customer who had messed around with their Virtual Switch settings in the Hyper-V console on an Azure Stack HCI cluster. I didn’t get a lot of details, but the email I received said this: “I was building a Virtual Machine in Failover Cluster Manager and couldn’t get it on the network. So, I went into Virtual Switch Manager in Hyper-V Manager and made some changes. After that, I couldn’t ping the host anymore.”


I was able to log on via the out-of-band IPMI interface on the node and decided to have a look at what had been done.


That doesn’t look right. I don’t ordinarily hard-code a VLAN on the Hyper-V Virtual Switch. This setting should be blank, as I manage all of these settings from PowerShell on the Hyper-V SET (Switch Embedded Teaming) switch.
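If you want to eyeball the switch from PowerShell first, a quick check along these lines works on any node (nothing here changes configuration, it only reads it):

# Review the SET switch and the management OS vNICs hanging off it
Get-VMSwitch | Format-Table Name, SwitchType, EmbeddedTeamingEnabled
Get-VMNetworkAdapter -ManagementOS | Format-Table Name, SwitchName, Status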

Here is what the Management Adapter looked like with this setting enabled.


Get-VMNetworkAdapterVlan -ManagementOS

That VLAN 2 should read Untagged.
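If the default output is hard to read at a glance, something like this (purely a formatting tweak on my part) shows the mode and VLAN ID side by side for each management vNIC:

# Show the VLAN operation mode and access VLAN for every management OS vNIC
Get-VMNetworkAdapterVlan -ManagementOS | Format-Table ParentAdapter, OperationMode, AccessVlanId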

I unchecked the checkbox in Hyper-V Virtual Switch Manager and left it blank.


I double-checked with PowerShell and found that the setting didn’t return to its original value.


Get-VMNetworkAdapterVlan -ManagementOS

Now the VLAN is set to 0 instead of Untagged, which still isn’t the original configuration.

To fix the issue, I ran the following:


Get-VMNetworkAdapter -ManagementOS -Name mgmt* | Set-VMNetworkAdapterVlan -Untagged -Verbose

This changed the value back to normal.
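If you want to be sure nothing was missed, a quick sanity check like this should come back empty once every management vNIC is untagged again:

# List any management OS vNIC that is still not Untagged (should return nothing)
Get-VMNetworkAdapterVlan -ManagementOS | Where-Object { $_.OperationMode -ne 'Untagged' }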


In Failover Cluster Manager, the MGMT Adapter was still showing as Failed.


To fix this, I figured we would try restarting the MGMT Adapter via PowerShell.


Get-NetAdapter -Name "vEthernet (MGMT)" | Restart-NetAdapter -Verbose

After about 30 seconds, Failover Cluster Manager was back to normal.
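If you would rather confirm it from PowerShell than wait on the console, the FailoverClusters module can show the same thing (this is just a read-only check):

# Confirm the cluster networks and interfaces report Up again
Get-ClusterNetwork | Format-Table Name, State, Role
Get-ClusterNetworkInterface | Format-Table Node, Name, State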


When dealing with Azure Stack HCI (S2D) clusters, the core networks shouldn’t ever need to be changed. The only networks that should change are inside the fabric, where the Virtual Machines live.
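For reference, if a VM really does need a VLAN, tag the VM’s own network adapter instead of touching the host switch. Something like the following (the VM name and VLAN ID are just examples) leaves the management vNICs alone:

# Tag the VM's adapter, not the management OS adapters (VM name and VLAN ID are examples)
Set-VMNetworkAdapterVlan -VMName "TestVM01" -Access -VlanId 10 -Verbose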


I hope this helps if you ever run into an issue like this.

Happy hunting,

Dave