Setting up a 3-node Lenovo ThinkAgile cluster with Storage Spaces Direct (S2D) on Windows Server 2025 requires careful planning and adherence to best practices. This guide provides IT professionals with a detailed technical walkthrough—from preparing the nodes and updating firmware (using Lenovo’s Best Recipe guidelines) to automating configuration with PowerShell and validating the cluster. The goal is to create a reliable, highly available hyper-converged cluster that follows Lenovo and Microsoft’s recommendations.

Preparing Lenovo ThinkAgile Nodes: Best Practices

Before deploying the OS or configuring S2D, ensure all three ThinkAgile nodes are prepared consistently according to Lenovo’s design guidance:

  • Uniform Hardware Configuration: Use identical or certified compatible servers for all nodes. Microsoft recommends all cluster servers be the same manufacturer and model for consistency. This means identical CPU models, memory size, storage configuration, and network adapters across the three nodes. Consistency ensures balanced performance and simplifies support.
  • Storage Configuration: Each node should have a dedicated boot drive (e.g. SATA/SAS SSD or M.2) separate from the S2D data drives. The OS boot device can be a single SSD (RAID1 mirror is supported but not required) with at least ~200 GB capacity. All drives intended for S2D (NVMe, SSD, or HDD) must be in JBOD/HBA mode (no hardware RAID). Verify that any RAID controllers are set to pass-through so Windows can directly manage the disks. S2D supports direct-attached SATA, SAS, NVMe, and persistent memory drives; ensure no SAN or USB drives are used.
  • Networking Plan: Prepare a high-bandwidth, low-latency network for the cluster. 10 GbE is the minimum for small (2-3 node) S2D clusters, and 25 GbE (with RDMA) is recommended for larger or high-performance clusters. Each node should have at least two network interfaces for redundancy and throughput. Plan to dedicate or aggregate NICs for cluster traffic (using Switch Embedded Teaming or SMB Multichannel). If using RDMA-capable NICs (iWARP or RoCE), ensure the network switches (if any) are correctly configured for RDMA (PFC and ETS for RoCE), or use switchless direct connections for a small cluster.
  • Firmware and Driver Currency: Verify that each server’s BIOS, NIC firmware, drive firmware, and other component firmware are up-to-date and consistent across all nodes. In a cluster, network adapters, drivers, and firmware must be an exact match on all nodes to ensure features like switch-embedded teaming function properly. We will use Lenovo’s XClarity and Best Recipe (described below) to achieve a consistent firmware baseline.
  • Power and Cooling: Ensure adequate power (each node on redundant PSUs) and cooling for the servers, as S2D clusters can be resource-intensive. In BIOS, set the performance profile to “Maximum Performance” (disabling power saving features like CPU throttling or C-states) to reduce latency. Also, configure the System Profile/Workload Profile to “Virtualization” or “Hypervisor” if such an option exists in BIOS, as this tunes settings for consistent performance.
  • Security and UEFI: Use UEFI boot mode with Secure Boot enabled (if supported by your environment) for improved security. Enable the TPM 2.0 module on each node if you use features like BitLocker for S2D volumes. Verify that CPU virtualization extensions (Intel VT-x/VT-d or AMD-V) are enabled in the BIOS to support Hyper-V. Also, disable any unused peripheral devices in the BIOS to reduce boot time and potential interruptions.
  • Node OS Installation: Install Windows Server 2025 Datacenter on each node (Datacenter edition is required for S2D). Join all nodes to the same Active Directory domain. Before building the cluster, it’s best to apply the latest Windows updates or cumulative patches at this stage (to ensure all S2D-related hotfixes are present).
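
A minimal sketch of the rename and domain-join step is shown below; the node name, domain, and credentials are placeholders for your environment, and Windows updates are left to Windows Update, WSUS, or your patch-management tooling.

# Run in an elevated Windows PowerShell session on each node (adjust the name per node)
$domain = 'contoso.local'      # placeholder AD domain
$cred   = Get-Credential       # account permitted to join computers to the domain
Add-Computer -DomainName $domain -NewName 'Node1' -Credential $cred -Restart   # renames, joins, and reboots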

Following these preparation steps sets a solid foundation. With the hardware ready and the OS installed, we align the firmware and driver levels using Lenovo’s tools.

Firmware and Driver Updates with Lenovo XClarity Administrator (Best Recipe)

Lenovo provides a “Best Recipe” for ThinkAgile systems – a curated set of firmware and driver versions tested for reliability. Applying the Best Recipe to all nodes ensures the cluster firmware is at a Lenovo-certified baseline. Lenovo’s Best Recipe for ThinkAgile MX (Microsoft hyper-converged) solutions can be obtained from Lenovo’s support site. We will use Lenovo XClarity Administrator (LXCA) to apply these updates:

  1. Obtain the Best Recipe Package: Download the appropriate Best Recipe firmware bundle for your ThinkAgile model and target OS (Windows Server 2025 S2D) from Lenovo’s Data Center Support portal. The Best Recipe documentation will list the recommended firmware levels (for BIOS, XClarity Controller, drives, NICs, etc.) and driver versions for the ThinkAgile solution.
  2. Import into XClarity Administrator: Launch LXCA (if not already installed, deploy LXCA as a VM or on a management server) and add your three nodes for management (discover them via their XClarity Controller IPs and credentials). In LXCA’s Updates or Compliance section, import the Best Recipe firmware bundle (sometimes provided as an update repository file or a catalog that LXCA can connect to).
  3. Create Compliance Policy: Create a firmware compliance policy in LXCA using the imported Best Recipe as the baseline. Assign your three ThinkAgile nodes to this compliance policy. LXCA will inventory each server and compare current firmware levels to the Best Recipe baseline.
  4. Update Firmware: Initiate an Apply or Compliance Update job on the policy. XClarity will update each server’s firmware (BIOS, UEFI, drive, NIC, etc.) to match the Best Recipe. You can update one node at a time (to minimize multiple nodes rebooting simultaneously). LXCA will handle the update process, including any required host reboots.
  5. Driver Updates: Many device drivers (for Windows) are also part of Lenovo’s Best Recipe. You should update Windows drivers (NIC teams, storage controller drivers, etc.) to the recommended versions. LXCA can deploy drivers if an agent is installed, but a more straightforward method is to use Lenovo’s XClarity Essentials OneCLI or System Update tool within Windows. For example, you might download the Best Recipe’s Windows driver pack and use a script or OneCLI to install all drivers silently. Ensure all cluster nodes have identical driver versions for storage and network devices.
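
Before moving on, you can quickly confirm driver consistency across the nodes. The following sketch (node names are placeholders) compares network adapter driver versions; a similar query against Win32_PnPSignedDriver can be used for storage controllers.

# Compare physical NIC driver versions across all three nodes
$nodes = 'Node1','Node2','Node3'
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Get-NetAdapter -Physical |
        Select-Object @{n='Node';e={$env:COMPUTERNAME}}, Name, InterfaceDescription, DriverVersion, DriverDate
} | Sort-Object InterfaceDescription, Node | Format-Table -AutoSize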

By the end of this process, all three nodes should be running the firmware and driver versions specified in Lenovo’s Best Recipe. This aligns with Lenovo’s support matrix and satisfies Microsoft’s requirement for uniform firmware and driver versions across nodes in a clustered solution.

Tip: Record the firmware and driver versions applied (LXCA can generate a compliance report). It’s also a good practice to update the XClarity Controller (XCC) on each node to the latest version as part of firmware updates since XCC (BMC) improvements can affect system stability.

Automating Feature Installation and Configuration with PowerShell

With hardware and firmware standardized, the next step is configuring Windows Server features and settings for S2D – a task that can be automated using PowerShell scripts. Automation ensures that all nodes are configured identically, reducing the chance of human error. Below are the key configurations to automate:

  • Windows Features for S2D Cluster: Enable the required roles and features on each server: Failover Clustering, Hyper-V (if you will run VMs on the cluster), and other supporting features. Also consider installing the File Server role (needed only if you will run a Scale-Out File Server on the cluster) and the Data Center Bridging (DCB) feature if using RoCE-based RDMA. This can be done remotely on all nodes using a single Invoke-Command. For example:

$nodes = 'Node1','Node2','Node3'
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Install-WindowsFeature -Name Failover-Clustering, Hyper-V -IncludeManagementTools
    Install-WindowsFeature -Name FS-Data-Deduplication -IncludeManagementTools
    Install-WindowsFeature -Name RSAT-Clustering -IncludeAllSubFeature
    # (Optional: Enable Data Center Bridging for RDMA networks)
    Install-WindowsFeature -Name Data-Center-Bridging
}

The above script installs Failover Clustering and Hyper-V (including management tools such as the cluster GUI and PowerShell modules), enables Data Deduplication (applicable if you plan to deduplicate volumes on the S2D cluster), and adds DCB for RDMA networking. Note that installing Hyper-V requires a reboot of each node before the role is active.

  • Networking Configuration: Use PowerShell to set up consistent network settings on each node:
    • Rename Network Adapters to meaningful names (e.g., “MgmtNIC”, “StorageNIC1”, “StorageNIC2”) using Rename-NetAdapter for easier identification.
    • Enable Jumbo Frames on storage NICs if you use MTU 9014 for SMB (ensure the switches are set accordingly). For example:

Set-NetAdapterAdvancedProperty -Name "StorageNIC1" -DisplayName "Jumbo Packet" -DisplayValue "9014"
Set-NetAdapterAdvancedProperty -Name "StorageNIC2" -DisplayName "Jumbo Packet" -DisplayValue "9014"

    • Enable RDMA on the NICs (if applicable) using Enable-NetAdapterRdma -Name StorageNIC1 (for iWARP or RoCE v2 capable NICs). For RoCE, configure Priority Flow Control (PFC) and traffic classes via the Enable-NetQosFlowControl and New-NetQosPolicy cmdlets as needed (see the sketch after this list).
    • Set vSwitch or Teaming: If using Switch Embedded Teaming (SET) to combine two NICs for converged traffic, you can create the Hyper-V vSwitch via PowerShell:

New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "StorageNIC1","StorageNIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $true

Then, configure QoS on that switch (set host vs. VM min bandwidth, etc.) and assign VLANs if needed. Alternatively, keeping networks separate (one for cluster/storage, one for management) ensures the cluster network is isolated on its subnet/VLAN.
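
For a RoCE deployment, the DCB configuration on each node typically looks like the following minimal sketch; the priority value (3) and the 50% bandwidth reservation are common starting points rather than requirements, and the adapter names are placeholders.

# Tag SMB Direct (TCP port 445) traffic with priority 3 and reserve bandwidth for it
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7      # PFC only on the SMB priority
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "StorageNIC1","StorageNIC2"

Match the PFC priority and ETS settings on the physical switches; iWARP NICs generally do not require DCB configuration.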

  • Driver and BIOS Settings via Script: Some driver settings can be tuned via PowerShell or vendor utilities. For example, Storage QoS policies can be set after cluster creation if needed. BIOS settings generally require manual configuration or Lenovo tools (like OneCLI or LXCA). If not set earlier, consider scripting any BIOS updates via OneCLI:

# Example: Using Lenovo OneCLI to ensure BIOS settings (pseudo-code)
OneCli.exe config populate --bmc User:Pass@<NodeIP> --source bios_settings.ini

The above is conceptual. In practice, you could export a known-good BIOS profile to a file and then apply it to all nodes using OneCLI or XClarity, ensuring uniform settings like boot order and power profile.

Automating these steps ensures that each node has the required Windows features for S2D and consistent network configurations. By using scripts, you can re-run them as needed (idempotently) if you add nodes or need to reconfigure.

BIOS, Boot Order, and Network Setting Recommendations

Proper BIOS and system settings are crucial for S2D performance and cluster stability. Here are the recommendations to verify on each Lenovo ThinkAgile node:

  • BIOS Firmware Level: Confirm that the BIOS is updated (which should be done if the Best Recipe was applied). All nodes should run the same BIOS firmware version.
  • UEFI Boot and Boot Order: Use UEFI mode. Set the boot order so the primary boot device is the local OS drive (e.g., an SSD or RAID1 volume with Windows Server). Disable PXE network boot on data NICs (unless you plan to use PXE; even then, ensure it’s not first in order). If the servers support UEFI Secure Boot, it should be enabled for security (Windows Server 2025 supports Secure Boot).
  • BIOS Performance Settings: In the BIOS setup, disable energy-saving options that can reduce performance. For example, disable CPU C1E/C-states and set CPU Performance to Maximum Performance. Ensure Turbo Boost (if available) is enabled for bursts of performance. If an “HPC Mode” or “Latency Performance” profile exists, it can benefit low-latency storage operations. Also, ensure memory operates in performance mode rather than a power-saving mode.
  • Virtualization Features: Ensure Intel VT-x and VT-d (or AMD-V/Vi) are enabled for virtualization and device passthrough. Also, if applicable, enable “CPU XD/NX bit” and “Intel EPT,” as these support Hyper-V and security features.
  • Hyper-Threading: Hyper-Threading can remain enabled (default) as it generally improves throughput for Hyper-V workloads. There’s no need to disable it for S2D; Microsoft supports hyper-threaded cores in clusters.
  • NIC Configuration (BIOS/UEFI): Check if the BIOS has any settings for network adapters (for example, some BIOS allow enabling/disabling PXE or setting link speeds). Ensure all high-speed NICs are set to their maximum speed and that option ROMs (PXE/iSCSI boot ROMs) are only enabled on the NIC if needed for boot. If not booting from SAN, you can disable network boot ROMs to speed up POST.
  • Time Synchronization: Ensure each node’s BIOS clock is roughly correct (it will sync via domain later). It’s good to configure the Windows Time service on the domain for accurate time (important for cluster and quorum witness, especially if using a cloud witness).
  • OS Power Plan: Once Windows is installed, set the OS power plan to High Performance. This prevents the OS from throttling the CPU or parking cores. You can do this via the command line:

powercfg /setactivescheme SCHEME_MIN # SCHEME_MIN = High Performance GUID

Additionally, disable fast startup (to avoid any hybrid sleep issues on servers) and ensure the servers are set never to sleep/hibernate (which is the default for Windows Server).

  • Firewall and Security: Leave Windows Firewall on and rely on the built-in rules for Failover Clustering, SMB, and related traffic, which are enabled automatically when those features are installed; avoid turning the firewall off entirely. Also, verify that any antivirus or security software is cluster-aware (set proper exclusions for the cluster database, CSV volumes, etc., per vendor guidelines).
  • Network Binding Order: In Windows, ensure the management NIC is preferred for outbound domain traffic (traditionally by placing it at the top of the binding order). Also ensure the cluster/storage NICs do not register their addresses in DNS (as sketched below), so name resolution always points to the management interface.
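
A minimal sketch for the DNS-registration part, assuming the storage adapters were renamed as suggested earlier (adapter and node names are placeholders):

# Prevent the storage/cluster NICs from registering their IPs in DNS on every node
$nodes = 'Node1','Node2','Node3'
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Set-DnsClient -InterfaceAlias "StorageNIC1","StorageNIC2" -RegisterThisConnectionsAddress $false
}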

By fine-tuning the BIOS and OS settings described above, you prepare the environment to run S2D at its best. Many of these settings (UEFI mode, power profile, NIC speeds) directly affect performance and latency.

Cluster Creation and Validation

With all nodes prepared, updated, and configured, you can create the Windows Server 2025 Failover Cluster and enable Storage Spaces Direct. This section covers cluster formation, validation, and storage setup:

Validate Cluster Configuration: Before creating the cluster, run the cluster validation tests provided by Microsoft. This is a crucial step in catching any misconfiguration. Use the Failover Cluster Manager GUI (Validate Configuration Wizard) or PowerShell Test-Cluster cmdlet. For example:

Test-Cluster -Node Node1, Node2, Node3 -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

Ensure the validation report shows No Errors (only informational warnings at most). Microsoft’s guidance is that before proceeding, the fully configured cluster must pass all cluster validation tests (either via the wizard or Test-Cluster cmdlet). If any tests fail (for example, mismatched NIC settings or disk configuration issues), resolve them and re-run validation.

Create the Cluster: Once validation is successful, create the cluster (this will form the Windows Failover Cluster without S2D enabled yet). You can do this in PowerShell:


New-Cluster -Name "ThinkAgile-S2D-Cluster" -Node Node1,Node2,Node3 -StaticAddress 10.0.0.100 -NoStorage

Replace the cluster name and IP as appropriate for your environment. The -NoStorage flag prevents the cluster from automatically adding disks (we will enable S2D explicitly). After this, the cluster will be formed, and the nodes will become members of a failover cluster. In the Failover Cluster Manager, you should see the new cluster online.
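
You can also confirm the result from PowerShell (cluster name as used above):

# Confirm the cluster formed and all three nodes are up
Get-Cluster -Name "ThinkAgile-S2D-Cluster"
Get-ClusterNode -Cluster "ThinkAgile-S2D-Cluster" | Select-Object Name, State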

Configure Quorum (Witness): With three nodes, it’s highly recommended to configure a cluster witness for quorum. Even though three nodes already provide an odd number of votes, a witness helps maintain cluster availability during a node outage or maintenance. You can use a Cloud Witness (Azure) or a File Share Witness on a separate server. For example, to configure a cloud witness in PowerShell:

Set-ClusterQuorum -CloudWitness -AccountName "storage-account" -AccessKey "key" -Endpoint "endpoint"

# Or, for a file share witness:
Set-ClusterQuorum -FileShareWitness "\\WitnessServer\ShareName"

Ensure the file share witness (if used) is hosted outside the cluster and is accessible by all cluster nodes. A witness contributes a vote to help avoid split-brain scenarios.

Enable Storage Spaces Direct: Now, enable S2D on the cluster. This will aggregate the local disks from all nodes into a single storage pool. Use PowerShell:

Enable-ClusterStorageSpacesDirect

This command discovers all eligible disks on each node (those not used for the OS and otherwise blank/unpartitioned) and creates a clustered storage pool, named “S2D on <ClusterName>” by default. It also configures cache devices where appropriate (in a mixed SSD+HDD configuration, the SSDs become the cache tier; in a single-media all-flash configuration, S2D may not configure a cache at all). You can check the result with:

# Run on any cluster node: list the S2D pool and its member disks
Get-StoragePool -FriendlyName "S2D*"
Get-StoragePool -FriendlyName "S2D*" | Get-PhysicalDisk

Create Volumes (CSV): With the S2D pool ready, create volumes to use as cluster-shared volumes (CSVs). For example, to create a 2 TB volume with ReFS (recommended filesystem for S2D):


New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV01" -FileSystem CSVFS_ReFS -Size 2TB -PhysicalDiskRedundancy 2

This creates the volume (a three-way mirror, since -PhysicalDiskRedundancy 2 tolerates two failures), formats it with ReFS as a Cluster Shared Volume, and adds it to the cluster’s CSVs. Repeat for as many volumes as needed, keeping an eye on remaining pool capacity (see the check below).
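
A quick way to check remaining pool capacity before creating additional volumes (pool name pattern as created by Enable-ClusterStorageSpacesDirect):

# Show total, allocated, and remaining capacity of the S2D pool in TB
Get-StoragePool -FriendlyName "S2D*" |
    Select-Object FriendlyName,
        @{n='SizeTB';      e={[math]::Round($_.Size / 1TB, 1)}},
        @{n='AllocatedTB'; e={[math]::Round($_.AllocatedSize / 1TB, 1)}},
        @{n='FreeTB';      e={[math]::Round(($_.Size - $_.AllocatedSize) / 1TB, 1)}}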

Validate and Tune the Cluster: After creating volumes, it’s a good idea to rerun Test-Cluster, focusing on storage, to ensure everything is healthy:

Test-Cluster -Cluster "ThinkAgile-S2D-Cluster" -Include "Storage Spaces Direct", "Inventory"

# Verify the cluster's Storage Spaces Direct subsystem is healthy:
Get-StorageSubSystem Cluster* | Get-StorageHealthReport

All health indicators should be OK. Also, check Get-ClusterNetwork to ensure the cluster networks are categorized appropriately (with separate subnets, the cluster usually auto-designates one network for cluster/CSV traffic and another for client and management traffic); you can adjust the roles if needed, for example, to disable client access on the storage network, as sketched below.
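
A minimal sketch for adjusting a network’s role, assuming a cluster network named "Storage" (role values: 0 = none, 1 = cluster only, 3 = cluster and client):

# Restrict the storage network to cluster traffic only (no client access)
(Get-ClusterNetwork -Cluster "ThinkAgile-S2D-Cluster" -Name "Storage").Role = 1
Get-ClusterNetwork -Cluster "ThinkAgile-S2D-Cluster" | Select-Object Name, Role, Address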

Cluster Tips: According to Microsoft’s requirements, all components in the cluster should be certified for Windows Server and ideally carry the SDDC Premium certification (which ThinkAgile nodes do). This ensures compatibility with S2D. Also, remember that with three nodes, S2D uses three-way mirroring by default for volumes (dual parity requires 4+ nodes). A three-way mirror stores three copies of the data and can tolerate two drive failures or the loss of one node while keeping data online; the cluster itself, however, cannot survive two simultaneous node failures. Two-way mirroring remains an option on three nodes if capacity efficiency matters more, but it tolerates only a single drive or node failure.

Ensure the cluster is monitored via Windows Admin Center, System Center, or Lenovo XClarity Integrator so that any hardware issues (disk failures, etc.) are flagged immediately. Lenovo XClarity can also integrate with the cluster nodes to report hardware events into the Windows Event Log.

Finally, document your configuration: BIOS settings, network setup, names of networks, IPs, etc., for future reference. This will help troubleshoot or expand the cluster later.

Conclusion

Building a 3-node Lenovo ThinkAgile S2D cluster on Windows Server 2025 can be complex, but by following best practices and using automation you can achieve a robust and supportable deployment. We ensured all nodes were identically configured and up-to-date with Lenovo’s Best Recipe firmware and drivers. We then automated the installation of Windows features and applied consistent network and system settings across nodes. Critical BIOS configurations (UEFI, power, virtualization) were reviewed to maximize reliability and performance.

Throughout the process, we aligned with Lenovo’s guidelines (for hardware prep and updates) and Microsoft’s guidelines (for cluster configuration). Notably, Microsoft requires that all cluster hardware pass validation tests and use certified drivers/firmware—steps we’ve taken with XClarity and Test-Cluster. We also emphasized setting up a quorum witness and other cluster tunings to ensure the cluster remains up even during maintenance or failures.

By investing time in planning and automation, you reduce deployment time and minimize errors. The result is a highly available, performant 3-node S2D cluster that provides resilient storage and computing for your workloads. Leverage the code snippets and steps in this guide as a template for your deployment, customizing as needed for your specific ThinkAgile model and network environment.

Next Steps: Once the cluster is up, you can deploy VMs or roles. Consider using Windows Admin Center to manage the S2D cluster easily (it provides a dashboard for Storage Spaces Direct). Also, schedule a periodic firmware/driver review. Lenovo periodically updates Best Recipes, so you may use XClarity to keep the cluster firmware compliant over time.

By adhering to these best practices and using the available tools, you’ll ensure your Lenovo ThinkAgile S2D cluster is built on a solid foundation of automation, consistency, and vendor-certified configurations—ready to deliver reliable hyper-converged infrastructure for your organization.

Thanks,

Steve Labeau – Principal Consultant / Blogger