Storage Spaces Direct has come full circle since Oct/Nov 2016, when Microsoft launched Windows Server 2016, yet I still get asked why RDMA is better than regular TCP/IP and how I can prove it. Well, I can tell you that there have been extensive tests, but don't just take my word for it.
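In fact, you can check for yourself, right from PowerShell, whether RDMA is actually in play on your hosts. Here is a minimal sketch using the built-in cmdlets (the exact property names in your output may vary slightly by build):

# List adapters and whether RDMA is enabled on each
Get-NetAdapterRdma | Format-Table Name, Enabled

# Confirm the SMB client sees your interfaces as RDMA capable
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable

# With a workload running, check whether SMB Multichannel connections
# are RDMA capable on both the client and server side
Get-SmbMultichannelConnection | Format-Table ServerName, ClientRdmaCapable, ServerRdmaCapable

# The "SMB Direct Connection" performance counters only tick when RDMA
# is in use, so listing them is a quick sanity check
Get-Counter -ListSet "SMB Direct Connection"

If Get-SmbMultichannelConnection shows RDMA capable on both ends while traffic is flowing, SMB Direct is doing the work.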

There is an amazing write-up that just came out on the @MellanoxTech blog by Motti Beck (@MottiBeck), Sr. Director of Enterprise Market Development at Mellanox Technologies Inc. In his blog posts he goes deep into Storage Spaces Direct and RDMA over Converged Ethernet (RoCE).
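One point worth adding before the highlights: RoCE expects a lossless Ethernet fabric, which on Windows Server 2016 means configuring Data Center Bridging (DCB) with Priority Flow Control for the SMB traffic. A rough sketch of the host side follows; the adapter name, priority, and bandwidth reservation are example values only, and your switches need a matching PFC configuration:

# Install the Data Center Bridging feature
Install-WindowsFeature -Name Data-Center-Bridging

# Tag SMB Direct traffic (NetDirect port 445) with priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control for the SMB priority only
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve a share of bandwidth for SMB using ETS
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply QoS/DCB on the RDMA-capable adapter (name is an example)
Enable-NetAdapterQos -Name "SLOT 2 Port 1"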

I personally love deep-dive stuff, so here are some highlights from one of his blog posts, which can be found here:

https://www.mellanox.com/blog/2017/04/enabling-higher-azure-stack-efficiency-networking-matters/

How about being able to deliver 1 TB per second of bandwidth?


Well, this was demonstrated at Microsoft Ignite a few years back using Mellanox ConnectX-4 (CX-4) 100 GbE adapters and infrastructure.

Jose Barreto delivered a fascinating presentation at Microsoft's Ignite 2015, where he showed in real time the performance boost that RDMA enables. In a three-minute video, Barreto compared the performance of TCP/IP vs. RDMA (Ethernet vs. RoCE) and clearly showed that RoCE delivers almost two times higher bandwidth and two times lower latency than Ethernet at 50 percent of the CPU utilization required for the data communication task.
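You can reproduce a rough version of that comparison yourself with Microsoft's DISKSPD tool against an SMB share, toggling RDMA on the client adapter between runs. A quick sketch; the adapter name, server, share, and test parameters below are placeholders, not tuned values:

# Baseline over plain TCP/IP: turn RDMA off on the client adapter
Disable-NetAdapterRdma -Name "SLOT 2 Port 1"
.\diskspd.exe -c10G -b512K -d60 -t4 -o8 -w0 -Sh \\s2d-node1\share\test.dat

# Same run again with RDMA (SMB Direct) re-enabled
Enable-NetAdapterRdma -Name "SLOT 2 Port 1"
.\diskspd.exe -c10G -b512K -d60 -t4 -o8 -w0 -Sh \\s2d-node1\share\test.dat

Compare the throughput and latency sections of the two reports, and watch CPU while each run is going; the drop in CPU cost per byte moved is usually the most striking part.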

I'm not sure if you know this or not, but Microsoft is using Storage Spaces Direct for their sealed AzureStack deployments. This hyperconverged sealed configuration uses the same technology that is embedded in the Windows Server 2016 kernel. So, if you think this isn't being tested by the biggest integrators in the world, you are dead wrong. HPE, Dell, Lenovo, Cisco, and more are feverishly getting their AzureStack configurations ready for market, and I feel we will likely see a launch sometime around MS Ignite 2017. That is still unconfirmed and only a guess on my part, but it feels like the right timing.

That being said, all of this innovation happening around storage at Microsoft is paving the way for some really cool solutions. I am incredibly blessed to be able to work with the Microsoft Hyper-V, Storage, and Networking product teams and see the complete picture as part of my Microsoft MVP designation.

Here is another snip from Motti’s blog post mentioned above.

Immediately after the release of Windows Server 2012, several papers were published, all demonstrating the higher efficiency of the solution, including “Achieving Over 1-Million IOPS from Hyper-V VMs in a Scale-Out File Server Cluster Using Windows Server 2012 R2” and “Optimizing MS-SQL AlwaysOn Availability Groups With Server SSD.” All showed the advantages of using Mellanox's RDMA-enabled network solution in scale-out deployments.

However, Microsoft continued to develop and enhance their Storage Spaces features and capabilities, and in 2016, in the Windows Server 2016 release, they added support for hyperconverged systems, a solution that uses Software-Defined Storage (SDS) to run compute and storage on the same servers by using Storage Spaces over RDMA-enabled networks (Storage Spaces Direct, or S2D).
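For context, getting the hyperconverged flavor of S2D running on Windows Server 2016 is only a handful of cmdlets once the hardware and networking are in place. A bare-bones sketch, with made-up cluster, node, and volume names:

# Validate the nodes, including the Storage Spaces Direct tests
Test-Cluster -Node s2d-node1, s2d-node2, s2d-node3, s2d-node4 -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"

# Build the cluster without claiming any shared storage
New-Cluster -Name S2D-Cluster -Node s2d-node1, s2d-node2, s2d-node3, s2d-node4 -NoStorage

# Enable S2D, which pools the local drives from every node
Enable-ClusterStorageSpacesDirect

# Carve out a cluster shared volume for the VMs
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName Volume1 -FileSystem CSVFS_ReFS -Size 2TB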

Lastly, the most exciting news is that Mellanox has become the first to pass the Microsoft Server Software Defined Data Center (SDDC) Premium certification. This means you can trust that the drivers, configurations, and performance will live up to what Mellanox is telling you, as they have been fully verified by Microsoft as part of this certification.

Here is the last snip from Motti's blog post referencing this.

A couple of weeks ago, Mellanox's ConnectX®-3/ConnectX-3 Pro and ConnectX-4/ConnectX-4 Lx NICs became the first to pass Microsoft Server Software Defined Data Center (SDDC) Premium certification for Microsoft Windows at all standard Ethernet speeds, meaning 10, 25, 40, 50, and 100 GbE. This was the latest significant milestone, and a crucial one, in the journey that Microsoft and Mellanox started more than six years ago to enable our networking hardware to deliver the most efficient solutions for the new Windows Server and Azure Stack-based deployments. These ConnectX NICs have already been certified by the world's leading server OEMs (HPE, Dell, Lenovo, and others¹), and when deployed with the most advanced switches and cables, like Mellanox's Spectrum switch and LinkX copper and optical cables, they have been proven to provide the most efficient Azure Stack solutions.


I highly recommend following @MellanoxTech and @MottiBeck, as they are great sources of knowledge on RDMA, RoCE, Storage Spaces Direct, and AzureStack.

Thanks, and happy learning,


Dave