I recently had the chance to work on the configuration of a @Cisco Meraki farm. I was brought in to troubleshoot significant latency on the network: our hyper-converged servers were constantly dropping packets, and all around it wasn’t a very pleasant experience for the customer.

 

Upon initial investigation, it appeared that the consulting company that implemented the switches had left them at a pretty much stock configuration. When Meraki switches are unboxed and brought online, every port is configured as a trunk port with native VLAN 1, and that was the first thing that caught my eye.
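If you want to see how many ports are still sitting at that out-of-box default without clicking through every switch, something like the following Python sketch against the Meraki Dashboard API v1 will do it. Treat it as a sketch: the API key and switch serials are placeholders, and the field names I check (type and vlan) are my reading of the switch ports endpoint, so verify them against the current API documentation.

```python
# Sketch: audit Meraki switch ports still at the out-of-box default
# (trunk with native VLAN 1) via the Meraki Dashboard API v1.
# Assumptions: API_KEY and SWITCH_SERIALS are placeholders you supply, and the
# "type"/"vlan" field names reflect my reading of the switch ports endpoint.
import requests

API_KEY = "your-dashboard-api-key"          # placeholder
SWITCH_SERIALS = ["Q2XX-XXXX-XXXX"]         # placeholder switch serials
BASE_URL = "https://api.meraki.com/api/v1"
HEADERS = {"X-Cisco-Meraki-API-Key": API_KEY, "Content-Type": "application/json"}

def default_trunk_ports(serial):
    """Return ports on a switch that look like the stock config: trunk, native VLAN 1."""
    resp = requests.get(f"{BASE_URL}/devices/{serial}/switch/ports", headers=HEADERS)
    resp.raise_for_status()
    return [p for p in resp.json() if p.get("type") == "trunk" and p.get("vlan") == 1]

for serial in SWITCH_SERIALS:
    for port in default_trunk_ports(serial):
        print(f"{serial} port {port['portId']}: trunk, native VLAN 1 (default)")
```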

 

 

The next thing I noticed was that when I checked the Switch Stack there were a LOT of drops and flapping ports. So I checked the Switches node in the dashboard and found that the switches themselves appeared to be going up and down quite a bit.
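Rather than watching the dashboard, you can also pull device statuses from the Dashboard API and run it on a schedule so the flapping switches stand out. This is only a rough sketch: the organization ID is a placeholder, and the statuses endpoint and its fields are my assumptions about the v1 API, so double-check the docs.

```python
# Sketch: pull device statuses for the organization to spot switches that
# keep going offline, instead of eyeballing the dashboard.
# Assumptions: ORG_ID is a placeholder, and the statuses endpoint and field
# names reflect my reading of Dashboard API v1.
import requests

API_KEY = "your-dashboard-api-key"   # placeholder
ORG_ID = "123456"                    # placeholder organization ID
BASE_URL = "https://api.meraki.com/api/v1"
HEADERS = {"X-Cisco-Meraki-API-Key": API_KEY}

resp = requests.get(f"{BASE_URL}/organizations/{ORG_ID}/devices/statuses", headers=HEADERS)
resp.raise_for_status()

# Flag anything that is not solidly online; running this repeatedly
# (e.g. from cron) makes flapping devices stand out quickly.
for device in resp.json():
    if device.get("status") != "online":
        print(f"{device.get('name') or device.get('serial')}: {device.get('status')}")
```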

 

 

A cool little trick when using Meraki Infrastructure is the ability to hover over objects to get more information.

 

 

It turns out that Meraki recommends always using 8.8.8.8 and 8.8.4.4 for the DNS configuration on each piece of core infrastructure. Meraki devices are required to check in to the cloud for their configuration, so if there are issues resolving or reaching the internet, the infrastructure can start to flap.

 

In this case the switches were pointed at internal DNS servers, and that is where a lot of the issues were coming from.
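A quick way to prove the point is to ask each DNS server to resolve the Meraki cloud and see who answers. Here is a small Python sketch that does that; it assumes the dnspython package is installed, uses a placeholder internal DNS address, and uses dashboard.meraki.com purely as a representative hostname.

```python
# Sketch: check whether each DNS server can resolve the Meraki cloud, so you
# can compare the internal DNS servers against 8.8.8.8 / 8.8.4.4.
# Assumptions: requires dnspython, INTERNAL DNS address is a placeholder, and
# dashboard.meraki.com is used only as a representative hostname.
import dns.resolver  # pip install dnspython

TEST_HOST = "dashboard.meraki.com"
SERVERS = {
    "internal": "10.0.0.10",   # placeholder internal DNS server
    "google-1": "8.8.8.8",
    "google-2": "8.8.4.4",
}

for label, server in SERVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    resolver.lifetime = 3  # fail fast so a broken resolver is obvious
    try:
        answers = resolver.resolve(TEST_HOST, "A")
        print(f"{label} ({server}): OK -> {[a.to_text() for a in answers]}")
    except Exception as exc:
        print(f"{label} ({server}): FAILED -> {exc}")
```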

 

One of my favourite parts of the Meraki user interface is probably the Topology view. This made it quite easy for us to look at how things were configured.

 

 

Now that we have had a lap around this Meraki infrastructure, I looked for a feature we need for Storage Spaces Direct: Priority Flow Control. Where is it? Well… I don’t think the Meraki switches support it, so we will have to treat them as regular old Layer 2 switches.

As you may have read, in many of our posts we suggest using the Mellanox ConnectX-3 or ConnectX-4 network adapters. Those rely on RoCE, which needs DCB/PFC on the switches, so it looks like we will be forced to use Chelsio iWARP adapters if we want to configure this solution for Storage Spaces Direct.

 

Here is a bit more information on the Chelsio T5 iWARP 10/40GbE adapters, courtesy of Intel and Chelsio:

Redlining Windows Performance: 5M IOPs Storage Spaces Direct and Client RDMA

 

Chelsio T5 iWARP 10/40GbE adapter solutions enable streamlined, simplified configuration of high-performance RDMA networking for Windows Server 2016 Storage Spaces Direct (S2D). iWARP’s ability to work with any standard L2 switch without depending on DCB or PFC or ETS, enables an immediate plug-and-play deployment without requiring a concurrent switch upgrade. In addition, T5 can bring the performance benefits of RDMA to Windows 10, thus enabling superior client-server performance using the existing datacenter infrastructure.

 

We’re very excited to share major milestones recently achieved by our T5 iWARP solution for Windows Server 2016 Storage Spaces Direct.

 

Chelsio and Microsoft collaborated to demonstrate 5M IOPs for S2D. This establishes the redline performance in cloud storage offering with Windows platforms.

 

Intel endorsed Chelsio iWARP as the preferred plug-and-play solution to use with Windows Server 2016 S2D among the various RDMA alternatives and published a blog on high IOPS performance achieved with Chelsio iWARP RDMA capable adapters and S2D.

 

Chelsio announced the release of signed, WHQL Certified drivers for Windows Server 2016. This comprehensive offering includes SMB-Direct, Storage Spaces Direct (S2D), Nano Server, Storage Replica, Network Direct, VxLAN, NVGRE, and SRIOV offload support. Chelsio has also released support for Windows 10 Enterprise Client RDMA enabling high-performance workstation-to-storage access.

Now, for the port configuration, I would recommend configuring the ports for Storage Spaces Direct as trunk ports and leaving them set to Trunk All. That way the configuration coming from the servers is super easy, especially when we set up our SET teams in Windows Server 2016.
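If you would rather script that than click through each port, here is a rough sketch using the Dashboard API v1. The switch serial, port IDs, and native VLAN are placeholders, and the payload fields are my assumptions about the switch port endpoint, so confirm them before running it.

```python
# Sketch: set the ports facing the Storage Spaces Direct hosts to trunk with
# all VLANs allowed, per the recommendation above, via Dashboard API v1.
# Assumptions: serial, port IDs, and native VLAN are placeholders, and the
# payload field names reflect my reading of the switch port endpoint.
import requests

API_KEY = "your-dashboard-api-key"   # placeholder
SWITCH_SERIAL = "Q2XX-XXXX-XXXX"     # placeholder switch serial
S2D_PORTS = ["1", "2", "3", "4"]     # placeholder port IDs facing the S2D hosts
BASE_URL = "https://api.meraki.com/api/v1"
HEADERS = {"X-Cisco-Meraki-API-Key": API_KEY, "Content-Type": "application/json"}

payload = {
    "type": "trunk",
    "vlan": 10,              # placeholder native VLAN for the hosts
    "allowedVlans": "all",   # "Trunk All" so the SET team can carry any tagged VLAN
}

for port_id in S2D_PORTS:
    resp = requests.put(
        f"{BASE_URL}/devices/{SWITCH_SERIAL}/switch/ports/{port_id}",
        headers=HEADERS,
        json=payload,
    )
    resp.raise_for_status()
    print(f"Port {port_id} updated: {resp.json()}")
```

With the switch side left wide open like that, the VLAN tagging is handled by the SET team and the virtual NICs on the hosts, which keeps the Meraki configuration hands-off.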

As always, hope you enjoy,

 

Thanks,

 

Dave