Sunday, April 21, 2013

The art of cabling


The challenge of organising your cables behind your TV is nothing compared to that of a large computing cluster.

One of our standard racks contains 12 Dell R510 servers (for storage) and 6 Dell C6100 chassis (providing 24 compute nodes). All 36 nodes are connected with a 10 Gb (SFP+), a 1 Gb (backup) and a 100 Mb (IPMI) network cable, each running to one of three network switches at the top of the rack. In addition the 18 "boxes" need a total of 36 power connections. That makes 144 cables per rack!
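
For the curious, the count works out as simple arithmetic. A minimal Python sketch, using only the figures above:

```python
# Cable count for one standard rack, using the figures above.
NODES = 12 + 24            # 12 R510 storage nodes + 24 C6100 compute nodes
BOXES = 12 + 6             # 12 R510 boxes + 6 C6100 chassis
NETWORKS = 3               # 10 Gb SFP+, 1 Gb backup, 100 Mb IPMI
PSUS_PER_BOX = 2           # dual power feeds per box

network_cables = NODES * NETWORKS      # 36 * 3 = 108
power_cables = BOXES * PSUS_PER_BOX    # 18 * 2 = 36

print(network_cables + power_cables)   # 144 cables per rack
```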


How to cope? Separate the network cables from the power cables, a possible source of noise. Use different colour cables for the different traffic types and give each cable a unique ID number (see the sketch below). Use loose, removable cable ties. And when a cable breaks, don't remove it, just add a new one.
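
Unique IDs only help if they are systematic. As an illustration (the label format here is hypothetical, not our actual scheme), a short Python sketch that generates one label per cable, encoding the rack, the network and the port:

```python
# Hypothetical label generator: one unique ID per cable,
# e.g. "R07-10G-23" = rack 7, 10 Gb network, port 23.
NETWORKS = {"10G": 36, "1G": 36, "IPMI": 36}  # cables per network, per rack

def rack_labels(rack: int) -> list[str]:
    return [
        f"R{rack:02d}-{net}-{port:02d}"
        for net, count in NETWORKS.items()
        for port in range(1, count + 1)
    ]

print(rack_labels(7)[:3])  # ['R07-10G-01', 'R07-10G-02', 'R07-10G-03']
```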

The 10 Gb switches, in our case Dell S4810s, connect using four 40 Gb QSFP+ cables to two Dell Z9000 core switches. Having two core switches allows us to take one unit out of service without downtime (we use the VLT protocol and it works!). However, this does add cable complexity. The backup 1 Gb switches connect to each other in a daisy chain using 10 Gb CX4 cables, left over from before our 10 Gb upgrade. Finally, the IPMI switches connect to a front-end switch using 1000BASE-T cables.
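
Those four uplinks also set the rack's oversubscription ratio. A quick back-of-the-envelope check in Python, assuming the four QSFP+ cables are the rack's total uplink capacity:

```python
# Oversubscription at the top-of-rack 10 Gb switch, from the
# figures above: 36 node-facing 10 Gb ports vs 4 x 40 Gb uplinks.
downlink_gb = 36 * 10   # 360 Gb/s towards the servers
uplink_gb = 4 * 40      # 160 Gb/s towards the Z9000 cores

print(f"oversubscription {downlink_gb / uplink_gb:.2f}:1")  # 2.25:1
```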




The picture shows the inter-switch links. Visible are the orange 40 Gb connections and the blue 10 Gb CX4 cables. In addition, each 40 Gb cable has an ID indicating which rack it comes from and which core switch it's going to.



We have one rack full of critical, world-facing servers. These servers need to be available at all times, which makes it very difficult to reorganise the cabling. As a result, over time, as we add and remove servers, the cabling becomes a mess. This is starting to become a risk! We are just going to have to accept some downtime to sort it out in the near future.


