Flux provides high-speed Ethernet connections to locations elsewhere on- or off-campus using a hardware-based gateway between its high-speed InfiniBand network and the campus Ethernet network. There is no additional cost to use the gateway beyond the cost of using Flux.
|Schematic diagram of the connection between Flux nodes and the U-M campus network showing the connection speeds and paths|
The most common use case for this high-bandwidth connection to Flux nodes is access to NFS-based storage, although any high-bandwidth network demand or application will work through the gateway. Our example is bandwidth to NFS storage: a researcher in the School of Public Health has a high-bandwidth NFS file server connected to the campus backbone at 10Gbps. He also has a Flux allocation and asked us to ensure that the network path from his compute jobs to his storage server runs through the InfiniBand-to-Ethernet gateway.
Once the configuration, which is available to anyone using Flux, was in place, his compute jobs read data from his NFS server at between 4.6Gbps and 9.2Gbps, approaching the maximum speed of the 10Gbps interface on the file server.
|Network traffic approaching 10Gbps in January 2014; this traffic was between Flux nodes and one file server attached at 10Gbps|
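If you want a rough estimate of the throughput your own jobs see from NFS storage, a timed sequential read is enough. The sketch below is illustrative and not part of our configuration; on Flux you would point it at a file on your NFS mount, but here it reads a locally generated temporary file so it runs anywhere.

```python
import os
import tempfile
import time

def measure_read_gbps(path, block_size=1 << 20):
    """Read a file sequentially and return throughput in Gbps."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total * 8) / elapsed / 1e9  # bytes -> bits -> Gbps

# On Flux this path would be a file on your NFS mount; for a
# self-contained demo we generate a 64 MiB temporary file instead.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(64 * 1024 * 1024))
    test_path = tmp.name

gbps = measure_read_gbps(test_path)
print(f"{gbps:.2f} Gbps")
os.remove(test_path)
```

Note that a single `dd`-style sequential read like this measures one stream; aggregate job traffic across many nodes, as in the graph above, can be higher.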
The gateway hardware balances traffic across the two 10Gbps links to the campus network, ensuring that there is no bottleneck between Flux and the campus network.
|Balanced network traffic to the 10Gbps Ethernet connections to campus minimizes network bottlenecks|
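The balancing happens inside the gateway hardware. Conceptually, this kind of link aggregation typically uses per-flow hashing: each flow's address/port tuple is hashed to pick an uplink, so all packets of a flow stay on one link and arrive in order, while distinct flows spread across the links. The following toy sketch shows the idea only; it is not the BX5020's actual algorithm, and the link names and addresses are invented for illustration.

```python
import hashlib

# Two uplinks to the campus network (names are illustrative only).
LINKS = ["10g-link-0", "10g-link-1"]

def pick_link(src_ip, dst_ip, src_port, dst_port):
    """Hash a flow's 4-tuple to one uplink, keeping the flow's packets in order."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return LINKS[digest[0] % len(LINKS)]

# Eight NFS flows (different source ports) from one client to one server:
flows = [("10.0.0.1", "192.0.2.10", 40000 + i, 2049) for i in range(8)]
assignments = [pick_link(*f) for f in flows]
print(assignments)
```

Because the hash is deterministic, repeated packets of the same flow always map to the same link; with many concurrent flows, traffic spreads across both uplinks, which is why the two graphs above track each other closely.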
The InfiniBand-to-Ethernet gateway is a physical appliance from Mellanox, the BX5020. The BX5020 supports up to twelve 10Gbps Ethernet connections, so there is sufficient network bandwidth for a large number of connections to other network locations on campus.
If you are interested in making use of this high-bandwidth Ethernet connection, please let us know at firstname.lastname@example.org.