Using 1 GbE and 10 GbE Networks
1 Gbps Ethernet networks can deliver 110-120 MBps, which is close to the sequential I/O performance of a single drive. Since several drives on a single server can deliver higher aggregate throughput than a single 1 Gbps Ethernet link, networking may become a bottleneck.
However, in real-life applications and virtualized environments, sequential I/O is not common (it is mostly limited to backups), and most I/O operations are random. Thus, typical HDD throughput is usually much lower, close to 10-20 MBps, according to statistics accumulated from hundreds of servers by a number of major hosting companies.
Based on these two observations, we recommend using one of the following network configurations (or better):
- One 1 Gbps link per each two HDDs on the Hardware Node. Even if you have only one or two HDDs on a Hardware Node, two bonded network adapters are still recommended for better reliability (see Setting Up Network Bonding).
- One 10 Gbps link per Hardware Node for maximum performance.
The table below illustrates how these recommendations may apply to a Hardware Node with 1 to 6 HDDs:
| HDDs | 1 GbE Links | 10 GbE Links |
|---|---|---|
| 1 | 1 (2 for HA) | 1 (2 for HA) |
| 2 | 1 (2 for HA) | 1 (2 for HA) |
| 3 | 2 | 1 (2 for HA) |
| 4 | 2 | 1 (2 for HA) |
| 5 | 3 | 1 (2 for HA) |
| 6 | 3 | 1 (2 for HA) |
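The 1 GbE column in this table simply follows the "one link per two HDDs" rule, rounded up. The short shell sketch below illustrates the calculation; the HDD count is only an example value:

```
# Recommended number of 1 GbE links: one per two HDDs, rounded up.
hdds=5
links=$(( (hdds + 1) / 2 ))
echo "${hdds} HDDs -> ${links} x 1 GbE link(s)"
```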
Note the following:
- For maximum sequential I/O performance, we recommend using one 1 Gbps link per each hard drive, or one 10 Gbps link per Hardware Node.
- It is not recommended to configure 1 Gbps network adapters to use non-default MTUs (e.g., 9000-byte jumbo frames). Such settings require additional switch configuration and often lead to human error. 10 Gbps network adapters, on the other hand, need to be configured to use jumbo frames to achieve full performance (see the example after this list).
- For maximum efficiency, use the `balance-xor` bonding mode with the `layer3+4` hash policy. If you want to use the `802.3ad` bonding mode, also configure your switch to use the `layer3+4` hash policy. A configuration sketch follows this list.
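The snippets below illustrate both notes. They are minimal sketches only: the interface names (`eth0`, `eth1`, `bond0`), the IP address, and the file paths follow common RHEL-style conventions and are assumptions rather than values from this guide; adapt them to your environment and make sure the switch ports are configured to match.

Enabling a 9000-byte MTU on a 10 GbE adapter for the current session:

```
# Switch an example 10 GbE interface to jumbo frames.
ip link set dev eth0 mtu 9000
# Verify the change.
ip link show dev eth0
```

A bonded interface using the `balance-xor` mode with the `layer3+4` transmit hash policy:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (example path, device name, and address)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.0.10
NETMASK=255.255.255.0
# balance-xor mode with the layer3+4 transmit hash policy
BONDING_OPTS="mode=balance-xor xmit_hash_policy=layer3+4 miimon=100"
```

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for the second adapter, eth1)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
```

If you use the `802.3ad` mode instead, set `mode=802.3ad` in `BONDING_OPTS` and configure the corresponding link aggregation group and the `layer3+4` hash policy on the switch.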