I’ve been trying to find a good use for the second network adapter in my fileserver, and what better use than increasing throughput to the network. Using something called bonding, it’s possible to combine two physical network adapters into one logical bonded adapter. A bonded adapter provides several advantages, such as failover should one link fail, but more interesting for me, increased throughput. There are several different methods that can be used to bond adapters together. The best way to double the throughput of the bonded adapters is to use the IEEE 802.3ad protocol; however, this requires support on the switch. 802.3ad support is generally limited to managed switches and is not available on my unmanaged Netgear JGS516. The next best thing is to use a special mode of the Linux bonding driver called balance-alb. The other modes supported by the kernel bonding driver provide things such as failover.

I’ve been told that the following method does not work with all network adapters. Specifically, the driver needs to be able to change the MAC address on the fly, which not all drivers support. I can say that it works with the r8169 (Realtek) driver and the Intel drivers (e100, e1000, etc.). This post is written with Gentoo in mind but should apply to other distributions.
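As a quick sanity check, ethtool will tell you which driver an adapter is using (eth0 here is just an example, and the output below is abbreviated):

# ethtool -i eth0
driver: r8169
version: ...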
First thing you need to do is enable bonding in the kernel. Be sure to compile it as a module since we will need to pass arguments to the module when it is loaded:
Device Drivers --->
    Network device support --->
        <M> Bonding driver support
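If you’re not sure whether your current kernel already has this enabled, grepping the kernel config should tell you (this assumes your config is at /usr/src/linux/.config; CONFIG_BONDING=m means it is built as a module):

# grep CONFIG_BONDING /usr/src/linux/.config
CONFIG_BONDING=m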
Now set the module to be loaded at boot:
# nano /etc/modules.autoload.d/kernel-2.6

bonding mode=balance-alb miimon=100
The two options above are important. mode=balance-alb (adaptive load balancing) specifies which bonding mode to use. There are several different modes available here, but balance-alb provides the functionality I want since my switch does not support 802.3ad bonding. The second option, miimon=100, tells the driver to check the MII link status of each adapter every 100ms. This provides the failover should one adapter fail.
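For what it’s worth, on distributions that don’t have /etc/modules.autoload.d, the same parameters can usually be passed through modprobe’s configuration instead; something along these lines should be equivalent (the file name is arbitrary):

# nano /etc/modprobe.d/bonding.conf

alias bond0 bonding
options bonding mode=balance-alb miimon=100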
Now emerge ifenslave, a userland tool to bond the interfaces:
# emerge -av ifenslave
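If you just want to see the bond work before committing to any config files, ifenslave can also be used by hand, roughly like this (the address and interface names are examples, and this setup won’t survive a reboot):

# modprobe bonding mode=balance-alb miimon=100
# ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
# ifenslave bond0 eth0 eth1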
Next, we need to configure the network interfaces:
# nano /etc/conf.d/net

slaves_bond0="eth0 eth1"
config_bond0=( "dhcp" )
config_eth0=( "null" )
config_eth1=( "null" )
This bonds eth0 and eth1 to form the bond0 interface. You can bond more than two adapters as necessary. The bond0 interface is configured to use DHCP to automatically obtain an IP address. The config_eth0=( "null" ) and config_eth1=( "null" ) lines prevent the individual adapters from getting an IP address of their own, since we only want bond0 to get an IP.
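If you’d rather give bond0 a static address instead of DHCP, I believe the equivalent /etc/conf.d/net entries would look something like this (the addresses are examples):

config_bond0=( "192.168.1.10 netmask 255.255.255.0" )
routes_bond0=( "default via 192.168.1.1" )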
Now let’s create the startup script, have it start automatically on boot, and stop the individual adapters from starting at boot:
# ln -sf /etc/init.d/net.lo /etc/init.d/net.bond0
# rc-update add net.bond0 default
# rc-update del net.eth0
# rc-update del net.eth1
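If you want to try it out without rebooting, you should be able to stop the individual interfaces, start the bond directly, and then check its status through /proc (the exact output varies with kernel version, but it should show balance-alb as the mode and both slaves as up):

# /etc/init.d/net.eth0 stop
# /etc/init.d/net.eth1 stop
# /etc/init.d/net.bond0 start
# cat /proc/net/bonding/bond0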
That should be it. After a reboot, you’ll have bonded adapters. I used a program called netio to measure my network speeds. Prior to bonding, I could read at about 60-80MB/sec from the fileserver over a gigabit network. With bonded adapters, that increased to about 120-150MB/sec, quite an improvement! Write speeds saw a significant improvement as well.

Note that many hard drives will not be able to read/write at the same speed as the network adapters. If your adapters can push out 150MB/sec, it doesn’t help you if your hard drive can only read at 60MB/sec. Internally, the RAID 5 array in my fileserver can read at up to 250MB/sec according to hdparm, so the array can more than keep up with the bonded adapters. If you have an older computer with dual 10/100 adapters, bonding can give you a nice boost to throughput that most hard drives will still be able to keep up with.
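If you don’t have netio handy, iperf measures roughly the same thing; run the server end on the fileserver and point a client at it (the address is an example):

# iperf -s                  (on the fileserver)
# iperf -c 192.168.1.10     (on a client)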
References and additional information:
Boost Reliability with Ethernet Bonding and Linux