Tue Oct 19, 2010 2:58 am
Generally speaking, bonding happens at a lower level and can potentially load-balance more smoothly. It should generally also guarantee in-order packet delivery.
Bonding can load-balance at a sub-packet level and increase throughput for a single stream of data. Bonded interfaces appear as one higher-bandwidth interface at layer 2, which can simplify routing and so on, in turn reducing CPU usage (though the bonding itself might increase CPU usage if there isn't hardware support for it).
Note that I said "can"; some bonding methods may not do some or any of those things.
The other methods either can't give full bandwidth to a single stream (because that stream will use only one of the paths), or risk out-of-order packet delivery (which shouldn't actually be a problem in theory for most things, but many stacks don't handle it well). If you're using multiple reliable interfaces of the same speed and latency, you mostly won't have out-of-order problems though (although smaller packets could arrive before larger ones if they were sent at nearly the same time).
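To see why the per-flow methods can't speed up a single stream, here's a rough Python sketch (my own toy example, not any particular implementation) of hash-based path selection: every packet of a given flow hashes to the same link, so one TCP stream never gets more than one link's worth of bandwidth.

    import hashlib

    def pick_link(src_ip, dst_ip, src_port, dst_port, num_links):
        # Hypothetical per-flow balancer: hash the flow key and pin the
        # whole flow to one link. Ordering is preserved, but a single
        # stream can never use more than one link's bandwidth.
        key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
        return int(hashlib.md5(key).hexdigest(), 16) % num_links

    # Every packet of this one TCP stream hashes to the same link index:
    print(pick_link("10.0.0.1", "10.0.0.2", 40000, 80, 3))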
Sub-packet balancing, simple example: you bonded 3 interfaces of equal speed. The bonding will split each packet into thirds and send a third of the packet down each of the 3 interfaces; the thirds are then reassembled on the other end and sent out the bonded interface as a complete packet. What happens when the interfaces aren't of equal speed and/or have high jitter, and/or the packet isn't evenly divisible by 3, is left as an exercise for the reader.
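Roughly what that looks like in toy code, assuming equal-speed links and a payload that divides evenly (the split_packet/reassemble names are mine, not any real bonding driver's API):

    def split_packet(packet, num_links):
        # Stripe one packet across num_links equal-speed links.
        # Assumes the length divides evenly; real drivers have to cope
        # with remainders, unequal link speeds and jitter.
        chunk = len(packet) // num_links
        return [packet[i * chunk:(i + 1) * chunk] for i in range(num_links)]

    def reassemble(fragments):
        # Receiving side: glue the pieces back together in link order and
        # hand the whole packet up as if it arrived on one interface.
        return b"".join(fragments)

    pkt = b"x" * 1500                      # 1500-byte packet, 3 links
    frags = split_packet(pkt, 3)           # 500 bytes down each link
    assert reassemble(frags) == pkt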
Note that the packet still takes the same amount of time to start appearing at the other end (the speed of light being a constant), but the *end* of the packet arrives sooner, meaning the next packet can be sent sooner.
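Back-of-the-envelope numbers, with figures I picked just to show the effect (three 100 Mbit/s links, a 1500-byte packet): the propagation delay doesn't change, but the serialization time drops to a third, so the link is free for the next packet that much sooner.

    LINK_MBPS = 100        # assumed per-link speed (my number, just for illustration)
    PACKET_BYTES = 1500
    NUM_LINKS = 3

    def serialization_us(nbytes, mbps):
        # Time to clock the bits onto one link, in microseconds
        # (mbps = megabits/s = bits/microsecond).
        return nbytes * 8 / mbps

    print(serialization_us(PACKET_BYTES, LINK_MBPS))              # 120 us on one link
    print(serialization_us(PACKET_BYTES / NUM_LINKS, LINK_MBPS))  # 40 us per link when striped 3 ways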