By default, many Linux network interface card (NIC) drivers leave the SMP affinity masks of their IRQs set to either all zeroes or all ones (“ff”; the length of the mask depends on the number of CPUs in the system). The former results in all queues and interfaces being serviced by CPU 0, which can become a performance bottleneck due to insufficient computing power. The latter results in all queues and interfaces being scheduled across multiple CPUs, which can become a performance bottleneck due to increased CPU memory cache misses.
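To make the masks concrete, here is a minimal Python sketch (not the utility described below) of reading and setting an IRQ's affinity mask through /proc. The IRQ number and target CPU are hypothetical examples:

```python
def get_affinity(irq: int) -> int:
    """Return the current CPU bitmask of an IRQ."""
    with open(f"/proc/irq/{irq}/smp_affinity") as f:
        # The kernel prints the mask in hex; on systems with many CPUs
        # it is split into comma-separated 32-bit groups.
        return int(f.read().strip().replace(",", ""), 16)


def set_affinity(irq: int, cpu: int) -> None:
    """Pin an IRQ to a single CPU (requires root)."""
    with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
        f.write(f"{1 << cpu:x}\n")


if __name__ == "__main__":
    irq = 24  # hypothetical IRQ number; check /proc/interrupts
    print(f"IRQ {irq} mask: {get_affinity(irq):x}")
    set_affinity(irq, 1)  # pin to CPU 1, i.e. write mask "2"
```

A mask of 1 pins the IRQ to CPU 0, 2 pins it to CPU 1, ff spreads it across CPUs 0 through 7, and so on.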
Some drivers create multiple queues, each with its own IRQ, and each IRQ can then be assigned to its own CPU. Drivers appear to create as many queues as there are CPUs, unless the capabilities of the NIC hardware limit the total to a lower number.
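As a rough sketch of what detecting those IRQs involves, the per-queue vectors can be found by scanning /proc/interrupts. The interface name and the prefix matching here are simplifying assumptions (drivers name their vectors differently, as noted at the end):

```python
def nic_irqs(ifname: str) -> dict:
    """Map a NIC's IRQ action names (e.g. 'eth0-rx-0') to IRQ numbers."""
    irqs = {}
    with open("/proc/interrupts") as f:
        next(f)  # skip the header line of CPU columns
        for line in f:
            fields = line.split()
            if len(fields) < 2:
                continue
            irq, action = fields[0].rstrip(":"), fields[-1]
            # Summary rows (NMI:, ERR:, ...) have non-numeric labels.
            if irq.isdigit() and action.startswith(ifname):
                irqs[action] = int(irq)
    return irqs


print(nic_irqs("eth0"))  # e.g. {'eth0-rx-0': 41, 'eth0-tx-0': 45}
```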
Some drivers create separate transmit (TX) and receive (RX) queues. To maximize cache hits, the TX and RX sides of a given queue must be scheduled on the same CPU.
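Building on the two sketches above, the pairing could be done along these lines, assuming (hypothetically) that the driver names its vectors `<interface>-tx-<n>` and `<interface>-rx-<n>`:

```python
import os
import re


def pair_queues(ifname: str) -> None:
    """Schedule the TX and RX IRQs of each queue on the same CPU."""
    ncpus = os.cpu_count() or 1
    for name, irq in nic_irqs(ifname).items():
        m = re.match(rf"{re.escape(ifname)}-(?:tx|rx)-(\d+)$", name)
        if m:
            # Queue n goes to CPU n mod ncpus, so its tx-n and rx-n
            # vectors always land on the same CPU.
            set_affinity(irq, int(m.group(1)) % ncpus)


pair_queues("eth0")
```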
Even without multiple queues, assigning each NIC to a separate CPU can yield a performance boost on multihomed hosts and with bonded interfaces.
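A sketch of that case, again reusing the helpers from the sketches above (the interface names are placeholders):

```python
import os


def spread_interfaces(ifnames: list) -> None:
    """Assign all IRQs of each single-queue NIC to its own CPU."""
    ncpus = os.cpu_count() or 1
    for cpu, ifname in enumerate(ifnames):
        for irq in nic_irqs(ifname).values():
            set_affinity(irq, cpu % ncpus)


spread_interfaces(["eth0", "eth1"])  # e.g. the members of a bonded pair
```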
IRQ numbers do not always stay the same: hardware can be changed, and driver configurations can change. For example, some NIC drivers appear to create their queues only when an interface is configured up. This makes it impractical to hard-code IRQ affinity assignments in boot scripts, not to mention how time-consuming such manual work can be.
I created a utility that detects NIC IRQs and assigns SMP affinity to them following all of these rules. I have released it under the BSD license:
- Source code: https://github.com/suominen/network-irq-affinity
- Signed packages: http://software.kimmo.suominen.com/
I only had access to a limited set of NICs, and each driver appears to implement its own naming scheme, so please check what the script does before deploying it in production. Please send pull requests for any naming schemes you add.