I’ve seen an explosion of discussion in the last couple of days regarding something called bufferbloat. It seems that Bell Labs’ Jim Gettys has been investigating poor network performance at his house, and has stumbled onto a network phenomenon that he’s termed bufferbloat. Essentially, bufferbloat occurs when networks are configured with excessively large buffers, which leads (perhaps counter-intuitively) to poor network performance.
Jim has written a series of blog posts as he has investigated this problem. You can find them all here. He freely admits that he’s not the first to have stumbled across this problem, though he certainly seems to have coined the best term for it. Bufferbloat has in fact long been identified as an issue by those who build their own routers. I remember reading about the phenomenon years ago (though it did not have a name at the time), and avoiding bufferbloat has been the cornerstone of my own home network configurations for over five years now.
I’ve written about my FreeBSD-based pf firewall in the past, in an entry titled Adventures in Packet Filter. That post covered a bunch of ground (multiple subnets, etc.), but the key to its traffic shaping component is avoiding bufferbloat. Due to its nature, bufferbloat will only impact the slowest link in the network path. For most home users, that is going to be the DSL or cable connection. Managing the traffic at this point can very effectively resolve the issue of bufferbloat.
The section of my pf configuration that is relevant here is the setup of the queues. Here is my configuration:
#################################################
# Queuing #
#################################################
# Set queues on wan_if
# upstream: 887100 bits/s, downstream: 6947900 bits/s
altq on $wan_if bandwidth 850000b hfsc (linkshare 850000b upperlimit 850000b) queue { wan_ackq, wan_dnsq, wan_vipq, wan_sshq, wan_defq, wan_p2pq }
queue wan_ackq bandwidth 5% priority 7 qlimit 200 hfsc (realtime 25%)
queue wan_dnsq bandwidth 5% priority 6 qlimit 50 hfsc (realtime 5%)
queue wan_vipq bandwidth 5% priority 4 qlimit 50 hfsc (realtime 92160b)
queue wan_sshq bandwidth 10% priority 3 qlimit 50 hfsc (realtime 10%)
queue wan_defq bandwidth 40% priority 2 qlimit 50 hfsc (default realtime 25%)
queue wan_p2pq bandwidth 5% priority 0 qlimit 25 hfsc (ecn red upperlimit 425000b)
The first thing I hope you notice is that I have the true sync rate listed in a comment. I pulled this value from the modem itself, but you should also be able to get it by calling your ISP. Don’t assume that you’re getting the bandwidth listed in the promo offer; find out exactly what the modem is syncing at. Then deduct about 5% from that.
By configuring the buffer (queue, in pf terminology) to run at just slightly less than the modem sync rate, you can be sure that packets will never queue up on the modem itself. And that, right there, resolves the problem of bufferbloat. Well, actually, it doesn’t. But it does mean that instead of having the modem manage the buffer, you do. And you’re likely smarter than the modem.
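To make the arithmetic concrete using my own numbers from the comment above (the rounding is mine):

# upstream sync rate reported by the modem: 887100 bits/s
# a 5% deduction: 887100 x 0.95 = 842745 bits/s
# my altq line uses 850000b, roughly 4% below the sync rate;
# the only hard requirement is staying below the true sync rate

Any value in that neighbourhood works; what matters is that the cap sits below what the modem can actually send, so your queue fills before the modem’s does.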
I manage my queue by creating six sub-queues, arranged by order of priority in the snippet above. First priority goes to TCP ACK packets. This greatly cuts down on needless TCP retransmits and is really the single most effective strategy I use for managing my queue. This sub-queue also has a 200 packet buffer. That is a HUGE number of packets to buffer under most circumstances, except that ACK packets are only 40 bytes in size (I believe; please correct me if I’m wrong). So even if the buffer is full, it holds only 64000 bits of data, which means it will take about 75ms to empty completely on my connection. That’s still pretty high, but reasonable, especially since the queue shouldn’t ever have more than a few packets buffered, given that the whole queue runs at less than the sync rate.
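How do ACKs end up in wan_ackq? pf’s two-queue assignment syntax handles this: when a rule lists two queues, TCP ACKs with no data payload (and packets with the lowdelay TOS bit) are placed into the second queue. A hypothetical rule, just to illustrate the syntax (the port list is not from my actual configuration):

pass out on $wan_if proto tcp to port { 80, 443 } queue (wan_defq, wan_ackq)

With a rule like this, outbound web traffic rides in wan_defq while its bare ACKs jump ahead in wan_ackq.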
The next four queues are pretty standard, with 50 packet buffers and various prioritizations. They could all be amalgamated into a single queue if I wanted, except that there is some value in seeing DNS and VoIP get priority over web browsing. No one will notice if there’s an extra 50ms delay in that picture loading in their browser, but 50ms means a lot in a VoIP call, and quick DNS response times can make a network connection ‘feel’ faster and snappier.
It’s not obvious from the above snippet, but the final queue effectively acts as the default. Later in my pf.conf file I dictate that all traffic goes into the p2p queue, and then I selectively pull traffic out of it and put it in the other queues. That’s why the p2p queue is the one with ECN and RED enabled. RED means that rather than waiting for the queue to fill completely before dropping packets, the system randomly drops packets as the queue starts to fill, with the drop rate increasing as the queue gets fuller. This leverages TCP’s innate congestion control, which responds to dropped packets by slowing the affected connections. What’s wonderful is that dropping packets in only one direction helps to control the flow of traffic in both directions, because of the nature of TCP.
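Those later rules look something like the following sketch (the specific ports are illustrative, not my exact rules; remember that in pf the last matching rule wins, so the catch-all must come first):

pass out on $wan_if queue wan_p2pq
pass out on $wan_if proto { tcp, udp } to port 53 queue wan_dnsq
pass out on $wan_if proto tcp to port 22 queue (wan_sshq, wan_ackq)

Anything not matched by a later, more specific rule stays in wan_p2pq, where RED keeps it in check.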
I’ve been able to solve the problem of bufferbloat for myself quite effectively by removing the buffers in my modem from the equation and taking over the task of managing the buffer myself. While I wouldn’t recommend that everyone run their own FreeBSD firewall with a hand-crafted packet filter configuration, these same techniques can already be used on consumer routers running dd-wrt and similar firmware. And there is no reason why the router manufacturers can’t implement these techniques as well. They aren’t complicated, and they aren’t particularly expensive in terms of CPU or RAM, both of which are now dirt cheap even in embedded systems. I’m glad the term has been coined, though, as it should finally focus the attention of the router manufacturers on the problem, and they will hopefully move toward implementing something similar to what I describe above. This is especially easy now that most ISPs are shipping integrated modem/router devices, which means the router component has full visibility into the state of the modem and can glean the sync rate and run a common buffer.