
I've always thought that traffic shaping under Linux was just hard to understand. With things like HTB and other technologies, I figured it was pretty advanced. I mean, the iptables stuff is incredibly advanced, even if it is (just a little) subtle. However, we recently had a client who needed to give each of hundreds of users on an Ethernet segment a limited amount of upstream and downstream bandwidth. After a fair bit of research, things aren't looking so good in Linux-land.

The bare Linux traffic-shaping mechanism is called “tc”, for “Traffic Control”, and is implemented by the “iproute2” suite of packages. There are many “shaper” packages, such as “tcss”, “wondershaper”, “tcng”, “bandwidth arbitrator”, “shaper”, “cbq.init”, and the list goes on. These are all basically just (varying thicknesses of) veneer on top of the tc tools. They all seem to be oriented towards a home or office user behind a DSL line optimizing their users' traffic.
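For a sense of what the raw tc interface looks like, here's a minimal sketch of limiting one user with an HTB class. The interface name, rates, and IP address are placeholders, not from any real configuration:

```shell
# Attach an HTB root qdisc to eth0 (hypothetical interface),
# sending unclassified traffic to class 1:10
tc qdisc add dev eth0 root handle 1: htb default 10

# A class capping one user at 256kbit
tc class add dev eth0 parent 1: classid 1:10 htb rate 256kbit ceil 256kbit

# Classify traffic destined for 10.250.1.1 into that class
tc filter add dev eth0 parent 1: protocol ip prio 1 \
    u32 match ip dst 10.250.1.1/32 flowid 1:10
```

Repeating that class/filter pair per user is where the various wrapper scripts come in.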

All the solutions seem to give, at best, the ability to spread traffic over a handful of classes. For example, the graphic design department can use up to half the bandwidth, and if the programmers are also hitting the line, the usage gets equalized. However, if the designers drop off their usage, the programmers can “borrow” their bandwidth. Or maybe giving upper limits on HTTP, FTP, and e-mail traffic, while allowing some to borrow bandwidth from others. Very powerful constructs.
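That borrow-when-idle behavior maps onto HTB's rate/ceil distinction. A sketch, with an assumed 1.5mbit line, interface name, and department split:

```shell
# Root class representing the whole 1.5mbit line (eth0 is hypothetical)
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 1500kbit

# Each department is guaranteed half (rate), but may borrow
# up to the full line (ceil) when the other is idle
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 750kbit ceil 1500kbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 750kbit ceil 1500kbit
```

Filters (not shown) would then steer each department's traffic into class 1:10 or 1:20.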

Unfortunately, the underlying “tc” implementation has some fairly severe limits. It seems to be limited to 16 “bands”, and worse, these “bands” aren't documented. From the output of the tcng compiler, it looks like if you tell it to set up different classes for the different users, it's putting each one in a different band. If that's true, we're limited to 8 to 16 users (depending on whether each user needs two bands, one for inbound and one for outbound shaping).

I've also recently found that the traffic shaping configuration that works so well with our mirrors.tummy.com FTP site doesn't seem to work at all for BitTorrent. Torrents seem to completely circumvent the shaping limit. After careful review of the poorly-documented and hard-to-understand commands, it looks like it should be doing the right thing.

So, “tc” is limited, poorly documented, hard to understand, and may not be completely effective. For some limited uses it may be OK, but there are certainly some problems.

There are a bunch of places who build traffic shapers based on Linux systems, but all of these seem to implement their own kernel modules for doing so.

I started looking at some other alternatives to see if they were any better. In fact, last night after Hacking Society, Scott and I mucked around with the “pf” and “ALTQ” traffic shaping under OpenBSD. It's relatively easy to set up and well documented, with one exception: it could be better documented that the bandwidth limiting only works on an “out” interface, not “in”. For example:

altq on ep1 cbq bandwidth 1.5Mb queue { usersi std }

queue std bandwidth 1.5Mb cbq(default)
queue usersi cbq { u_10_250_1_1i }
queue u_10_250_1_1i cbq bandwidth 256Kb

nat on dc0 from ep1:network to any -> dc0
pass in on ep1 inet proto tcp from 10.250.1.1 to any keep state queue u_10_250_1_1i

This sets up an altq with 1.5Mbps of bandwidth, with a sub-queue limiting incoming traffic to 10.250.1.1 to 256Kbps.

A couple of downsides to this. One is that you need a separate queue for incoming and for outgoing traffic if you want to limit both directions. The other problem is with NAT: if you want to shape based on the inside IP address, you can't, because the shaping happens on the outbound interface, where the NAT has already been applied, so the traffic is no longer split out by the inside IP address.
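To make the NAT problem concrete, a mirror-image configuration for the upload direction might look something like this. The queue names and structure are assumptions extending the example above, not a tested configuration:

```shell
# Upload shaping needs its own altq on the outside interface (dc0)
altq on dc0 cbq bandwidth 1.5Mb queue { userso, std_out }

queue std_out bandwidth 1.5Mb cbq(default)
queue userso cbq { u_10_250_1_1o }
queue u_10_250_1_1o cbq bandwidth 256Kb

# But by the time packets leave dc0, "nat on dc0" has already rewritten
# the source address, so a rule matching the inside IP 10.250.1.1 on
# this interface no longer sees that address
```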

The other problem is that you can only set up 255 queues, and that limit apparently isn't something you can just increase by changing the code. It also sounds like even at 255 queues the performance isn't very good: latency in the shaping becomes an issue, and the limits become very “chunky”.

Today I took a quick look at the FreeBSD ipfw bandwidth-management code, and it looks pretty good. It's hard to tell what its scalability and performance will be without testing, but it looks dynamically adjustable up to fairly high limits.

It also supports “masks” and “virtual queues”. Using a mask, you can tell the code that when a packet matches, it should be assigned to a queue keyed on the source and/or destination IP address and/or port. These queues are created dynamically. So you could, for example, automatically limit traffic based on the destination port, the source address, or parts of the network:

ipfw add pipe 1 ip from 192.168.2.0/24 to any out
ipfw add pipe 2 ip from any to 192.168.2.0/24 in
ipfw pipe 1 config mask src-ip 0x000000ff bw 200Kbit/s queue 20Kbytes
ipfw pipe 2 config mask dst-ip 0x000000ff bw 200Kbit/s queue 20Kbytes

In other words, create a different queue for every least-significant octet of the source or destination IP address. This example allows up to 512 dynamic queues on the system (256 per direction). Better, the dynamic queues are allocated as traffic comes through and freed again when they go idle.
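The same mask mechanism works on ports. A sketch, assuming the same dummynet syntax as above (the pipe number and rate are made up for illustration):

```shell
# Send all outbound traffic through pipe 3
ipfw add pipe 3 ip from any to any out

# With a full 16-bit mask on the destination port, dummynet creates
# one dynamic 100Kbit/s queue per distinct destination port
ipfw pipe 3 config mask dst-port 0xffff bw 100Kbit/s
```

So each service (HTTP, FTP, SMTP, and so on) would automatically get its own 100Kbit/s slice without enumerating the ports by hand.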

I'm a big Linux advocate, but I must say this system looks pretty good. iptables is pretty nice because of its tables and the ability to jump into different rule tables. Very handy. But the traffic-control functions in FreeBSD look to be pretty slick.

