
PyCon 2010 Wireless Network

By Sean Reifschneider, February 24, 2010

Also see my related articles on the networking setup for PyCon 2012, 2009, 2008, and 2007.

This year the networking at PyCon went fairly smoothly, though not without some issues. Many things went very well: 802.11a (5.2GHz) basically just worked for users, the upstream connection (from a company we hadn't used before) had absolutely no problems, and we had wired ports for people to use if they couldn't connect otherwise. We also tested out some 802.11n gear and had a surprising number of people using 802.11n.

The primary issue was, again, 2.4GHz. I'm starting to think that 2.4GHz is just not going to cut it, though I did make some connections at PyCon with people who are interested in further trying to "crack this nut".

For the first few hours of both the tutorials and the main conference, we just weren't ready. During the tutorials the problem was that the router was stuck at the warehouse (and my backup plan was suffering from terrible packet loss), and the main conference hall didn't have power until moments before the conference started, so I couldn't do the final test of the networking. After those hiccups, things went very well.

I tried a few new things this year, including putting out quite a few wired network ports and using 802.11n gear. Also, 802.11b was not available on our new APs. The new APs worked well on 5.2GHz, but 2.4GHz was really not happy.

Lessons Learned

Read on for more details.

Upstream Bandwidth

We spiked up to 55Mbps, but only very briefly; we could have done pretty well with 35Mbps of bandwidth. However, with the terrestrial wireless (as we've used in previous years), the cost of 100Mbps was really quite reasonable, compared to what the hotel network provider wanted to charge us, anyway. They wanted $25k before they would even provide a quote; we ended up paying $5k for the terrestrial wireless (about half what we were paying in Chicago).

Our uplink provider was One Ring Networks and they were just great. They really had their stuff together and they just worked. Absolutely no problems, everything went very smoothly from the contracts to the networking. They were recommended via a question I posted on LinkedIn, and I can highly recommend them for anyone needing terrestrial backhaul in Atlanta.

Wireless

The trend continued from last year: more 5.2GHz users, fewer 2.4GHz users, and fewer of those being 802.11b.

In addition to the APs that we've used the last several years, I also got some 802.11n gear to try out. I was primarily interested in it for the MIMO antennas, hoping they would improve performance.

The 2.4GHz band continued to give us fits. During the main conference, in the main rooms, it was basically unusable, with heavy packet loss and multi-second latency despite good signal. My netbook performed better than last year, but it was still basically unusable. I'm coming to the conclusion that 2.4GHz is just not going to work with 150 users, even spread across a dozen APs with the power kept down. Having only 3 non-overlapping channels is killing it, and it's also competing with the hotel's wireless gear.
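To make the channel problem concrete, here's a quick back-of-the-envelope Python sketch; the 22MHz width is an approximation for 802.11b/g channels:

    # Why 2.4GHz only gives us three usable channels: centers are
    # 5MHz apart, but each 802.11b/g channel is roughly 22MHz wide,
    # so most channel pairs overlap each other.
    WIDTH_MHZ = 22

    def center(channel):
        """Center frequency in MHz for 2.4GHz channels 1-13."""
        return 2407 + 5 * channel

    def overlaps(a, b):
        return abs(center(a) - center(b)) < WIDTH_MHZ

    # Greedily pick a set of mutually non-overlapping channels.
    clear = []
    for ch in range(1, 12):
        if all(not overlaps(ch, other) for other in clear):
            clear.append(ch)

    print(clear)  # [1, 6, 11] -- just three channels for a dozen APs

5.2GHz, by contrast, has far more than three non-overlapping channels to spread a dozen APs across.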

5.2GHz performed very well though. There were some brief spikes in latency, presumably as people were doing large downloads, but mostly latency was in the 45ms range with less than 1% packet loss.
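Numbers like those are easy to gather with periodic ping sampling. Here's a minimal Python sketch of that kind of measurement; the target address is just a placeholder, and the regexes assume Linux-style ping output:

    # Sample packet loss and average RTT with one ping run.
    # (Assumes the host answers; no error handling in this sketch.)
    import re
    import subprocess

    def ping_stats(host, count=20):
        """Return (loss_percent, avg_rtt_ms) from ping output."""
        out = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True, text=True,
        ).stdout
        loss = float(re.search(r"([\d.]+)% packet loss", out).group(1))
        avg = float(re.search(r" = [\d.]+/([\d.]+)/", out).group(1))
        return loss, avg

    print(ping_stats("192.0.2.1"))  # e.g. (0.0, 45.3)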

In short, if you want wireless to work at a conference: use 5.2GHz.

The new APs we were using had a hard limit of 63 wireless clients per radio. We hit that very hard during the first few hours of the main part of the conference, because half the APs weren't working. That was caused by us crimping our own RJ-45 ends, something I had really wanted to avoid.

802.11n

This is the first year we've tried 802.11n. About half of our APs had 802.11n capability, on both 2.4GHz and 5.2GHz. The graph above splits out the N users from the graph in the previous section; in other words, around 324 of our 419 5.2GHz users were running 802.11n.

I was very surprised to find that over half our total users were connecting via 802.11n, particularly after finding that my own laptop, which I was fairly sure would connect via N, did not. I thought I had gotten an ABGN-capable card, but I guess it's only ABG. Sigh.

The real trick was getting APs that would do 802.11n on both radios at the same time; most APs can't. So we ended up using the Netgear WNDAP350, a $300 AP, but one that supports lots of features including gigabit Ethernet, N on both frequencies at the same time, PoE...

We also got plenty of high-speed connections: of the samples we saved off, the majority connected at 130Mbps, followed by about a third fewer at 117Mbps, and slightly fewer again at 54Mbps, 78Mbps, and 104Mbps.
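For what it's worth, tallying those samples is nothing fancy. A sketch like the following is all it takes, assuming a hypothetical dump file with one association rate in Mbps per line:

    # Tally client association-rate samples into a histogram.
    from collections import Counter

    with open("client-rates.txt") as f:  # hypothetical sample dump
        rates = Counter(int(line) for line in f if line.strip())

    for rate, count in rates.most_common():
        print("%4d Mbps: %d" % (rate, count))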

Wired

This year we tried getting more wired ports out there for people who were having issues. While this was documented both on the wiki page about the network and in the e-mail sent to attendees before the show, people just really didn't know about it. I wanted a couple of minutes during the opening session to tell people about it, but that was vetoed.

Later, the suggestion was passed along to me that we should have Ethernet ports available; we already had nearly 200 of them (for around 1,000 attendees). Unfortunately, we were using inexpensive switches that don't expose any statistics, so I don't know how many were actually used. I rarely saw anyone plugged into them, though.

The suggestion was also made to have Ethernet available at every table, instead of at roughly one table in ten like we did this year. I'm reluctant to do that, partly because it would dramatically increase setup time (we had a team of 8 working around 4 hours for what we had), but also because people weren't using what we did have.

Next year I'll probably double our number of switches, but I think the real issue here was that we didn't have a few minutes to educate people on the setup.

Networking Problems

The first day of the tutorials, we didn't have networking for the first couple of hours. A few issues contributed to this. One was that our router was at the warehouse and hadn't been brought in with the set of boxes that came the day before, so we didn't have it until 9am the first day of the tutorials.

But then the router had to go into a locked room, which apparently nobody at the hotel had a key to, and the guy who did have the key didn't arrive until 11am. At that point I plugged the router in and we were live.

This led to a significant risk: if the router had issues, I couldn't get to it to fix them, and getting someone to unlock the door could have taken several hours. Next year I either want a key to that room, or to have the Ethernet drops and power pulled to just outside it, where I can get at them.

Similarly, during the first few hours of the first day of the main conference, the network was having serious issues. Two things contributed to this. One was that, through a miscommunication with Carl, he got a crimper and crimp-on ends rather than the Leviton press-on port ends I would have preferred to use (plus a short jumper cable, of which I had plenty). The other was that the tabs on the crimp-on ends sat fairly close to the body, so unless you bent them up before plugging them in, they wouldn't click into place.

Because of this, several of the Ethernet connections came loose, which caused those wireless APs to shut down their radios (I had configured them that way), which in turn meant the remaining APs were overwhelmed. Of the roughly 700 AP-minutes during which APs were at their client limit, a quarter (about 175 AP-minutes) occurred during the first 2 hours of the main conference.

We also didn't have power to the APs until literally a few minutes before the conference started; the electrician who was supposed to show up at 5am to wire them completely failed us. So while we had the APs out and individually tested the night before, we couldn't do a full system check until the first break of the conference.

The other issue with crimping our own ends was the time it took. We only had one crimper, so there was no parallelism to be had.

NAT

This year I tried to get public IPs again, but we just weren't able to get any meaningful number of them, so we had to do NAT. This worked well, and I had only one complaint: one of the guys couldn't establish a PPTP connection. I had forgotten to load the NAT protocol helper modules... I loaded those up and it went smoothly after that.
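For next year's setup checklist, loading the helper modules at router build time is trivial. Here's a minimal sketch; the nf_* module names are from reasonably current kernels (older ones called them ip_conntrack_pptp and ip_nat_pptp), and it needs root:

    # Load the conntrack/NAT helper modules so PPTP survives NAT.
    import subprocess

    HELPERS = ["nf_conntrack_pptp", "nf_nat_pptp"]

    for module in HELPERS:
        subprocess.run(["modprobe", module], check=True)

    # Confirm they actually loaded.
    loaded = open("/proc/modules").read()
    for module in HELPERS:
        print(module, "loaded" if module in loaded else "MISSING")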

I did remember to raise the number of connections the tracking table would allow, so we didn't run into any problems there. I tried to set up munin to track the number of connections, but that plugin failed to work; I'll have to fix it for next year.
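For reference, a munin plugin for this is only a few lines. Here's a minimal sketch following the standard munin plugin protocol (print the graph configuration when called with "config", otherwise print the current values), reading the usual nf_conntrack proc paths; older kernels used ip_conntrack instead:

    #!/usr/bin/env python
    # Munin plugin sketch: graph NAT connection-tracking usage.
    import sys

    COUNT = "/proc/sys/net/netfilter/nf_conntrack_count"
    MAX = "/proc/sys/net/netfilter/nf_conntrack_max"

    def read(path):
        with open(path) as f:
            return f.read().strip()

    if len(sys.argv) > 1 and sys.argv[1] == "config":
        print("graph_title NAT connection tracking")
        print("graph_vlabel entries")
        print("count.label tracked connections")
        print("max.label conntrack limit")
    else:
        print("count.value", read(COUNT))
        print("max.value", read(MAX))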

Going Forward

Number one: We need more APs, particularly if there are going to be 50% more users next year.

I'd also like to try some directional antennas to cover part of the main room and see if that resolves the issues with 2.4GHz there.

We really need to tell people what's happening as part of the conference. With the majority of our attendees using the network, it's clearly important enough to warrant a few minutes on the availability of Ethernet ports and cables.

Finally, I'd like to put out more Ethernet ports. They're fairly cheap to do.

A little more volunteer time, possibly the time saved by not crimping our own cables, would have allowed us to get Ethernet cables onto the tables by the switches, if not plugged in and ready to go.

In Conclusion

Using many small independent APs continues to work well for our needs. The response I've gotten has generally been "There were issues the first few hours, but after that things worked well."

Shameless Plug

tummy.com has smart people who can bring a diverse set of knowledge to augment your Linux system administration and managed hosting needs. See the menu on the upper left of this page for more information about our services.
