
PyCon 2012 Wireless Network

By Sean Reifschneider, March 13, 2012

Also see my related articles on the networking setup for PyCon 2010, 2009, 2008, and 2007.

This year at PyCon we tried something completely different: we had the venue run the network. This was partly because of issues finding alternative providers in the Santa Clara area (really), and partly because I didn't leave myself enough time to line one up, though in previous years I'd gotten providers lined up in far less time...

So my role this year was largely one of overseeing and facilitating the networking, rather than running access points or network gear...

Surprisingly, it worked out extremely well! This is largely due to the hard work of the venue networking folks at Smart City, including Dan, Emiliano, and Paul. Over the years of PyCon I've worked with a lot of networking vendors, and Smart City was "streets ahead" of the others. So really, most of the props go out to them. They were extremely easy to work with, went above and beyond the call, and were right on top of things, plus they had first-rate gear.

For the first time ever, I can't say enough good things about the wireless networking vendor PyCon worked with.

The downside is that they were also the most expensive vendor PyCon has ever worked with, though their rates really aren't out of line with what other venues charge.

For wireless gear, the facility had a lot built into the premises, including APs hidden in the ceilings. To that they added a number of APs on masts around the rooms, with the biggest concentration in the keynote ballroom. In total, 45 APs were deployed.

We could really only afford 50Mbps of bandwidth, where in previous years we had 100Mbps... In those years we hadn't applied any shaping, and peak bursts ran around 40Mbps. This year, however, we had over 50% more attendees, 2,250 in total.

For the sprints we had some issues, largely gear-related, but those smoothed out.

My plan for next year is pretty much to just let the folks at Smart City deal with it, and continue my role overseeing it.

Reports were that people didn't have issues connecting, and while the network wasn't fast, it wasn't bad, and there were no problems with flakiness.

Lessons Learned

Read on for more details.

Upstream Bandwidth

I don't know exactly how much bandwidth was available at the facility. I got the feeling they could have provided more, but it was largely a cost issue: we couldn't afford more.

We purchased 50Mbps for the wireless attendees, with an additional 3Mbps on a dedicated network for presenters in case the wireless (either at the venue or on a presenter's laptop) was having issues.

This was about half of what we had in previous years. However, their gear allowed us to set up per-attendee shaping.

I see in the graphs they provided us that at some point Thursday afternoon (the second day of the tutorials), we spiked up to nearly 70Mbps. This may have been due to changes we were making for the A/V guys to get more bandwidth on the speakers' network, or could have been related to our asking them to change the shaping per user from 0.5Mbps to 1Mbps, etc... The remainder of the conference was shaped fairly effectively to 50Mbps.

Shaping

The first day of the tutorials we started with shaping set to 512Kbps in, 512Kbps out for all attendees. There were 700-ish attendees for the tutorials, so with around a third of the total attendees I decided we should try upping that to 1Mbps for the second day of tutorials, then returning it to 512Kbps for the first day of the conference.

The shaping seemed to work fine: even with less bandwidth and more attendees than in previous years, the network operated very smoothly.
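To see why shaping matters here, a quick back-of-the-envelope check helps: if every tutorial attendee burst at the cap simultaneously we'd be far over the 50Mbps pipe, so the scheme relies on only a fraction of devices transferring at any instant. Here's a small sketch of the arithmetic; the attendee count, per-user cap, and pipe size are the figures from above, while the concurrency fraction is purely an assumption for illustration:

    # Back-of-the-envelope check on the shaping numbers above. The attendee
    # count, per-user cap, and pipe size come from the article; the "active
    # fraction" is an assumed figure for illustration only.
    pipe_mbps = 50.0          # purchased upstream bandwidth
    per_user_cap_mbps = 0.5   # 512Kbps cap on day one of the tutorials
    attendees = 700

    worst_case = attendees * per_user_cap_mbps
    print(f"Worst case, everyone bursting at once: {worst_case:.0f} Mbps")   # 350 Mbps

    # Statistical multiplexing: only some devices are transferring at any instant.
    assumed_active_fraction = 0.10
    expected = attendees * assumed_active_fraction * per_user_cap_mbps
    print(f"Load at an assumed 10% concurrency: {expected:.0f} Mbps")        # 35 Mbps

At that assumed 10% concurrency the 512Kbps cap fits comfortably under the pipe, while doubling the cap to 1Mbps pushes the same math right up against the limit, which may help explain the 70Mbps spike mentioned earlier.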

Portal/Redirect

We had three options for the portal: requiring a login, a "simple redirect", or no portal at all. Without a portal or redirect, the vendor had no way to track the number of connected devices.

However, the redirect caused problems with devices or applications that weren't browser-based. You couldn't access the network until you went to a page in a browser, which got redirected to a portal. At that point, your connection was opened up.

For things like Google Voice phone calls, or other apps on smartphones, this often led to what looked like flakiness in the network or the phone. At one point I rebooted my phone because Google Voice was hanging, for example.
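This is essentially the problem that captive-portal detection on modern devices tries to solve: probe a known plain-HTTP URL when you join a network and see whether the response comes back intact or gets intercepted. Here's a minimal sketch of that check, assuming the requests library is available; the probe URL and expected status are illustrative assumptions, not anything tied to the vendor's portal:

    # Minimal captive-portal detection sketch. The probe URL and expected
    # status code are assumptions for illustration, not the vendor's portal.
    import requests

    PROBE_URL = "http://example.com/probe"   # hypothetical always-reachable page
    EXPECTED_STATUS = 200                    # what we expect when access is open

    def behind_captive_portal():
        """Return True if a plain HTTP request appears to be intercepted."""
        try:
            resp = requests.get(PROBE_URL, allow_redirects=False, timeout=5)
        except requests.RequestException:
            return True   # no connectivity at all; treat it like being walled off
        # A redirect (or any unexpected status) suggests a portal grabbed the request.
        return resp.status_code != EXPECTED_STATUS

    if behind_captive_portal():
        print("Looks like a captive portal: open a browser and accept the redirect.")
    else:
        print("Network access looks open.")

Most operating systems now do a check along these lines when they join a network; apps that never open a browser just see their connections hang, which is exactly the flakiness described above.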

We only had the redirect on for the first day of the tutorials. After that, I had them disable the redirect and lived without getting information on the number of attendees. That removed the only source of grumbling I had heard.

Seating

The attendance for this year was supposed to be capped at 1,500, but for various reasons it ended up being 2,250. This forced a bit of a change in the seating, with fewer "classroom" setups (tables and chairs) and more "theater" (chairs without tables). So it may be that, due to the reduced table space, there were fewer and lighter wireless network users compared to previous years.

I did speak to many people who were using tablets and phones for the conference and leaving laptops in their rooms. There were power outlets all over, but not that many people were actually using laptops in talks.

Since I wasn't running the network, I could be one of them.

Wireless

Unfortunately, because we had no portal, we had no way with the vendor's software to collect the number of connected devices. So we have no idea how many devices were connected during the event.

We do have graphs of Wednesday and the first few hours of Thursday (the tutorial days), before the redirection was turned off. On Wednesday we had 460 devices in the morning and 510 devices in mid-afternoon. On Thursday it jumped up to 556 devices at 9:30, when I had them shut down the redirect.

The gear used for the conference seemed to mostly use 3x3 antennas and was dual band. 802.11b was disabled, as we had done in previous years.

The total number of APs was 45, with 19 in the main keynote room.

Where previously I had put APs in the audience on tables, this year there were no tables. Most of the APs were spread around the perimeter of the audience, on masts raising them to maybe 5 to 7 feet. However, there apparently were also APs in the ceilings, which would have helped to penetrate the crowd. Having the APs higher than I have in the past likely helped with that as well.

Wired

We had almost no wired networking. All the speaker podiums had wired connections. Registration also had a wired connection. Some of the exhibitors had wired connections which they purchased directly through the networking vendor; others wanted a less expensive option, so we purchased another drop from the vendor and planned to run the wires to them ourselves.

The vendor, Smart City, ended up doing the wired runs for us and getting them connected up, even handling a booth that had originally been listed in one location but was instead moved clear across the hall. This is one of the reasons I've said that they went above and beyond the call.

In this case I had gotten a call from one of the organizers that a booth didn't have its wired connection. I went down to the booth, and by the time I got there Dan and Paul from Smart City were already working on it. Paul was deploying another wireless AP, because the booth vendor was having problems connecting to the wireless (they were trying wireless when the wired connection couldn't be found), while Dan was running a hard line.

Again, fantastic work by Smart City!

Unlike in previous years, we did not have any switches out for the audience to use. Of course, that simplified the deployment...

The wired network was on its own dedicated upstream bandwidth, which meant that it was not competing for resources against the wireless users. Having speakers on their own network was a nice safety net in case of congestion on the wireless...

Networking Problems

There were very few reported problems. The first day of the tutorials we had the "redirect" enabled, which often looked like networking problems. We disabled it after that.

The first 20 minutes of the conference we shut down all the APs in the main hall to reduce interference with the dancing robots. That was also reported frequently as a problem -- it wasn't announced very well that it was deliberate. That led to a bit of a problem where audience members started firing up hotspots on their phones, but the robots worked fairly well with only a few synchronization problems.

The other problem we had was that freenode needed to be notified that there would be many simultaneous connections from a single IP, since we were behind NAT.

Other than people reporting those things, all the responses I heard were extremely positive about the wireless.

NAT

In the past we really haven't had problems with NAT, so we just continued to use it, and again had no problems other than the freenode IRC servers blocking connections until we told the admins about it.

Time Zone Weirdness

One very unexpected thing we ran into was during the sprints. I had several people report to me that their laptops or phones were saying they were in Atlanta, and setting themselves to the Eastern time zone.

Apparently, someone "wardrove" around the conference in one or both of the previous years in Atlanta, which picked up the MAC addresses of our APs. Now, in California, the geolocation databases were still showing those AP MACs as being in Atlanta.

An interesting weakness in this scheme... A freshness date on the entries would allow this sort of thing to be resolved, though it probably isn't an issue very often.
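To make the freshness-date idea concrete, here's a minimal sketch of a BSSID-to-location lookup that refuses to trust stale observations. The table contents, the 180-day cutoff, and the structure are assumptions for illustration; real geolocation databases are obviously far more involved:

    # Sketch of a BSSID-to-location lookup with a freshness cutoff. The
    # entries and the 180-day cutoff are illustrative assumptions only.
    from datetime import datetime, timedelta

    MAX_AGE = timedelta(days=180)

    # Hypothetical database: AP MAC address -> (city, when it was last observed there)
    ap_locations = {
        "00:11:22:33:44:55": ("Atlanta, GA", datetime(2011, 3, 12)),     # stale
        "66:77:88:99:aa:bb": ("Santa Clara, CA", datetime(2012, 3, 9)),  # fresh
    }

    def locate(bssid, now):
        """Return a location only if the observation is recent enough to trust."""
        entry = ap_locations.get(bssid)
        if entry is None:
            return None
        city, last_seen = entry
        if now - last_seen > MAX_AGE:
            return None   # too old: the AP may well have moved since then
        return city

    now = datetime(2012, 3, 13)
    print(locate("00:11:22:33:44:55", now))   # None -- the Atlanta sighting is stale
    print(locate("66:77:88:99:aa:bb", now))   # Santa Clara, CA

With a cutoff like that, an AP last seen in Atlanta a year ago simply produces no location fix instead of a wrong one.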

Sprints

The sprints were at the Hyatt, so we had to make completely separate arrangements for the networking there. The first day of sprinting things were a bit rough, but they smoothed out after that.

Part of the problem is that the hotel only has a DS-3 (roughly 45Mbps), and that's shared with the guest rooms. So we really only had 20-30Mbps of bandwidth for around 700 sprinters. Reports of network slowness abounded.

I also deployed our older APs, which were 802.11a/b/g only instead of 802.11n. This is because I got confused and thought we needed external antennas for the new APs (they have lugs on the back for mounting external antennas), forgetting that they also have internal antennas. We had some people reporting connection problems that ended up getting resolved by dropping the new APs in their rooms and having them connect to those specifically.

I really wish we could have had the sprints over at the convention center, but they probably just didn't have the space in smaller rooms for us... That would almost certainly have decreased our costs on the networking side.

Our vendor for the sprints was Swisscom, providing only the Internet bandwidth and in-wall wiring. Jason at Swisscom has been great to work with as well. I know I'm coming across as fawning over the other vendor while not saying much about Swisscom, but that's mostly because we didn't rely on Swisscom for as much as we did the other vendor.

Going Forward

I'm hoping that next year we can just say "exactly the same as last year". For the conference we can definitely do that, though more bandwidth would be nice.

For the sprints, the hotel is doubling their Internet connection to 100Mbps, so that will help.

In Conclusion

We've finally found a networking vendor that can handle PyCon, basically without incident. I'm expecting that in 2013 we are going to do exactly the same thing, and hopefully we can figure out how to replicate it for 2014.
