Also see my related articles on the networking set up for PyCon 2008 and
2007.
It's been a long time coming, for a variety of reasons, but I have gotten
the statistics collected from the 2009 PyCon back in March and have pretty
graphs to show. I had hoped to write this up shortly after the conference,
but it took months for my router boxes to get back to me, and I didn't have
local copies of those systems (with their stats) available as I did in
previous years.
This year was largely successful with the exception of people with
Netbooks. We did have a surprise with the hotel network infrastructure, but
were able to work that out. The big news is that we had way more 802.11a
users than 802.11g, and almost no 802.11b users. At one point we ran out
of DHCP leases, but that was a minor issue as I had more IPs ready and just
had to configure the NAT for them and turn them on in the DHCP
configuration.
We were running mostly with public IPs given out to attendees, but we
were fairly limited in the number of public IPs we could get from our
provider.
Lessons learned this year:

- Leave money in the budget to deal with unanticipated issues such as
  having to light a dozen strands of fiber.
- Public IPs were just not worth it.
- Netbooks have terrible wireless cards in them.
- Terrestrial wireless backhaul is good stuff.
- Get wired ports out there for people to use.
- Too many access points is almost enough.
- The PyCon attendees are a great group of people to provide a network
  for.
Read on for more details.
This year we had two locations, one for the main conference and a
different one for the sprints. That is why there are two graphs above.
We again used Business Only Broadband for the upstream link, and we
had 100mbps at the main hotel and 50mbps at the Sprints hotel. In both
cases the bandwidth was very much overkill. But it's better than the
alternative.
BoB performed very well, just as expected. We had absolutely no
problems with the upstream.
The equipment last year wasn't able to give us statistics about the
usage of the wireless, so I have to go back to 2007 for comparison. This
year we had a peak of around 600 wireless users connected. Note the graphs
are incomplete because I didn't have the statistics collection set up right
away.
In 2007 we had mostly 802.11g users connected (200 out of 340), with
about half that many using 802.11a and half that again using the dusty old
802.11b.
This year most people by far were using 5.2GHz 802.11a, with fewer than
half that many using 802.11g and few people using 802.11b. At the sprints it
was more even between the a and g users, which I am a little surprised at.
I guess the sprinters aren't the technophiles that I expected...
My records indicate that we had 24 APs at the main venue. They were
configured as before, with 802.11bg running at the lowest power on channels
1, 6, and 11, and 802.11a running at high power across all available
channels, with any duplicated channels spread as far apart as possible.
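That channel plan can be sketched in a few lines. This is a hypothetical illustration, not the actual configuration tool we used; the 5GHz channel list below is just one common set:

```python
# Illustrative sketch of the channel plan described above (not our actual
# tool). 802.11bg radios cycle through the three non-overlapping 2.4GHz
# channels; 802.11a radios cycle through a longer list of 5GHz channels,
# so reuse of any one 5GHz channel is spread as far apart as possible.
from itertools import cycle

BG_CHANNELS = [1, 6, 11]  # the only non-overlapping 2.4GHz channels
A_CHANNELS = [36, 40, 44, 48, 149, 153, 157, 161]  # common 5GHz channels

def plan_channels(num_aps):
    """Return a (bg_channel, a_channel) pair for each AP in placement order."""
    bg, a = cycle(BG_CHANNELS), cycle(A_CHANNELS)
    return [(next(bg), next(a)) for _ in range(num_aps)]

for ap_num, (bg_ch, a_ch) in enumerate(plan_channels(24), 1):
    print(f"AP {ap_num:2}: 802.11bg channel {bg_ch:2}, 802.11a channel {a_ch}")
```

With 24 APs and eight 5GHz channels, each 802.11a channel repeats only every eighth AP versus every third AP on 2.4GHz, which is part of why the a radios could afford to run at higher power.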
At the sprints I had set up basically one AP per room. As I got
reports of networking problems, I went around and added additional APs to
cover the problem areas. By this time we had switches from the main hotel
over at the sprints hotel, so I was able to give out wired connections
also. Each room had only one jack, so putting a second AP in place meant
putting in a switch and connecting the two APs to it, which made some
wired ports available as well. We ended up with 10 APs deployed for the
sprints.
Networking problems at the sprints had this workflow:

1. Someone came to me saying they were getting disconnected from the
   network.
2. I would add another AP, putting it right next to their laptop.
3. I would make a joke about it.
4. Their networking problem was solved.
We didn't use any 802.11n equipment this year, largely because of
budget. The APs we have are great for our needs, being dual radio, having
control over output power, and working with my existing software for
pulling statistics out of the APs. And on top of this they're "cheap",
between $100 and $200 each.
I may get a couple of 802.11n APs to try out for 2010, but mostly I
expect that we will be using the same gear as 2009.
The Netbook Problem
Netbooks just basically did not work at the conference on wireless.
And we just didn't have the gear to do wired ports. I had both a ThinkPad
T61 and an Eee 901 with me. I tested the network with the Eee, and
everything worked fine. But once significant numbers of users started
showing up, my connection would drop around 30 seconds after it got
established.
My theory is that the wireless cards that are going into $200 to $300
computers are just not that good, and they can't cope with the high levels
of wireless traffic at a busy conference.
As I said, I had an Eee 901 with me and just could not get a reliable
connection while I was there. I tried a number of different settings,
including locking on to a single AP, adjusting the rate, and modifying the
RTS threshold; nothing really helped. Even going into the speakers' ready
room, which was somewhat isolated from the rest of the wireless, just
didn't give me a reliable connection.
Discussing this with others I heard stories of people replacing the
network cards in their Netbooks with nice Intel cards and getting much
improved wireless, but also getting worse battery life.
In the end I just gave up and decided that next year we'd get some
wired ports available. I don't think there's anything we can do about the
netbooks themselves.
So, if you plan to take a netbook to a big conference, you may want to
consider going armed with a USB wireless dongle.
The Venue Problem: Hyatt Regency
The Hyatt Regency had been telling us that they would just give us
Ethernet handoffs in each of the wiring closets that we needed to make
connections in, with those being on a VLAN dedicated to our use.
The reality when we showed up was that they refused to let us use
their existing infrastructure, and instead required that we use their "dark
fiber" infrastructure for connecting between the closets.
The good news is that I had some experience with doing fiber because
of some recent changes to our tummy.com hosting service, so I was able to
fairly quickly and inexpensively light the fiber.
It was more challenging than it should have been because their fiber
infrastructure was poorly labeled, and the guy who really knew the fiber
infrastructure was not available. One of the ports was wired the reverse
of what it should have been, so after spending a pile of time trying to
figure out why this link wasn't coming up, I had to cut apart our fiber
connector so I could swap the two fibers. These were the SC ends which are
designed so that they only work one way. Impossible to connect reversed;
unfortunately, that is exactly what I needed to do in this case.
The labeling problem also resulted in their staff not being able to
find one of the fiber runs we needed, which resulted in their registration
system getting knocked off the network, apparently for a significant amount
of time. Worse, this was apparently not the first time this had happened,
and even so they hadn't corrected the labeling.
The Venue Problem: Crowne Plaza in Rosemont
On the other hand, the Crowne Plaza went very smoothly. Their network
was exactly as promised, and was labeled correctly. The only problem we
had with bringing that up was fairly weird...
The run from the uplink to the main patch panel was flaky. We could
connect directly into it at the radio end, but if we connected the cable
running to the patch panel we would get link but no traffic.
If I put a different switch in between the cable to the patch panel,
and the cable to the radio gear, it started working just fine. That took a
while to figure out.
The Crowne Plaza was a joy to work with after the problems we had run
into at the Hyatt Regency.
The Leases Problem
This was the other significant problem: we just ran out of DHCP leases
in the public space. I had already allocated some private IPs, so when
this report came in I just had to bring up another interface on the
firewalls, add a NAT rule, and modify the DHCP configuration. I had hoped
to bring this up before we ran out of public IPs, but I had just been
occupied by other things.
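The fix amounted to a few lines of configuration. As a rough sketch (the interface names, subnets, and addresses here are made up, and the exact commands depend on the firewall software in use):

```
# Bring up a new interface for the private range (illustrative values,
# not the actual PyCon configuration):
ip addr add 10.70.0.1/22 dev eth1

# NAT the private range out through the public uplink:
iptables -t nat -A POSTROUTING -s 10.70.0.0/22 -o eth0 -j MASQUERADE

# And a corresponding subnet stanza in dhcpd.conf:
subnet 10.70.0.0 netmask 255.255.252.0 {
    range 10.70.0.10 10.70.3.250;
    option routers 10.70.0.1;
    option domain-name-servers 10.70.0.1;
}
```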
An easy fix, but I was disappointed that I hadn't prevented it before it
happened.
The above graph shows the number of leases. Again, the early part of the
graph is blank because I didn't get the collection set up until later in
the conference.
About Public IPs
We had, mostly at previous sprints, had a few requests for public IPs
for demo servers or file shares for groups of developers, etc. I figured
that plus the elimination of any NAT traversal problems for people running
VoIP or other programs might make things go more smoothly.
I don't recall anyone mentioning that the public IPs were a benefit,
nor do I recall more than one or two people at previous PyCons asking for
public IPs or saying there were problems with the NATed IPs.
I'm not sure this was worth the effort, small as it was, because it
led to running out of leases as explained in the above section. When
we've used private IPs, we've always had plenty of them available, so
leases were just never an issue.
It reminds me of a previous year when someone came up to me and
suggested that we drop the lease time (I think I had it set to 24 hours
or more) so that IPs would be re-used more quickly and we wouldn't run
out. "Well, we have 4,000 IPs available and are only using 1,000 of
them," I replied.
The PoE Problem
Ok, this wasn't really a problem... But only about half of our APs
supported Power Over Ethernet. I had gotten several switches that would do
PoE, because in previous years we had some problems with APs getting
disconnected from power, or worse, APs getting disconnected from Ethernet
but still being connected to power and broadcasting their ESSID.
The downside to this was that I had several reports of APs that had
"been disconnected from power", because they only had an Ethernet (PoE)
connection plugged in. Part of this was because you couldn't really see
the AP's LEDs unless you bent down and "looked it in the eye". Despite
the reports, we didn't
have any APs that were disconnected from power this year.
I think the biggest thing I would do in the future is to get wired
ports available for people to use. I have a line item in the budget for
next year to get switches for end user connections. I'd love to get some
that are managed to the level of reporting the number of active links, but
I think we probably want to spend the money on ports rather than
management. But it would be nice to know how many people are using the
wired ports. Maybe I'll use a different network for the wired ports, and
just count based on the number of leases we have in that network...
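Counting leases in that network could be as simple as scanning the DHCP server's lease file. Here's a hypothetical sketch assuming ISC dhcpd; the subnet and file path are made up for illustration:

```python
# Hypothetical sketch: count distinct active leases inside the wired-ports
# subnet by scanning ISC dhcpd's leases file. The subnet and path below
# are illustrative, not the actual PyCon values.
import ipaddress
import re

def count_active_leases(leases_text, subnet):
    """Count distinct IPs in `subnet` whose most recent lease is active."""
    net = ipaddress.ip_network(subnet)
    state = {}  # ip -> most recently seen binding state
    # dhcpd appends blocks like: lease 10.70.4.10 { ... binding state active; ... }
    for ip, body in re.findall(r"lease ([0-9.]+) \{(.*?)\}", leases_text, re.S):
        m = re.search(r"binding state (\w+);", body)
        if m and ipaddress.ip_address(ip) in net:
            state[ip] = m.group(1)  # later blocks override earlier ones
    return sum(1 for s in state.values() if s == "active")

# Example usage (path is illustrative):
# with open("/var/lib/dhcp/dhcpd.leases") as f:
#     print(count_active_leases(f.read(), "10.70.4.0/24"))
```

Since dhcpd appends updated lease blocks to the end of the file, keeping only the last state seen per IP gives the current picture without needing the server's cooperation.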
The other thing I think I'll probably adjust is not using public
IPs. It was an interesting experiment, but ultimately I don't think it was
worth it.
Using many small independent APs continues to work well for our needs.
The network this year was well received, though having wired ports for
attendees is necessary.
tummy.com has smart people who can bring a diverse set of knowledge to
augment your Linux system administration and managed hosting needs. See
the menu on the upper left of this page for more information about our
services.