
Brett Cannon has called PyCon 2009 the “best PyCon ever”. I totally agree. Though, as usual, I had a hard time making it to many of the talks (preferring instead the hallway track, and on a few occasions getting called in to do networking stuff), the talks are now all up and available for download and watching. I have a bunch of them queued up to check out. Read on if you're interested in more of my impressions of PyCon 2009.

The Network

I will only touch on the (Inter-) networking briefly, as I expect to write a full report on it once the router machines (and their statistics) make it back to Colorado. However, in general the reports on the network were quite positive with one notable exception: netbooks.

I had an Asus Eee 901, and once the bulk of the attendees arrived I basically could not get on the wireless network at all. Sometimes I could associate for around 30 seconds before losing the connection; sometimes it wouldn't associate at all, even when I used manual “iwconfig” commands to lock to a specific AP and ESSID.

I got similar reports from others who had netbooks, including other models of the Eee and several people with Acer Aspire Ones. Paul Hummer said he could connect fine, right up until someone with one of the metal Mac laptops sat down close to him.

My theory is that on a $300-ish laptop, corners have to be cut somewhere, possibly including the wireless radios and antennas. Ralph said he had heard of people replacing the Eee wireless cards with Intel cards so they could run the Hackintosh project, and those people reported much better wireless performance.

So, my Eee was totally useless via wireless, even in the green room, which had its own, fairly lightly used AP. The RF interference was just killing it. My ThinkPad had absolutely no problem connecting and staying connected.

The Summits and Tutorials

I didn't attend any of the tutorials. I was mostly either running around doing the last of the networking setup, or attending the Language Summit. I only made the first half day of the summit on Thursday; I spent the afternoon putting up the networking for the main conference.

The first half of the day at the Language Summit was great, though. The Unladen Swallow project mentioned that the changes they've made so far have produced a 30% speed-up, and that they're targeting a 500% increase. That will be pretty impressive.

I also brought up some packaging issues. The decision was made that the Python version 3 executable will be called “python3”, and it will stay that way even after Python 2 and below are effectively deprecated. Unless Python version 4 is backwards incompatible, the executable will likely continue to be called “python3” even then. So, you should be using “python3” in your #! line for Python version 3 code.
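Concretely, and assuming a standard Unix layout where “env” can locate the interpreter, that means a Python 3 script should start like this:

```python
#!/usr/bin/env python3
# Naming "python3" explicitly in the shebang keeps this script working
# no matter which major version the bare "python" command points at.
import sys

print("running under Python", sys.version_info[0])
```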

The Main Conference

One of the biggest things I recall from the conference itself was speaking with Toshio Kuratomi about packaging, where I came up with the idea that we should build RPMs of the packages in the Python Package Index (PyPI, AKA the Cheeseshop). We discussed some of the issues, including how such a package set would interact with, or interfere with, a distro's own package library like Fedora's. I considered prefixing all these package names with “pypi-”, but in the end decided not to do that initially, for a few reasons including that it was hard to do.

It was, of course, great seeing all my friends and acquaintances again. The Python community is great, and it's just full of interesting people talking about interesting things.

I gave a lightning talk about a database wrapper I wrote for some projects I've recently been doing. It's specifically a psycopg2 wrapper which makes some simple things rather simple, and it's available for download. The primary things it does are to make every query allocate its own cursor, so you don't have to deal with that, and to make all the results accessible via a dictionary (which the old psycopg library did, but psycopg2 doesn't by default).

The cursor is returned via a helper so that you can iterate over the results, and the cursor gets freed when you are done. For my uses, this results in me no longer needing to manage cursors at all.

Its normal workflow is:

from psycopgwrap import Database as db
for row in db.query('SELECT * FROM table'):
    print 'id as dictionary:', row['id'], 'or as an array:', row[0]
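To make the cursor handling concrete, here is a minimal sketch of how a wrapper like this can work. This is not the actual psycopgwrap code; sqlite3 stands in for psycopg2 so the example is self-contained, and `sqlite3.Row` provides the by-name-or-by-index row access:

```python
import sqlite3

class Database(object):
    """Toy query-owns-its-cursor wrapper (hypothetical; not psycopgwrap)."""

    def __init__(self, conn):
        conn.row_factory = sqlite3.Row  # rows addressable by name or index
        self.conn = conn

    def query(self, sql, *args):
        # Every query allocates its own cursor...
        cursor = self.conn.cursor()
        try:
            cursor.execute(sql, args)
            for row in cursor:
                yield row
        finally:
            # ...and the cursor is freed once iteration finishes.
            cursor.close()

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER, name TEXT)')
conn.execute("INSERT INTO t VALUES (1, 'sean')")
db = Database(conn)
rows = list(db.query('SELECT * FROM t'))
print(rows[0]['name'], rows[0][0])  # → sean 1
```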

Finally, its query arguments are passed as additional function arguments, rather than as a single tuple. I obsessed over this change from the dbapi, but in the end I think it's the right decision: when passing a tuple, I would too often type a “%” rather than a comma between the query string and the arguments, which causes the values to be silently string-formatted into the query instead of being properly passed and quoted.
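That footgun is easy to demonstrate with plain string formatting (the table and column names here are made up):

```python
# DB-API style: the driver receives the value separately and quotes it:
#   cursor.execute("SELECT * FROM users WHERE name = %s", (name,))
# The one-character typo, "%" instead of ",", is still valid Python,
# but the value is interpolated before the driver ever sees it:
name = "o'brien"
typo_sql = "SELECT * FROM users WHERE name = %s" % (name,)
print(typo_sql)  # the quote in o'brien lands in raw, unquoted SQL
```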

Afterwards, Martin Blais told me about his “antiorm” project, which includes “dbapiext” with some neat features for passing WHERE clauses as lists and other formatting benefits. It parses the query and handles “%W” and “%S” specially, so you can do things like:

data = {'first_name': 'sean', 'last_name': 'reifschneider'}
execute_f(cursor, 'UPDATE table SET %S WHERE id = %(id)s', data, id=1)

Which gets turned into the SQL: “UPDATE table SET first_name = 'sean', last_name = 'reifschneider' WHERE id = 1;”. Setting values via the dictionary is a pretty awesome idea.
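I haven't checked dbapiext's actual implementation, but the “%S” expansion can be sketched as a simple rewrite of the dict into named placeholders, leaving the actual quoting to the driver as usual (`expand_set_clause` is a hypothetical name, not dbapiext's API):

```python
def expand_set_clause(data):
    # Hypothetical stand-in for dbapiext's %S handling: turn a dict into
    # "col = %(col)s" pairs; the driver still does the value quoting.
    return ', '.join('%s = %%(%s)s' % (k, k) for k in sorted(data))

data = {'first_name': 'sean', 'last_name': 'reifschneider'}
sql = 'UPDATE table SET ' + expand_set_clause(data) + ' WHERE id = %(id)s'
print(sql)
```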

The Sprints

The networking for the sprints went relatively well. We had one issue in the Kennedy room, which I resolved by adding an AP, and in the Python Core room we had some packet loss at the far end, which another AP helped a bit (though it was on the other side of a concrete block wall). That was mostly because there were just a lot of people in that room the first day; after that it was no problem.

I spent the first couple of days setting up virtual machines running CentOS 4 and 5 and Fedora 10, in both 32- and 64-bit modes, and running builds of all the packages I could get from the Cheeseshop.

I ended up getting RPMs built for around 1,700 packages on CentOS 4, 2,900 for CentOS 5, and over 3,000 for Fedora 10. I need to set up a system to do the full builds automatically and in a safe environment, and do something to get dependencies resolved for building packages. However, as a first pass I think this is a good start.

The packages are available for download, though they don't currently work as a yum repository: something in the package information causes yum to choke on the repository with a traceback.

I was just looking at the Apache stats today and it looks like only a few people have tried packages from this repository.

I have also made the build output for each package available, so that package maintainers can help fix build issues.

I also worked on a Twisted bug about building RPM packages. I totally did the wrong thing when checking in the changes, because I wasn't familiar with their new development process. Now that I am familiar with it, I think their process is quite excellent, and I'd absolutely plan to use it for projects I work on in the future: everything has to have a bug report and be reviewed before it gets pushed into trunk.

Martin v. Löwis and Brett Cannon had mentioned earlier that they wanted to set up something in the Python tracker to automatically close issues that had been pending for two weeks. We have a chronic problem with users submitting issues, developers looking at them and asking for additional feedback, and that feedback never arriving. So I decided this was a good leverage point to work on.

With the help of some coffee, I was able to get this figured out and implemented, though it is not yet in place on the tracker because I want to do some final testing and validation on the live data before we set it loose.
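The selection logic itself is simple; here is a schematic sketch of it (not the actual Roundup tracker code, and the field names are made up):

```python
from datetime import datetime, timedelta

def issues_to_close(issues, now, limit_days=14):
    # Close candidates: issues waiting on submitter feedback ("pending")
    # with no activity for limit_days or more.
    cutoff = now - timedelta(days=limit_days)
    return [i['id'] for i in issues
            if i['status'] == 'pending' and i['last_activity'] <= cutoff]

now = datetime(2009, 4, 1)
issues = [
    {'id': 101, 'status': 'pending', 'last_activity': datetime(2009, 3, 1)},
    {'id': 102, 'status': 'open',    'last_activity': datetime(2009, 2, 1)},
    {'id': 103, 'status': 'pending', 'last_activity': datetime(2009, 3, 25)},
]
print(issues_to_close(issues, now))  # → [101]
```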

Finally, I incorporated a bunch of patches I've received over the last four months for the Python memcache module I maintain, and released a new version of it.

I didn't get enough time to actually close any Python issues, which makes me sad. But I did get quite a lot of fairly useful things done at the sprints so over all I'm quite happy.

The sprints are just amazing. I looked around and was struck that I was sitting there with 40 people, all of whom were working their butts off for something they don't, in general, get paid for. Working hard to follow their bliss, as the saying goes. And the same thing was happening in 10 other rooms at the same time.

If this were a company, with these people doing this 50 weeks a year, it's amazing to imagine where Python would go. I realize a lot of these people are able to put in work time or after-work time on these projects, but I can't help thinking there's nearly an order of magnitude more focus on them during the sprints. Probably not sustainable, but it's still interesting to think about.


So, yeah, best PyCon ever. No doubt.

