
Sometimes I wonder about the performance impact of some operations. This morning I was poking at some PostgreSQL code and found that when I updated a table, my long-running Python code wasn't picking up the change: the connection was still sitting inside an old transaction, seeing a stale view of the data. A little surprising, but easily fixed by calling connection.commit() periodically to close the current transaction and start a fresh one. How expensive is a commit after no changes to the database, though?
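The fix is just to commit between polling passes, so each pass starts a new transaction and sees rows committed by other sessions. A minimal sketch of the pattern (the function name and parameters are illustrative, not from the original code):

```python
import time

def poll_table(conn, query, handle_row, interval=1.0, max_passes=None):
    """Re-run `query` forever (or for max_passes), committing after each
    pass so the next pass starts a fresh transaction and sees rows
    committed by other sessions.  All names here are illustrative."""
    passes = 0
    while max_passes is None or passes < max_passes:
        cur = conn.cursor()
        cur.execute(query)
        for row in cur.fetchall():
            handle_row(row)
        cur.close()
        # Close the (read-only) transaction; the numbers below suggest
        # this is cheap when nothing was changed.
        conn.commit()
        passes += 1
        time.sleep(interval)
```

This works with any DB-API connection, psycopg included.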

Just for reference, this is running on an Athlon 2800+ under-clocked to 1.75GHz (don't ask), using PostgreSQL 7.4.6 and psycopg 1.1.18. Here are some native Python operations, for comparison:

  • i += 1: 1,936,018/sec
  • time.time(): 244,075/sec

Now, the database operations:

  • psycopg.connect()/connection.close(): 181.65/sec
  • psycopg.connect()/connection.cursor()/connection.close(): 181.46/sec
  • connection.cursor()/cursor.close(): 298,214/sec
  • connection.commit(): 538,010/sec
  • connection.rollback(): 534,019/sec
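The post doesn't show its measurement harness, so the loop, iteration count, and DSN below are my assumptions, but the numbers above would come from something roughly like:

```python
import time

def calls_per_sec(fn, n=10_000):
    """Crude rate measurement: time n calls to fn, return calls/second."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return n / (time.perf_counter() - start)

if __name__ == "__main__":
    # psycopg 1.x DB-API usage as in the post; the DSN is a placeholder.
    import psycopg
    conn = psycopg.connect("dbname=test")
    print("cursor()/close():", calls_per_sec(lambda: conn.cursor().close()))
    print("commit():        ", calls_per_sec(conn.commit))
    print("rollback():      ", calls_per_sec(conn.rollback))
    conn.close()
```

The same harness works for the native-Python baselines, e.g. `calls_per_sec(time.time)`.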

So, a commit on a read-only connection to a local database is really fast. Connections are the slowest part, not surprisingly. I'm a little surprised at how cheap cursors are to create; I'd tended to use them fairly sparingly in the past, but given these numbers I'm going to change how I use them.

