Sometimes I wonder about the performance impact of some operations. This morning I've been poking at some PostgreSQL code, and found that when I updated a table my long-running Python code wasn't picking up the change. A little surprising, but easy enough to fix by calling connection.commit() periodically to close the current transaction. How expensive is a commit after no changes to the database, though?
Just for reference, this is running on an Athlon 2800+ under-clocked to 1.75GHz (don't ask), using PostgreSQL 7.4.6 and psycopg 1.1.18. Here are some native Python operators, for comparison:
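(The original timing table doesn't survive in this copy. As a sketch, measurements like these are typically done with the standard-library timeit module; the helper and the operations chosen below are my own illustration, not the author's original harness.)

```python
import timeit

def bench(stmt, setup="pass", number=100000):
    """Average time per operation, in microseconds."""
    total = timeit.timeit(stmt, setup=setup, number=number)
    return total / number * 1e6

if __name__ == "__main__":
    # A few cheap native operations, for a baseline comparison.
    print("pass (no-op):   %.3f us" % bench("pass"))
    print("dict lookup:    %.3f us" % bench("d['x']", "d = {'x': 1}"))
    print("function call:  %.3f us" % bench("f()", "f = lambda: None"))
```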
Now, the database operators:
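(Again, the numbers themselves are missing from this copy. A minimal sketch of how the database-side timings could be taken, assuming a local PostgreSQL server and a placeholder DSN; psycopg 1.x exposes the connect/cursor/commit calls used here, and psycopg2 has the same interface.)

```python
import time

def time_call(fn, repeat=1000):
    """Average wall-clock time per call, in microseconds."""
    start = time.time()
    for _ in range(repeat):
        fn()
    return (time.time() - start) / repeat * 1e6

if __name__ == "__main__":
    # Requires a running local PostgreSQL; "dbname=test" is a placeholder DSN.
    import psycopg
    conn = psycopg.connect("dbname=test")
    print("cursor():  %.1f us" % time_call(conn.cursor))
    print("commit():  %.1f us" % time_call(conn.commit))  # commit with no changes
    print("connect(): %.1f us" % time_call(
        lambda: psycopg.connect("dbname=test").close(), repeat=50))
```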
So, a read-only commit against a local database is really fast. Connections are the slowest part, not surprisingly. I'm a little surprised at how fast cursors are to create. I had tended to use them fairly sparingly in the past, but I'm going to change my usage pattern a bit because of this.