A few months ago at Hacking Society, Alex Williamson mentioned that LVM2 performance was pretty bad. Right around that time I had used LVM2 on a pair of 400GB drives to run badblocks across them, and had noticed the lower than expected performance. I used LVM instead of RAID because I can remember how to create an LVM array, but not a RAID array with the new mdadm tool.
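For the record, a two-drive setup looks roughly like this each way. The device names below are hypothetical, and I'm showing a striped LV as a sketch (the post doesn't say whether the actual volume was striped or linear); both command sequences are destructive, so they're wrapped in functions rather than run directly.

```shell
# Hypothetical devices /dev/sdb and /dev/sdc; DESTRUCTIVE, so wrapped
# in functions instead of being run on source.
make_striped_lv() {
    pvcreate /dev/sdb /dev/sdc                # label both drives as PVs
    vgcreate bigvg /dev/sdb /dev/sdc          # group them into one VG
    lvcreate -i 2 -l 100%FREE -n biglv bigvg  # -i 2 stripes across both PVs
}

# The mdadm equivalent, for comparison:
make_raid0() {
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
}
```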
badblocks uses large sequential reads and writes to verify that the discs are happy, and I also use it just to stress test the drives. Of course, on a pair of 400GB drives, the difference between 20MiB/sec and 45MiB/sec is the difference between 18 days of run-time and 8. I like to run 5 passes, which on this array would read and write around 32TiB of data.
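As a sanity check on that arithmetic (assuming a steady sequential rate the whole way through):

```shell
# Five passes over the pair read and write about 32TiB in total.
total_mib=$((32 * 1024 * 1024))        # 32 TiB, expressed in MiB
slow_days=$((total_mib / 20 / 86400))  # at 20 MiB/sec
fast_days=$((total_mib / 45 / 86400))  # at 45 MiB/sec
echo "$slow_days days vs $fast_days days"
```

Integer division lands on roughly 19 and 8 days, in line with the figures above.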
Recently at NCLUG I mentioned to Stephen Warner that LVM2 performance was pretty bad. Today I decided to take a quick look to see if there were any fixes for it, and found an interesting tool. What prompted the search was copying a bunch of data between some systems that we have to use LVM2 on (they average around 20 partitions per machine, which isn't really practical with disc-based partitions). I was copying 6GiB between machines over gigabit networking, and getting only 10MiB/sec.
The 50% number I was quoting to Stephen was just seat-of-the-pants. Collecting some hard numbers today, 33% looks like the closer figure. While looking for a solution, I found this thread, which mentions the "blockdev" command.
I wasn't able to get substantially better performance, but the command was new to me. It lets you tweak block-device settings from the command line, including the read-ahead, marking devices read-only, reporting the capacity of the device, and flushing the buffers. I can see some regular uses for the capacity and flush options.
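A quick sketch of those options, using a hypothetical device name (the read-only queries work on any block device you can open; the state-changing ones need root):

```shell
DEV=${DEV:-/dev/sda}   # hypothetical device; point this at a real one

if [ -b "$DEV" ]; then
    blockdev --getra "$DEV" || true      # read-ahead, in 512-byte sectors
    blockdev --getsize64 "$DEV" || true  # capacity in bytes
    # These change state, so run them deliberately (all need root):
    # blockdev --setra 4096 "$DEV"   # raise read-ahead to 2 MiB
    # blockdev --setro "$DEV"        # mark the device read-only
    # blockdev --flushbufs "$DEV"    # flush the buffer cache
else
    echo "$DEV is not a block device on this machine"
fi
```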