We've been upgrading our mirrors.tummy.com server to add more storage space, and the new system was accidentally set up as a Linux software RAID-10 array rather than RAID-5. Since most of our use is reads and we want the capacity, RAID-5 is preferable.

So I decided to try “growing” the array from RAID-10 to RAID-5. Read on for more details about this operation.

First of all, as always, make sure you have a backup of all related data. In our case, we hadn't yet decommissioned the previous server, and this server is just serving copies of data from other servers, so data loss wasn't a big concern.
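
If your data does matter, it's worth at least capturing the current array layout and superblock metadata before touching anything. Here's a minimal sketch using our device names (yours will likely differ), saved somewhere that isn't on the array you're about to reshape:

# Record the array definition, current detail, and per-device superblocks
mdadm --detail --scan > /root/md1-scan.txt
mdadm --detail /dev/md1 > /root/md1-detail.txt
mdadm --examine /dev/sd[abcd]2 > /root/md1-examine.txt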

However, at one point I was pretty sure that my array was totally unusable. Software RAID ended up doing the right thing in the end, but I would have been really worried if this had been precious data…

First, a word about the configuration: The system is running CentOS 6 with all the latest updates applied. There are four 2TB drives (we considered 3TB drives, but the BIOS doesn't support booting from them, and the 2TB drives still give us double the capacity of the old server).

This operation isn't very straightforward… You can't go directly from RAID-10 to RAID-5, but you can go from RAID-10 to RAID-0, then from RAID-0 to RAID-5.

First, I changed RAID-10 to RAID-0 with the following command:

mdadm --grow /dev/md1 --level=0 --raid-devices=2

This operation was instantaneous and worked very well, with one exception: when I first ran it, mdadm segfaulted. The version of mdadm on the system was 3.2.2-9, and I saw that version 3.2.3 was available upstream, so I downloaded and built that from source and used it for the command.
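
For reference, building mdadm from a release tarball is just a fetch and a make; something like the following, assuming the tarball is still available under the kernel.org raid utilities directory (adjust the URL and version as needed, and have gcc and make installed):

# Download, unpack, and build mdadm 3.2.3
wget https://www.kernel.org/pub/linux/utils/raid/mdadm/mdadm-3.2.3.tar.bz2
tar xjf mdadm-3.2.3.tar.bz2
cd mdadm-3.2.3
make
# Run the freshly built binary directly instead of installing over the packaged one
./mdadm --grow /dev/md1 --level=0 --raid-devices=2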

Looking at /proc/mdstat, I found that sdb2 and sdc2 remained in the array, while sda2 and sdd2 had been removed.
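
If you want a friendlier view than raw /proc/mdstat, mdadm --detail will also tell you which devices are active, removed, or spare; for example:

# Quick summary of the array and its member devices
cat /proc/mdstat
# More detailed per-device state
mdadm --detail /dev/md1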

NOTE: According to the “mdadm” man page, the minimum kernel version supported for going from RAID-0 to RAID-5 is 2.6.35, and CentOS 6 ships 2.6.32.

I found the kernel version requirement while reading through the man page and building up the command I thought I needed to run. Searching for examples of this on the Internet didn't turn up much, which is part of the reason for this post.
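
If you want to check where you stand before attempting this yourself, it's just a matter of:

# Compare the running kernel against the 2.6.35 minimum from the man page
uname -r
# And check which mdadm you have installed
mdadm --version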

However, I decided to try the command I had come up with, to see if it would report any problems. The command was:

mdadm --grow /dev/md1 --level=5 --raid-devices=4 --add /dev/sda2 --add /dev/sdd2

This command immediately hung the system.

I got onto the console and booted into an Ubuntu Lucid rescue environment. I was hoping for a more recent kernel, but it also turned out to be 2.6.32… It took maybe 5 or 10 minutes to get to the rescue option, but when I dropped down into a shell and looked at /proc/mdstat, I found that the RAID-5 array was rebuilding, with three drives and one spare.
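
If you end up babysitting a rebuild like this, re-reading /proc/mdstat is all there is to it; for example:

# Re-display the rebuild progress every 30 seconds until it finishes
watch -n 30 cat /proc/mdstat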

This rebuild took around 5 hours, and once it completed, everything seemed fine. I then rebooted back into the main OS, which worked just fine.

Then I tried adding the 4th drive to the array with the following commands:

mdadm --grow /dev/md1 --bitmap=none
mdadm --grow /dev/md1 --raid-devices=4

You apparently can't have a write-intent bitmap on the array while this reshape is going on, which is why I ran the first command.
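
If you aren't sure whether your array even has a bitmap, /proc/mdstat shows a “bitmap:” line for arrays that do:

# Arrays with a write-intent bitmap show a "bitmap:" line here
grep bitmap /proc/mdstat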

This reshape took around 19 hours; it took significantly longer because it isn't just rebuilding one drive but re-writing the entire array, so there was a lot of reading, seeking, and re-writing.
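
If you want to nudge the reshape along (at the cost of foreground I/O), the md speed limits are the knob to look at. The value below is only an example; tune it for your hardware:

# Current resync/reshape speed limits, in KB/sec per device
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# Raise the minimum so the reshape isn't starved by regular I/O (example value only)
echo 50000 > /proc/sys/dev/raid/speed_limit_min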

Once this completed, I re-added the bitmap:

mdadm --grow /dev/md1 --bitmap=internal
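
A final sanity check that the array is now a clean four-drive RAID-5 with its bitmap back doesn't hurt:

# Confirm the level, device count, and overall state
mdadm --detail /dev/md1
# Confirm the write-intent bitmap is back
grep bitmap /proc/mdstat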

While I wouldn't say it went smoothly (due to the kernel lock-up and the mdadm segfault), it did seem to work in the end.
