
Partition Resize Tutorial

A software mirror (RAID1) can be resized mostly on the fly, as long as the hardware and kernel support hot-swapping hard drives of different sizes. If not, then it can still be done with minimal downtime (requires two reboots).

The basic idea is to swap the hard drives one at a time, rebuilding the mirror in between. Once the larger drives are live, resize the software mirror to use the newly added extra space. Then, resize the various LVM levels to use the new space on the RAID, and finally, resize the filesystem on the logical volumes.

This document assumes that there are two software RAIDs: /dev/md0 is used by /boot and is made up of /dev/sda1 and /dev/sdb1; /dev/md1 is used by the LVM and is made up of /dev/sda2 and /dev/sdb2.

Replace Drive1

In order to remove a drive from a software RAID, you must first manually fail the drive, then remove it from the array.

# mdadm --fail /dev/md0 /dev/sdb1
# mdadm --fail /dev/md1 /dev/sdb2
# mdadm --remove /dev/md0 /dev/sdb1
# mdadm --remove /dev/md1 /dev/sdb2

Check that /dev/sdb has been removed from both RAIDs.

# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0]
     143267584 blocks [2/1] [U_]

md0 : active raid1 sda1[0]
     104320 blocks [2/1] [U_]
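If you script these swaps, it helps to detect the degraded state programmatically. Below is a minimal sketch that greps for the missing-member marker; the sample text mirrors the output above, and on a live system you would feed /proc/mdstat into the function instead.

```shell
# Return success if the mdstat text on stdin shows a degraded RAID1
# (a missing member shows up as "_" in the status, e.g. "[U_]").
is_degraded() {
  grep -q '\[U_\]\|\[_U\]'
}

# Demo against sample text; on a live system you would run:
#   is_degraded < /proc/mdstat
sample='md1 : active raid1 sda2[0]
      143267584 blocks [2/1] [U_]'
if printf '%s\n' "$sample" | is_degraded; then
  echo "array is degraded"
fi
```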

It is now safe to swap Drive1 (/dev/sdb) with the new, larger, drive.

If the new drive is not recognized by the system, a reboot is required now.

Once the new drive is installed, and recognized by the system, you can now rebuild the mirror. First, create a new partition table on the new drive. Make the first partition identical to the first partition on /dev/sda, and fill the remainder of the disk with the second partition. Make sure to flag the first partition as bootable, and make both partitions of type "Linux raid autodetect" (partition type code 'fd'). Once complete, the partition table should look something like this:

# fdisk -l /dev/sdb

Disk /dev/sdb: 146.8 GB, 146815733760 bytes
255 heads, 63 sectors/track, 17849 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   fd  Linux raid autodetect
/dev/sdb2              14       17849   143267670   fd  Linux raid autodetect
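Rather than keying the partitions in by hand, the same layout can be expressed as an sfdisk dump. The fragment below is a hypothetical reconstruction matching the fdisk listing above (sector counts are the 1K block counts doubled); saved to a file, it could be applied with `sfdisk /dev/sdb < sdb.layout`. Note that simply copying the table verbatim from /dev/sda (`sfdisk -d /dev/sda | sfdisk /dev/sdb`) would not do what you want here, because the second partition must be enlarged to fill the bigger disk.

```
# partition table of /dev/sdb
unit: sectors

/dev/sdb1 : start=       63, size=   208782, Id=fd, bootable
/dev/sdb2 : start=   208845, size=286535340, Id=fd
/dev/sdb3 : start=        0, size=        0, Id= 0
/dev/sdb4 : start=        0, size=        0, Id= 0
```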

Now, you can add these partitions to the mirrors, and they will automatically start rebuilding.

# mdadm --add /dev/md0 /dev/sdb1
# mdadm --add /dev/md1 /dev/sdb2

Allow the mirrors to rebuild. This may take quite some time, depending on how big the mirrors are, and how much I/O activity there is on the machine. You can check the progress by looking at /proc/mdstat. Once the mirrors are completely synced, /proc/mdstat should look something like this:

# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
     143267584 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
     104320 blocks [2/2] [UU]
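While a rebuild is in progress, /proc/mdstat also includes a recovery line with a percentage. Here is a small sketch for pulling that number out; the sample rebuild line is made up for the demo, and on a live machine you would pipe /proc/mdstat through the function.

```shell
# Print the recovery percentage from mdstat-style text on stdin.
recovery_pct() {
  grep -o 'recovery = *[0-9.]*' | grep -o '[0-9.]*$'
}

# Demo with a hypothetical rebuild line; live usage:
#   recovery_pct < /proc/mdstat
sample='      [=>..................]  recovery =  8.1% (11928832/143267584) finish=40.2min speed=54321K/sec'
printf '%s\n' "$sample" | recovery_pct
```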

Finally, install the bootloader on the new disk. THIS IS A VERY IMPORTANT STEP: if you forget it, you can expect a lot more downtime, as you will have to boot into a live CD to fix the bootloader.

# grub
grub> root (hd1,0)
grub> setup (hd1)
grub> quit

Replace Drive0

Replacing Drive0 (/dev/sda) is the same as replacing Drive1 (/dev/sdb).

# mdadm --fail /dev/md0 /dev/sda1
# mdadm --fail /dev/md1 /dev/sda2
# mdadm --remove /dev/md0 /dev/sda1
# mdadm --remove /dev/md1 /dev/sda2

Swap Drive0 with the new, larger, drive. If the system recognizes the new drive, make the partition table look exactly like the new partition table on Drive1 (/dev/sdb), and add the partitions to the RAIDs:

# mdadm --add /dev/md0 /dev/sda1
# mdadm --add /dev/md1 /dev/sda2

And, install the bootloader on Drive0:

# grub
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Then, move on to the next step.

If the new drive was not recognized by the system, you will need to reboot. Because some of the servers are not configured to boot from Drive1 (/dev/sdb), you will need to swap the drives around, so that the newly mirrored /dev/sdb becomes /dev/sda. The RAID partitions are autodetected, so the system should be able to figure out that the drives have moved.

So, now you have both larger drives installed, where /dev/sda has the data, and /dev/sdb is blank.

Create a new partition table on /dev/sdb, ensuring that it is identical to the partition table on /dev/sda.

Now, rebuild the mirror:

# mdadm --add /dev/md0 /dev/sdb1
# mdadm --add /dev/md1 /dev/sdb2

Wait for the mirror to rebuild (check /proc/mdstat for the progress), then install the bootloader on /dev/sdb.

# grub
grub> root (hd1,0)
grub> setup (hd1)
grub> quit

Expand RAID

At this point, you should have both drives replaced with the new, larger drives, and the mirror rebuild process should be complete. What you have now is a RAID1 that is only using the first part of each partition, and it needs to be expanded to use the entire partition.

# mdadm --grow /dev/md1 --size=max

Expand LVM Physical Volume

At this point, the RAID is now the same size as the partition, and the LVM Physical Volume needs to be expanded to use the entire RAID. This requires a recent version of lvm2: the default version on these systems did not work, but the latest version available via up2date did.

# up2date -i lvm2

Then, you can expand the Physical Volume to use the entire RAID with:

# pvresize /dev/md1

Expand LVM Volume Group

The LVM Volume Group should automatically grow when the LVM Physical Volume is expanded. You can check it with vgdisplay. Note that the amount of free space should reflect the size difference between the old drives and the new drives.

Expand LVM Logical Volume(s)

You can expand the LVM Logical Volume(s) to fit your needs using lvresize. Using lvresize with no arguments shows the usage of the command. There are basically three methods that you might use to increase the size of a Logical Volume.

1. Increase the size of the Logical Volume by a certain amount of space (10GB in this example)

# lvresize -L +10G /dev/VolGroup00/LogVol00

2. Increase the size of the Logical Volume to a specific size (10GB in this example)

# lvresize -L 10G /dev/VolGroup00/LogVol00

3. Increase the size of the Logical Volume by a specific number of Physical Extents. This method is useful if you want to grow the Logical Volume to use the remainder of the available space on the LVM. You can use vgdisplay to find out how many extents are available. (300 in this example)

# lvresize -l +300 /dev/VolGroup00/LogVol00
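To sanity-check the extent math, the free-extent count can be read out of the vgdisplay output and converted to megabytes. The sample line and the 32 MB extent size below are assumptions for illustration; read both values from `vgdisplay VolGroup00` on the real system.

```shell
# Parse the free-extent count out of a vgdisplay-style line, then
# convert extents to megabytes (assuming a hypothetical 32 MB PE size).
sample='  Free  PE / Size       300 / 9.38 GB'
free_pe=$(printf '%s\n' "$sample" | awk '/Free +PE \/ Size/ { print $5 }')
pe_size_mb=32
echo "$free_pe free extents = $((free_pe * pe_size_mb)) MB"
```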

Expand Filesystem(s)

Now that all of the containers have been resized, you can finally resize the filesystems. Note that this is done using ext2online, which can grow a mounted ext2 or ext3 filesystem; it can only grow the filesystem. A filesystem can be reduced in size using resize2fs, but the filesystem must be unmounted in order to do that.

# ext2online /dev/VolGroup00/LogVol00

And, you're done.