Moving LVM volumes to a different volume group

by Sander Marechal

I recently ordered a brand new PowerEdge T105 server from Dell because my current home server, an HP ProLiant G3, is much too power hungry for my liking. It consumes about 300 watts round the clock, increasing my power bill by nearly € 50 a month. The T105 consumes about a third of that, usually less. The new server came with an 80 GB hard disk. I partitioned it with LVM, installed Debian Lenny and moved over the bulk of my things from the old server to the new server. I did that manually over the course of two weeks because it was a good time to restructure and upgrade many other things in the process.

When I was done only one thing remained: my media collection, which is stored on a 500 GB RAID1 array on the old server. That RAID1 array is also partitioned using LVM in a single 500 GB volume group. I took the two drives out of the old server, put them in the new server, copied over /etc/mdadm/mdadm.conf from the old server and all was well. Nearly. My media collection only uses a small part of the 500 GB volume group, so I wanted to move the OS volumes from the 80 GB volume group to the 500 GB volume group. That way I could take out the 80 GB disk and save some power. Problem: There is no obvious way to move a logical volume from one volume group to another. Additional problem: I can't run the OS from the 80 GB volume group while I am migrating its volumes. Cue SystemRescueCD.

At first I tried a Debian Etch Live CD and an Ubuntu 8.04 Live CD to access my mdadm RAID and LVM volume groups, but neither worked well. The 2.6.18 kernel in Debian Etch is too old to handle the T105 hardware: it could not get the on-board gigabit ethernet working, so I could not download the packages I needed to get mdadm and LVM working. Ubuntu was just as unsuccessful. While it did see my ethernet card, various kernel modules needed for mdadm and LVM are left out of the Live CD kernel in order to save space. So I went with SystemRescueCD, which comes with both mdadm and LVM out of the box.

The system layout is quite simple. /dev/sda1 and /dev/sdb1 make up a 500 GB mdadm RAID1 volume. This RAID volume contains an LVM volume group called "3ware", named after the 3ware RAID card it was connected to in my old server. It contains a single logical volume called "media". The original 80 GB disk is /dev/sdc1, which contains an LVM volume group called "linuxvg". Inside that volume group are three volumes: "boot", "root" and "swap". Goal: move linuxvg-root and linuxvg-boot to the 3ware volume group. Additional goal: rename 3ware to linuxvg. The latter is mostly for aesthetic reasons, but as a bonus it also means there is no need to fiddle with grub or fstab settings after the move.
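For reference, a pvs listing summarises this starting point along these lines (the sizes shown here are illustrative):

  # pvs
    PV         VG      Fmt  Attr PSize   PFree
    /dev/md0   3ware   lvm2 a-   465.76G 419.76G
    /dev/sdc1  linuxvg lvm2 a-    74.53G   1.03G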

Before booting into SystemRescueCD and moving things around, there are a few things that need to be done first. Start by making a copy of /etc/mdadm/mdadm.conf somewhere outside the machine, because you will need it later; one way to do that is sketched after the grub commands below. Also, because the machine will be booting from the RAID array, I need to install grub on those two disks.

  # grub-install /dev/sda
  # grub-install /dev/sdb
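One way to keep that copy of mdadm.conf within reach of SystemRescueCD is a USB stick. The device name /dev/sdd1 and the mount point /mnt/usb below are assumptions; any medium the rescue CD can read will do:

  # mkdir -p /mnt/usb
  # mount /dev/sdd1 /mnt/usb
  # cp /etc/mdadm/mdadm.conf /mnt/usb/
  # umount /mnt/usb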

Now it's time to boot into SystemRescueCD. I start off by copying /etc/mdadm/mdadm.conf back into place and starting the RAID1 array. The following command scans for all the arrays defined in mdadm.conf and tries to start them.

  # mdadm --assemble --scan
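For reference, the interesting part of mdadm.conf is just a couple of lines like these. The UUID below is made up; the real one comes from running mdadm --detail --scan on the old system:

  DEVICE partitions
  ARRAY /dev/md0 level=raid1 num-devices=2 UUID=3aaa0122:29827cfa:5331ad66:ca767371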

Next I need to make a couple of changes to /etc/lvm/lvm.conf. If I were to scan for LVM volume groups at this point, LVM would find the 3ware group three times: once each in /dev/md0, /dev/sda1 and /dev/sdb1. So I adjust the filter setting in lvm.conf so that it will not scan /dev/sda1 and /dev/sdb1.

  filter = [ "r|/dev/cdrom|", "r|/dev/sd[ab]1|" ]

LVM can now scan the hard drives and find all the volume groups.

  # vgscan
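If the filter is right, vgscan should report each group exactly once, along these lines (the groups still carry their old names at this point):

    Reading all physical volumes.  This may take a while...
    Found volume group "3ware" using metadata type lvm2
    Found volume group "linuxvg" using metadata type lvm2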

I disable the volume groups so that I can rename them. linuxvg becomes linuxold and 3ware becomes the new linuxvg. Then I re-enable the volume groups.

  # vgchange -a n
  # vgrename linuxvg linuxold
  # vgrename 3ware linuxvg
  # vgchange -a y
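A quick vgs run afterwards confirms that the renames took. Again, the sizes shown here are illustrative:

  # vgs
    VG       #PV #LV #SN Attr   VSize   VFree
    linuxold   1   3   0 wz--n-  74.53G   1.03G
    linuxvg    1   1   0 wz--n- 465.76G 419.76G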

Now I can create a new logical volume in the 500 GB volume group for my boot partition and create an ext3 filesystem on it.

  # lvcreate --name boot --size 512M linuxvg
  # mkfs.ext3 /dev/mapper/linuxvg-boot

I create mount points for the original boot partition and the new boot partition, mount both, and then use rsync to copy all the data. Don't use a plain cp for this! Rsync with the -aH options preserves soft links, hard links and file permissions, while a plain cp does not. If you do not want to use rsync you can also use the dd command to transfer the data directly from block device to block device, as sketched after the listing below.

  # mkdir /mnt/src /mnt/dst
  # mount -t ext3 /dev/mapper/linuxold-boot /mnt/src
  # mount -t ext3 /dev/mapper/linuxvg-boot /mnt/dst
  # rsync -aHvh /mnt/src/ /mnt/dst/
  # umount /mnt/src /mnt/dst
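Had I gone the dd route instead, the copy step for the boot volume would have looked something like this. This is only a sketch: it assumes the target LV is at least as large as the source, that neither side is mounted, and bs=4M is just a sensible buffer size:

  # dd if=/dev/mapper/linuxold-boot of=/dev/mapper/linuxvg-boot bs=4M
  # e2fsck -f /dev/mapper/linuxvg-boot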

Rinse and repeat to copy over the root filesystem.

  # lvcreate --name root --size 40G linuxvg
  # mkfs.ext3 /dev/mapper/linuxvg-root
  # mount -t ext3 /dev/mapper/linuxold-root /mnt/src
  # mount -t ext3 /dev/mapper/linuxvg-root /mnt/dst
  # rsync -aHvh /mnt/src/ /mnt/dst/
  # umount /mnt/src /mnt/dst

There's no sense in copying the swap volume. Simply create a new one.

  # lvcreate --name swap --size 1G linuxvg
  # mkswap /dev/mapper/linuxvg-swap
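Because the volume group rename means the mapper device names end up exactly as they were before, /etc/fstab should not need any changes. For a quick sanity check, the entries for this layout would look roughly like this (the mount options shown are illustrative, not copied from my system):

  /dev/mapper/linuxvg-root /     ext3 defaults,errors=remount-ro 0 1
  /dev/mapper/linuxvg-boot /boot ext3 defaults                   0 2
  /dev/mapper/linuxvg-swap none  swap sw                         0 0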

And that's it. I rebooted into Debian Lenny to make sure that everything worked and removed the 80 GB disk from my server. While this wasn't particularly hard, I do hope that the maintainers of LVM create an lvmove command to make this even easier.

Creative Commons Attribution-ShareAlike

Comments

#1 Anonymous Coward

is it difficult to run:

sudo apt-get install mdadm lvm2

?

#2 David M.

Two comments:
First, you can use the Ubuntu live CD to do this. As the first comment indicates, you just need to use apt-get to install mdadm and lvm2. I've done this a number of times, mainly because it's much faster than downloading another ISO and burning another CD.

Secondly I believe you can do this using lvm commands with the following steps.

Run vgmerge to merge your two volume groups into one. Then use pvmove to migrate your LVs off the 80GB drive. Once you have the LVs all on the RAID you can use vgreduce and pvremove to get the 80GB drive out of the volume group.
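Spelled out with the device names from the article, the sequence David describes would look something like this. This is an untested sketch: vgmerge requires both groups to have the same extent size and the group being merged in to be inactive, so it merges in the direction that keeps the linuxvg name:

  # vgchange -a n 3ware         # the group being merged in must be inactive
  # vgmerge linuxvg 3ware       # fold 3ware into linuxvg
  # pvmove /dev/sdc1            # migrate all extents off the 80 GB disk
  # vgreduce linuxvg /dev/sdc1  # drop the now-empty PV from the group
  # pvremove /dev/sdc1          # wipe the LVM label off the disk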

#3 Sander Marechal (http://www.jejik.com)

Thanks for your comments.

With regards to installing mdadm and lvm on Ubuntu: I tried that. It then complained about not finding kernel modules. So I installed the kernel modules as well, ran insmod and tried again, once more without success. lvm could see the volume group on my 80 GB drive but mdadm refused to see or start the RAID1. At that point I gave up and got SystemRescueCD because I knew it had everything out of the box.

@David: Nice trick with the vgmerge and pvmove. I looked at the commands but didn't put them together like you did. Someone on #lvm recommended using rsync or dd instead so I went with that. I'll definitely try it next time such a situation comes up.

#4 furicle (http://furicle.blogspot.com)

I'm pretty sure either the Alternate or Server CD in rescue mode would have worked 'out of the box' where the live cd wouldn't

#5 Sander Marechal (http://www.jejik.com)

Perhaps. I don't know the differences between the LiveCD kernel configuration and those for the Alternate or Server CDs.

#6 Thomas Harold (http://www.tgharold.com/techblog/)

David M's comment from Aug 26th 2008 is pretty much spot on. Especially if you are retiring an old VG or old PVs.

One of the tricks that I did earlier this year was to move PVs around on a system where I was changing from a 160GB 7200 RPM SATA (RAID-1) to a 150GB 10k RPM SATA (RAID-1). I also used the vgmerge, pvmove, vgreduce and pvremove commands.

The other brute-force solution is to unmount the original file system, create an identically sized LV in the target VG, and simply use the dd command to copy the blocks from the old LV to the new LV. But cp or rsync and mounting both at the same time works well. Multiple ways to skin the cat and all that.

(I greatly enjoy LVM's flexibility, it's saved me lots of trouble over the past few years where I needed to change things around on a Linux box.)

#7 Sander Marechal (http://www.jejik.com)

I love LVM as well, especially when combined with grub2's new functionality of booting off an LVM volume. Finally it's no longer necessary to keep a separate /boot partition around.

#8 Jaco Kroon (http://jkroon.blogs.uls.co.za)

Based on vgmerge+pvmove one could do this without even needing a live CD. Both those commands as well as the final vgreduce/pvremove commands can run on a live system, so you can continue working without having to worry about losing any data whilst this all happens. The only downtime would be for the reboot to remove the 80GB drive, which may even be do-able in a powered-up state if you have hot-swap drives :).

#9 Stoat

VGmerge/PVmove is handy but it's of no use if you want to move stuff from one PV to a stripeset of PVs for speed (or to change the number of PVs in a stripeset).

Usual warnings apply. The underlying LUNs for the PVs in my case are 15-drive 24TB RAID6 arrays, so this is effectively RAID60, not RAID0... :-)
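For that stripeset case, the target LV has to be created with the stripe layout up front and the data copied over afterwards. Something like this hypothetical example, where the VG name, LV names, size and stripe count are all made up:

  # lvcreate --name media2 --size 200G --stripes 4 bigvg
  # dd if=/dev/bigvg/media of=/dev/bigvg/media2 bs=4M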

#10 iSO (http://pascal-schwarz.ch)

i think there is an even easier way:

just create a new lv on the target vg with the same size, copy using dd, test, remove lv on source vg

and about rsync/cp: doesn't "cp -arv" work as well?
"--archive same as -dR --preserve=all"

#11 Anonymous Coward

here is a step-by-step example on a similar setup

http://pleasedonttouchthescreen.blogspot.com/2011/10/migrating-logical-volumes-between.html
