Hard disk partition pregap and grub2

A week or so back, I was at a customer's to upgrade his server to wheezy. This server had originally been installed not as squeeze but as something older, using an LVM-on-mdraid setup.

Booting off such a setup (without a separate /boot partition outside of the RAID/LVM) is certainly possible with several boot loaders, including grub2; in this particular instance, however, grub2 was having issues, producing an error message along the lines of:

/sbin/grub2-setup: warn: Your embedding area is unusually small. core.img won't fit in it..

It took me a while to figure out what that meant, but the Internet came to the rescue: the "embedding area" is the gap between the MBR and the start of the first partition, which is where grub2 embeds its core.img. Compare what fdisk shows for a recently partitioned disk:

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000b3c3b

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   312580095   156289024   83  Linux

That partition starts at sector 2048, leaving a gap of about 1MiB before the first partition. The older default, which this server still used, was to start the first partition at sector 63, for a pregap of just short of 32KiB. A core.img that has to carry the mdraid and LVM modules in order to find /boot simply doesn't fit in that, hence the warning.
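
If you want to verify this on a disk of your own, the start sector is right there in the fdisk output, and the arithmetic is straightforward (the device name is just an example):

fdisk -l /dev/sda | grep '^/dev/sda1'   # note the Start column
# everything between the MBR (sector 0) and that start sector is the embedding area
echo $(( (2048 - 1) * 512 ))   # 1048064 bytes, just short of 1MiB
echo $(( (63 - 1) * 512 ))     # 31744 bytes, i.e. 31KiB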

Unfortunately, the only way to fix this issue without ugly hacks (such as the chainloading suggested in this Fedora forum thread) is to repartition. Luckily, thanks to the magic of LVM, this isn't difficult (if a little involved). The steps:

# move everything off the physical volume on md0
# (this needs enough free extents on the volume group's other physical volumes)
pvmove -v /dev/md0
# remove the now-empty physical volume from the volume group
vgreduce /dev/vg /dev/md0
# stop the array
mdadm -S /dev/md0
(...repartition the hard disks that make up /dev/md0...)
# recreate the array; a concrete example follows below
mdadm -C /dev/md0 (... other options required to recreate /dev/md0 on the desired RAID level...)
# turn the new array into a physical volume again and re-add it to the volume group
pvcreate /dev/md0
vgextend /dev/vg /dev/md0
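
What exactly goes into that mdadm -C invocation depends on your setup; purely as a hypothetical example, recreating md0 as a two-disk RAID1 on /dev/sda1 and /dev/sdb1 (made-up device names) could look like this:

# recreate md0 as a RAID1 of two (hypothetical) member partitions
mdadm -C /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1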

Then rinse and repeat for md1. At that point, installing grub will work as usual.
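
On Debian, "as usual" boils down to something like the following (installing to both disks, so the machine still boots when one of them dies):

grub-install /dev/sda
grub-install /dev/sdb
update-grub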

An alternative (since we were using RAID1) would have been to remove one disk from the RAID array, repartition it, create a new RAID1 array in degraded mode on it, pvmove everything over to that, destroy the original array, and finally add the second disk to the new array. That is probably the better approach when you have only one RAID array (so there is nowhere to pvmove the data to), or when you have several arrays but more data than fits on any single one of them. Neither was the case here (at least not after a resize2fs -M call followed by an appropriate lvresize), and so I thought that not reducing redundancy, even if only temporarily, was the better move.
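
For completeness, here's a rough sketch of that alternative, again assuming md0 is a RAID1 of the hypothetical /dev/sda1 and /dev/sdb1, with /dev/md9 as a temporary name for the new array:

# take the second disk out of the existing array
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
(...repartition /dev/sdb so the first partition starts at sector 2048...)
# create the new array in degraded mode, with one member missing for now
mdadm -C /dev/md9 --level=1 --raid-devices=2 /dev/sdb1 missing
pvcreate /dev/md9
vgextend /dev/vg /dev/md9
# migrate the data off the old array, then destroy it
pvmove -v /dev/md0
vgreduce /dev/vg /dev/md0
mdadm -S /dev/md0
(...repartition /dev/sda likewise...)
# add the remaining disk to the new array and let it resync
mdadm /dev/md9 --add /dev/sda1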