WEBlog -- Wouter's Eclectic Blog: comments on en/computer/debian/no_more_lvm
https://grep.be/blog//en/computer/debian/no_more_lvm/ (feed generated by ikiwiki, 2014-03-01T13:42:06Z)

Metadata size (jd, julien@danjou.info, 2009-04-18T11:30:50Z)
https://grep.be/blog//en/computer/debian/no_more_lvm/comment_1538/
<p>I think you're wrong about the LVM metadata size. AFAIK it's not even relative; it's something like a couple of extents that are reserved.</p>
<p>Aren't you confusing it with the root-reserved space (tune2fs -m) on ext filesystems?</p>
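For reference, the ext reserved-space setting mentioned here can be inspected and changed with tune2fs. A hypothetical sketch; /dev/sda1 is a placeholder for an ext2/3/4 partition:

```shell
# Show the current number of blocks reserved for root
# (the default is 5% of the filesystem):
tune2fs -l /dev/sda1 | grep -i 'reserved block count'

# Lower the reservation to 1%:
tune2fs -m 1 /dev/sda1
```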
Re: Metadata size (wouter, 2009-04-18T11:59:20Z)
https://grep.be/blog//en/computer/debian/no_more_lvm/comment_1539/
<p>I might be wrong. However, in any case, LVM needs to store <em>something</em> whose size is relative to the number of extents; at the very least, it will need to store information such as 'this extent is assigned to this LV' somewhere. That does take up quite some disk space, and I've seen it somewhere between 5 and 10% of the total disk space.</p>
<p>But, hey, I didn't look at the LVM source, and it's not the main point of my post, so...</p>
Using software RAID to migrate back? (Joachim Breitner, mail@joachim-breitner.de, 2009-04-18T14:28:27Z)
https://grep.be/blog//en/computer/debian/no_more_lvm/comment_1540/
If you want to migrate your data back from the NBD to your local (non-LVM) disk, couldn’t you create a software RAID-0 with the NBD as the first disk, then add the local disk, wait for the RAID to be in sync, and remove the NBD? Then the downtime would just be the time to unmount the NBD device and to mount the RAID device containing the NBD device.
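The proposed sequence could be sketched with mdadm roughly as follows. This is a hypothetical sketch with placeholder device names, using RAID1 rather than RAID-0 (as the reply below points out, only mirroring fits here, and the first step rewrites the device rather than reusing the filesystem already on it):

```shell
# 1. Create a degraded RAID1 with the NBD as its only member.
#    NOTE: this reformats /dev/nbd0; it cannot be done in place
#    on a device that already holds the live filesystem.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nbd0 missing

# 2. Add the local disk and wait for the mirror to sync.
mdadm /dev/md0 --add /dev/sda3
cat /proc/mdstat   # watch resync progress

# 3. Once in sync, drop the NBD member.
mdadm /dev/md0 --fail /dev/nbd0
mdadm /dev/md0 --remove /dev/nbd0
```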
Re: Metadata size (Edward Allcutt, edward@allcutt.me.uk, 2009-04-18T14:54:21Z)
https://grep.be/blog//en/computer/debian/no_more_lvm/comment_1541/
<p>All the on-disk metadata[0][1] is stored in the metadata header of each PV. This header is fixed in size when the PV is created and is usually much smaller than a single extent[2][3]. Each PV in a VG has a copy of the metadata for the entire VG.</p>
<p>[0] Excluding the backups made by userspace in /etc/lvm.
[1] The in-memory metadata may be considerable, especially for snapshots.
[2] Typical defaults are 192512 bytes of metadata (following a 4096-byte fixed-size header), with the rest of the volume available for data. The typical default extent size is 4M, but this is set by the VG, not the PV.
[3] The metadata size can be set with the --metadatasize option to pvcreate; doing so is advisable on a striped RAID volume, or if you expect to create many LVs or snapshots.</p>
LVM space overhead (Simon McVittie, 2009-04-18T16:21:21Z)
https://grep.be/blog//en/computer/debian/no_more_lvm/comment_1542/
<p>On my laptop, the partition containing my only PV has 303,998,123,008 bytes (a little over 283GB). The LVM VG contains 72478 extents of 4MB each, for a total of 303,994,765,312 usable bytes; I make that just over 3MB of overhead, or around 0.001% (so I suspect it's O(1) rather than O(size)).</p>
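The arithmetic in this comment can be re-checked directly (all figures are taken from the comment itself):

```python
# Figures from the comment above.
pv_bytes = 303_998_123_008   # size of the partition holding the single PV
extent = 4 * 1024**2         # 4 MiB extents
extents = 72_478             # extents reported in the VG

usable = extents * extent    # bytes addressable as LV data
overhead = pv_bytes - usable

print(usable)                     # 303994765312
print(overhead)                   # 3357696 bytes, a bit over 3 MiB
print(100 * overhead / pv_bytes)  # roughly 0.0011 percent
```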
<p>Inside that VG, lvdisplay reports that my main filesystem occupies exactly 200GB (51200 extents), and blockdev(8) reports that the block device exposed for the filesystem is indeed exactly that size.</p>
<p>So I think you're wrong about the space cost of LVM...</p>
<p>-- smcv.pseudorandom.co.uk</p>
Re: Using software RAID to migrate back? (wouter, 2009-04-19T10:41:12Z)
https://grep.be/blog//en/computer/debian/no_more_lvm/comment_1543/
<p>I doubt it.</p>
<p>First, I presume you meant to say RAID1 rather than RAID0. The latter does just striping, no mirroring, so you probably won't be able to use that.</p>
<p>Second, RAID1 also needs some metadata, so you need to 'reformat' a partition as an MD device, on which you can then store your actual data. It's not possible to take a 'regular' partition and turn it into a RAID1 member.</p>
<p>At least not without tricks like using LVM; but since this was about <em>getting rid</em> of LVM, that's hardly helpful <img alt=":-)" src="https://grep.be/blog//smileys/smile.png" /></p>