---
layout: default
title: LVM cheat sheet
date: 2023-09-15 22:23 +0200
---
Example commands for common LVM-related tasks.
h2. RAID
h3(#convert-lv-to-raid1). Convert existing LV to RAID1
Extend the VG containing the LV with an additional PV before conversion.
{% highlight bash %}
# vgextend vgdev /dev/nvme0n1p2
# lvconvert --type raid1 --mirrors 1 vgdev/backup /dev/nvme0n1p2
{% endhighlight %}
List currently used devices and sync status:
{% highlight bash %}
# lvs -a -o +devices
{% endhighlight %}
h3. Fix RAID inconsistency due to hardware errors
Run scrubbing - a full scan of the RAID LV, which verifies RAID metadata, LV data
and parity blocks:
{% highlight bash %}
# lvchange --syncaction check /dev/vgdev/backup
{% endhighlight %}
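Scrubbing runs in the background. Its progress and the resulting mismatch count can
be watched with extra @lvs@ reporting fields - a minimal sketch, assuming the field
names from @lvmraid(7)@ and the @vgdev/backup@ LV used above:
{% highlight bash %}
# lvs -o+raid_sync_action,raid_mismatch_count vgdev/backup
{% endhighlight %}
@raid_sync_action@ reports the current action (e.g. @check@ or @idle@) and
@raid_mismatch_count@ the number of inconsistencies found so far.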
If problems are found, this is denoted by @m@ in the 9th bit of the attributes
displayed by @lvs@:
{% highlight bash %}
# lvs
  LV     VG    Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  backup vgdev rwi-aor-m- 4.00g                                      100.00
{% endhighlight %}
In such a case, repair is required. As of LVM tools 2.03.21(2) (2023-04-21),
@lvchange --syncaction repair@ is unable to deduce which copy of the data is
correct. The only safe choice is to rebuild the PV on the failed device:
{% highlight bash %}
# lvchange --rebuild /dev/sdb1 /dev/vgdev/backup
{% endhighlight %}
h3. Rebuild RAID1 LV with integrity layer
As of LVM tools 2.03.21(2) (2023-04-21) @man lvmraid@ says:
bq. Integrity limitations
<br><br>
[...] The following are not yet permitted on RAID LVs with integrity: lvreduce,
pvmove, snapshots, splitmirror, raid syncaction commands, raid rebuild.
That means commands like @lvchange --syncaction repair@ or @lvchange --rebuild@
*should not be run* on a RAID LV with integrity. They can give misleading results,
e.g. signal a mismatch when there is none.
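To confirm whether an LV currently carries the integrity layer before running such
commands, the segment types of its hidden sub-LVs can be listed - a minimal sketch,
assuming the @vgdev@ name used above (integrity sub-LVs show up with segment type
@integrity@):
{% highlight bash %}
# lvs -a -o name,segtype,devices vgdev
{% endhighlight %}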
If a RAID1 LV with integrity requires a refresh (partial synchronization) due to
e.g. device write errors, but both PVs are otherwise available, take the
following steps to fix it:
* remove integrity layer:
{% highlight bash %}
# lvconvert --raidintegrity n vgdev/backup
{% endhighlight %}
* if it is known which PV failed, rebuild it:
{% highlight bash %}
# lvchange --rebuild /dev/sda vgdev/backup
{% endhighlight %}
and re-add the integrity layer to finish (see the sketch after this list). Otherwise split the RAID1 into single LVs.
@lvchange --syncaction repair@ could theoretically refresh the RAID1. But due to
the lack of an integrity layer, it cannot tell which drive contains uncorrupted
data, so the process must be performed manually:
{% highlight bash %}
# lvconvert --splitmirrors 1 --name backup_split vgdev/backup
{% endhighlight %}
* check integrity of both RAID1 parts:
** whether they differ at all (this may be time-consuming; it may be more
appropriate to proceed directly to the file content comparison instead):
{% highlight bash %}
# sha512sum /dev/vgdev/backup /dev/vgdev/backup_split
{% endhighlight %}
** if LVs differ, check filesystems:
{% highlight bash %}
# fsck.ext4 -n -f /dev/vgdev/backup
# fsck.ext4 -n -f /dev/vgdev/backup_split
{% endhighlight %}
** once the filesystems are fixed and can be mounted, compare file content to
determine which mirror is valid (if @rsync@ lists no files to transfer, the mirrors do not differ):
{% highlight bash %}
# mount /dev/vgdev/backup /mnt/mirror1
# mount /dev/vgdev/backup_split /mnt/mirror2
# rsync -n -c -a --delete --progress /mnt/mirror1/ /mnt/mirror2/
{% endhighlight %}
* finally, remove the invalid mirror, recreate the RAID1, re-add the integrity
layer and discard unused space if necessary, as sketched below
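A minimal sketch of those final steps, assuming the comparison showed @backup@ to
be the valid copy, @backup_split@ the invalid one, and @/dev/nvme0n1p2@ as the PV
for the re-created mirror (adjust the names to the actual outcome):
{% highlight bash %}
# umount /mnt/mirror1 /mnt/mirror2
# lvremove vgdev/backup_split
# lvconvert --type raid1 --mirrors 1 vgdev/backup /dev/nvme0n1p2
# lvconvert --raidintegrity y vgdev/backup
# mount /dev/vgdev/backup /mnt/mirror1
# fstrim -v /mnt/mirror1
{% endhighlight %}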
h2. SSD-related maintenance
h3. Discard unused space
* OWC does not support TRIM