diff --git a/_posts/sysadm/2023-09-15-lvm-cheat-sheet.tl b/_posts/sysadm/2023-09-15-lvm-cheat-sheet.tl
new file mode 100644
index 0000000..54ae27f
--- /dev/null
+++ b/_posts/sysadm/2023-09-15-lvm-cheat-sheet.tl
@@ -0,0 +1,75 @@
+---
+layout: default
+title: LVM cheat sheet
+date: 2023-09-15 22:23 +0200
+---
+Example commands for common LVM-related tasks.
+
+h2. RAID
+
+h3. Convert existing LV to RAID1
+
+Extend the VG containing the LV with an additional PV before the conversion:
+{% highlight bash %}
+# vgextend vgdev /dev/nvme0n1p2
+# lvconvert --type raid1 --mirrors 1 vgdev/backup /dev/nvme0n1p2
+{% endhighlight %}
+
+List currently used devices and sync status:
+{% highlight bash %}
+# lvs -a -o fullname,devices,sync_percent
+{% endhighlight %}
+
+
+h3. Rebuild RAID1 LV with integrity layer
+
+As of LVM tools 2.03.21(2) (2023-04-21), @man lvmraid@ says:
+
+bq.. Integrity limitations
+
+[...] The following are not yet permitted on RAID LVs with integrity: lvreduce,
+pvmove, snapshots, splitmirror, raid syncaction commands, raid rebuild.
+
+p. That means commands like @lvchange --syncaction repair@ or @lvchange --rebuild@
+won't work as expected.
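+
+To check whether an LV currently carries the integrity layer, and how many
+mismatches it has recorded so far, the report fields below can be queried
+(field names as in recent lvm2 releases; older versions may lack them):
+{% highlight bash %}
+# lvs -a -o name,segtype,integritymismatches vgdev
+{% endhighlight %}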
+
+If a RAID1 LV with integrity needs a refresh (partial resynchronization) due
+to e.g. device write errors, but both PVs are otherwise available, take the
+following steps to fix it:
+
+* remove integrity layer:
+{% highlight bash %}
+# lvconvert --raidintegrity n vgdev/backup
+{% endhighlight %}
+
+* if it is known which PV failed, rebuild it:
+{% highlight bash %}
+# lvchange --rebuild /dev/sda vgdev/backup
+{% endhighlight %}
+then re-add the integrity layer to finish. Otherwise split the RAID1 into two
+standalone LVs: @lvchange --syncaction repair@ could in theory refresh the
+RAID1, but without the integrity layer it cannot tell which drive holds the
+uncorrupted data, so the comparison has to be done manually:
+{% highlight bash %}
+# lvconvert --splitmirrors 1 --name backup_split vgdev/backup
+{% endhighlight %}
+
+* check integrity of both RAID1 parts:
+** whether they differ at all:
+{% highlight bash %}
+# sha512sum /dev/vgdev/backup /dev/vgdev/backup_split
+{% endhighlight %}
+** if LVs differ, check filesystems:
+{% highlight bash %}
+# fsck.ext4 -n -f /dev/vgdev/backup
+# fsck.ext4 -n -f /dev/vgdev/backup_split
+{% endhighlight %}
+** once the filesystems are repaired and can be mounted, compare file contents
+(e.g. with a checksum-based rsync dry run) to determine which mirror holds the
+valid data:
+{% highlight bash %}
+# mount /dev/vgdev/backup /mnt/mirror1
+# mount /dev/vgdev/backup_split /mnt/mirror2
+# rsync -rnc --itemize-changes /mnt/mirror1/ /mnt/mirror2/
+{% endhighlight %}
+
+* finally remove the invalid mirror, recreate the RAID1 and re-add the
+integrity layer, as sketched below
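+
+A minimal sketch of that last step, assuming @backup@ turned out to hold the
+valid data and the second mirror leg should again live on @/dev/nvme0n1p2@ as
+in the example above (LV and device names are illustrative, adjust to your
+setup):
+{% highlight bash %}
+# lvremove vgdev/backup_split
+# lvconvert --type raid1 --mirrors 1 vgdev/backup /dev/nvme0n1p2
+# lvconvert --raidintegrity y vgdev/backup
+{% endhighlight %}
+If @backup_split@ turns out to be the good copy instead, remove @backup@ and
+rename the split LV into its place with @lvrename@ before recombining.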