diff --git a/_posts/sysadm/2023-06-24-directly-attache-storage-with-redundancy-and-integrity-using-lvm-lvmraid.tl b/_posts/sysadm/2023-06-24-directly-attache-storage-with-redundancy-and-integrity-using-lvm-lvmraid.tl
index 79a7bb4..7a8b1e6 100644
--- a/_posts/sysadm/2023-06-24-directly-attache-storage-with-redundancy-and-integrity-using-lvm-lvmraid.tl
+++ b/_posts/sysadm/2023-06-24-directly-attache-storage-with-redundancy-and-integrity-using-lvm-lvmraid.tl
@@ -12,7 +12,8 @@ Data on disk can be lost due to:
 * hardware error correction insufficiency or complete disk failure - which
   is obvious and can be remedied using multiple copies of same data on
   multiple disks (RAID). RAID for Logical Volumes is provided by @dm-raid@ kernel module
-  and offers functionality similar to @mdadm@ utility.
+  and offers functionality similar to the @mdadm@ utility,
+* silent data corruption - when errors slip through the disk's error detection
   mechanisms and remain undetected. Risk can be considerably reduced by
   using checksums of data stored on additional integrity sub-LVs using
   @dm-integrity@
@@ -21,30 +22,28 @@ Data on disk can be lost due to:
 RAID and integrity can be combined: each RAID sub-LV has its per-sector
 checksums stored on integrity sub-LVs. Whenever checksum mismatch detects
 silent data corruption, restoration from uncorrupted source is possible
 thanks to disk
-redundancy. As of LVM tools 2.03.21 (2023-04-21) detecting which disk holds
-uncorrputed data would be impossible without @dm-integrity@. Even if
+redundancy. As of LVM tools 2.03.21 (2023-04-21), detecting which disk holds
+uncorrupted data would be impossible without @dm-integrity@. Even if
 3-disk-RAID1 or RAID6 is used - which theoretically is enough to find source of
-single sector corruption event - the mechanisms are not implemented. See _man
-lvmraid, "Scrubbing Limitations"_.
+single-sector corruption event - the mechanisms are not implemented (see _man
+lvmraid, "Scrubbing Limitations"_).
 
 h3. Setup
 
-First create RAID1, then extend it with integrity layer.
+First create the RAID1 LV, then extend it with an integrity layer. The
+integrity block size cannot be smaller than the drive's logical sector size
+and should preferably match the file system block size. Even before the sync
+finishes, a filesystem can be created on the LV:
 
 {% highlight bash %}
-lvcreate --type raid1 --mirrors 1 -n backup -L 4t vgbackup /dev/sda /dev/sdb
-{% endhighlight %}
-
-Integrity block size cannot be smaller than drive's logical sector size and
-preferably should match file system block size.
-
-{% highlight bash %}
-$ cat /sys/class/block/sda/queue/physical_block_size
+# lvcreate --type raid1 --mirrors 1 -n backup -L 4t vgbackup /dev/sda /dev/sdb
+# cat /sys/class/block/sda/queue/physical_block_size
 4096
-$ cat /sys/class/block/nvme0n1/queue/logical_block_size
+# cat /sys/class/block/nvme0n1/queue/logical_block_size
 512
-$ lvconvert --raidintegrity y --raidintegrityblocksize 4096 --raidintegritymode bitmap /dev/vgbackup/backup
+# lvconvert --raidintegrity y --raidintegrityblocksize 4096 --raidintegritymode bitmap /dev/vgbackup/backup
+# mkfs.ext4 -b 4096 -O ^has_journal /dev/vgbackup/backup
 {% endhighlight %}
 
 Both operations - creating RAID mirror and adding integrity layer - require
@@ -54,13 +53,25 @@ initial sync, the progress of which can be monitored:
 {% highlight bash %}
 lvs -a -o name,segtype,devices,sync_percent
 {% endhighlight %}
-Even before sync is finished, filesystem can be created on LV:
-
-{% highlight bash %}
-mkfs.ext4 -b 4096 -O ^has_journal /dev/vgbackup/backup
-{% endhighlight %}
-
-and the drives can be detached for later synchronization:
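+
+Once the initial sync completes, a scrub can be run periodically to verify
+checksums against the redundant copies, and mismatch counters can be
+inspected afterwards. A minimal sketch using the LV created above
+(@backup_rimage_0@ follows LVM's standard naming for RAID image sub-LVs;
+see _man lvmraid, "Scrubbing"_ for details):
+
+{% highlight bash %}
+# lvchange --syncaction check vgbackup/backup
+# lvs -o name,raid_sync_action,raid_mismatch_count vgbackup/backup
+# lvs -o name,integritymismatches vgbackup/backup_rimage_0
+{% endhighlight %}
+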
+The drives can be detached at any time and synchronized later:
 
 {% highlight bash %}
 vgchange -an vgbackup
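+
+# Sketch of the later reattachment: re-activating the VG resumes an
+# interrupted sync, which can again be monitored via sync_percent.
+vgchange -ay vgbackup
+lvs -a -o name,sync_percent vgbackup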