- Hardware: Raspberry Pi 4 Model B Rev 1.4, 8 GB RAM
- OS: Ubuntu Server 20.04.3 LTS
- Kernel: Linux fs 5.4.0-1045-raspi #49-Ubuntu SMP PREEMPT Wed Sep 29 17:49:16 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
- zfsutils-linux (currently installed): 0.8.3
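To confirm the installed package and kernel-module versions match (a mismatch between userland tools and the loaded module can cause odd behavior), something like this works on Ubuntu:

```shell
# Userland package version as reported by apt
apt-cache policy zfsutils-linux

# Version of the zfs kernel module actually loaded/available
modinfo zfs | grep -i '^version'
```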
I'm setting up ZFS on a fresh install of Ubuntu Server for Raspberry Pi. It's currently just a test setup; there's no production data at risk, so I can completely wipe and reinstall as needed.
I'm setting up an 8 x 6 TB raidz2 pool using SATA drives in two four-bay SATA/USB3 docks.
I have no problem creating the pool using /dev/sda .. /dev/sdh. Once created, I can read and write some test data (disk benchmarking with sysbench, for example).
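For reference, creating the pool as described would look something like this (pool name pool1 is taken from later in the post; this is a sketch, not the exact command used):

```shell
# Create an 8-disk raidz2 pool from the raw sd* device nodes
sudo zpool create pool1 raidz2 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd \
    /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Verify the vdev layout and health
sudo zpool status pool1
```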
The physical drives don't get consistently assigned to /dev/sd* device files across reboots, so I want the pool's disks identified by serial number. To that end, I have a custom udev rule:
/lib/udev/rules.d/10-local.rules
KERNEL=="sd[a-h]", SUBSYSTEM=="block", PROGRAM="get_disk_serial.sh %k", SYMLINK+="disk/by-serial/%c"
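The rule runs get_disk_serial.sh with the kernel device name (%k, e.g. sda) and uses its stdout (%c) as the symlink name. The script itself isn't shown in the post; a plausible minimal implementation, assuming the serial is available either from udev's ID_SERIAL_SHORT environment variable or via lsblk, might be:

```shell
#!/bin/sh
# Hypothetical sketch of get_disk_serial.sh -- the original script is not shown.
# udev invokes this via PROGRAM with %k as $1 (e.g. "sda") and substitutes
# whatever is printed on stdout for %c in SYMLINK+="disk/by-serial/%c".

# Prefer the serial udev has already imported for this device; fall back
# to querying it with lsblk if the variable is unset.
echo "${ID_SERIAL_SHORT:-$(/usr/bin/lsblk -ndo SERIAL "/dev/$1")}"
```

Note that with USB docks, the serial reported over USB can be the bridge's rather than the drive's, which is worth checking when symlinks look wrong.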
At boot /dev/disk/by-serial looks like:
I should be able to export pool1 and then re-import it using the links in /dev/disk/by-serial, but instead I get the following:
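The export/re-import sequence being described would be something like this (assuming the pool name pool1 and the -d option pointed at the custom symlink directory):

```shell
# Export the pool so it can be re-imported under different device paths
sudo zpool export pool1

# Re-import, telling zpool to search only the by-serial symlink directory
sudo zpool import -d /dev/disk/by-serial pool1
```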
Any ideas on how to troubleshoot or solve this? At first glance it looks like a problem with how the zpool command handles the -d option, rather than with how the physical disks are linked in /dev/disk/by-serial, but I could be wrong.