This repository was archived by the owner on Feb 12, 2021. It is now read-only.
Activity
marineam commented on Jun 4, 2014
No, I would rather not do raid in the current installer script. If we want to start working on a full featured Linux installer we should do that in a language that isn't shell.
robszumski commented on Jun 4, 2014
What about software raid outside of the installation script?
jsierles commented on Jun 4, 2014
At least it would be useful to document what needs to be done, if anything, besides building the raid and running the installer against it. I tried this but couldn't get my machine to boot.
marineam commented on Jun 4, 2014
@robszumski software raid is what we are talking about.
@jsierles sounds like we have bugs to fix because my intent is to make that work.
nekinie commented on Aug 9, 2014
Any news on the software raid documentation?
Would be rather useful
ghost commented on Aug 10, 2014
Also would see this as very useful functionality.
pierreozoux commented on Aug 13, 2014
Yes, it would be great :)
@philips I saw this commit. But yeah... can anybody tell me where to start if I want software RAID?
emerge mdadm
??
marineam commented on Aug 13, 2014
@pierreozoux mdadm is included in the base images but we haven't played with it at all. Setting up non-root raid volumes should work just the same as on any other distro: same ol' mdadm command for creating and assembling volumes. You may need to enable mdadm.service if you want to assemble volumes on boot via /etc/mdadm.conf, as opposed to using the raid-autodetect partition type and letting the kernel do it. It might be possible to move the root filesystem as long as the raid-autodetect partition type is used, but for that you are almost certainly better off using multi-device support in btrfs.
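For a non-root data volume that boils down to something like this (a minimal sketch, assuming two spare disks /dev/sdb and /dev/sdc and RAID1; adjust device names and level to taste):

```sh
# Create a RAID1 array from two spare disks and put a filesystem on it.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt

# To assemble the array on boot via /etc/mdadm.conf, record it and enable mdadm.service.
# (The alternative is partitioning the disks with the raid-autodetect type and letting
# the kernel assemble the array itself.)
mdadm --detail --scan >> /etc/mdadm.conf
systemctl enable mdadm.service
```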
marineam commented on Aug 13, 2014
What certainly won't work right now is installing all of CoreOS on top of software raid; the update and boot processes both assume the ESP and /usr partitions are plain disk partitions.
brejoc commented on Sep 2, 2014
@marineam Would this constraint of CoreOS also apply to btrfs-raids?
marineam commented on Sep 2, 2014
@brejoc multi-device btrfs for the root filesystem should work
warwickchapman commented on Oct 20, 2014
What about migrating after install? E.g. migrating to RAID 1 from an installed /dev/sda (one partition, sda1, for demonstration) should go something like the sketch at the end of this comment, run from a rescue CD or similar.
Thereafter the disk mount configuration and the kernel root device in the bootloader need to be changed, and the bootloader needs to be installed to both disks.
Modify /mnt/target/etc/fstab, replacing /dev/sda1 with /dev/md0 - but this file is non-existent on CoreOS.
The bootloader since 435 seems to be GRUB, which helps, but I can only find a GRUB config in /usr/boot, not a grub binary.
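For reference, the generic procedure being described would be roughly the following (an untested sketch; the mount points under /mnt and the grub-install invocation are assumptions):

```sh
# Build a degraded RAID1 on the new disk, copy the system over, then re-add the old disk.
sgdisk --zap-all /dev/sdb
sgdisk --new=1:0:0 --typecode=1:fd00 /dev/sdb            # Linux RAID partition type
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext4 /dev/md0

# Copy the existing root filesystem onto the degraded array.
mkdir -p /mnt/source /mnt/target
mount /dev/sda1 /mnt/source
mount /dev/md0 /mnt/target
rsync -aHAX /mnt/source/ /mnt/target/

# Point the mounts and the kernel root device at the array, then install GRUB on both disks.
sed -i 's|/dev/sda1|/dev/md0|g' /mnt/target/etc/fstab    # where an fstab exists
grub-install --boot-directory=/mnt/target/boot /dev/sda
grub-install --boot-directory=/mnt/target/boot /dev/sdb

# Finally, repartition /dev/sda the same way and add it as the second member:
#   mdadm --add /dev/md0 /dev/sda1
```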
Thoughts?
seeekr commented on Dec 11, 2014
@warwickchapman just in case you finished your exploration into this topic and came up with a complete solution - or if someone else has - I'd appreciate it if you shared it. I know too little about setting up RAID / mounts / boot to complete this myself. It's not a hard requirement for my use case, but having RAID would help me make use of both/all disks in a system. I understand it's also possible to set up a distributed file system like Ceph and let it manage the disks without RAID, and that would work for the use cases I have in mind, but for now I'm happy about any additional complexity I can avoid!
marineam commented on Dec 11, 2014
As noted on IRC, for btrfs if raid0 or raid1 is all you need then it is easiest to just add devices to btrfs and rebalance: https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
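A minimal sketch of that, assuming /dev/sdb is the disk being added to the root filesystem and a raid1 profile is wanted:

```sh
# Add the second disk to the mounted btrfs root filesystem,
# then rebalance, converting data and metadata to the raid1 profile.
btrfs device add /dev/sdb /
btrfs balance start -dconvert=raid1 -mconvert=raid1 /
```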
As for md raid, if the partition types are the raid-autodetect type then the raid volume will be assembled automatically. But you can only put the ROOT filesystem on raid; we don't currently support putting the other partitions on anything other than plain disk devices.
23 remaining items
steigr commented on Apr 27, 2015
I removed sda9 for the sake of md0. But you are right: it's kind of a hack atm and needs to be documented or discouraged.
anpieber commented on May 11, 2015
TBH I think it would be very important to add and/or document a solution for software RAID. Independently of how well CoreOS can handle failure (together with fleet, docker, ...), I'm not really keen on "losing a server" just because of a disk failure (which happens "all the time", btw).
baracoder commented on Jul 16, 2015
I can't seem to get it to work. I've set up md0 and added the ROOT label, but even if I add rd.auto=1 to GRUB, it just hangs there on boot. After I type mdadm --assemble --scan in the emergency console, the boot continues. Any ideas?
tobkle commented on Aug 25, 2015
Cannot get CoreOS running with RAID 1 after spending two days. This is essential in my opinion, and I could not find any valid documentation for it. Quitting CoreOS.
crawford commented on Dec 15, 2015
coreos/bugs#1025
cmoad commented on Jan 19, 2016
I worked on RAID for attached ephemeral disks and was able to get it working using the following units in my cloud-config. I tested reboots as well, and the raid came back and mounted correctly.
This code is specific to creating a software raid 0 on GCE using two local SSDs with the NVMe interface. Feedback or suggestions would be appreciated.
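(A sketch of what such units could look like; the unit names, mount point, and NVMe device paths are assumptions, not the exact cloud-config from this comment.)

```yaml
#cloud-config
coreos:
  units:
    - name: format-ephemeral-raid.service
      command: start
      content: |
        [Unit]
        Description=Create RAID0 from the two NVMe local SSDs
        # Skipped when the array already exists (e.g. reassembled after a reboot).
        ConditionPathExists=!/dev/md0
        Before=mnt-raid.mount

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/sbin/mdadm --create /dev/md0 --run --level=0 --raid-devices=2 \
          /dev/nvme0n1 /dev/nvme0n2
        ExecStart=/usr/sbin/mkfs.ext4 /dev/md0

    - name: mnt-raid.mount
      command: start
      content: |
        [Mount]
        What=/dev/md0
        Where=/mnt/raid
        Type=ext4
```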
levipierce commented on Feb 5, 2016
I used cmoad's approach on AWS running Kubernetes with CoreOS - worked like a charm!
vmatekole commented on Apr 18, 2016
Hi!
Has there been any progress on this front, i.e. developing consistent documentation for SWRAID? Would going with HWRAID be an easier option for now?
robszumski commented on Apr 18, 2016
Check out Ignition's example for software raid: https://coreos.com/ignition/docs/latest/examples.html#create-a-raid-enabled-data-volume
More background on Ignition: https://coreos.com/blog/introducing-ignition.html
celevra commented on May 1, 2016
Can I use that example to set up a software raid for the root partition?
crawford commented on Oct 17, 2016
@celevra you can use something like this (have not actually tested it):
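(A sketch in that spirit using Ignition's storage sections; the raid1 level, the partition labels, and relabeling vda9 to drop the old ROOT label are assumptions, not the exact config from this comment.)

```json
{
  "ignition": { "version": "2.0.0" },
  "storage": {
    "disks": [
      {
        "device": "/dev/vdb",
        "wipeTable": true,
        "partitions": [{ "label": "root1", "number": 1 }]
      },
      {
        "device": "/dev/vdc",
        "wipeTable": true,
        "partitions": [{ "label": "root2", "number": 1 }]
      }
    ],
    "raid": [{
      "name": "root",
      "level": "raid1",
      "devices": [
        "/dev/disk/by-partlabel/root1",
        "/dev/disk/by-partlabel/root2"
      ]
    }],
    "filesystems": [
      {
        "mount": {
          "device": "/dev/md/root",
          "format": "ext4",
          "create": { "options": ["-L", "ROOT"] }
        }
      },
      {
        "mount": {
          "device": "/dev/vda9",
          "format": "ext4",
          "create": { "force": true, "options": ["-L", "unused"] }
        }
      }
    ]
  }
}
```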
This will set up a single RAID partition on vdb and vdc, assemble them into an array, and then create the new ROOT filesystem. It also wipes out the default ROOT partition on vda9 (since you don't want two). When you use this, you'll also need rd.auto to tell the initramfs to automatically assemble the array on every boot.
celevra commented on Oct 18, 2016
thank you crawford, will try that out
madejackson commented on Dec 29, 2016
Dear All,
I'm new to CoreOS and I'd like to test this on my barebones server.
I have the following issue:
I want to have 2x SSDs in RAID1 for root, as I always do.
I tried crawford's solution and installed CoreOS with an Ignition file, but this resulted in an error where the boot sequence waits on a job with no time limit, which means forever, because that job will never finish. When I kill the server manually and reboot, the drive is no longer bootable.
When I try to set up a HW RAID, my CoreOS live USB does not recognize the HW RAID and sees the two drives as regular sda and sdb. Additionally, the coreos-install script fails with return code 32.
BTW: I have another 4 drives attached for storage in the same node.
Does anyone have a solution?