
I created and assembled a software RAID 5 array (mdadm) with 15 SSDs. If I physically remove several disks and add them back, the RAID doesn't reassemble: the disks stay in "spare" mode, even after reassembling with commands.

But if I manually fail the drives with mdadm /dev/md0 --fail /dev/sd[xyz], then remove them with mdadm /dev/md0 --remove /dev/sd[xyz], physically remove and re-insert the disks, and add them back with mdadm /dev/md0 --add /dev/sd[xyz], the RAID works fine!
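For reference, this is the manual sequence I run that works. Device names (/dev/md0, /dev/sdx) are just examples from my setup; substitute your own.

```shell
# 1. Mark the member disk as failed so mdadm stops using it:
mdadm /dev/md0 --fail /dev/sdx

# 2. Remove it from the array metadata:
mdadm /dev/md0 --remove /dev/sdx

# (physically swap the disk here)

# 3. Add the disk back; mdadm resyncs it into the array:
mdadm /dev/md0 --add /dev/sdx

# Watch the resync progress:
cat /proc/mdstat
```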

Can anybody tell me whether software RAID (mdadm) supports hot swap/remove/add or not? Should I always run these commands before removing disks? Is there no automation?

  • Please don't use R5; most storage professionals have considered it wildly dangerous for over a decade, and we can't believe it's still an option on disk controllers and in OSes - we deal with people all the time who come here asking for help getting their data back from a broken R5. Please only use R1/10, R6/60, or ZRAID if you like that kind of thing - R5 has been 'dead' for a long time.
    – Chopper3
    Oct 18, 2022 at 8:12
  • RAID 5 can only tolerate the loss of a single drive. If you lose two drives, the array is inoperative until you can bring the second failed drive back. This is a limitation of RAID 5 generally, not something specific to Linux.
    Oct 19, 2022 at 1:52


