If you have read the rest of this HOWTO, you should already have a pretty good idea about what reconstruction of a degraded RAID involves. Let us summarize:
  * Power down the system
  * Replace the failed disk
  * Power up the system once again
  * Use raidhotadd /dev/mdX /dev/sdX to re-insert the disk in the array
  * Watch the automatic reconstruction run

And that's it.
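The re-insertion step can be sketched as a short shell session. The device names here are examples only, and the mdadm equivalent is included for systems that have it:

```shell
# Re-insert the replacement disk into the degraded array
# (/dev/md0 and /dev/sdc1 are illustrative device names).
raidhotadd /dev/md0 /dev/sdc1

# With the newer mdadm tool, the same step would be:
#   mdadm /dev/md0 --add /dev/sdc1

# Watch the automatic reconstruction progress:
cat /proc/mdstat
```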
Well, it usually is, unless you're unlucky and your RAID has been rendered unusable because more disks than the redundant ones failed. This can actually happen if a number of disks reside on the same bus, and one disk takes the bus down with it as it crashes. The other disks, however fine, will be unreachable to the RAID layer, because the bus is down, and they will be marked as faulty. On a RAID-5, where you can spare one disk only, losing two or more disks can be fatal.
The following section is the explanation that Martin Bene gave to me,
and describes a possible recovery from the scary scenario outlined
above. It involves using the
failed-disk directive in your
/etc/raidtab (so for people running patched 2.2 kernels, this will only
work on kernels 2.2.10 and later).
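As a sketch of what the failed-disk directive looks like in practice (device names, chunk size, and disk count here are made up for illustration), a raidtab for a three-disk RAID-5 where /dev/sdc1 is to be kept out of the array might read:

```
raiddev /dev/md0
        raid-level              5
        nr-raid-disks           3
        persistent-superblock   1
        chunk-size              32

        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1
        device                  /dev/sdc1
        failed-disk             2
```

The failed-disk entry takes the place of the raid-disk entry for that device, so the device is left out of the active set.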
The scenario is:

  * A controller dies and takes out two disks at the same time
  * All disks on one SCSI bus become unreachable when a disk on that bus dies
  * A cable comes loose

In short: quite often you get a temporary failure of several disks at once; afterwards the RAID superblocks are out of sync and you can no longer initialize your RAID array.
If using mdadm, you could first try to run:
  mdadm --assemble --force

If not, there's one thing left: rewrite the RAID superblocks by issuing mkraid --force.
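A minimal sketch of the forced-assembly sequence; the array and component device names are assumptions:

```shell
# First attempt: force-assemble the array from the existing,
# out-of-sync superblocks.
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1

# Last resort: rewrite the superblocks according to /etc/raidtab.
# Triple-check the raidtab first - a wrong entry here destroys data.
# (Some raidtools versions demand --really-force instead.)
mkraid --force /dev/md0
```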
To get this to work, you'll need to have an up-to-date
/etc/raidtab - if
it doesn't EXACTLY match the devices and ordering of the original
disks, this will not work as expected, but will most likely
completely obliterate whatever data you used to have on your
disks.
Look at the syslog produced by trying to start the array; you'll see the event count for each superblock. Usually it's best to leave out the disk with the lowest event count, i.e. the oldest one.
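The same comparison can be done from saved superblock dumps. The sketch below uses hypothetical excerpts of `mdadm --examine` output (only the Events line matters here) and sorts the disks so the stalest superblock comes first:

```shell
# Hypothetical Events lines, as they would appear in
# `mdadm --examine /dev/sdX1` output saved to a file per disk:
printf 'Events : 0.30\n' > /tmp/sda1.examine
printf 'Events : 0.24\n' > /tmp/sdb1.examine
printf 'Events : 0.30\n' > /tmp/sdc1.examine

# Print "<event count> <dump file>" per disk and sort numerically;
# the first line names the oldest superblock - the disk to leave out.
for f in /tmp/sda1.examine /tmp/sdb1.examine /tmp/sdc1.examine; do
    printf '%s %s\n' "$(awk '/Events/ {print $3}' "$f")" "$f"
done | sort -n | head -n 1
```

Here the first line printed is the sdb1 dump, since 0.24 is the lowest event count of the three.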
If you mkraid without failed-disk, the recovery
thread will kick in immediately and start rebuilding the parity blocks
- not necessarily what you want at that moment.
With failed-disk you can specify exactly which disks you want
to be active and perhaps try different combinations for best
results. BTW, only mount the filesystem read-only while trying this
out... This has been successfully used by at least two guys I've been in
contact with.
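While trying different failed-disk combinations, a read-only mount plus a non-destructive filesystem check is one way to judge each attempt; the device and mount point below are examples:

```shell
# Non-destructive check: -n makes fsck answer "no" to every repair
# prompt, so nothing is written to the possibly mis-assembled array.
fsck -n /dev/md0

# Mount read-only so a wrong disk combination can't corrupt anything.
mount -o ro /dev/md0 /mnt
```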