Software RAID installation guides

There are two issues to be aware of when using software RAID under Fedora Core 3:

GRUB does not properly install itself to the boot record of each disk. This occurs during a fresh installation or after the kernel has been updated.

This can be worked around during a kickstart installation by adding the following section to the kickstart file:

%post
/sbin/grub --batch << EOF
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd1) /dev/sdb
root (hd1,0)
setup (hd1)
EOF

This installs GRUB to both members of the RAID 1 boot device. The script assumes that the RAID 1 boot device is composed of the first partition of each of the first two SCSI/SATA drives in the system.
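For systems whose disks differ from /dev/sda and /dev/sdb, the same batch input can be generated rather than written out by hand. A minimal sketch, assuming the boot RAID member is the first partition of each disk; the helper name and its arguments are illustrative, not part of the original kickstart file:

```shell
# gen_grub_batch: print the GRUB batch commands for one disk.
#   $1 - Linux device node (e.g. /dev/sda)
#   $2 - BIOS drive number GRUB should map it to (0 for hd0)
# Assumes the boot RAID member is the first partition, as above.
gen_grub_batch() {
    printf 'device (hd%s) %s\nroot (hd%s,0)\nsetup (hd%s)\n' \
        "$2" "$1" "$2" "$2"
}

# Run as root, once per RAID member:
#   gen_grub_batch /dev/sda 0 | /sbin/grub --batch
#   gen_grub_batch /dev/sdb 1 | /sbin/grub --batch
```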

If a kickstart installation is not used, or a kernel upgrade has been performed, the system must be booted into rescue mode using the steps below:

  1. Boot the system using Fedora Core 3 CD #1. If necessary, change the boot device ordering in the BIOS.
  2. At the boot prompt, enter 'linux rescue'. For systems that require a special device driver (Marvell SATA, HighPoint SATA, etc.), enter 'linux dd rescue'
  3. When applicable, load the driver disk
  4. Select English as the language and select OK
  5. Select the 'us' keyboard and select OK
  6. Select No on starting the network interface
  7. Select Continue to locate an existing Linux installation
  8. Once found, select OK
  9. Run 'chroot /mnt/sysimage'

Next, execute the following command:

/sbin/grub

GNU GRUB version 0.95 (640K lower / 3072K upper memory)

[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ]

grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd1) /dev/sdb
grub> root (hd1,0)
grub> setup (hd1)
grub> quit

This assumes that the RAID 1 boot device is composed of the first partition of each of the first two SCSI/SATA drives in the system. If the RAID consists of IDE disks, replace /dev/sd? with /dev/hd?. If a partition other than the first is used for the boot RAID device, the 'root' command must be modified accordingly.

Legacy GRUB numbers partitions from zero, so the partition argument to the 'root' command is one less than the Linux partition number.
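A trivial helper makes the off-by-one explicit (the function name is illustrative, not part of the original instructions):

```shell
# grub_part_index: convert a Linux partition number (1-based) to the
# zero-based index that legacy GRUB expects in its 'root' command.
grub_part_index() {
    echo $(($1 - 1))
}

# e.g. a boot RAID member on /dev/sda2 needs:
#   root (hd0,$(grub_part_index 2))   which is   root (hd0,1)
```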

Afterward, run 'exit' twice; this reboots the system. Be sure to eject the installation CD so that the system boots from the hard drive.
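Either from the rescue shell or on the rebooted system, it is possible to verify that the boot code was written: legacy GRUB's stage1 embeds the string "GRUB" in the sector it installs to. A hedged sketch, assuming the two-disk layout above; the helper name is illustrative:

```shell
# has_grub_stage1: succeed if the first 512 bytes of $1 contain the
# "GRUB" marker that legacy GRUB stage1 writes into the boot sector.
has_grub_stage1() {
    dd if="$1" bs=512 count=1 2>/dev/null | grep -q GRUB
}

# Run as root:
#   for d in /dev/sda /dev/sdb; do
#       has_grub_stage1 "$d" && echo "$d: GRUB boot code present"
#   done
```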

Re-installing to disks that contain existing RAID superblocks will result in broken RAID devices.

To work around this issue during kickstart, add a section similar to the following to the kickstart file:

%pre
mdadm --misc --zero-superblock /dev/sda1
mdadm --misc --zero-superblock /dev/sda2
mdadm --misc --zero-superblock /dev/sda3

mdadm --misc --zero-superblock /dev/sdb1
mdadm --misc --zero-superblock /dev/sdb2
mdadm --misc --zero-superblock /dev/sdb3

The mdadm command must be executed once for each existing RAID partition in the system. This removes the existing RAID superblocks and allows the installation to proceed normally.
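For systems with a different partition layout, the per-partition commands can be generated rather than listed by hand. A sketch; the function name and the pipe-to-sh usage are illustrative, and the output should be reviewed before executing it:

```shell
# zero_superblocks: print one mdadm invocation for every partition
# found on the given disks; review the output, then pipe it to sh.
zero_superblocks() {
    for disk in "$@"; do
        for part in "$disk"[0-9]*; do
            [ -e "$part" ] && echo "mdadm --misc --zero-superblock $part"
        done
    done
}

# Run as root:
#   zero_superblocks /dev/sda /dev/sdb | sh
```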

Before installing manually, the system must be booted into rescue mode as described earlier, with the following changes:

  1. In step 7, select Skip instead of Continue
  2. Skip steps 8 and 9

Once the bash prompt is presented, execute the following command to clear any RAID superblocks:

mdadm --misc --zero-superblock <partition>

where <partition> is a RAID member partition (e.g. /dev/sda1, /dev/sdb1). This command must be executed once for each partition containing a RAID superblock. When the system reboots, the installation can proceed normally.