More information about RAID 1

Setting up fakeRAID: http://wiki.eyermonkey.com/My_Ubuntu_%287.10%29_Installation#Installation_on_RAID_0

Another link: http://samokk.is-a-geek.com/wordpress/2006/01/15/running-ubuntu-gnulinux-on-a-fakeraid1-mirroring-array/

Another link: http://www.linuxdevcenter.com/pub/a/linux/2002/12/05/RAID.html?page=2

Bug #120375 - Ubuntu cannot boot from degraded RAID: https://bugs.launchpad.net/ubuntu/+bug/120375

Edit "/usr/share/initramfs-tools/scripts/local" and find the following comment "# We've given up, but we'll let the user fix matters if they can".
Just before this comment add the following code:
# The following code was added to allow degraded RAID arrays to start
if [ ! -e "${ROOT}" ] || ! /lib/udev/vol_id "${ROOT}" >/dev/null 2>&1; then
    # Try mdadm and allow degraded arrays to start in case a drive has failed
    log_begin_msg "Attempting to start RAID arrays and allow degraded arrays"
    /sbin/mdadm --assemble --scan
    log_end_msg
fi
To rebuild the boot image, use "sudo update-initramfs -u" as suggested by Plnt. This command calls the "mkinitramfs" script mentioned by Peter and is easier to use, as you don't have to supply the image name and other options.
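As a quick sanity check (not part of the original instructions), you can list the contents of the rebuilt image and confirm that mdadm really went in; a minimal sketch, assuming a gzip-compressed initramfs for the running kernel:

zcat /boot/initrd.img-$(uname -r) | cpio -it 2>/dev/null | grep mdadm   # should list the mdadm binary and its scripts
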
I have tested this a couple of times, with and without my other drives plugged in, without any problems. Just make sure you have a cron job set up to run "mdadm --monitor --oneshot" so that the system administrator gets an email when an array is running degraded.
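A minimal sketch of such a monitoring setup, assuming the stock Debian/Ubuntu paths (the address and the schedule are only placeholders); note that the mdadm package may already install a similar daily cron job, so check before duplicating it:

# /etc/mdadm/mdadm.conf -- address that mdadm --monitor sends its alerts to
MAILADDR root@localhost

# /etc/cron.d/mdadm-check -- run a one-shot check every morning at 06:00
0 6 * * * root /sbin/mdadm --monitor --scan --oneshot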


This bug was fixed in Intrepid, in the package mdadm 2.6.7-3ubuntu2.
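To check whether the installed mdadm already carries the fix, comparing the package version against 2.6.7-3ubuntu2 should be enough, for example:

apt-cache policy mdadm   # installed vs. candidate version
dpkg -l mdadm            # short listing with the version column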

The mdadm patch supports an optional kernel parameter, which can be any of:
* bootdegraded
* bootdegraded=true
* bootdegraded=yes
* bootdegraded=1

The GRUB menu entry then looks like:
title Ubuntu, kernel 2.6.20-16-generic (raid defect)
root (hd0,1)
kernel /boot/vmlinuz-2.6.20-16-generic root=/dev/md1 ro bootdegraded
initrd /boot/initrd.img-2.6.20-16-generic
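Instead of editing a single menu entry by hand, the option can also be added to the automagic section of /boot/grub/menu.lst (GRUB legacy), so that update-grub keeps it for every new kernel; a sketch, assuming the Debian-style "# kopt=" directive is present:

# /boot/grub/menu.lst -- this line must stay commented, update-grub parses it
# kopt=root=/dev/md1 ro bootdegraded

# then regenerate the menu entries
sudo update-grub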

Recovery after disk failure
Next I simulated a disk failure by disconnecting /dev/sdb. The system still boots, but the output below shows that /dev/sdb1, /dev/sdb2 and /dev/sdb3 have disappeared from the system and that each /dev/mdN is marked as "degraded" in the status field.
martti@ubuntu:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda3[0]
12659136 blocks [2/1] [U_]

md1 : active raid1 sda2[0]
489856 blocks [2/1] [U_]

md0 : active raid1 sda1[0]
7815488 blocks [2/1] [U_]

unused devices: <none>

martti@ubuntu:~$ sudo mdadm --query --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Wed Oct 17 16:45:59 2007
Raid Level : raid1
Array Size : 7815488 (7.45 GiB 8.00 GB)
Used Dev Size : 7815488 (7.45 GiB 8.00 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Oct 17 15:16:18 2007
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : 1760de71:d6ca4125:8324c8dc:300ec7e1
Events : 0.11

Number   Major   Minor   RaidDevice State
0       8        1        0      active sync   /dev/sda1
1       0        0        -      removed
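If you prefer not to unplug a drive physically (or want to detach a dying member cleanly before swapping it), roughly the same degraded state can be produced in software; a sketch, not part of the original test:

sudo mdadm /dev/md0 --fail /dev/sdb1     # mark the member as faulty
sudo mdadm /dev/md0 --remove /dev/sdb1   # remove it from the array
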
Next I reconnected the disk and instructed the system to rebuild itself. After the rebuild everything was OK again.
martti@ubuntu:~$ sudo mdadm --add /dev/md0 /dev/sdb1
mdadm: hot added /dev/sdb1

martti@ubuntu:~$ sudo mdadm --add /dev/md1 /dev/sdb2
mdadm: hot added /dev/sdb2

martti@ubuntu:~$ sudo mdadm --add /dev/md2 /dev/sdb3
mdadm: hot added /dev/sdb3

martti@ubuntu:~$ cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[2] sda3[0]
12659136 blocks [2/1] [U_]
resync=DELAYED

md1 : active raid1 sda2[0] sdb2[1]
489856 blocks [2/2] [UU]

md0 : active raid1 sdb1[2] sda1[0]
7815488 blocks [2/1] [U_]
[>....................]  recovery =  2.8% (215168/7815488) finish=16.2min speed=9780K/sec

unused devices: <none>

Preliminary HOW-TO


An attempt to recover a system that does not boot because of RAID 1 problems on one of the disks.

Boot from the Ubuntu Live CD and then:
sudo mount -t ext3 /dev/md1 /target
or alternatively
sudo mount -t ext3 /dev/sda1 /target

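If /dev/md1 does not exist yet in the live session, the arrays probably have to be assembled first (and the /target mount point created); a minimal sketch, assuming mdadm can be installed into the live session:

sudo apt-get install mdadm      # only if the Live CD does not ship it
sudo modprobe raid1             # make sure the RAID 1 personality is loaded
sudo mdadm --assemble --scan    # assemble existing arrays from their superblocks
cat /proc/mdstat                # confirm the /dev/mdN devices appeared
sudo mkdir -p /target           # create the mount point used above
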
sudo mount --bind /dev /target/dev
sudo mount -t proc proc /target/proc    # was already done
sudo mount -t sysfs sysfs /target/sys
sudo chroot /target
apt-get update
apt-get install dmraid

Then I fixed the file /usr/share/initramfs-tools/scripts/local by adding the code below:
nano /usr/share/initramfs-tools/scripts/local

***** add just before the comment "# We've given up, but we'll let the user fix matters if they can".

# The following code was added to allow degraded RAID arrays to start
if [ ! -e "${ROOT}" ] || ! /lib/udev/vol_id "${ROOT}" >/dev/null 2>&1; then
    # Try mdadm and allow degraded arrays to start in case a drive has failed
    log_begin_msg "Attempting to start RAID arrays and allow degraded arrays"
    /sbin/mdadm --assemble --scan
    log_end_msg
fi

and then ran:
sudo update-initramfs -u

I updated LILO with the command:
lilo -H

Restart the system without the Live CD:
sudo shutdown -r now

I checked the RAID with:
sudo mdadm --query --detail /dev/md1

I booted from the hard disk and fixed the RAID with the command:
sudo mdadm --add /dev/md1 /dev/sdb1

To follow the rebuild process, use the command:
cat /proc/mdstat
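To keep an eye on the rebuild without retyping the command, something like this should work (the second line is optional and, if your mdadm version supports --wait, simply blocks until the resync is done):

watch -n 5 cat /proc/mdstat   # refresh the status every 5 seconds
sudo mdadm --wait /dev/md1    # return only when recovery has finished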