I just received this message from mdadm:
---------- Forwarded Message ----------
Date: Tuesday, 27 June 2006 14:56
From: mdadm monitoring <root@???>
This is an automatically generated mail message from mdadm
running on in22
A DegradedArray event had been detected on md device /dev/md1.
Faithfully yours, etc.
-------------------------------------------------------
My RAID setup consists of 2 disks with 4 partitions each, all mirrored in
RAID 1 (md0=swap, md1=/, md2=/home, md3=/data).
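For context, the arrays were originally set up roughly along these lines
(the device names match the logs below, but the exact mdadm --create options
shown here are a guess, not the real command history):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1   # swap
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/hdb2   # /
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda3 /dev/hdb3   # /home
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/hda4 /dev/hdb4   # /data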
Looking at dmesg, I see this:
devfs_mk_dev: could not append to parent for md/1
md: md1 stopped.
md: bind<hdb2>
md: bind<hda2>
md: kicking non-fresh hdb2 from array!
md: unbind<hdb2>
md: export_rdev(hdb2)
raid1: raid set md1 active with 1 out of 2 mirrors
VFS: Can't find ext3 filesystem on dev md1.
ReiserFS: md1: found reiserfs format "3.6" with standard journal
ReiserFS: md1: using ordered data mode
ReiserFS: md1: journal params: device md1, size 8192, journal first block 18,
max trans len 1024, max batch 900, max commit age 30, max trans age 30
ReiserFS: md1: checking transaction log (md1)
ReiserFS: md1: Using r5 hash to sort names
[...]
md: md0 stopped.
md: bind<hdb1>
md: bind<hda1>
raid1: raid set md0 active with 2 out of 2 mirrors
devfs_mk_dev: could not append to parent for md/2
md: md2 stopped.
md: bind<hdb3>
md: bind<hda3>
raid1: raid set md2 active with 2 out of 2 mirrors
devfs_mk_dev: could not append to parent for md/3
md: md3 stopped.
md: bind<hdb4>
md: bind<hda4>
raid1: raid set md3 active with 2 out of 2 mirrors
It looks like only the md1 partition (/) is no longer mirrored. What caused
this? The message doesn't say...
I also ran mdadm, but its output doesn't tell me much more:
# mdadm --examine /dev/hda2
/dev/hda2:
Magic : a92b4efc
Version : 00.90.01
UUID : c1f26c1b:c03a77e3:0ab61139:07d7540f
Creation Time : Fri Nov 4 11:05:41 2005
Raid Level : raid1
Device Size : 7004224 (6.68 GiB 7.17 GB)
Array Size : 7004224 (6.68 GiB 7.17 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Update Time : Tue Jun 27 15:13:52 2006
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Checksum : c69083d2 - correct
Events : 0.1692834
      Number   Major   Minor   RaidDevice State
this     0       3       2        0      active sync   /dev/hda2
   0     0       3       2        0      active sync   /dev/hda2
   1     1       0       0        1      faulty removed
# mdadm --examine /dev/hdb2
/dev/hdb2:
Magic : a92b4efc
Version : 00.90.01
UUID : c1f26c1b:c03a77e3:0ab61139:07d7540f
Creation Time : Fri Nov 4 11:05:41 2005
Raid Level : raid1
Device Size : 7004224 (6.68 GiB 7.17 GB)
Array Size : 7004224 (6.68 GiB 7.17 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Update Time : Tue Jun 27 08:46:03 2006
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : c69024a3 - correct
Events : 0.1692218
      Number   Major   Minor   RaidDevice State
this     1       3      66        1      active sync   /dev/hdb2
   0     0       3       2        0      active sync   /dev/hda2
   1     1       3      66        1      active sync   /dev/hdb2
# mdadm --detail /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Fri Nov 4 11:05:41 2005
Raid Level : raid1
Array Size : 7004224 (6.68 GiB 7.17 GB)
Device Size : 7004224 (6.68 GiB 7.17 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Jun 27 15:18:00 2006
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : c1f26c1b:c03a77e3:0ab61139:07d7540f
Events : 0.1692924
    Number   Major   Minor   RaidDevice State
       0       3       2        0      active sync   /dev/hda2
       1       0       0        1      removed
# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 hda4[0] hdb4[1]
30186048 blocks [2/2] [UU]
md2 : active raid1 hda3[0] hdb3[1]
2008000 blocks [2/2] [UU]
md0 : active raid1 hda1[0] hdb1[1]
1003904 blocks [2/2] [UU]
md1 : active raid1 hda2[0]
7004224 blocks [2/1] [U_]
unused devices: <none>
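If it is just a matter of re-attaching the partition, I imagine something
like the following would do it (untested so far; I'd rather understand first
why hdb2 was kicked out):

mdadm /dev/md1 --add /dev/hdb2   # re-add the kicked partition to the mirror
cat /proc/mdstat                 # then watch the resync progress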
Any ideas?
--
Frédéric
http://www.gbiloba.org