Sunday, January 01, 2006

Here are some example recovery scenarios, compiled from various Google searches.

Case 1: A power failure in the storage array pulled some disks out of Volume Manager's control.

If vxprint shows plexes with a kernel state of NODEVICE, the disks have gone offline as a result of the failure.

vxreattach is the command to use here.

Solution :

First check whether a reattach is possible:

vxreattach -c c#t#d#s#

Example: vxreattach -c c2t29d10s2
This should report what the disk_name used to be.
If it does, run the reattach in the background with recovery:
vxreattach -br c2t29d9s2
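The check-then-reattach flow above can be rehearsed as a short script. This is a minimal sketch: the DRYRUN guard and the run helper are assumptions added so the sequence can be read (or dry-run) on a machine without VxVM; drop the guard to execute for real.

```shell
#!/bin/sh
# Sketch of the check-then-reattach flow. vxreattach is the real VxVM
# command; the DRYRUN guard is an illustrative assumption so this can
# be dry-run on a box without VxVM installed.
DISK=${1:-c2t29d9s2}     # assumed disk name for illustration
DRYRUN=${DRYRUN:-1}

run() {
    if [ "$DRYRUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# -c only checks whether a reattach is possible (and reports the old disk name)
run vxreattach -c "$DISK"
# -b runs in the background; -r also recovers stale plexes after the reattach
run vxreattach -br "$DISK"
```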

Find the plexes that are in the DISABLED RECOVER state.

Here, assume the volume automation is disabled and the plex automation-01 is in the DISABLED RECOVER state.

For all DISABLED RECOVER plexes, perform the commands:

# vxmend -o force off automation-01

# vxmend on automation-01

# vxmend fix clean automation-01

# vxvol start automation

Run fsck on the volume's raw device before mounting it.
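The per-plex repair loop above can be driven from vxprint -ht output. A minimal sketch: the sample listing and its sizes are assumptions for illustration (the automation names come from the example above), and it only prints the commands rather than running them.

```shell
#!/bin/sh
# Sketch: find every plex in DISABLED RECOVER state in "vxprint -ht"
# style output and print the repair sequence from the steps above.
# SAMPLE is an assumed listing; on a real system, pipe in "vxprint -ht".
SAMPLE='pl automation-01 automation   DISABLED RECOVER  2097152 CONCAT - RW
pl automation-02 automation   ENABLED  ACTIVE   2097152 CONCAT - RW'

# pl lines: field 2 is the plex, field 3 its volume, fields 4-5 the states
CMDS=$(echo "$SAMPLE" | awk '$1 == "pl" && $4 == "DISABLED" && $5 == "RECOVER" {
    plex = $2; vol = $3
    printf "vxmend -o force off %s\n", plex
    printf "vxmend on %s\n", plex
    printf "vxmend fix clean %s\n", plex
    printf "vxvol start %s\n", vol
}')
echo "$CMDS"
```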

Case 2: 
From vxprint -ht:
v  bm-u1        -            ENABLED  ACTIVE   60817408 SELECT   -        fsgen
pl bm-u1-01     bm-u1        DISABLED TEMPRMSD 60817408 CONCAT   -        RW
sd raid60-v2-03 bm-u1-01     raid60-v2 49872896 60817408 0       fabric_5 ENA
pl bm-u1-02     bm-u1        ENABLED  STALE    60817408 CONCAT   -        WO
sd raid-82-vol1-11 bm-u1-02  raid-82-vol1 358612992 60817408 0   fabric_1 ENA
 
I want to remove this volume; it's not mounted anywhere, and the usual
commands say:
 
# vxvol stop bm-u1
vxvm:vxvol: ERROR: Volume bm-u1 in use by another utility
# vxtask list
TASKID  PTID TYPE/STATE    PCT   PROGRESS
# vxassist remove volume bm-u1
vxvm:vxassist: ERROR:  Volume bm-u1 is adding a mirror
 
How do I get rid of it?  (data is of no importance)
 
Solution:
1) vxmend -g disk_group -r clear all bm-u1
2) vxedit -r rm bm-u1
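Before force-clearing, it helps to confirm from the listing why the volume is stuck: a plex in TEMPRMSD or STALE state indicates an interrupted mirror add. A minimal sketch, using an abridged copy of the listing above; the parsing is illustrative.

```shell
#!/bin/sh
# Sketch: spot the interrupted mirror add in "vxprint -ht" style output.
# A plex in TEMPRMSD or STALE state is why vxvol reports "in use by
# another utility". SAMPLE is abridged from the listing above.
SAMPLE='v  bm-u1        -            ENABLED  ACTIVE   60817408 SELECT   -        fsgen
pl bm-u1-01     bm-u1        DISABLED TEMPRMSD 60817408 CONCAT   -        RW
pl bm-u1-02     bm-u1        ENABLED  STALE    60817408 CONCAT   -        WO'

# pl lines: field 2 is the plex name, field 5 its state
STUCK=$(echo "$SAMPLE" | awk '$1 == "pl" && ($5 == "TEMPRMSD" || $5 == "STALE") { print $2 }')
echo "$STUCK"
```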

Case 3:
Recovery of a RAID-5 volume after a single disk failure should happen automatically, but sometimes it does not, largely due to parity corruption. Here are the steps to check for and recover from that situation.
If you run /etc/vx/bin/vxr5check on a volume and it reports that the parity is bad, the following procedure rebuilds it.
1. # vxvol -g <diskgroup> stop <volume>
2. # vxmend -g <diskgroup> fix empty <volume>
3. # vxvol -g <diskgroup> start <volume>
Don't worry about the "fix empty" step deleting data; it will not. This procedure is taken from Sun SRDB 12266.
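The three-step rebuild above can be sketched as a dry-run script. The disk group and volume names are placeholder assumptions, and the DRYRUN echo is added so the order of operations is explicit without a VxVM host.

```shell
#!/bin/sh
# Sketch of the three-step parity rebuild as a dry-run. DG and VOL are
# assumed names; the DRYRUN guard is illustrative, not part of VxVM.
DG=mydg           # assumed disk group name
VOL=raid5vol      # assumed volume name
DRYRUN=${DRYRUN:-1}

run() {
    if [ "$DRYRUN" -eq 1 ]; then echo "would run: $*"; else "$@"; fi
}

run vxvol -g "$DG" stop "$VOL"
# "fix empty" resets plex states without touching user data; restarting
# the volume then regenerates the parity
run vxmend -g "$DG" fix empty "$VOL"
run vxvol -g "$DG" start "$VOL"
```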
