ASM diskgroup cannot mount and disk cannot be dropped

If you cannot recover the data by yourself, ask Parnassusdata, the professional ORACLE database recovery team, for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

 

IHAC (I have a customer) who encountered the error below after restarting the database and storage:

SQL> alter diskgroup DATA mount;
alter diskgroup DATA mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15040: diskgroup is incomplete
ORA-15042: ASM disk "3" is missing
 
We then found that /dev/raw/raw9 was in CANDIDATE status:
SQL> select path,HEADER_STATUS,MOUNT_STATUS,MODE_STATUS from v$asm_disk;
 
PATH            HEADER_STATU MOUNT_S MODE_ST
--------------- ------------ ------- -------
/dev/raw/raw9   CANDIDATE    CLOSED  ONLINE
/dev/raw/raw6   MEMBER       CLOSED  ONLINE
/dev/raw/raw7   MEMBER       CLOSED  ONLINE
/dev/raw/raw8   MEMBER       CLOSED  ONLINE
/dev/raw/raw1   FOREIGN      CLOSED  ONLINE
/dev/raw/raw4   FOREIGN      CLOSED  ONLINE
/dev/raw/raw3   FOREIGN      CLOSED  ONLINE
/dev/raw/raw2   FOREIGN      CLOSED  ONLINE
/dev/raw/raw5   FOREIGN      CLOSED  ONLINE 
 
When we checked the status with kfed, we found that the metadata blocks on /dev/raw/raw9 were invalid:
[oracle@DCSDB2 ~]$ kfed read /dev/raw/raw9 blkn=2 | grep KFBTYP
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
[oracle@DCSDB2 ~]$ kfed read /dev/raw/raw9 blkn=4 | grep KFBTYP
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
[oracle@DCSDB2 ~]$ kfed read /dev/raw/raw9 blkn=10 | grep KFBTYP
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
[oracle@DCSDB2 ~]$ kfed read /dev/raw/raw9 blkn=100 | grep KFBTYP
kfbh.type: 
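
For completeness, block 0 (the disk header itself) can be read the same way; on a healthy member disk kfed reports KFBTYP_DISKHEAD there, so KFBTYP_INVALID at block 0 would confirm the header is corrupt. (This check is a suggested addition, not part of the original transcript.)

[oracle@DCSDB2 ~]$ kfed read /dev/raw/raw9 blkn=0 | grep KFBTYP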
 

Right now we want to remove or drop /dev/raw/raw9 and bring the database up, but we cannot drop it the normal way because the DATA diskgroup cannot mount.

My question is: how can we drop or remove /dev/raw/raw9 (the customer confirms there is no data, or at least nothing important, on this device) and bring the database/ASM up?

 

[root@DCSDB1 ~]#   ls -l /dev/raw/*
crw------- 1 root   oinstall 162, 1 01-27 06:37 /dev/raw/raw1
crw------- 1 root   oinstall 162, 2 01-27 06:37 /dev/raw/raw2
crw------- 1 oracle oinstall 162, 3 01-27 06:37 /dev/raw/raw3
crw------- 1 oracle oinstall 162, 4 01-27 06:37 /dev/raw/raw4
crw------- 1 oracle oinstall 162, 5 01-27 06:37 /dev/raw/raw5
crw------- 1 oracle oinstall 162, 6 01-27 06:37 /dev/raw/raw6
crw------- 1 oracle oinstall 162, 7 01-27 06:37 /dev/raw/raw7
crw------- 1 oracle oinstall 162, 8 01-27 06:37 /dev/raw/raw8
crw------- 1 oracle oinstall 162, 9 01-27 06:37 /dev/raw/raw9

[root@DCSDB1 ~]# cat /etc/sysconfig/rawdevices
# raw device bindings
# format:   <rawdev> <major> <minor>
#           <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
#          /dev/raw/raw2 8 5
/dev/raw/raw1      /dev/mapper/oravg-ocr1
/dev/raw/raw2      /dev/mapper/oravg-ocr2
/dev/raw/raw3      /dev/mapper/oravg-vot1
/dev/raw/raw4      /dev/mapper/oravg-vot2
/dev/raw/raw5      /dev/mapper/oravg-vot3
/dev/raw/raw6     /dev/mapper/oravg-data1
/dev/raw/raw7     /dev/mapper/oravg-data2
/dev/raw/raw8     /dev/mapper/oravg-data3
/dev/raw/raw9     /dev/mapper/oravg-data5
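
The live kernel bindings can be cross-checked against this file with the raw tool. (A suggested verification step, not output captured from the customer's system.)

[root@DCSDB1 ~]# raw -qa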

[root@DCSDB1 tmp]# cat /proc/partitions
major minor  #blocks  name

  8     0 1754880000 sda
  8     1     514048 sda1
  8     2 1754362260 sda2
  8    16  262144000 sdb
  8    32  262144000 sdc
  8    48  262144000 sdd
  8    64  262144000 sde
  8    80  262144000 sdf
  8    96  262144000 sdg
  8   112  262144000 sdh
  8   128  262144000 sdi
  8   144  262144000 sdj
  8   160  262144000 sdk
  8   176  262144000 sdl
  8   192  262144000 sdm
  8   208  262144000 sdn
  8   224  262144000 sdo
  8   240  262144000 sdp
 65     0  262144000 sdq
 65    16  262144000 sdr
 65    32  262144000 sds
 65    48  262144000 sdt
 65    64  262144000 sdu
 65    80  262144000 sdv
 65    96  262144000 sdw
 65   112  262144000 sdx
 65   128  262144000 sdy
 65   144  262144000 sdz
 65   160  262144000 sdaa
 65   176  262144000 sdab
 65   192  262144000 sdac
 65   208  262144000 sdad
 65   224  262144000 sdae
 65   240  262144000 sdaf
 66     0  262144000 sdag
 66    16  262144000 sdah
 66    32  262144000 sdai
 66    48  262144000 sdaj
 66    64  262144000 sdak
 66    80  262144000 sdal
 66    96  262144000 sdam
 66   112  262144000 sdan
 66   128  262144000 sdao
 66   144  262144000 sdap
 66   160  262144000 sdaq
 66   176  262144000 sdar
 66   192  262144000 sdas
 66   208  262144000 sdat
 66   224  262144000 sdau
 66   240  262144000 sdav
 67     0  262144000 sdaw
253     0    1048576 dm-0
253     1   52428800 dm-1
253     2   10485760 dm-2
253     3   10485760 dm-3
253     4   10485760 dm-4
253     5   10485760 dm-5
253     6   10485760 dm-6
253     7   33554432 dm-7
253     8 1073741824 dm-8
253     9  262144000 dm-9
253    10  262144000 dm-10
253    11  262144000 dm-11
253    12  262144000 dm-12
253    13  262144000 dm-13
253    14  262144000 dm-14
253    15  262144000 dm-15
253    16  262144000 dm-16
253    17  262144000 dm-17
253    18  262144000 dm-18
253    19  262144000 dm-19
253    20  262144000 dm-20
253    21     512000 dm-21
253    22     512000 dm-22
253    23     512000 dm-23
253    24     512000 dm-24
253    25     512000 dm-25
253    26  157286400 dm-26
253    27  157286400 dm-27
253    28  157286400 dm-28
253    29  157286400 dm-29
253    30  157286400 dm-30
253    31  157286400 dm-31
253    32  157286400 dm-32
253    33  157286400 dm-33
253    34  157286400 dm-34
253    35  157286400 dm-35
253    36  157286400 dm-36
253    37  157286400 dm-37
253    38  157286400 dm-38
253    39  157286400 dm-39
253    40  157286400 dm-40
253    41  157286400 dm-41
253    42  157286400 dm-42
253    43  157286400 dm-43
253    44  157286400 dm-44
 
[oracle@DCSDB2 dbs]$ cat /app/admin/+ASM/pfile/init.ora
 
 
##############################################################################
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
##############################################################################
 
###########################################
# Cluster Database
###########################################
cluster_database=true
 
###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=/app/admin/+ASM/bdump
core_dump_dest=/app/admin/+ASM/cdump
user_dump_dest=/app/admin/+ASM/udump
 
###########################################
# Miscellaneous
###########################################
instance_type=asm
 
###########################################
# Pools
###########################################
large_pool_size=12M
 
###########################################
# Security and Auditing
###########################################
remote_login_passwordfile=exclusive
 
 
asm_diskgroups='DATA'
 
+ASM2.instance_number=2
+ASM1.instance_number=1
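
Note that this pfile does not set asm_diskstring, so ASM falls back to the platform default discovery path, which on Linux covers /dev/raw/raw*. The effective settings can be confirmed from the ASM instance (a suggested check, not from the original thread):

SQL> select name, value from v$parameter where name like 'asm_disk%';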

 

 

I’m assuming the following are true:

–  the DATA diskgroup redundancy is either NORMAL or HIGH;
–  the redundancy is NOT external;
–  you have a recent backup of the database.

If this is the case, then do the following:

1.  Mount FORCE

alter diskgroup DATA mount force;

Let it mount.

2.  Inspect that everything is there.
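
A quick way to do that inspection from the ASM instance (a sketch; the exact column list varies slightly by version):

SQL> select group_number, name, state, type, offline_disks from v$asm_diskgroup;
SQL> select disk_number, name, path, mount_status, mode_status, state from v$asm_disk;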

3.  Drop force the disk. Note that DROP DISK takes the ASM disk name (V$ASM_DISK.NAME), not the OS path:

alter diskgroup DATA drop disk DATA_0003 force;

(DATA_0003 is a placeholder; substitute whatever name the missing disk shows in V$ASM_DISK.)

4.  Issue a rebalance if one does not kick off automatically.

alter diskgroup DATA rebalance;

and let it finish.
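
Rebalance progress can be watched from the ASM instance; the operation is complete when the query returns no rows. (A suggested check, not part of the original reply.)

SQL> select group_number, operation, state, power, sofar, est_work, est_minutes from v$asm_operation;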

From the SQL language documentation for ALTER DISKGROUP:

  • In the FORCE mode, Oracle ASM attempts to mount the disk group even if it cannot discover all of the devices that belong to the disk group. This setting is useful if some of the disks in a normal or high redundancy disk group became unavailable while the disk group was dismounted. When MOUNT FORCE succeeds, Oracle ASM takes the missing disks offline.

    If Oracle ASM discovers all of the disks in the disk group, then MOUNT FORCE fails. Therefore, use the MOUNT FORCE setting only if some disks are unavailable. Otherwise, use NOFORCE.

    In normal- and high-redundancy disk groups, disks from one failure group can be unavailable and MOUNT FORCE will succeed. Also in high-redundancy disk groups, two disks in two different failure groups can be unavailable and MOUNT FORCE will succeed. Any other combination of unavailable disks causes the operation to fail, because Oracle ASM cannot guarantee that a valid copy of all user data or metadata exists on the available disks.

 

Are you sure it’s external? I don’t mean to ask that as though you wouldn’t know, but here is a sure way to find out.

There is a tool called amdu and it’s in your grid home. This is 11gR2, correct? If so, you can do the following:

$ORACLE_HOME/bin/amdu

It will create an amdu directory stamped with the current date, and in that directory it creates a file called report.txt, which lists all of the disks belonging to the DATA disk group. One of the fields for each disk is redundancy: if it’s set to 0 or 1, I believe you’re external; if it’s set to 2 or 3, you’re NORMAL or HIGH.
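
If default discovery does not pick up the raw devices, amdu also accepts an explicit discovery string (a sketch; adjust the pattern to your environment):

$ORACLE_HOME/bin/amdu -diskstring '/dev/raw/raw*'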

I don’t know how an externally redundant ASM diskgroup can be recovered.

From the ASM doc:

  • External redundancy

    Oracle ASM does not provide mirroring redundancy and relies on the storage system to provide RAID functionality. Any write error causes a forced dismount of the disk group. All disks must be located to successfully mount the disk group.

I will let someone else comment, but you may have to restore and recover the database.

If only the ASM disk header was corrupted, and not the whole disk, it might be worth trying to recover just the disk header. This does make sense even in an external-redundancy DG, since you can’t access the data anyway. Search MOS for how to do this.
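
For reference, 11g (and 10.2.0.5) ASM keeps a backup copy of the disk header, and kfed can restore the header from it. This is only a possibility to research on MOS first, not a verified fix for this case:

[oracle@DCSDB2 ~]$ kfed repair /dev/raw/raw9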

 
