Oracle ASM hidden parameters

_lm_asm_enq_hashing
_ges_diagnostics_asm_dump_level
_asm_runtime_capability_volume_support
_asm_disable_multiple_instance_check
_asm_disable_amdu_dump
_asmsid
_asm_allow_system_alias_rename
_asm_instlock_quota
asm_diskstring
_asm_disk_repair_time
asm_preferred_read_failure_groups
_asm_disable_profilediscovery
_asm_imbalance_tolerance
_asm_shadow_cycle
_asm_primary_load_cycles
_asm_primary_load
_asm_secondary_load_cycles
_asm_secondary_load
asm_diskgroups
asm_power_limit
_asm_log_scale_rebalance
_asm_sync_rebalance
_asm_ausize
_asm_blksize
_asm_acd_chunks
_asm_partner_target_disk_part
_asm_partner_target_fg_rel
_asm_automatic_rezone
_asm_rebalance_plan_size
_asm_rebalance_space_errors
_asm_libraries
_asm_maxio
_asm_allow_only_raw_disks
_asm_fob_tac_frequency
_asm_emulate_nfs_disk
_asmlib_test
_asm_allow_lvm_resilvering
_asm_lsod_bucket_size
_asm_iostat_latch_count
_asm_disable_smr_creation
_asm_wait_time
_asm_skip_resize_check
_asm_skip_rename_check
_asm_direct_con_expire_time
_asm_check_for_misbehaving_cf_clients
_asm_diag_dead_clients
_asm_reserve_slaves
_asm_kill_unresponsive_clients
_asm_disable_async_msgs
_asm_stripewidth
_asm_stripesize
_asm_random_zone
_asm_serialize_volume_rebalance
_asm_force_quiesce
_asm_dba_threshold
_asm_dba_batch
_asm_usd_batch
_asm_fail_random_rx
_asm_max_redo_buffer_size
_asm_max_cod_strides
_asm_kfioevent
_asm_evenread
_asm_evenread_alpha
_asm_evenread_alpha2
_asm_evenread_faststart
_asm_dbmsdg_nohdrchk
_asm_root_directory
_asm_hbeatiowait
_asm_hbeatwaitquantum
_asm_repairquantum
_asm_emulmax
_asm_emultimeout
_asm_kfdpevent
_asm_storagemaysplit
_asm_avoid_pst_scans
_asm_compatibility
_asm_admin_with_sysdba
_asm_allow_appliance_dropdisk_noforce
_asm_appliance_config_file
_disable_rebalance_space_check
_disable_rebalance_compact

_disable_rebalance_compact : Setting _DISABLE_REBALANCE_COMPACT=TRUE will disable the compacting phase of the disk group rebalance.

 

Since version 11.1, ASM performs disk compacting once the rebalance is complete.
This may have a noticeable impact on the first rebalance after the upgrade from 10g.
The idea of the compacting phase is to move the data as close to the outer tracks of the disks
(the lower numbered offsets) as possible. The first time the rebalance runs in 11g, it could
take a while if the disk group configuration changed – for example after an ALTER
DISKGROUP … ADD DISK. Subsequent manual rebalances without a configuration change
should not take as much time.
A disk group where the compacting phase of the rebalance has done a lot of work will tend to
have better performance than the pre-compact disk group. The data should be clustered near
the higher performing tracks of the disk, and the seek times should be shorter.
Relevant initialization parameter: _DISABLE_REBALANCE_COMPACT.
Relevant disk group attribute: _REBALANCE_COMPACT.
The hallmarks of the compacting phase:

  • Rebalance is taking ‘too long’
  • Updates to gv$asm_operation have stopped
  • In the ARB0 trace file we see many lines like this: ARB0 relocating file +<diskgroup>.nnn.mmm (1 entries)
  • Stack (systemstate, processstate or pstack) shows kfdCompact() function.
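If the compacting phase needs to be switched off, both knobs mentioned above can be used. A minimal sketch only – hidden parameters and underscore attributes are unsupported, so treat this as illustrative; the attribute form assumes COMPATIBLE.ASM is 11.1 or higher and DATA is just an example disk group name:

SQL> -- instance wide, in the ASM instance
SQL> alter system set "_disable_rebalance_compact"=TRUE;

SQL> -- or per disk group, via the hidden disk group attribute
SQL> alter diskgroup DATA set attribute '_rebalance_compact'='FALSE';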

 

_asm_avoid_pst_scans:

If set, this limits the number of full scans (of all disks) performed when reading the
PST in kfdp_readMeta(). It is also required to activate split PST checks
(which rely on the lock value block being available).

 

 

_asm_evenread – enable/disable even read.
0 = Defer to the disk group attribute (to be implemented)
1 = ENABLE (always on)
2 = DISABLE (always off)
18 = Enabled iff offline disks are found
_asm_evenread_tracing – tracing; 0 is off, 1 is 1 line per IO, 3 is verbose
_asm_evenread_alpha – alpha value, x2^14 (16 is a good value)
_asm_evenread_alpha2 – secondary alpha value, x2^14
_asm_evenread_faststart – number of IOs after offline/online to use alpha2 (default 0)
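For illustration only (these are hidden, unsupported parameters, so only to be touched under Oracle Support guidance), even read could be forced on in the ASM instance like this:

SQL> alter system set "_asm_evenread"=1 scope=spfile sid='*';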

[Oracle ASM Data Recovery] A case of "recovering COD: cache read a corrupted block"

A user added a LUN to an ASM diskgroup and found that the KFBTYP_DISKHEAD block of one ASM disk header had been unexpectedly wiped, leaving the diskgroup unable to mount. The DBA later repaired the KFBTYP_DISKHEAD block with kfed merge and similar techniques, but the diskgroup still failed to mount, and the following messages appeared in the alert log:

NOTE: F1X0 found on disk 0 fcn 0.0
NOTE: cache opening disk 1 of grp 1: VOL2 label:VOL2
NOTE: cache opening disk 2 of grp 1: VOL3 label:VOL3
NOTE: cache opening disk 3 of grp 1: VOL4 label:VOL4
NOTE: cache opening disk 4 of grp 1: VOL5 label:VOL5
NOTE: cache opening disk 5 of grp 1: VOL6 label:VOL6
NOTE: cache opening disk 6 of grp 1: VOL7 label:VOL7
NOTE: cache opening disk 7 of grp 1: VOL8 label:VOL8
NOTE: cache opening disk 8 of grp 1: VOL9 label:VOL9
NOTE: cache opening disk 9 of grp 1: VOL10 label:VOL10
NOTE: cache opening disk 10 of grp 1: VOL11 label:VOL11
NOTE: cache mounting (first) group 1/0x3A2C35D6 (DG)
* allocate domain 1, invalid = TRUE 
kjbdomatt send to node 0
kjbdomatt send to node 2
Mon Jan 27 02:18:51 CST 2014
NOTE: attached to recovery domain 1
Mon Jan 27 02:18:51 CST 2014
NOTE: starting recovery of thread=1 ckpt=1712.152 group=1
NOTE: advancing ckpt for thread=1 ckpt=1712.153
NOTE: cache recovered group 1 to fcn 0.491275704
Mon Jan 27 02:18:51 CST 2014
NOTE: LGWR attempting to mount thread 1 for disk group 1
NOTE: LGWR mounted thread 1 for disk group 1
NOTE: opening chunk 1 at fcn 0.491275704 ABA 
NOTE: seq=1713 blk=154 
Mon Jan 27 02:18:51 CST 2014
NOTE: cache mounting group 1/0x3A2C35D6 (DG) succeeded
SUCCESS: diskgroup DG was mounted
Mon Jan 27 02:18:53 CST 2014
NOTE: recovering COD for group 1/0x3a2c35d6 (DG)
WARNING: cache read a corrupted block gn=1 dsk=0 blk=2817 from disk 0
NOTE: a corrupted block was dumped to the trace file
ERROR: cache failed to read dsk=0  blk=2817 from disk(s): 0
ORA-15196: invalid ASM block header [kfc.c:8281] [endian_kfbh] [2147483648] [2817] [173 != 1]
System State dumped to trace file /u01/app/oracle/admin/+ASM/bdump/+asm2_rbal_31204.trc
NOTE: cache initiating offline of disk 0  group 1
WARNING: process 31204 initiating offline of disk 0.3913073997 (VOL1) with mask 0x3 in group 1
WARNING: Disk 0 in group 1 in mode: 0x7,state: 0x2 will be taken offline
NOTE: PST update: grp = 1, dsk = 0, mode = 0x6
Mon Jan 27 02:18:54 CST 2014
ERROR: too many offline disks in PST (grp 1)
Mon Jan 27 02:18:54 CST 2014
WARNING: Disk 0 in group 1 in mode: 0x7,state: 0x2 was taken offline
Mon Jan 27 02:18:54 CST 2014
NOTE: halting all I/Os to diskgroup DG
NOTE: active pin found: 0x0x65faff60
NOTE: active pin found: 0x0x65fb0170
NOTE: active pin found: 0x0x65fb0010
NOTE: active pin found: 0x0x65fb0220
NOTE: active pin found: 0x0x65fb02d0
NOTE: active pin found: 0x0x65fb00c0
NOTE: active pin found: 0x0x65fb0380
Mon Jan 27 02:18:54 CST 2014
ERROR: ORA-15130 in COD recovery for diskgroup 1/0x3a2c35d6 (DG)
ERROR: ORA-15130 thrown in RBAL for group number 1
Mon Jan 27 02:18:54 CST 2014
Errors in file /u01/app/oracle/admin/+ASM/bdump/+asm2_rbal_31204.trc:
ORA-15130: diskgroup "DG" is being dismounted
Mon Jan 27 02:18:54 CST 2014
ERROR: PST-initiated MANDATORY DISMOUNT of group DG
NOTE: cache dismounting group 1/0x3A2C35D6 (DG) 
Mon Jan 27 02:18:57 CST 2014
kjbdomdet send to node 0
detach from dom 1, sending detach message to node 0
kjbdomdet send to node 2
detach from dom 1, sending detach message to node 2
Mon Jan 27 02:18:57 CST 2014
Dirty detach reconfiguration started (old inc 23, new inc 23)
List of nodes:
 0 1 2
 Global Resource Directory partially frozen for dirty detach 
* dirty detach - domain 1 invalid = TRUE 
 138 GCS resources traversed, 0 cancelled
 6104 GCS resources on freelist, 6124 on array, 6124 allocated
Dirty Detach Reconfiguration complete
Mon Jan 27 02:18:57 CST 2014
freeing rdom 1
Mon Jan 27 02:18:57 CST 2014
WARNING: dirty detached from domain 1
Mon Jan 27 02:18:57 CST 2014
SUCCESS: diskgroup DG was dismounted
Mon Jan 27 02:18:57 CST 2014
WARNING: PST-initiated MANDATORY DISMOUNT of group DG not performed - group not mounted
Mon Jan 27 02:18:57 CST 2014
Errors in file /u01/app/oracle/admin/+ASM/bdump/+asm2_b001_31755.trc:
ORA-15001: diskgroup "DG" does not exist or is not mounted
ORA-15001: diskgroup "DG" does not exist or is not mounted
ORA-15001: diskgroup "DG" does not exist or is not mounted
Mon Jan 27 02:31:00 CST 2014

Here we can see that the diskgroup mount got as far as the "recovering COD for group 1/0x3a2c35d6 (DG)" stage, at which point a logical corrupt block was found (WARNING: cache read a corrupted block gn=1 dsk=0 blk=2817 from disk 0, NOTE: a corrupted block was dumped to the trace file, ERROR: cache failed to read dsk=0 blk=2817 from disk(s): 0), and that corrupt block caused ORA-15196: invalid ASM block header [kfc.c:8281] [endian_kfbh] [2147483648] [2817] [173 != 1].

Here 2817 is the block number of the failing ASM metadata block, and 173 is the value actually read at the endian_kfbh offset; the check "173 != 1" fails because 1 is the value that should be stored at that position. Since an invalid byte-order (endian_kfbh) value was read from the block, ASM raised this error (ORA-15196, ASM's equivalent of an internal error).
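As a rough illustration of how such a block could be examined (my own sketch, not part of the original case): with the default 1 MB allocation unit and 4 KB metadata block size there are 256 metadata blocks per AU, so physical block 2817 of disk 0 corresponds to AU 11, block 1 (2817 = 11 * 256 + 1). The device path below is a placeholder for disk 0 (VOL1); adjust it to your environment:

$ kfed read /dev/oracleasm/disks/VOL1 aun=11 blkn=1 | more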

The COD in "recovering COD for group 1/0x3a2c35d6 (DG)" refers to ASM metadata file number 4, the Continuing Operations Directory (COD). This metadata file records operations that cannot be completed within a single metadata block, so that they can be recovered if the ASM instance crashes; examples are creating, deleting and resizing files. Within file number 4, block 1 (blkn=1) is KFBTYP_COD_RB, the rollback data, and the blocks after it are KFBTYP_COD_DATA.

The rollback operation opcodes include:

1 - Create a file
2 - Delete a file
3 - Resize a file
4 - Drop alias entry
5 - Rename alias entry
6 - Rebalance space COD
7 - Drop disks force
8 - Attribute drop
9 - Disk Resync
10 - Disk Repair Time
11 - Volume create
12 - Volume delete
13 - Attribute directory creation
14 - Set zone attributes
15 - User drop

 

Every time an ASM diskgroup attempts to mount, the data in the file number 4 COD is read to make sure those operations are either completed or rolled back.

 

When ASM file number 4 (the COD) contains corrupt metadata blocks like this, recovery generally requires setting internal events by hand and manually patching the ASM metadata.

If you hit this kind of incident, it is recommended to back up the first 100 MB of every ASM disk straight away, to preserve the evidence so that nothing is destroyed before a recovery specialist gets involved.
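A minimal sketch of such a backup (the device paths are placeholders; adjust them to your own ASM disks):

$ for d in /dev/oracleasm/disks/VOL*
  do
    dd if=$d of=/tmp/$(basename $d)_first100m.dd bs=1M count=100
  done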

If you cannot sort it out yourself, the ASKMACLEAN professional Oracle database recovery team can help you recover the data!


[Repost] ASM Attributes Directory


The ASM attributes directory – the ASM metadata file number 9 – contains the information about disk group attributes. The attributes directory exists only in disk groups with the COMPATIBLE.ASM (attribute!) set to 11.1 or higher.

 

Disk group attributes were introduced in ASM version 11.1[1] and can be used to fine-tune the disk group properties. It is worth noting that some attributes can be set only at the time of the disk group creation (e.g. AU_SIZE), while others can be set at any time (e.g. DISK_REPAIR_TIME). Some attribute values might be stored in the disk header (e.g. AU_SIZE), while some others (e.g. COMPATIBLE.ASM) can be stored either in the partnership and status table or in the disk header (depending on the ASM version).
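For example, an attribute that can be changed at any time, such as DISK_REPAIR_TIME, is modified on a mounted disk group with ALTER DISKGROUP ... SET ATTRIBUTE (shown here against the DATA disk group used later in this post):

SQL> alter diskgroup DATA set attribute 'disk_repair_time' = '8.0h';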

 

Public attributes

 

Most attributes are stored in the attributes directory and are externalized via V$ASM_ATTRIBUTE view. Let’s have a look at disk group attributes for all my disk groups.

 

SQL> SELECT g.name "Group", a.name "Attribute", a.value "Value"
FROM v$asm_diskgroup g, v$asm_attribute a
WHERE g.group_number=a.group_number and a.name not like 'template%';

Group Attribute               Value
----- ----------------------- ----------------
ACFS  disk_repair_time        3.6h
      au_size                 1048576
      access_control.umask    026
      access_control.enabled  TRUE
      cell.smart_scan_capable FALSE
      compatible.advm         11.2.0.0.0
      compatible.rdbms        11.2
      compatible.asm          11.2.0.0.0
      sector_size             512
DATA  access_control.enabled  TRUE
      cell.smart_scan_capable FALSE
      compatible.rdbms        11.2
      compatible.asm          11.2.0.0.0
      sector_size             512
      au_size                 1048576
      disk_repair_time        3.6h
      access_control.umask    026

SQL>

 

One attribute value we can modify at any time is the disk repair timer. Let’s use asmcmd to do that for disk group DATA.

 

$ asmcmd setattr -G DATA disk_repair_time '8.0h'

 

$ asmcmd lsattr -lm disk_repair_time
Group_Name  Name              Value  RO  Sys
ACFS        disk_repair_time  3.6h   N   Y
DATA        disk_repair_time  8.0h   N   Y
$

 

Hidden attributes

 

As mentioned in the introduction, the attributes directory is the ASM metadata file number 9. Let’s locate the attributes directory, in disk group number 2:

 

SQL> SELECT x.disk_kffxp "Disk#",
x.xnum_kffxp "Extent",
x.au_kffxp "AU",
d.name "Disk name"
FROM x$kffxp x, v$asm_disk_stat d
WHERE x.group_kffxp=d.group_number
and x.disk_kffxp=d.disk_number
and d.group_number=2
and x.number_kffxp=9
ORDER BY 1, 2;

Disk# Extent   AU Disk name
----- ------ ---- ---------
    0      0 1146 ASMDISK1
    1      0 1143 ASMDISK2
    2      0 1150 ASMDISK3

SQL>

 

Now check out the attributes with the kfed tool.

 

$ kfed read /dev/oracleasm/disks/ASMDISK3 aun=1150 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           23 ; 0x002: KFBTYP_ATTRDIR
kfede[0].entry.incarn:                1 ; 0x024: A=1 NUMM=0x0
kfede[0].entry.hash:                  0 ; 0x028: 0x00000000
kfede[0].entry.refer.number: 4294967295 ; 0x02c: 0xffffffff
kfede[0].entry.refer.incarn:          0 ; 0x030: A=0 NUMM=0x0
kfede[0].name:         disk_repair_time ; 0x034: length=16
kfede[0].value:                    8.0h ; 0x074: length=4

 

Fields kfede[i] will have the disk group attribute names and values. Let’s look at all of them:

 

$ kfed read /dev/oracleasm/disks/ASMDISK3 aun=1150 | egrep "name|value"
kfede[0].name:         disk_repair_time ; 0x034: length=16
kfede[0].value:                    8.0h ; 0x074: length=4
kfede[1].name:       _rebalance_compact ; 0x1a8: length=18
kfede[1].value:                    TRUE ; 0x1e8: length=4
kfede[2].name:            _extent_sizes ; 0x31c: length=13
kfede[2].value:                  1 4 16 ; 0x35c: length=6
kfede[3].name:           _extent_counts ; 0x490: length=14
kfede[3].value:   20000 20000 214748367 ; 0x4d0: length=21
kfede[4].name:                        _ ; 0x604: length=1
kfede[4].value:                       0 ; 0x644: length=1
kfede[5].name:                  au_size ; 0x778: length=7
kfede[5].value:               ; 0x7b8: length=9
kfede[6].name:              sector_size ; 0x8ec: length=11
kfede[6].value:               ; 0x92c: length=9
kfede[7].name:               compatible ; 0xa60: length=10
kfede[7].value:               ; 0xaa0: length=9
kfede[8].name:                     cell ; 0xbd4: length=4
kfede[8].value:                   FALSE ; 0xc14: length=5
kfede[9].name:           access_control ; 0xd48: length=14
kfede[9].value:                   FALSE ; 0xd88: length=5

 

This gives us a glimpse into the hidden (underscore) disk group attributes. We can see that the value of the _REBALANCE_COMPACT is TRUE. That is the attribute to do with the compacting phase of the disk group rebalance. We also see how the extent size will grow (_EXTENT_SIZES) – initial size will be 1 AU, then 4 AU and finally 16 AU. And the _EXTENT_COUNTS shows the breaking points for the extent size growth – first 20000 extents will be 1 AU in size, next 20000 will be 4 AU and the rest will be 16 AU.
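As a quick back-of-the-envelope illustration of those defaults (my own arithmetic, assuming a 1 MB allocation unit):

extents     0 - 19999 :  1 AU each  ->  the first ~19.5 GB of a file
extents 20000 - 39999 :  4 AU each  ->  the next  ~78.1 GB
extents 40000 and up  : 16 AU each  ->  the rest of the file

So with 1 MB AUs the variable extent sizes only start to matter for files larger than roughly 20 GB.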

 

Conclusion

 

Disk group attributes can be used to fine tune the disk group properties. Most attributes are stored in the attributes directory and are externalized via V$ASM_ATTRIBUTE view. For details about the attributes please see the ASM Disk Group Attributes post.

 

[1] In ASM versions prior to 11.1 it was possible to create a disk group with a user-specified allocation unit size. That was done via the hidden ASM initialization parameter _ASM_AUSIZE. While technically that was not a disk group attribute, it served the same purpose as the AU_SIZE attribute in ASM version 11.1 and later.

[Repost] ASM files number 10 and 11

ASM metadata file number 10 is the ASM user directory and ASM file number 11 is the ASM group directory. These are supporting structures for the ASM file access control feature.

ASM file access control can be used to restrict file access to specific ASM clients (typically databases), based on the operating system effective user identification number of a database home owner.

This information is externalized via V$ASM_USER, V$ASM_USERGROUP and V$ASM_USERGROUP_MEMBER views.

ASM users and groups

To make use of the ASM file access control feature, we need to have the operating system users and groups in place. We would then add them to the ASM disk group(s) via the ALTER DISKGROUP ... ADD USERGROUP command (a sketch of that step follows below). I have skipped that part here to keep the focus on the ASM user and group directories.
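A minimal sketch of that skipped step, assuming ACCESS_CONTROL.ENABLED has been set on the disk group and the OS users below already exist; DATA is used as a placeholder disk group name, and the user group names match the ones that appear in the query output further down:

SQL> alter diskgroup DATA set attribute 'access_control.enabled' = 'true';
SQL> alter diskgroup DATA add user 'grid', 'oracle', 'oracle1', 'oracle2';
SQL> alter diskgroup DATA add usergroup 'GRIDTEAM' with member 'grid';
SQL> alter diskgroup DATA add usergroup 'DBATEAM1' with member 'oracle';
SQL> alter diskgroup DATA add usergroup 'DBATEAM2' with member 'oracle1', 'oracle2';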

Here are the operating system users set up on this system

$ id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1020(asmadmin),1021(asmdba),1031(dba)
$ id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1021(asmdba),1031(dba)
$ id oracle1
uid=1102(oracle1) gid=1033(dba1) groups=1033(dba1)
$ id oracle2
uid=1103(oracle2) gid=1034(dba2) groups=1034(dba2)

And here are ASM users and groups I set up for my disk groups.

SQL> SELECT u.group_number "Disk group#",
u.os_id "OS ID",
u.os_name "OS user",
u.user_number "ASM user#",
g.usergroup_number "ASM group#",
g.name "ASM user group"
FROM v$asm_user u, v$asm_usergroup g, v$asm_usergroup_member m
WHERE u.group_number=g.group_number and u.group_number=m.group_number
and u.user_number=m.member_number
and g.usergroup_number=m.usergroup_number
ORDER BY 1, 2;

Disk group# OS ID OS user ASM user# ASM group# ASM user group
———– —– ——- ——— ———- ————–
1 1100  grid            1          3 GRIDTEAM
1101  oracle          2          1 DBATEAM1
1102  oracle1         3          2 DBATEAM2
1103  oracle2         4          2 DBATEAM2
2 1101  oracle          2          1 DBATEAM1

Look inside 

Get allocation units for ASM user and group directories in disk group number 1.

SQL> SELECT x.number_kffxp "File#",
x.disk_kffxp "Disk#",
x.xnum_kffxp "Extent",
x.au_kffxp "AU",
d.name "Disk name"
FROM x$kffxp x, v$asm_disk_stat d
WHERE x.group_kffxp=d.group_number
and x.disk_kffxp=d.disk_number
and d.group_number=1
and x.number_kffxp in (10, 11)
ORDER BY 1, 2;

File#      Disk#     Extent         AU Disk name
———- ———- ———- ———- ——————————
10          0          0       2139 ASMDISK5
1          0       2139 ASMDISK6
11          0          0       2140 ASMDISK5
1          0       2140 ASMDISK6

The user directory metadata has one block per user entry, where the block number corresponds to the user number (v$asm_user.user_number). We have four users, with user numbers 1-4, so those should be in user directory blocks 1-4. Let’s have a look.

$ kfed read /dev/oracleasm/disks/ASMDISK5 aun=2139 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           24 ; 0x002: KFBTYP_USERDIR

kfzude.user:                       1100 ; 0x038: length=4

So block 1 is for user with the OS user ID 1100. This agrees with the output from v$asm_user above. For the other blocks we have:

$ let b=1
$ while (( $b <= 4 ))
do
kfed read /dev/oracleasm/disks/ASMDISK5 aun=2139 blkn=$b | grep kfzude.user
let b=b+1
done

kfzude.user:                       1100 ; 0x038: length=4
kfzude.user:                       1101 ; 0x038: length=4
kfzude.user:                       1102 ; 0x038: length=4
kfzude.user:                       1103 ; 0x038: length=4

As expected, that shows four operating system user IDs in the ASM user directory.

Group directory entries are also one per block, where the block number would match the ASM group number. Let’s have a look:

$ let b=1
$ while (( $b <= 3 ))
do
kfed read /dev/oracleasm/disks/ASMDISK5 aun=2140 blkn=$b | grep kfzgde.name
let b=b+1
done

kfzgde.name:                   DBATEAM1 ; 0x03c: length=8
kfzgde.name:                   DBATEAM2 ; 0x03c: length=8
kfzgde.name:                   GRIDTEAM ; 0x03c: length=8

This shows ASM group names as specified for this disk group.

Conclusion

ASM user and group directories are supporting structures for ASM file access control feature, introduced in version 11.2. This information is externalized via V$ASM_USER, V$ASM_USERGROUP and V$ASM_USERGROUP_MEMBER views.

[Repost] ASM file number 7

ASM metadata file number 7 – the volume directory – keeps track of files associated with ASM Dynamic Volume Manager (ADVM) volumes.

An ADVM volume device is constructed from an ASM dynamic volume. One or more ADVM volume devices may be configured within each disk group. ASM Cluster File System (ACFS) is layered on ASM through the ADVM interface. ASM dynamic volume manager is another client of ASM – the same way the database is. When a volume is opened, the corresponding ASM file is opened and ASM extents are sent to the ADVM driver.

There are two file types associated with ADVM volumes

  • ASMVOL – The volume file which is the container for the volume storage
  • ASMVDRL – The file that contains the volume’s Dirty Region Logging (DRL) information. This file is required for re-silvering mirrors

Turn up the ADVM volume

It is not necessary to create a dedicated disk group for ADVM, but it does make sense to do so. That way we keep the database files separate from the ACFS files. Let’s have a look at an example.

SQL> create diskgroup ACFS
disk 'ORCL:ASMDISK5', 'ORCL:ASMDISK6'
attribute 'COMPATIBLE.ASM' = '11.2', 'COMPATIBLE.ADVM' = '11.2';

Diskgroup created.

To be able to add volumes to a disk group, the attributes COMPATIBLE.ASM and COMPATIBLE.ADVM must be set to at least '11.2'. Also, the ADVM/ACFS drivers have to be loaded (this is always done in cluster environments, but it may have to be done manually in a single instance setup).
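In a single instance (Oracle Restart) environment the drivers can be loaded manually as root; a sketch, assuming the Grid Infrastructure home is $GRID_HOME:

# $GRID_HOME/bin/acfsload start
# lsmod | grep oracle    # should now list oracleacfs, oracleadvm and oracleoks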

I can now create a couple of volumes in this disk group.

$ asmcmd volcreate -G ACFS -s 2G ACFS_VOL1

$ asmcmd volcreate -G ACFS -s 2G ACFS_VOL2

$ asmcmd volinfo -a
Diskgroup Name: ACFS

Volume Name: ACFS_VOL1
Volume Device: /dev/asm/acfs_vol1-159
State: ENABLED
Size (MB): 2048
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage:
Mountpath:

Volume Name: ACFS_VOL2
Volume Device: /dev/asm/acfs_vol2-159
State: ENABLED
Size (MB): 2048
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage:
Mountpath:

$

Note that there are no mount paths associated with the volumes as I haven’t used them yet.

Let’s now look at the ADVM volume metadata. First find the allocation units of the volume directory.

SQL> SELECT x.xnum_kffxp "Extent",
x.au_kffxp "AU",
x.disk_kffxp "Disk #",
d.name "Disk name"
FROM x$kffxp x, v$asm_disk_stat d
WHERE x.group_kffxp=d.group_number
and x.disk_kffxp=d.disk_number
and x.group_kffxp=2
and x.number_kffxp=7
ORDER BY 1, 2;

Extent         AU     Disk # Disk name
———- ———- ———- ——————————
0         53          1 ASMDISK6
0         53          0 ASMDISK5

Use kfed to have a look at the actual metadata.

$ kfed read /dev/oracleasm/disks/ASMDISK5 aun=53 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           22 ; 0x002: KFBTYP_VOLUMEDIR

kfvvde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfvvde.entry.hash:                    0 ; 0x028: 0x00000000
kfvvde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfvvde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfvvde.volnm:           ++AVD_DG_NUMBER ; 0x034: length=15
kfvvde.usage:                           ; 0x054: length=0
kfvvde.dgname:                          ; 0x074: length=0
kfvvde.clname:                          ; 0x094: length=0
kfvvde.mountpath:                       ; 0x0b4: length=0
kfvvde.drlinit:                       0 ; 0x4b5: 0x00
kfvvde.pad1:                          0 ; 0x4b6: 0x0000
kfvvde.volfnum.number:                0 ; 0x4b8: 0x00000000
kfvvde.volfnum.incarn:                0 ; 0x4bc: 0x00000000
kfvvde.drlfnum.number:                0 ; 0x4c0: 0x00000000
kfvvde.drlfnum.incarn:                0 ; 0x4c4: 0x00000000
kfvvde.volnum:                        0 ; 0x4c8: 0x0000
kfvvde.avddgnum:                    159 ; 0x4ca: 0x009f
kfvvde.extentsz:                      0 ; 0x4cc: 0x00000000
kfvvde.volstate:                      4 ; 0x4d0: D=0 C=0 R=1

That was block 0 of the allocation unit 53. It only contains the marker for the ADVM volume (++AVD_DG_NUMBER). The actual volume info is in blocks 1 and up.

$ kfed read /dev/oracleasm/disks/ASMDISK5 aun=53 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           22 ; 0x002: KFBTYP_VOLUMEDIR

kfvvde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfvvde.entry.hash:                    0 ; 0x028: 0x00000000
kfvvde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfvvde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfvvde.volnm:                 ACFS_VOL1 ; 0x034: length=9
kfvvde.usage:                           ; 0x054: length=0
kfvvde.dgname:                          ; 0x074: length=0
kfvvde.clname:                          ; 0x094: length=0
kfvvde.mountpath:                       ; 0x0b4: length=0
kfvvde.drlinit:                       0 ; 0x4b5: 0x00
kfvvde.pad1:                          0 ; 0x4b6: 0x0000
kfvvde.volfnum.number:              257 ; 0x4b8: 0x00000101
kfvvde.volfnum.incarn:        771971291 ; 0x4bc: 0x2e0358db
kfvvde.drlfnum.number:              256 ; 0x4c0: 0x00000100
kfvvde.drlfnum.incarn:        771971289 ; 0x4c4: 0x2e0358d9
kfvvde.volnum:                        1 ; 0x4c8: 0x0001
kfvvde.avddgnum:                    159 ; 0x4ca: 0x009f
kfvvde.extentsz:                      8 ; 0x4cc: 0x00000008
kfvvde.volstate:                      2 ; 0x4d0: D=0 C=1 R=0

$ kfed read /dev/oracleasm/disks/ASMDISK5 aun=53 blkn=2 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           22 ; 0x002: KFBTYP_VOLUMEDIR

kfvvde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfvvde.entry.hash:                    0 ; 0x028: 0x00000000
kfvvde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfvvde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfvvde.volnm:                 ACFS_VOL2 ; 0x034: length=9
kfvvde.usage:                           ; 0x054: length=0
kfvvde.dgname:                          ; 0x074: length=0
kfvvde.clname:                          ; 0x094: length=0
kfvvde.mountpath:                       ; 0x0b4: length=0
kfvvde.drlinit:                       0 ; 0x4b5: 0x00
kfvvde.pad1:                          0 ; 0x4b6: 0x0000
kfvvde.volfnum.number:              259 ; 0x4b8: 0x00000103
kfvvde.volfnum.incarn:        771971303 ; 0x4bc: 0x2e0358e7
kfvvde.drlfnum.number:              258 ; 0x4c0: 0x00000102
kfvvde.drlfnum.incarn:        771971301 ; 0x4c4: 0x2e0358e5
kfvvde.volnum:                        2 ; 0x4c8: 0x0002
kfvvde.avddgnum:                    159 ; 0x4ca: 0x009f
kfvvde.extentsz:                      8 ; 0x4cc: 0x00000008
kfvvde.volstate:                      2 ; 0x4d0: D=0 C=1 R=0

Block 1 of ASM metadata file 7 has the information about the first volume (kfvvde.volnm: ACFS_VOL1). Note that there are two files associated with that volume:

  • DRL file (kfvvde.drlfnum.number: 256)
  • Volume file (kfvvde.volfnum.number: 257)

Block 2 has the information about the second volume (kfvvde.volnm: ACFS_VOL2). There are also two files associated with that volume:

  • DRL file – kfvvde.drlfnum.number: 258
  • Volume file – kfvvde.volfnum.number: 259

As these are special files, they are not shown in the output of the 'asmcmd ls' command or when we query V$ASM_ALIAS. But they do show up in the V$ASM_FILE view.

SQL> SELECT file_number "File #", bytes/1024/1024 "Size (MB)", type
FROM v$asm_file
WHERE group_number=2;

File #  Size (MB) TYPE
———- ———- ———-
256         17 ASMVDRL
257       2048 ASMVOL
258         17 ASMVDRL
259       2048 ASMVOL

Create ASM cluster file system

I can now use the volume device to create an ASM cluster file system (ACFS).

# /sbin/mkfs -t acfs /dev/asm/acfs_vol1-159
mkfs.acfs: version                   = 11.2.0.3.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/acfs_vol1-159
mkfs.acfs: volume size               = 2147483648
mkfs.acfs: Format complete.

# mkdir /acfs1

# mount -t acfs /dev/asm/acfs_vol1-159 /acfs1

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)

oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/acfs_vol1-159 on /acfs1 type acfs (rw)

$ asmcmd volinfo -G ACFS ACFS_VOL1
Diskgroup Name: ACFS

Volume Name: ACFS_VOL1
Volume Device: /dev/asm/acfs_vol1-159
State: ENABLED
Size (MB): 2048
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: /acfs1

$

Let’s see if the mount path info now shows up in volume directory:

$ kfed read /dev/oracleasm/disks/ASMDISK6 aun=53 blkn=1 | grep mountpath
kfvvde.mountpath:                /acfs1 ; 0x0b4: length=6

It does as expected.

Conclusion

One or more ADVM volume devices may be configured within each disk group. ASM Cluster File System (ACFS) is layered on ASM through the ADVM interface. ASM dynamic volume manager is another client of ASM – the same way the database is.

There are two internal file types associated with ASM volumes:

  • ASMVOL – The volume file which is the container for the volume storage
  • ASMVDRL – The file that contains the volume’s Dirty Region Logging (DRL) information

 

[Repost] ASM file number 8

The disk Used Space Directory (USD) – ASM file number 8 – maintains the number of allocation units (AU) used per zone, per disk in a disk group. The USD is split into a set of Used Space Entries (USE). Each USE will maintain a counter for the number of used AUs per disk, per zone. A disk zone can be either HOT or COLD.

This structure is version 11.2 specific and is relevant to the Intelligent Data Placement feature. The USD will be present in a newly created disk group in version 11.2 or when the ASM compatibility is advanced to 11.2.

Locating the used space directory

Let’s get the allocation units for the used space directory – for all disk groups.

SQL> break on Group#
SQL> SELECT d.group_number "Group#",
x.disk_kffxp "Disk#",
x.xnum_kffxp "Extent",
x.au_kffxp "AU",
d.name "Disk name"
FROM x$kffxp x, v$asm_disk_stat d
WHERE x.group_kffxp=d.group_number
and x.disk_kffxp=d.disk_number
and x.number_kffxp=8
ORDER BY 1, 2;

Group#  Disk#  Extent     AU Disk name
——- —— ——- —— ————
1      0       0     51 ASMDISK5
1       0     51 ASMDISK6
2      0       0     41 ASMDISK1
2       0     39 ASMDISK3
3       0     38 ASMDISK4

Check the disk used space allocation for all disks in all disk groups.

SQL> SELECT group_number "Group#",
name "Disk name",
hot_used_mb "Hot (MB)",
cold_used_mb "Cold (MB)"
FROM v$asm_disk_stat
ORDER BY 1;

Group# Disk name      Hot (MB)  Cold (MB)
——- ———— ———- ———-
1 ASMDISK5              0       4187
ASMDISK6              0       4187
2 ASMDISK4              0       1138
ASMDISK2              0       1135
ASMDISK1              0       1139
ASMDISK3              0       1144

The result shows that all space in all disks is allocated in the cold disk zones. Let’s have a closer look at the used space directory with kfed.

$ kfed read /dev/oracleasm/disks/ASMDISK5 aun=51 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           26 ; 0x002: KFBTYP_USEDSPC

kfdusde[0].used[0].spare:             0 ; 0x000: 0x00000000
kfdusde[0].used[0].hi:                0 ; 0x004: 0x00000000
kfdusde[0].used[0].lo:             4134 ; 0x008: 0x00001026
kfdusde[0].used[1].spare:             0 ; 0x00c: 0x00000000
kfdusde[0].used[1].hi:                0 ; 0x010: 0x00000000
kfdusde[0].used[1].lo:                0 ; 0x014: 0x00000000
kfdusde[1].used[0].spare:             0 ; 0x018: 0x00000000
kfdusde[1].used[0].hi:                0 ; 0x01c: 0x00000000
kfdusde[1].used[0].lo:             4134 ; 0x020: 0x00001026
kfdusde[1].used[1].spare:             0 ; 0x024: 0x00000000
kfdusde[1].used[1].hi:                0 ; 0x028: 0x00000000
kfdusde[1].used[1].lo:                0 ; 0x02c: 0x00000000
kfdusde[2].used[0].spare:             0 ; 0x030: 0x00000000
kfdusde[2].used[0].hi:                0 ; 0x034: 0x00000000
kfdusde[2].used[0].lo:                0 ; 0x038: 0x00000000
kfdusde[2].used[1].spare:             0 ; 0x03c: 0x00000000
kfdusde[2].used[1].hi:                0 ; 0x040: 0x00000000
kfdusde[2].used[1].lo:                0 ; 0x044: 0x00000000

There are two disks in disk group number 1, so only the first two kfdusde entries are populated. And both show that all the space is allocated in the cold zone.

Check the used space directory entries for disk group 2.

$ kfed read /dev/oracleasm/disks/ASMDISK1 aun=41 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           26 ; 0x002: KFBTYP_USEDSPC

kfdusde[0].used[0].spare:             0 ; 0x000: 0x00000000
kfdusde[0].used[0].hi:                0 ; 0x004: 0x00000000
kfdusde[0].used[0].lo:             1092 ; 0x008: 0x00000444
kfdusde[0].used[1].spare:             0 ; 0x00c: 0x00000000
kfdusde[0].used[1].hi:                0 ; 0x010: 0x00000000
kfdusde[0].used[1].lo:                0 ; 0x014: 0x00000000
kfdusde[1].used[0].spare:             0 ; 0x018: 0x00000000
kfdusde[1].used[0].hi:                0 ; 0x01c: 0x00000000
kfdusde[1].used[0].lo:             1093 ; 0x020: 0x00000445
kfdusde[1].used[1].spare:             0 ; 0x024: 0x00000000
kfdusde[1].used[1].hi:                0 ; 0x028: 0x00000000
kfdusde[1].used[1].lo:                0 ; 0x02c: 0x00000000
kfdusde[2].used[0].spare:             0 ; 0x030: 0x00000000
kfdusde[2].used[0].hi:                0 ; 0x034: 0x00000000
kfdusde[2].used[0].lo:             1098 ; 0x038: 0x0000044a
kfdusde[2].used[1].spare:             0 ; 0x03c: 0x00000000
kfdusde[2].used[1].hi:                0 ; 0x040: 0x00000000
kfdusde[2].used[1].lo:                0 ; 0x044: 0x00000000
kfdusde[3].used[0].spare:             0 ; 0x048: 0x00000000
kfdusde[3].used[0].hi:                0 ; 0x04c: 0x00000000
kfdusde[3].used[0].lo:             1094 ; 0x050: 0x00000446
kfdusde[3].used[1].spare:             0 ; 0x054: 0x00000000
kfdusde[3].used[1].hi:                0 ; 0x058: 0x00000000
kfdusde[3].used[1].lo:                0 ; 0x05c: 0x00000000
kfdusde[4].used[0].spare:             0 ; 0x060: 0x00000000
kfdusde[4].used[0].hi:                0 ; 0x064: 0x00000000
kfdusde[4].used[0].lo:                0 ; 0x068: 0x00000000
kfdusde[4].used[1].spare:             0 ; 0x06c: 0x00000000
kfdusde[4].used[1].hi:                0 ; 0x070: 0x00000000
kfdusde[4].used[1].lo:                0 ; 0x074: 0x00000000

Disk group 2 has four disks and again all space is allocated in the cold disk zones.

Hot files

Let’s create a disk group template for hot files.

SQL> alter diskgroup DATA add template HOTFILE attributes (HOT);

Diskgroup altered.

Note that this feature requires the disk group attribute COMPATIBLE.RDBMS to be at least 11.2.
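The current value can be checked in V$ASM_ATTRIBUTE and, if necessary, advanced (advancing compatibility is irreversible); a sketch against disk group DATA, which is group number 2 in this post:

SQL> SELECT value FROM v$asm_attribute
     WHERE group_number=2 and name='compatible.rdbms';

SQL> alter diskgroup DATA set attribute 'compatible.rdbms' = '11.2';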

Now create a datafile that will be allocated in the disks' hot zones.

SQL> create tablespace T1_HOT datafile '+DATA(HOTFILE)' size 50M;

Tablespace created.

Let’s check the space allocation now, by running the last query again.

SQL> SELECT group_number "Group#",
name "Disk name",
hot_used_mb "Hot (MB)",
cold_used_mb "Cold (MB)"
FROM v$asm_disk_stat
ORDER BY 1;

Group# Disk name                        Hot (MB)  Cold (MB)
———- —————————— ———- ———-
1 ASMDISK5                                0       4187
ASMDISK6                                0       4187
2 ASMDISK4                               13       1152
ASMDISK2                               12       1153
ASMDISK1                               13       1152
ASMDISK3                               13       1153

The result shows that 51 MB (50 MB for the file and 1 MB for the file header) are now allocated in the hot zones across all disks in the disk group.

Warm up a file

I can also move an existing datafile into the hot zone. Let’s find all datafiles in disk group DATA.

$ asmcmd find --type datafile +DATA "*"
+DATA/BR/DATAFILE/EXAMPLE.269.769030517
+DATA/BR/DATAFILE/NOT_IMPORTANT.273.771795255
+DATA/BR/DATAFILE/SYSAUX.257.769030245
+DATA/BR/DATAFILE/SYSTEM.256.769030243
+DATA/BR/DATAFILE/T1_HOT.274.772054033
+DATA/BR/DATAFILE/TRIPLE_C.272.771794469
+DATA/BR/DATAFILE/TRIPLE_M.271.771793293
+DATA/BR/DATAFILE/UNDOTBS1.258.769030245
+DATA/BR/DATAFILE/USERS.259.769030245

Let’s move the undo tablespace datafile into the hot zone.

SQL> alter diskgroup DATA modify file '+DATA/BR/DATAFILE/UNDOTBS1.258.769030245' attributes (HOT);

Diskgroup altered.

This action triggers a rebalance of disk group DATA, as the file extents have to be moved to the disks' hot regions. Once the rebalance completes, the last query shows more data in the hot region for the disks in disk group number 2.
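The progress of that rebalance can be followed from the ASM instance; a simple sketch:

SQL> SELECT group_number, operation, state, power, sofar, est_work, est_minutes
     FROM v$asm_operation;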

SQL> SELECT group_number "Group#",
name "Disk name",
hot_used_mb "Hot (MB)",
cold_used_mb "Cold (MB)"
FROM v$asm_disk_stat
ORDER BY 1;

Group# Disk name                        Hot (MB)  Cold (MB)
———- —————————— ———- ———-
1 ASMDISK5                                0       4187
ASMDISK6                                0       4187
2 ASMDISK4                               40       1125
ASMDISK2                               39       1126
ASMDISK1                               39       1126
ASMDISK3                               39       1127

Conclusion

The disk Used Space Directory (USD) – ASM file number 8 – maintains the number of allocation units (AU) used per zone, per disk in a disk group. It is a supporting metadata structure for the Intelligent Data Placement feature in ASM version 11.2. One handy use of this feature is to control datafile placement in the disks' hot or cold zones.

[Repost] ASM file number 5

The Template Directory – ASM file number 5 – contains information about all file templates for the disk group.

There are two types of templates – system and user created. The default (system) templates are always available for each file type supported by ASM. User created templates can be added for custom template specifications.

Each template entry contains the following information:

  • The template name (for the default templates this corresponds to the file type)
  • The file redundancy (defaults to the disk group redundancy)
  • The file striping (default is file-type specific)
  • The system flag (set for the system templates)

Using templates

The full template information is externalized via V$ASM_TEMPLATE view. Let’s have a look at my templates:

SQL> SELECT name "Template Name", redundancy "Redundancy", stripe "Striping", system "System"
FROM v$asm_template
WHERE group_number=1;

Template Name            Redundancy       Striping         System
———————— —————- —————- ——–
PARAMETERFILE            MIRROR           COARSE           Y
ASMPARAMETERFILE         MIRROR           COARSE           Y
DUMPSET                  MIRROR           COARSE           Y
CONTROLFILE              HIGH             FINE             Y
FLASHFILE                MIRROR           COARSE           Y
ARCHIVELOG               MIRROR           COARSE           Y
ONLINELOG                MIRROR           COARSE           Y
DATAFILE                 MIRROR           COARSE           Y
TEMPFILE                 MIRROR           COARSE           Y
BACKUPSET                MIRROR           COARSE           Y
AUTOBACKUP               MIRROR           COARSE           Y
XTRANSPORT               MIRROR           COARSE           Y
CHANGETRACKING           MIRROR           COARSE           Y
FLASHBACK                MIRROR           COARSE           Y
DATAGUARDCONFIG          MIRROR           COARSE           Y
OCRFILE                  MIRROR           COARSE           Y

16 rows selected.

There is one template that stands out – CONTROLFILE. It is the default template for database control files. Note that a file created with this template will always be triple mirrored and fine striped. The most interesting thing about it is that we can use it to create any database file.

Here is an example (note that I am connected to the database instance here):

SQL> create tablespace TRIPLE_F datafile '+DATA(CONTROLFILE)' size 1m;

Tablespace created.

SQL> SELECT name FROM v$datafile WHERE name like '%triple_f%';

NAME
——————————————————————————–
+DATA/br/datafile/triple_f.271.771793293

ASM assigned file number 271 to my file. Let’s now look at the redundancy of this file. This time I am connected to ASM instance:

SQL> SELECT group_number, name, type "Redundancy"
FROM v$asm_diskgroup
WHERE name='DATA';

GROUP_NUMBER NAME                             Redundancy
———— ——————————– —————-
1 DATA                             NORMAL

So this is a normal redundancy disk group. Still, files created with the CONTROLFILE template should be triple mirrored. Let's check on my file 271:

SQL> SELECT xnum_kffxp "Extent", au_kffxp "AU", disk_kffxp "Disk"
FROM x$kffxp
WHERE group_kffxp=1 and number_kffxp=271
ORDER BY 1,2;

Extent         AU       Disk
———- ———- ———-
0       1126          1
0       1130          3
0       1136          2
1       1131          3
1       1132          0
1       1137          2

7       1132          1
7       1135          3
7       1141          2

24 rows selected.

As expected, the file is triple mirrored – we see that each virtual extent has three physical extents. But why do I see eight virtual extents when the size of my file is only 1 MB? Ah, that is because the file is fine striped, as the CONTROLFILE template dictates – with fine striping the file is laid out in 128 KB stripes, so a 1 MB file needs eight virtual extents.

User templates

What if I want my file triple mirrored but with coarse striping? Well, I have to create my own template for that:

SQL> alter diskgroup DATA add template TRIPLE_COARSE attributes (HIGH COARSE);

Diskgroup altered.

Let’s now use this template. Back to the database instance…

SQL> create tablespace TRIPLE_C datafile '+DATA(TRIPLE_COARSE)' size 1m;

Tablespace created.

SQL> SELECT name FROM v$datafile WHERE name like '%triple_c%';

NAME
——————————————————————————–
+DATA/br/datafile/triple_c.272.771794469

Note the ASM file number is 272. Back to the ASM instance to check this file:

SQL> SELECT xnum_kffxp "Extent", au_kffxp "AU", disk_kffxp "Disk"
FROM x$kffxp
WHERE group_kffxp=1 and number_kffxp=272
ORDER BY 1,2;

Extent         AU       Disk
———- ———- ———-
0       1136          3
0       1137          0
0       1142          2
1       1133          1
1       1137          3
1       1143          2

6 rows selected.

Now I have one virtual extent allocated to my 1 MB file. The additional extent is for the file header. Note that the file is triple mirrored and coarsely striped.

I can also create a template for files that I don’t want mirrored at all. Let’s do that.

SQL> alter diskgroup DATA add template NO_MIRRORING attributes (UNPROTECTED);

Diskgroup altered.

And let’s now use that template:

SQL> create tablespace NOT_IMPORTANT datafile '+DATA(NO_MIRRORING)' size 1m;

Tablespace created.

SQL> SELECT name FROM v$datafile WHERE name like '%not_important%';

NAME
——————————————————————————–
+DATA/br/datafile/not_important.273.771795255

This is ASM file number 273. Let’s check it out:

SQL> SELECT xnum_kffxp "Extent", au_kffxp "AU", disk_kffxp "Disk"
FROM x$kffxp
WHERE group_kffxp=1 and number_kffxp=273
ORDER BY 1,2;

Extent         AU       Disk
———- ———- ———-
0       1138          0
1       1134          1

And we can see that this file is not mirrored.

Conclusion

The template directory contains the information about file templates in the disk group. Each disk group would have the default set of system templates and users can create additional templates as required. One good use of the templates is for creating triple mirrored files in normal redundancy disk groups. Note that for this to work we need at least three failgroups in the disk group.

[Repost] ASM file number 6

The alias directory – ASM file number 6 – provides a hierarchical naming system for all the files in a disk group.

A system file name is created for every file, based on the file type, database instance and type-specific information such as the tablespace name. A user alias may also be created if a full path name was given when the file was created.
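A user alias can also be added after the fact with ALTER DISKGROUP ... ADD ALIAS. A sketch, modelled on the spfileBR.ora alias that shows up in the listing further down (the system file name is taken from that same listing):

SQL> alter diskgroup DATA add alias '+DATA/BR/spfileBR.ora'
     for '+DATA/BR/PARAMETERFILE/spfile.270.769030977';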

Alias Directory entries include the following fields:

  • Alias or directory name
  • Alias incarnation number
  • File number
  • File incarnation number
  • Parent directory
  • System flag

The ASM alias information is externalised via V$ASM_ALIAS view.

Using the alias

The following SQL statement is taken from the Oracle Press book Automatic Storage Management, Under-the-Hood & Practical Deployment Guide, by Nitin Vengurlekar, Murali Vallath and Rich Long. It demonstrates how to use the V$ASM_ALIAS view to generate a list of all database files managed by ASM.

The output is organised in what appears to be a list of directories, followed by the list of files with their full path names. I say "what appears to be" because ASM does not really keep the files in a hierarchical, directory-style structure. The output is just formatted so that the list of files is presented in a familiar, operating system like style.

The SQL assumes that the files were created using the ASM file name conventions. In particular, it assumes that the given database name is present in the alias name (the FULL_PATH column). The FULL_PATH variable in the query refers to the alias name. The DIR column indicates if this is a ‘directory’ and SYS column indicates whether the alias was created by the system.

col full_path format a64
col dir format a3
col sys format a3
set pagesize 1000
set linesize 200

SQL> SELECT full_path, dir, sys
FROM (
  SELECT CONCAT('+' || gname, sys_connect_by_path(aname, '/')) full_path, dir, sys
  FROM (
    SELECT g.name gname, a.parent_index pindex, a.name aname,
           a.reference_index rindex, a.alias_directory dir, a.system_created sys
    FROM v$asm_alias a, v$asm_diskgroup g
    WHERE a.group_number = g.group_number)
  START WITH (mod(pindex, power(2, 24))) = 0
  CONNECT BY PRIOR rindex = pindex
  ORDER BY dir desc, full_path asc)
WHERE full_path LIKE upper('%/br%');

FULL_PATH                                                        DIR SYS
—————————————————————- — —
+DATA/BR                                                         Y   Y
+DATA/BR/CONTROLFILE                                             Y   Y
+DATA/BR/DATAFILE                                                Y   Y
+DATA/BR/ONLINELOG                                               Y   Y
+DATA/BR/PARAMETERFILE                                           Y   Y
+DATA/BR/TEMPFILE                                                Y   Y
+RECO/BR                                                         Y   Y
+RECO/BR/DATAFILE                                                Y   Y
+DATA/BR/CONTROLFILE/Current.260.769030435                       N   Y
+DATA/BR/CONTROLFILE/Current.261.769030431                       N   Y
+DATA/BR/DATAFILE/EXAMPLE.269.769030517                          N   Y
+DATA/BR/DATAFILE/NOT_IMPORTANT.273.771795255                    N   Y
+DATA/BR/DATAFILE/SYSAUX.257.769030245                           N   Y
+DATA/BR/DATAFILE/SYSTEM.256.769030243                           N   Y
+DATA/BR/DATAFILE/TRIPLE_C.272.771794469                         N   Y
+DATA/BR/DATAFILE/TRIPLE_M.271.771793293                         N   Y
+DATA/BR/DATAFILE/UNDOTBS1.258.769030245                         N   Y
+DATA/BR/DATAFILE/USERS.259.769030245                            N   Y
+DATA/BR/ONLINELOG/group_1.262.769030439                         N   Y
+DATA/BR/ONLINELOG/group_1.263.769030445                         N   Y
+DATA/BR/ONLINELOG/group_2.264.769030453                         N   Y
+DATA/BR/ONLINELOG/group_2.265.769030461                         N   Y
+DATA/BR/ONLINELOG/group_3.266.769030471                         N   Y
+DATA/BR/ONLINELOG/group_3.267.769030479                         N   Y
+DATA/BR/PARAMETERFILE/spfile.270.769030977                      N   Y
+DATA/BR/TEMPFILE/TEMP.268.769030503                             N   Y
+DATA/BR/spfileBR.ora                                            N   N
+RECO/BR/DATAFILE/T1.256.771771469                               N   Y

28 rows selected.

Conclusion

The alias directory keeps track of all aliases in an ASM disk group. That information can then be accessed via the V$ASM_ALIAS view to present file names in a user friendly format.

[Repost] ASM Active Change Directory


When the ASM instance needs to make an atomic change to multiple metadata blocks, a log record is written into the ASM active change directory (ACD), which is the ASM metadata file number 3. These log records are written in a single I/O.

 

The ACD is divided into chunks or threads, and each running ASM instance has its own 42 MB chunk. When a disk group is created, a single chunk is allocated for the ACD. As more instances mount the disk group, the ACD grows (by 42 MB) to accommodate every running instance with its own ACD chunk.

 

The ACD components are:

  • ACDC – ACD checkpoint
  • ABA – ACD block address
  • LGE – ACD redo log record
  • BCD – ACD block change descriptor

 

Locating ASM active change directory

 

We can query X$KFFXP to find the ACD allocation units. The ACD is ASM file number 3, hence number_kffxp=3 in our query:

 

SQL> SELECT x.xnum_kffxp "Extent",
x.au_kffxp "AU",
x.disk_kffxp "Disk #",
d.name "Disk name"
FROM x$kffxp x, v$asm_disk_stat d
WHERE x.group_kffxp=d.group_number
and x.disk_kffxp=d.disk_number
and x.group_kffxp=1
and x.number_kffxp=3
ORDER BY 1, 2;

Extent         AU     Disk # Disk name
---------- ---------- ---------- ---------
0          4          0 ASMDISK5
1          2          1 ASMDISK6
2          5          0 ASMDISK5
...
39         21          1 ASMDISK6
40         24          0 ASMDISK5
41         22          1 ASMDISK6

42 rows selected.

SQL>

 

The query returned 42 rows, i.e. 42 allocation units. As the allocation unit size for this disk group is 1MB, that means the total size of the ACD is 42 MB.

 

If I recreate the disk group with the larger allocation unit size, say 4 MB, we should still end up with a 42 MB ACD. Let’s have a look:

 

SQL> create diskgroup RECO external redundancy
disk 'ORCL:ASMDISK5', 'ORCL:ASMDISK6'
attribute 'au_size'='4M';

 

Diskgroup created.

 

SQL>

 

Now the same query from X$KFFXP and V$ASM_DISK_STAT returns 11 rows, showing that the ACD size is still 42 MB:

 

SQL> SELECT x.xnum_kffxp "Extent"…

Extent         AU     Disk # Disk name
---------- ---------- ---------- ---------
0          3          1 ASMDISK6
1          3          0 ASMDISK5
2          4          1 ASMDISK6
...
10          8          1 ASMDISK6

11 rows selected.

SQL>

 

Closer look at ASM active change directory

 

Let’s look at the ACD using the kfed utility. The last query shows that the ACD starts at AU 3 on disk ASMDISK6. Note that with the allocation unit size of 4 MB, I have to specify ausz=4m on the kfed command line:

 

$ kfed read /dev/oracleasm/disks/ASMDISK6 ausz=4m aun=3 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            7 ; 0x002: KFBTYP_ACDC
kfracdc.eyec[0]:                     65 ; 0x000: 0x41
kfracdc.eyec[1]:                     67 ; 0x001: 0x43
kfracdc.eyec[2]:                     68 ; 0x002: 0x44
kfracdc.eyec[3]:                     67 ; 0x003: 0x43
kfracdc.thread:                       1 ; 0x004: 0x00000001
kfracdc.lastAba.seq:         4294967295 ; 0x008: 0xffffffff
kfracdc.lastAba.blk:         4294967295 ; 0x00c: 0xffffffff
kfracdc.blk0:                         1 ; 0x010: 0x00000001
kfracdc.blks:                     11263 ; 0x014: 0x00002bff
kfracdc.ckpt.seq:                     2 ; 0x018: 0x00000002
kfracdc.ckpt.blk:                     2 ; 0x01c: 0x00000002
kfracdc.fcn.base:                    16 ; 0x020: 0x00000010
kfracdc.fcn.wrap:                     0 ; 0x024: 0x00000000
kfracdc.bufBlks:                    512 ; 0x028: 0x00000200
kfracdc.strt112.seq:                  0 ; 0x02c: 0x00000000
kfracdc.strt112.blk:                  0 ; 0x030: 0x00000000
$

 

The output shows that this is indeed an ACD block (kfbh.type=KFBTYP_ACDC). The only interesting piece of information here is kfracdc.thread=1, which means that this ACD chunk belongs to ASM instance 1. In a cluster, this would match the ASM instance number.

 

That was block 0, the beginning of the ACD. Let’s now look at block 1 – the actual ACD data.

 

$ kfed read /dev/oracleasm/disks/ASMDISK6 ausz=4m aun=3 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            8 ; 0x002: KFBTYP_CHNGDIR
kfracdb.lge[0].valid:                 1 ; 0x00c: V=1 B=0 M=0
kfracdb.lge[0].chgCount:              1 ; 0x00d: 0x01
kfracdb.lge[0].len:                  52 ; 0x00e: 0x0034
kfracdb.lge[0].kfcn.base:            13 ; 0x010: 0x0000000d
kfracdb.lge[0].kfcn.wrap:             0 ; 0x014: 0x00000000
kfracdb.lge[0].bcd[0].kfbl.blk:       0 ; 0x018: blk=0
kfracdb.lge[0].bcd[0].kfbl.obj:       4 ; 0x01c: file=4
kfracdb.lge[0].bcd[0].kfcn.base:      0 ; 0x020: 0x00000000
kfracdb.lge[0].bcd[0].kfcn.wrap:      0 ; 0x024: 0x00000000
kfracdb.lge[0].bcd[0].oplen:          4 ; 0x028: 0x0004
kfracdb.lge[0].bcd[0].blkIndex:       0 ; 0x02a: 0x0000
kfracdb.lge[0].bcd[0].flags:         28 ; 0x02c: F=0 N=0 F=1 L=1 V=1 A=0 C=0
kfracdb.lge[0].bcd[0].opcode:       212 ; 0x02e: 0x00d4
kfracdb.lge[0].bcd[0].kfbtyp:         9 ; 0x030: KFBTYP_COD_BGO
kfracdb.lge[0].bcd[0].redund:        17 ; 0x031: SCHE=0x1 NUMB=0x1
kfracdb.lge[0].bcd[0].pad:        63903 ; 0x032: 0xf99f
kfracdb.lge[0].bcd[0].KFRCOD_CRASH:   1 ; 0x034: 0x00000001
kfracdb.lge[0].bcd[0].au[0]:          8 ; 0x038: 0x00000008
kfracdb.lge[0].bcd[0].disks[0]:       0 ; 0x03c: 0x0000
$

 

We see that the ACD block 1 is of type KFBTYP_CHNGDIR, and contains the elements of kfracdb.lge[i] structure – the ASM redo records. Some of the things of interest here are the operation being performed (opcode) and the operation type (kfbtyp). None of this is very useful outside of the ACD context, so we will leave it at that.

 

Conclusion

 

This is an informational post only, to complete the ASM metadata story, as there are no practical benefits to understanding the inner workings of the ASM active change directory.

[Repost] ASM Continuing Operations Directory


Some long-running ASM operations, like the rebalance, drop disk, create/delete/resize file, cannot be described by a single record in the ASM active change directory. Those operations are tracked via the ASM continuing operations directory (COD) – the ASM file number 4. There is one COD per disk group.

 

If the process performing the long-running operation dies before completing it, a recovery process will look at the entry and either complete or roll back the operation. There are two types of continuing operations – background and rollback.

 

Background operation

 

A background operation is performed by an ASM instance background process. It is done as part of a disk group maintenance and it continues until it is either completed or the ASM instance dies. If the instance dies, then the recovering instance needs to resume the background operation. The disk group rebalance is the best example of a background operation.

 

Let’s query the X$KFFXP view to find the COD allocation units for disk group 3 (group_kffxp=3). COD is ASM file number 4, hence number_kffxp=4 in the query:

 

SQL> SELECT x.xnum_kffxp "Extent",
x.au_kffxp "AU",
x.disk_kffxp "Disk #",
d.name "Disk name"
FROM x$kffxp x, v$asm_disk_stat d
WHERE x.group_kffxp=d.group_number
and x.disk_kffxp=d.disk_number
and x.group_kffxp=3
and x.number_kffxp=4
ORDER BY 1, 2;

Extent         AU     Disk # Disk name
---------- ---------- ---------- ------------------------------
0          8          0 ASMDISK5

SQL>

 

This is telling us that the COD is in allocation unit 8 on disk ASMDISK5. Let's have a closer look (note the AU size of 4 MB for this disk group):

 

$ kfed read /dev/oracleasm/disks/ASMDISK5 ausz=4m aun=8 blkn=0 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            9 ; 0x002: KFBTYP_COD_BGO
kfrcbg.size:                          0 ; 0x000: 0x0000
kfrcbg.op:                            0 ; 0x002: 0x0000
kfrcbg.inum:                          0 ; 0x004: 0x00000000
kfrcbg.iser:                          0 ; 0x008: 0x00000000
$

 

This shows the COD block for a background operation (kfbh.type=KFBTYP_COD_BGO) and not much happening at the moment – all kfrcbg fields are 0. Most notably the operation code (kfrcbg.op) is 0, which means that there are no active background operations. The op code 1 would indicate an active disk rebalance operation.

 

Rollback operation

 

A rollback operation is similar to a database transaction. It is started at the request of an ASM foreground process. To begin a rollback operation a slot must be found in the rollback directory – block 1 of the ASM continuing operations directory. If all slots are busy then the operation sleeps until one is free. During the operation the disk group is in an inconsistent state. The operation needs to either complete or roll back all its changes to the disk group. The foreground is usually performing the operation on behalf of a database instance. If the database instance dies or the ASM foreground process dies, or an unrecoverable error occurs, then the operation must be terminated.

 

Creating a file is a good example of a rollback operation. If an error occurs while allocating the space for the file, then the partially created file must be deleted. If the database instance does not commit the file creation, the file must be automatically deleted. If the ASM instance dies then this must be done by the recovering instance.

 

Let’s have a look at block 1 of the COD:

 

$ kfed read /dev/oracleasm/disks/ASMDISK5 ausz=4m aun=8 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           15 ; 0x002: KFBTYP_COD_RBO
kfrcrb10[0].opcode:                   1 ; 0x000: 0x0001
kfrcrb10[0].inum:                     1 ; 0x002: 0x0001
kfrcrb10[0].iser:                     1 ; 0x004: 0x00000001
kfrcrb10[0].pnum:                    18 ; 0x008: 0x00000012
kfrcrb10[1].opcode:                   0 ; 0x00c: 0x0000
kfrcrb10[1].inum:                     0 ; 0x00e: 0x0000
kfrcrb10[1].iser:                     0 ; 0x010: 0x00000000
kfrcrb10[1].pnum:                     0 ; 0x014: 0x00000000
$

 

Fields kfrcrb10[i] track the active rollback operations. We see that there is one operation in progress (kfrcrb10[0] has non-null values), and from the opcode list we know this is a file create operation. The value kfrcrb10[0].inum=1 means that the operation is running in ASM instance 1.

 

The rollback operation opcodes are:

 

1 - Create a file
2 - Delete a file
3 - Resize a file
4 - Drop alias entry
5 - Rename alias entry
6 - Rebalance space COD
7 - Drop disks force
8 - Attribute drop
9 - Disk Resync
10 - Disk Repair Time
11 - Volume create
12 - Volume delete
13 - Attribute directory creation
14 - Set zone attributes
15 - User drop

 

Conclusion

 

The ASM continuing operations directory (COD) keeps track of the long-running ASM operations. In case of any problems, the COD entries can be used to either continue or roll back the operation. The operation cleanup is performed by another ASM instance (in cluster environments), or by the same ASM instance – usually after the instance restart.
