latch free: SQL memory manager latch

On a 10.2.0.4 RAC system running on HP-UX, one node showed a large number of latch free: SQL memory manager latch wait events. The relevant ADDM performance information follows:

 

SELECT KSMCHCLS CLASS, COUNT(KSMCHCLS) NUM, SUM(KSMCHSIZ) SIZ,
To_char( ((SUM(KSMCHSIZ)/COUNT(KSMCHCLS)/1024)),'999,999.00')||'k' "AVG SIZE"
FROM X$KSMSP
GROUP BY KSMCHCLS;
CLASS           NUM        SIZ     AVG SIZE
-------- ---------- ---------- ------------
R-freea         927    9481080        9.99k
freeabl     1037478 2001975328        1.88k
R-free          593  170107192      280.14k
recr         515080  595805056        1.13k
R-perm          498   35345056       69.31k
R-recr            2       2224        1.09k
perm          97848 1320357168       13.18k
free         251577  161871664         .63k
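
Before turning to the ADDM report, the latch named in its finding can also be examined directly; a minimal sketch against the standard v$latch view:

SQL> SELECT latch#, name, gets, misses, sleeps
  2    FROM v$latch
  3   WHERE name = 'SQL memory manager latch';

A high sleeps-to-gets ratio here corroborates the "latch free" waits that ADDM reports below.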

DETAILED ADDM REPORT FOR TASK 'TASK_21286' WITH ID 21286
--------------------------------------------------------

Analysis Period: 26-OCT-2009 from 10:00:52 to 11:00:12
Database ID/Instance: 2429423994/1
Database/Instance Names: VPROD/VPROD
Host Name: crmdb1
Database Version: 10.2.0.4.0
Snapshot Range: from 10253 to 10254
Database Time: 420176 seconds
Average Database Load: 118 active sessions

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

FINDING 1: 75% impact (313574 seconds)
--------------------------------------
Database latches in the "Other" wait class were consuming significant database
time.

RECOMMENDATION 1: DB Configuration, 75% benefit (313574 seconds)
RATIONALE: The latch "SQL memory manager latch" with latch id 289 was
one of the top latches contributing to high "latch free" waits.
RELEVANT OBJECT: database latch# 289

SYMPTOMS THAT LED TO THE FINDING:
SYMPTOM: Wait class "Other" was consuming significant database time.
(75% impact [313620 seconds])

FINDINGS
========
Load Profile

                       Per Second    Per Transaction
Redo size:             534,863.07           8,586.10
Logical reads:         527,828.57           8,473.18
Block changes:          13,065.49             209.74
Physical reads:          3,999.96              64.21
Physical writes:           617.50               9.91
User calls:              9,869.10             158.43
Parses:                  3,287.89              52.78
Hard parses:                27.90               0.45
Sorts:                   1,564.66              25.12
Logons:                      4.35               0.07
Executes:                5,423.19              87.06

COMMENTS
==========
Load in terms of GETS is tremendous and no particular SQL is responsible for a
large percentage of the load. It may simply be too much for the hardware/configuration.

HOWEVER
=========
I/O, while moderate in volume, is very slow.

Tablespace IO Stats

* ordered by IOs (Reads + Writes) desc

Tablespace          Reads Av Reads/s Av Rd(ms) Av Blks/Rd   Writes Av Writes/s Buffer Waits Av Buf Wt(ms)
--------------- --------- ---------- --------- ---------- -------- ----------- ------------ -------------
TBS_CRM_TS_S1   8,340,219        772      4.48       2.95  356,810          33   17,284,247          2.15
TBS_CRM_IDX_S1  1,643,104        152      7.89       1.22  298,265          28       22,684          4.34
TEMP              762,343         71      1.09       6.67  411,652          38            0          0.00
TBS_BAKDATA       358,137         33      6.00       4.11   93,672           9      335,632          6.32
TBS_RM_TS_S1      400,528         37      9.57       1.09   39,000           4        1,546         10.49
TBS_RM_IDX_S1     399,888         37      8.71       1.01   33,342           3        1,577          9.74
TBS_PROD          422,348         39      3.68       1.00      941           0            5         12.00
TBS_SP_TS_S1      341,104         32      6.26      12.19      504           0           44          5.68
TBS_BFBAKDATA     187,013         17      2.34       7.49      909           0      104,797          2.75
TBS_PF            141,810         13      8.76       1.79    8,571           1          563         11.39
TBS_BSS_SYS       118,965         11      2.26      14.72    2,171           0            7          0.00

So, in terms of Av Rd(ms), this is roughly one third of typical performance.

SEGMENTS
==========
Segments by Logical Reads

* Total Logical Reads: 5,699,755,248
* Captured Segments account for 72.1% of Total

Owner  Tablespace Name  Object Name            Obj. Type  Logical Reads  %Total
-----  ---------------  ---------------------  ---------  -------------  ------
CRM    TBS_CRM_TS_S1    PROD_2_TD              TABLE        454,967,408    7.98
RM     TBS_RM_TS_S1     PHONE_NUMBER           TABLE        435,121,200    7.63
EVENT  TBS_CRM_TS_S1    CUST_INTERACTION       TABLE        386,838,512    6.79
CRM    TBS_CRM_TS_S1    CO_2_TD                TABLE        383,932,304    6.74
CRM    TBS_CRM_TS_S1    PARTY_IDENTIFY_EXTEND  TABLE        298,659,184    5.24

So a handful of individual segments are the target of 72% of all captured gets.

ACTION PLAN
============
Please do:

(1) Disk I/O
-----------------
Please ask your SA why Disk I/O is so slow.
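
On HP-UX, sar gives the SA something concrete to start from (a sketch; the 5-second interval and 12 samples are arbitrary):

% sar -d 5 12

The avserv column shows the average service time per I/O in milliseconds; sustained values well above the typical 1 ms to 5 ms on the database devices point at the storage layer rather than at Oracle.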

(2) Segments
---------------------
Please consider partitioning these 5 objects to spread out GETS and READS (a sketch follows the list):

Owner Tablespace Name Object Name
---------------------------------------------
CRM TBS_CRM_TS_S1 PROD_2_TD
RM TBS_RM_TS_S1 PHONE_NUMBER
EVENT TBS_CRM_TS_S1 CUST_INTERACTION
CRM TBS_CRM_TS_S1 CO_2_TD
CRM TBS_CRM_TS_S1 PARTY_IDENTIFY_EXTEND
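
As one hedged illustration of the recommendation above (the partition key PHONE_NO is an assumed column; the real key must match the application's access pattern):

-- illustrative only: rebuild PHONE_NUMBER hash-partitioned on an assumed key
CREATE TABLE rm.phone_number_part
PARTITION BY HASH (phone_no)  -- hypothetical partitioning column
PARTITIONS 16
AS SELECT * FROM rm.phone_number;

Hash partitioning spreads gets and reads evenly across the partitions without requiring a natural range key.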

 

Diagnosis showed that the shared pool actually had plenty of free memory; what stood out instead was that the Av Rd(ms) I/O response time was slower than the typical 1 ms to 5 ms.
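
The free-memory observation can be reproduced with a standard query (a simple sketch against v$sgastat):

SQL> SELECT pool, name, round(bytes/1024/1024) mb
  2    FROM v$sgastat
  3   WHERE pool = 'shared pool' AND name = 'free memory';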

 

The system administrator (SA) needs to confirm whether there is a performance problem with the storage I/O.

 

MOS document Note 457063.1, "Slow I/O On HP Unix", describes some approaches to resolving I/O performance bottlenecks on the HP-UX platform; it is quoted below:

 

 Slow I/O On HP Unix

Applies to:
Oracle Server - Enterprise Edition - Version: 8.1.7.0 to 10.2.0.3 - Release: 8.1.7 to 10.2
HP OpenVMS VAX
HP-UX PA-RISC (64-bit)
HP-UX PA-RISC (32-bit)
HP OpenVMS Alpha
HP-UX Itanium
HP Tru64 UNIX
Symptoms

Disk utility output shows 100% usage for the disk continuously:

ProcList CPU Rpt Mem Rpt Disk Rpt NextKeys SlctProc Help Exit
GlancePlus C.04.50.00 19:15:39 bplita3 ia64 Current Avg High
----------------------------------------------------------------------
CPU  Util S SN NU U      | 90% 90% 90%
Disk Util F       F      |100% 100% 100% <=== Disk Too heavily loaded

In a Statspack or AWR report, high 'free buffer waits' are seen even after considerably increasing db_cache_size and the number of DB writer processes.

Statspack Report

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:   99.99       Redo NoWait %:     99.99
            Buffer  Hit   %:   58.92    In-memory Sort %:     99.82
            Library Hit   %:   89.19        Soft Parse %:     83.51
         Execute to Parse %:   50.76         Latch Hit %:    100.00
Parse CPU to Parse Elapsd %:   81.48     % Non-Parse CPU:     93.21

….

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                                     % Total
Event                                               Waits    Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
free buffer waits                                  76,390      74,807    55.82
enqueue                                             4,071      11,625     8.67
…..

Cause

The problem is related to Disk I/O.
Solution

1. Use the O/S striping software

 Try to use the O/S striping software to distribute database files over as many disks as you can.
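
For instance, on HP-UX LVM a striped logical volume might be created like this (a sketch; the volume group name, the 4 GB size, and the stripe geometry are assumptions to adapt):

% lvcreate -i 8 -I 64 -L 4096 -n lv_ora_data vg01

This stripes the volume across 8 physical volumes with a 64 KB stripe size.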

2. Use Direct IO
Mount the filesystem with the direct I/O option.
For example:
% mount -F vxfs -o remount,nodatainlog,mincache=direct,convosync=direct /dev/vg00/lv_ora /soft/oracle

mincache and convosync
"mincache=direct" => bypass buffer cache on read
"convosync=direct" => force direct I/O for DB writers

Mincache=direct and convosync=direct allow data to be transferred directly between the Oracle buffer cache and disk in both directions. This avoids double buffering by bypassing the file system buffer cache and can improve physical read/write performance. However, cases where a disk read could have been avoided because the required block was in the file system buffer cache may be negatively impacted.

If your filesystem is mounted with this option, then the FILESYSTEMIO_OPTIONS parameter needs to be set to "SETALL" in order to use DIO.
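
For example (a sketch; the parameter is static, so it only takes effect after an instance restart):

SQL> ALTER SYSTEM SET filesystemio_options='SETALL' SCOPE=SPFILE;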
Parameters in Oracle influencing the use of Direct IO

    FILESYSTEMIO_OPTIONS defines the I/O operations on filesystem files. This parameter should not normally need to be set by the user.
    The value may be any of the following:
        asynch - Set by default on HP. This allows asynchronous IO to be used where supported by the OS.
        directIO - This allows directIO to be used where supported by the OS. Direct IO bypasses any Unix buffer cache.
        setall - Enables both ASYNC and DIRECT IO.
        none - This disables ASYNC IO and DIRECT IO so that Oracle uses normal synchronous writes, without any direct io options.

See
Document 120697.1 Init.ora Parameter “FILESYSTEMIO_OPTIONS” Reference Note

Oracle recommends that you use the value SETALL instead of the value DIRECTIO, because the DIRECTIO value disables asynchronous I/O.

    DISK_ASYNCH_IO controls whether I/O to datafiles, control files, and logfiles is asynchronous (that is, whether parallel server processes can overlap I/O requests with CPU processing during table scans). If your platform supports asynchronous I/O to disk, Oracle recommends that you leave this parameter set to its default value. However, if the asynchronous I/O implementation is not stable, you can set this parameter to false to disable asynchronous I/O. If your platform does not support asynchronous I/O to disk, this parameter has no effect.

    If you set DISK_ASYNCH_IO to false, then you should also set DBWR_IO_SLAVES or DB_WRITER_PROCESSES to a value other than its default of zero in order to simulate asynchronous I/O (see the sketch after this list).

    DB_WRITER_PROCESSES or DBWR_IO_SLAVES
    see comments in DISK_ASYNCH_IO
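
Putting the last two bullets together, a hedged sketch (the writer count of 4 is illustrative; both parameters are static and require a restart):

SQL> ALTER SYSTEM SET disk_asynch_io=FALSE SCOPE=SPFILE;
SQL> ALTER SYSTEM SET db_writer_processes=4 SCOPE=SPFILE;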

3. Concurrent I/O
An alternative solution to Direct I/O is to use Concurrent I/O. Concurrent I/O is available in OnlineJFS 5.0.1.

To enable Concurrent I/O, the filesystem must be mounted with "-o cio".
For example:

mount -F vxfs -o nodatainlog,cio /soevxfs/redos /oracle/mnt/redos

Please note that remount should not be used to enable Concurrent I/O on mounted filesystems.

"-o cio"
Concurrent I/O allows multiple processes to read from or write to the same file without blocking other read(2) or write(2) calls. With Concurrent I/O, the read and write operations are not serialized. This is generally used by applications that require high performance for accessing data and do not perform overlapping writes to the same file. It is the responsibility of the application or the running threads to coordinate write activity to the same file. It also avoids double buffering by bypassing the filesystem buffer cache and thus improves physical read/write performance significantly. Concurrent I/O performs very close to raw logical volumes.

Oracle wait event kfk: async disk IO

The kfk: async disk IO wait event is an asynchronous System I/O wait event seen under ASM; the kfk kernel layer is activated when disk_asynch_io=true. When rbal or other ASM-related background processes are maintaining an ASM disk group, they may enter the kfk: async disk IO wait.
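
A quick way to see whether any session is currently in this wait (run on the ASM instance; a simple sketch):

SQL> SELECT sid, program, event, seconds_in_wait
  2    FROM v$session
  3   WHERE event = 'kfk: async disk IO';

The demonstration below instead traces the wait with event 10046 during a disk group check.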

SQL> col name for a20
SQL> col PARAMETER1 for a10
SQL> col PARAMETER2 for a10
SQL> col PARAMETER3 for a10
SQL> col WAIT_CLASS for a15

SQL> select name,parameter1,parameter2,parameter3,wait_class from v$event_name where name='kfk: async disk IO';

NAME                 PARAMETER1 PARAMETER2 PARAMETER3 WAIT_CLASS
-------------------- ---------- ---------- ---------- ---------------
kfk: async disk IO   count      intr       timeout    System I/O

SQL> select * from v$version;    

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> select name,value from v$system_parameter where name in ('instance_type','asm_power_limit');

NAME                 VALUE
-------------------- ----------
instance_type        asm
asm_power_limit      10

SQL> conn / as sysasm
Connected.

SQL> oradebug setmypid;
Statement processed.
SQL> oradebug event 10046 trace name context forever,level 8;
Statement processed.
SQL> alter diskgroup data check all;

Diskgroup altered.

SQL> oradebug event 10046 trace name context off;
Statement processed.

SQL> oradebug tracefile_name;
/s01/orabase/diag/asm/+asm/+ASM1/trace/+ASM1_ora_29405.trc

=====================trace=====================
PARSING IN CURSOR #140442102181424 len=30 dep=0 uid=0 oct=193 
lid=0 tim=1313673029551496 hv=2849532521 ad='6bd58b50' sqlid='ft5h7dunxhum9'
alter diskgroup data check all
END OF STMT
PARSE #140442102181424:c=1999,e=14171,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=1313673029551493
WAIT #140442102181424: nam='Disk file operations I/O' ela= 573 FileOperation=2 fileno=0 filetype=15 obj#=-1 
WAIT #140442102181424: nam='Disk file operations I/O' ela= 33 FileOperation=2 fileno=0 filetype=15 obj#=-1 
WAIT #140442102181424: nam='Disk file operations I/O' ela= 29 FileOperation=2 fileno=0 filetype=15 obj#=-1 
WAIT #140442102181424: nam='kfk: async disk IO' ela= 941 count=1 intr=0 timeout=4294967295 obj#=-1

kfdp_checkDsk(): 20
----- Abridged Call Stack Trace -----
ksedsts()+461<-kfdp_checkDsk()+476<-kfdCheck()+1649<-kfgCheck()+477<-kfxdrvAl
terOne()+5976<-kfxdrvAlter()+2287<-kfxdrvEntry()+1306<-opiexe()+20028<-opiosq
0()+3993<-kpooprx()+274<-kpoal8()+800<-opiodr()+910<-ttcpip()+2289<-opitsk()+
1670<-opiino()+966<-opiodr()+910
<-opidrv()+570<-sou2o()+103<-opimai_real()+133<-ssthrdmain()+252<-main()+201<
-__libc_start_main()+244<-_start()+36
----- End of Abridged Call Stack Trace -----
WAIT #140442102181424: nam='rdbms ipc reply' ela= 1610 from_process=19 timeou
t=2147483647 p3=0 obj#=-1 tim=1313673029798048
kfdp_checkDsk(): 21
----- Abridged Call Stack Trace -----
ksedsts()+461<-kfdp_checkDsk()+476<-kfdCheck()+1649<-kfgCheck()+477<-kfxdrvAl
terOne()+5976<-kfxdrvAlter()+2287<-kfxdrvEntry()+1306<-opiexe()+20028<-opiosq
0()+3993<-kpooprx()+274<-kpoal8()+800<-opiodr()+910<-ttcpip()+2289<-opitsk()+
1670<-opiino()+966<-opiodr()+910
<-opidrv()+570<-sou2o()+103<-opimai_real()+133<-ssthrdmain()+252<-main()+201<
-__libc_start_main()+244<-_start()+36
----- End of Abridged Call Stack Trace -----
WAIT #140442102181424: nam='rdbms ipc reply' ela= 1677 from_process=19 timeou
t=2147483647 p3=0 obj#=-1 tim=1313673029885645
kfdp_checkDsk(): 22
----- Abridged Call Stack Trace -----
ksedsts()+461<-kfdp_checkDsk()+476<-kfdCheck()+1649<-kfgCheck()+477<-kfxdrvAl
terOne()+5976<-kfxdrvAlter()+2287<-kfxdrvEntry()+1306<-opiexe()+20028<-opiosq
0()+3993<-kpooprx()+274<-kpoal8()+800<-opiodr()+910<-ttcpip()+2289<-opitsk()+
1670<-opiino()+966<-opiodr()+910
<-opidrv()+570<-sou2o()+103<-opimai_real()+133<-ssthrdmain()+252<-main()+201<
-__libc_start_main()+244<-_start()+36
----- End of Abridged Call Stack Trace -----
WAIT #140442102181424: nam='rdbms ipc reply' ela= 1350 from_process=19 timeou
t=2147483647 p3=0 obj#=-1 tim=1313673029970397
kfdp_checkDsk(): 23
----- Abridged Call Stack Trace -----
kfdp_checkDsk(): 24
----- Abridged Call Stack Trace -----
ksedsts()+461<-kfdp_checkDsk()+476<-kfdCheck()+1649<-kfgCheck()+477<-kfxdrvAl
terOne()+5976<-kfxdrvAlter()+2287<-kfxdrvEntry()+1306<-opiexe()+20028<-opiosq
0()+3993<-kpooprx()+274<-kpoal8()+800<-opiodr()+910<-ttcpip()+2289<-opitsk()+
1670<-opiino()+966<-opiodr()+910
<-opidrv()+570<-sou2o()+103<-opimai_real()+133<-ssthrdmain()+252<-main()+201<
-__libc_start_main()+244<-_start()+36
----- End of Abridged Call Stack Trace -----

Comparison between ASM note [ID 373242.1] and note [ID 452924.1]

Question:
Oracle Support, in the notes "Lun Size And Performance Impact With Asm [ID 373242.1]" and "How to Prepare Storage for ASM [ID 452924.1]", says, among other things:

- Maximize the number of disks in a disk group for maximum data distribution and higher I/O bandwidth.
- Size alone should not affect the performance of a LUN. The underlying hardware is what counts. There is no magic number for the LUN size.
- For larger LUNs it is recommended to use a large ALLOCATION UNIT.

The provider of the EMC VMAX storage says that the LUN size can be 200 GB for an OLTP database.

We believe that the EMC VMAX premise contradicts Oracle's: instead of maximizing the number of disks in the disk group, they intend to minimize them.

Do you believe this holds true for EMC VMAX storage?

Answer:
As mentioned in document 373242.1, LUN size alone should not affect performance.

Size alone should not affect the performance of a LUN. The underlying hardware is what counts. There is no magic number for the LUN size. Seek the advice of the storage vendor to recommend the best configuration from a RAID 1 or RAID 5 perspective for performance and availability, since this may vary between vendors. Given the database size and the storage hardware you have available, our best practice recommends creating larger LUNs (if possible, to reduce LUN management for the sysadmins) and creating LUNs from separate sets of storage arrays (drives) so that LUNs do not share drives where possible.

For larger LUNs it is recommended to use a large ALLOCATION UNIT.
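
As a hedged illustration (disk paths, redundancy, and the 4 MB AU are assumptions to adapt), a larger allocation unit is set via the au_size attribute, which can only be specified at disk group creation time:

SQL> CREATE DISKGROUP data EXTERNAL REDUNDANCY
  2    DISK '/dev/rdisk/disk10', '/dev/rdisk/disk11'
  3    ATTRIBUTE 'au_size'='4M', 'compatible.asm'='11.2';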

Measured disk speed on a Linode VPS

Linode proves worthy of the praise it receives from many in the industry; I measured its disk speed today and it really is not bad:

[root@li229-25 ~]# hdparm -tT /dev/xvda

/dev/xvda:
 Timing cached reads:   25536 MB in  1.99 seconds = 12843.60 MB/sec
 Timing buffered disk reads:  340 MB in  3.00 seconds = 113.20 MB/sec

[root@li229-25 ~]# dd if=/dev/xvda of=/root/dump bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 18.9223 seconds, 55.4 MB/s

Those are the Linode VPS results; dd reached 55 MB/s.

Below, for comparison, is my desktop PC with an ordinary Western Digital hard disk:

[root@rh2 ~]# cat /proc/scsi/scsi 
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: WDC WD3200AAJS-0 Rev: 01.0
  Type:   Direct-Access                    ANSI SCSI revision: 05

[root@rh2 ~]# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   9132 MB in  2.00 seconds = 4569.83 MB/sec
 Timing buffered disk reads:  306 MB in  3.01 seconds = 101.72 MB/sec
[root@rh2 ~]# dd if=/dev/sda of=/root/dump bs=1024k count=1000  
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 22.6009 seconds, 46.4 MB/s

The Linode virtual server's disk speed is slightly better than an ordinary PC's, which is decent for a VPS.
Used as a web server, combined with caching layers such as memcached, its I/O performance should generally not become the main bottleneck.

11g New Feature: I/O Calibration

The I/O Calibration feature helps us understand the real performance of the storage system, so that we can judge whether an I/O performance problem is caused by the database or by the storage itself. Because I/O Calibration drives random I/O against the Oracle data files on the storage media, its results reflect the database's true I/O behavior more closely.
Before using this feature, make sure the following conditions are met:

  • Calling the procedure requires the SYSDBA privilege
  • The TIMED_STATISTICS parameter is TRUE:

SQL> show parameter timed_statistics;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
timed_statistics                     boolean     TRUE

  • Asynchronous I/O must be enabled; note that on a file system, FILESYSTEMIO_OPTIONS must be set to SETALL or ASYNCH, otherwise Oracle will not enable asynchronous I/O.
  • You can verify whether asynchronous I/O is enabled with the following query:
SQL> SELECT NAME, ASYNCH_IO
2    FROM V$DATAFILE F, V$IOSTAT_FILE I
3   WHERE F.FILE# = I.FILE_NO
4     AND FILETYPE_NAME = 'Data File';

NAME                                                                             ASYNCH_IO
-------------------------------------------------------------------------------- ---------
D:\TOOLS\ADMINSTRATORG\ORADATA\PROD\DATAFILE\O1_MF_SYSTEM_65DN8HXT_.DBF       ASYNC_ON
D:\TOOLS\ADMINSTRATORG\ORADATA\PROD\DATAFILE\O1_MF_SYSAUX_65DN8J18_.DBF       ASYNC_ON
D:\TOOLS\ADMINSTRATORG\ORADATA\PROD\DATAFILE\O1_MF_UNDOTBS1_65DN8J1S_.DBF     ASYNC_ON
D:\TOOLS\ADMINSTRATORG\ORADATA\PROD\DATAFILE\O1_MF_USERS_65DN8J2X_.DBF        ASYNC_ON

The I/O Calibration feature is invoked through the DBMS_RESOURCE_MANAGER.CALIBRATE_IO procedure. The procedure issues a series of I/O-intensive, read-only workloads (randomly distributed database-block-sized and 1 MB reads) against the Oracle data files, and from them determines the maximum IOPS (I/O requests per second) and MBPS (megabytes of I/O per second) the storage system can sustain. For representative results, the whole database should be idle during the calibration run, with no other I/O load on the system.

Let us look at how to call the CALIBRATE_IO procedure:

 -- Initiate an I/O calibration
 --
 -- Input arguments:
 --  num_physical_disks   - Approximate number of physical disks in
 --                         the database storage
 --  max_latency          - Maximum tolerable latency in milliseconds for
 --                         database-block-sized IO requests
 --
 -- Output arguments:
 --  max_iops       - Maximum number of I/O requests per second that can be
 --                   sustained.  The I/O requests are randomly-distributed,
 --                   database-block-sized reads.
 --  max_mbps       - Maximum throughput of I/O that can be sustained,
 --                   expressed in megabytes per second. The I/O requests
 --                   are randomly-distributed, 1 megabyte reads.
 --  actual_latency - Average latency of database-block-sized I/O
 --                   requests at "max_iops" rate, expressed in milliseconds.
 --
  PROCEDURE calibrate_io (
   num_physical_disks  IN  PLS_INTEGER DEFAULT 1,
   max_latency         IN  PLS_INTEGER DEFAULT 20,
   max_iops            OUT PLS_INTEGER,
   max_mbps            OUT PLS_INTEGER,
   actual_latency      OUT PLS_INTEGER
   );

The input argument num_physical_disks specifies the approximate number of physical disks backing the database storage; max_latency specifies the maximum I/O latency the user can tolerate, in milliseconds. An average I/O wait around 10 ms is generally considered good, and values far above that usually mean the I/O load is too high; here we take 15 ms as our deadline.

SQL> set serveroutput on;
SQL> DECLARE
  2    lat  INTEGER;
  3    iops INTEGER;
  4    mbps INTEGER;
  5  BEGIN
  6  -- DBMS_RESOURCE_MANAGER.CALIBRATE_IO (disk_count,max_latency , iops, mbps, lat);
  7     DBMS_RESOURCE_MANAGER.CALIBRATE_IO (2, 15, iops, mbps, lat);
  8
  9    DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
 10    DBMS_OUTPUT.PUT_LINE ('latency  = ' || lat);
 11    dbms_output.put_line('max_mbps = ' || mbps);
 12  end;
 13  /

max_iops = 127
latency  = 15
max_mbps = 38

PL/SQL procedure successfully completed
/* At an average latency of 15 ms, the maximum IOPS is 127 and the MBPS is 38 */

/* While the calibration runs, progress can also be monitored through the V$IO_CALIBRATION_STATUS view */
SQL> select * from V$IO_CALIBRATION_STATUS;

STATUS        CALIBRATION_TIME
------------- --------------------------------------------------------------------------------
IN PROGRESS   11-AUG-10 12.02.35.121 PM

SQL> alter session set nls_timestamp_format='HH24:MI';

Session altered
SQL> col start_time for a10;
SQL> col end_time for a10;
SQL> select * from DBA_RSRC_IO_CALIBRATE;

START_TIME END_TIME     MAX_IOPS   MAX_MBPS  MAX_PMBPS    LATENCY NUM_PHYSICAL_DISKS
---------- ---------- ---------- ---------- ---------- ---------- ------------------
12:55      12:59             127         38         14         15                  2
/* The DBA_RSRC_IO_CALIBRATE view records the history of I/O calibrations */

Reference: calibrate_io.pdf - http://askmac.cn/wp-content/uploads/resource/calibrate_io.pdf
