A Script for Detecting Corrupt Database Blocks with RMAN

The dbv (DB File Verify) utility can also check a single datafile for corrupt blocks, but running RMAN's "backup validate check logical database;" and then querying the V$DATABASE_BLOCK_CORRUPTION view is far more convenient.
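
For reference, a single-datafile check with dbv looks roughly like this (the datafile path, block size and log file name below are placeholders, adjust them to your environment):

$ dbv file=/oradata/PROD/users01.dbf blocksize=8192 logfile=dbv_users01.log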

If you cannot sort it out yourself, the ASKMACLEAN professional Oracle database recovery team can help you recover the database.

Script:

1) $ rman target / nocatalog

2) RMAN> run {
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
backup validate check logical database;
}

3) select * from V$DATABASE_BLOCK_CORRUPTION ;


4) If V$DATABASE_BLOCK_CORRUPTION contains rows, run this query to find the objects that contain the corrupted blocks:

SELECT e.owner,
       e.segment_type,
       e.segment_name,
       e.partition_name,
       c.file#,
       greatest(e.block_id, c.block#) corr_start_block#,
       least(e.block_id + e.blocks - 1, c.block# + c.blocks - 1) corr_end_block#,
       least(e.block_id + e.blocks - 1, c.block# + c.blocks - 1) -
       greatest(e.block_id, c.block#) + 1 blocks_corrupted,
       null description
  FROM dba_extents e, v$database_block_corruption c
 WHERE e.file_id = c.file#
   AND e.block_id <= c.block# + c.blocks - 1
   AND e.block_id + e.blocks - 1 >= c.block#
UNION
SELECT s.owner,
       s.segment_type,
       s.segment_name,
       s.partition_name,
       c.file#,
       header_block corr_start_block#,
       header_block corr_end_block#,
       1 blocks_corrupted,
       'Segment Header' description
  FROM dba_segments s, v$database_block_corruption c
 WHERE s.header_file = c.file#
   AND s.header_block between c.block# and c.block# + c.blocks - 1
UNION
SELECT null owner,
       null segment_type,
       null segment_name,
       null partition_name,
       c.file#,
       greatest(f.block_id, c.block#) corr_start_block#,
       least(f.block_id + f.blocks - 1, c.block# + c.blocks - 1) corr_end_block#,
       least(f.block_id + f.blocks - 1, c.block# + c.blocks - 1) -
       greatest(f.block_id, c.block#) + 1 blocks_corrupted,
       'Free Block' description
  FROM dba_free_space f, v$database_block_corruption c
 WHERE f.file_id = c.file#
   AND f.block_id <= c.block# + c.blocks - 1
   AND f.block_id + f.blocks - 1 >= c.block#
 order by file#, corr_start_block#;

SELECT tablespace_name, segment_type, owner, segment_name
FROM dba_extents
WHERE file_id = &fileid
and &blockid between block_id AND block_id + blocks - 1;
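
If only one datafile is suspect, the same RMAN check can be narrowed down instead of validating the whole database; the datafile number below is illustrative:

RMAN> backup validate check logical datafile 5;

Any corruption found this way is likewise recorded in V$DATABASE_BLOCK_CORRUPTION and can be mapped to objects with the query above.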

Does RMAN Backup Benefit from the Large Pool?

In learning Oracle we all, to some extent, misunderstand concepts, pick up misconceptions, or simply read textbooks that were not written rigorously. Passed along from person to person, these errors harden into long-standing superstitions in the Oracle community, and because they are so deeply rooted they are almost impossible to correct. It is interesting that superstition can arise even in a high-tech field like IT; it suggests there is something wrong with how we "teach" and "think about" IT, which is worth reflecting on.

Here I list a few of the most common superstitions, as a starting point for discussion:

1. When introducing the large pool, almost every introductory Oracle tutorial says something like: "RMAN backup uses the large pool as its disk I/O buffer; configuring a large pool helps improve RMAN backup performance."

Truth: unless you enable I/O slaves, RMAN does not use the large pool at all.

RMAN I/O runs in one of three modes:

Asynchronous I/O
  Disk: Most operating systems support AIO, and disk_asynch_io defaults to TRUE, so asynchronous disk I/O is enabled by default. If the platform does not support AIO, synchronous I/O is used instead. In this mode the RMAN I/O buffers are allocated from the PGA, and the I/O performance statistics are recorded in V$BACKUP_ASYNC_IO.
  Tape: Tape devices themselves do not support AIO (tape I/O is always synchronous). Although tape_asynch_io defaults to TRUE, a tape device can only simulate asynchronous I/O through I/O slaves, so enabling tape "AIO" additionally requires backup_tape_io_slaves=TRUE. In this mode the RMAN I/O buffers are allocated from the shared pool or the large pool, and the statistics are recorded in V$BACKUP_ASYNC_IO.

Synchronous I/O
  Disk: Used when disk_asynch_io is FALSE (or the platform does not support AIO) and dbwr_io_slaves=0. The RMAN I/O buffers are allocated from the PGA, and the statistics are recorded in V$BACKUP_SYNC_IO.
  Tape: backup_tape_io_slaves defaults to FALSE, so tape devices use synchronous I/O by default rather than simulated AIO. The RMAN I/O buffers are allocated from the PGA, and the statistics are recorded in V$BACKUP_SYNC_IO.

Slaves I/O
  Disk: Enabled by setting disk_asynch_io=FALSE and dbwr_io_slaves>0. The RMAN I/O buffers are allocated from the shared pool or the large pool, and the statistics are recorded in V$BACKUP_ASYNC_IO.
  Tape: Enabled by setting tape_asynch_io=TRUE and backup_tape_io_slaves=TRUE; tape "AIO" is in fact implemented with I/O slaves, so this mode is identical to tape AIO above.

Whether we back up to disk or to tape, we always prefer asynchronous I/O (tape AIO is a special case, see the modes above). Using AIO requires both the right initialization parameters and operating system support; if the OS does not support AIO we have to fall back to synchronous I/O. That is not the end of the world, because Oracle provides I/O slave processes to simulate AIO, although this is very much the second-best option. To enable I/O slaves we set backup_tape_io_slaves or dbwr_io_slaves manually: for tape backups set backup_tape_io_slaves=true (tape_asynch_io should remain true); for disk backups set dbwr_io_slaves to a non-zero value (disk_asynch_io should be false). Only with I/O slaves enabled does RMAN allocate its buffers from the large pool. If no large pool is configured (note that with ASMM enabled Oracle automatically gives the large pool at least one granule), or the large pool is too small, the RMAN buffers are allocated from the shared pool instead; and if Oracle still cannot obtain enough memory there, it falls back to local process memory for the I/O buffers. So if we do enable I/O slaves, it is well worth configuring a reasonably sized large pool (60-100 MB is usually enough), so that the RMAN I/O buffers come from the large pool and do not compete with the library cache and other components in the shared pool.
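
As a minimal sketch (the values are illustrative, not tuning advice), enabling I/O slaves and giving the ksfq buffers a home in the large pool might look like this:

-- disk backups: disable native disk AIO and turn on disk I/O slaves (static parameters, restart needed)
SQL> alter system set disk_asynch_io=false scope=spfile;
SQL> alter system set dbwr_io_slaves=4 scope=spfile;

-- tape backups: keep tape_asynch_io=true and let I/O slaves simulate AIO
SQL> alter system set backup_tape_io_slaves=true deferred;

-- size the large pool so the RMAN buffers do not fall back to the shared pool
SQL> alter system set large_pool_size=100M scope=spfile;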

If I/O slaves are used, I/O buffers are obtained from the SGA, or the large pool, if configured. If LARGE_POOL_SIZE is set, then Oracle attempts to get memory from the large pool. If this value is not large enough, then Oracle does not try to get buffers from the shared pool. If Oracle cannot get enough memory, then it obtains I/O buffer memory from local process memory and writes a message to the alert.log file indicating that synchronous I/O is used for this backup.

By default, Oracle uses AIO for disk devices (disk_asynch_io=true and dbwr_io_slaves=0 by default) and synchronous I/O for tape devices (tape_asynch_io=true but backup_tape_io_slaves=false by default); I/O slaves are not enabled in either case, so by default RMAN always allocates its buffers from the PGA. In other words, out of the box even a generously sized large pool is never used by RMAN.

RMAN allocates the tape buffers in the SGA or the PGA, depending on whether I/O slaves are used. If you set the initialization parameter BACKUP_TAPE_IO_SLAVES = true, then RMAN allocates tape buffers from the SGA or the large pool if the LARGE_POOL_SIZE initialization parameter is set. If you set the parameter to false, then RMAN allocates the buffers from the PGA.

Let's use the following demonstration to verify where RMAN allocates its I/O buffers under AIO and under slave I/O, and to drive the point home:

SQL> select * From v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

SQL> show parameter async

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
disk_asynch_io                       boolean     TRUE
tape_asynch_io                       boolean     TRUE

SQL> select * From v$sgastat where pool='large pool';

POOL         NAME                            BYTES
------------ -------------------------- ----------
large pool   PX msg pool                    903840
large pool   free memory                  15873376

RMAN> backup as backupset database skip offline;

SQL> select * From v$sgastat where pool='large pool';

POOL         NAME                            BYTES
------------ -------------------------- ----------
large pool   PX msg pool                    903840
large pool   free memory                  15873376

/* Under AIO, the large pool is unchanged after the full database backup */

SQL> alter system set disk_asynch_io=false scope=spfile;
System altered.

SQL> alter system set dbwr_io_slaves=2 scope=spfile;
System altered.

/* The statements above enable the disk I/O slave feature */

SQL> startup force;

[oracle@rh2 ~]$ ps -ef|grep i10|grep -v grep
oracle   20761     1  0 20:44 ?        00:00:00 ora_i101_G10R2
oracle   20763     1  0 20:44 ?        00:00:00 ora_i102_G10R2

/* With I/O slaves enabled, background processes named ora_ixxx_SID appear */

SQL> select * From v$sgastat where pool='large pool';

POOL         NAME                            BYTES
------------ -------------------------- ----------
large pool   PX msg pool                    903840
large pool   free memory                  15873376

RMAN> backup as backupset database skip offline;

SQL> select * From v$sgastat where pool='large pool';

POOL         NAME                            BYTES
------------ -------------------------- ----------
large pool   PX msg pool                    903840
large pool   free memory                  24151392
large pool   KSFQ Buffers                 25276416

SQL> /

POOL         NAME                            BYTES
------------ -------------------------- ----------
large pool   PX msg pool                    903840
large pool   free memory                  41006432
large pool   KSFQ Buffers                  8421376

/* After enabling I/O slaves and running a backup,
    "KSFQ Buffers" appear in the large pool; this KSFQ buffer is the buffer RMAN actually uses,
    and its size is governed by the hidden parameters _backup_ksfq_bufsz and _backup_ksfq_bufcnt */

SQL> col name for a30
SQL> col describ for a70
SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
  2   FROM SYS.x$ksppi x, SYS.x$ksppcv y
  3   WHERE x.inst_id = USERENV ('Instance')
  4   AND y.inst_id = USERENV ('Instance')
  5   AND x.indx = y.indx
  6  AND x.ksppinm LIKE '%ksfq%';

NAME                           VALUE      DESCRIB
------------------------------ ---------- ----------------------------------------------------------------------
_backup_ksfq_bufsz             0          size of the ksfq buffer used for backup/restore
_backup_ksfq_bufcnt            0          number of the ksfq buffers used for backup/restore

/* In 10g Oracle appears to tune these two parameters automatically */

SQL> alter system set "_backup_ksfq_bufsz"=131072;
System altered.

SQL> alter system set "_backup_ksfq_bufcnt"=1;
System altered.

RMAN> backup tablespace data01;

/*  The I/O statistics for I/O slaves are still recorded in V$backup_async_io,
    not in the V$backup_sync_io view you might have expected  */

SQL> select type,buffer_size,buffer_count from v$backup_async_io;

TYPE      BUFFER_SIZE BUFFER_COUNT
--------- ----------- ------------
AGGREGATE           0            0
INPUT          131072            1
OUTPUT        1048576            4

The large pool usage can also be estimated with the following formula:
LARGE_POOL_SIZE =
(4 * {RMAN Channels} * {DB_BLOCK_SIZE} * {DB_DIRECT_IO_COUNT} * {Multiplexing Level})
+
(4 * {RMAN Channels} * {Tape Buffer Size})
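
As a worked example with assumed values (4 channels, an 8 KB block size, DB_DIRECT_IO_COUNT of 64, multiplexing level 4 and a 256 KB tape buffer), the formula gives:

(4 * 4 * 8192 * 64 * 4) + (4 * 4 * 262144)
= 33,554,432 + 4,194,304
= 36 MB

so a large pool of a few tens of MB is typically enough for RMAN alone.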

In fact, if you have ever probed PGA memory usage, you may have seen a "KSFQ heap" in a PGA heap dump. Clearly, when I/O slaves are not in use, RMAN allocates the necessary buffers from the KSFQ heap, a subheap of the PGA heap (PGA HEAP -> KSFQ heap).

Let's run a backup in disk AIO mode, then find the RMAN-related shadow process and take a heap dump of it to see how its PGA memory is used:

SQL> select spid,pga_used_mem,pga_max_mem from v$process where addr in
  2  (select paddr from v$session where program like '%rman%')
  3  order by pga_used_mem desc ;

SPID         PGA_USED_MEM PGA_MAX_MEM
------------ ------------ -----------
24424             5750341    14410829
24425             4717957    12134125
24413             3308341     9626701
24423              435773      993005

SQL> oradebug setospid 24424;
Oracle pid: 25, Unix process pid: 24424, image: oracle@rh2.oracle.com (TNS V1-V3)

SQL> oradebug dump heapdump 536870917;
Statement processed.

SQL> oradebug tracefile_name;
/s01/admin/G10R2/udump/g10r2_ora_24424.trc

==========================heapdump details==============================

FIVE LARGEST SUB HEAPS for heap name="pga heap"   desc=0x68d3ec0
  Subheap ds=0x87c83e8  heap name=       KSFQ heap  size=         4205296
   owner=(nil)  latch=(nil)

******************************************************
HEAP DUMP heap name="KSFQ heap"  desc=0x87c83e8
 extent sz=0x1040 alt=32767 het=32767 rec=0 flg=2 opc=2
 parent=0x68d3ec0 owner=(nil) nex=(nil) xsz=0x20228
EXTENT 0 addr=0x7f86bf788dd8
  Chunk     7f86bf788de8 sz=  1049112    freeable  "KSFQ Buffers   "
EXTENT 1 addr=0x7f86bf988dd8
  Chunk     7f86bf988de8 sz=  1049112    freeable  "KSFQ Buffers   "
EXTENT 2 addr=0x7f86bfb88dd8
  Chunk     7f86bfb88de8 sz=  1049112    freeable  "KSFQ Buffers   "
EXTENT 3 addr=0x7f86bfc98dd8
  Chunk     7f86bfc98de8 sz=  1049112    freeable  "KSFQ Buffers   "
EXTENT 4 addr=0x7f86bfddf358
  Chunk     7f86bfddf368 sz=     5192    freeable  "KSFQ ctx       "
EXTENT 5 addr=0x87c7680
  Chunk        0087c7690 sz=      984    perm      "perm           "  alo=984
  Chunk        0087c7a68 sz=     1944    free      "               "
  Chunk        0087c8200 sz=      464    freeable  "KSFQ buffer poo"
Total heap size    =  4205032
FREE LISTS:
 Bucket 0 size=0
  Chunk        0087c7a68 sz=     1944    free      "               "
Total free space   =     1944
UNPINNED RECREATABLE CHUNKS (lru first):
PERMANENT CHUNKS:
  Chunk        0087c7690 sz=      984    perm      "perm           "  alo=984
Permanent space    =      984

/* The KSFQ heap subheap above occupies 4205296 bytes, roughly 4 MB,
    while this server process's pga_used_memory totals 5750341 bytes, so KSFQ accounts for about 73% of its PGA.
    Note also that most of the KSFQ Buffer chunks here are freeable, with only a small amount permanent      */

The size of the tape I/O buffer can also be specified when the channel is allocated; the default is platform-dependent, typically 64 KB. We set the tape I/O buffer with the allocate channel command, and for best performance it can be set to 256 KB or larger, for example:

allocate channel maclean1 device type sbt
parms="blksize=262144,ENV=(NB_ORA_SERV=nas,NB_ORA_POLICY=racdb,NB_ORA_CLIENT=rh2)";

Conclusions:

  1. With the default settings (disk backups using AIO and tape backups using synchronous I/O), an RMAN backup does not benefit from the large pool; it allocates the necessary I/O buffers from the KSFQ heap in the PGA. We still recommend setting large_pool_size to around 100 MB, since even a PC server can spare that much memory.
  2. Only when I/O slaves are enabled does an RMAN backup allocate its ksfq buffers (the "ksfq buffer used for backup/restore") from the large pool. In 9i/10g there are plenty of cases of ORA-04031 caused by an undersized large pool; if the large pool size is 0, the ksfq buffers are allocated from the shared pool instead, which both greatly increases the likelihood of ORA-04031 ("shared pool","unknown object","sga heap(1,0)","KSFQ Buffers") and creates contention between KSFQ and the library cache, hurting performance. In this scenario RMAN backup really does benefit from the large pool, so setting large_pool_size to about 100 MB is again recommended.
  3. When I/O slaves are not in use, RMAN allocates its I/O buffers from the KSFQ heap subheap of the PGA, because in that case there is no need for the buffers to be shared.

Oracle Backup and Recovery: A Case of Slow RMAN Backup

Recently a customer reported that several 10g databases had become very slow when running RMAN full backups to tape through NBU. The largest of these databases is 2.61 TB; a month earlier a level 0 full backup of it took 3 to 4 hours, but recently that shot up to 17 hours. The customer had already opened a service request with Symantec but had no conclusion yet, and asked us to analyze the slowdown from the Oracle side.

We started with the backup job view V$rman_backup_job_details:

OUTPUT_DEVICE INPUT_TYPE ELAPSED_SECONDS INPUT_BYTES_DISPLAY INPUT_BYTES_PER_SEC OUTPUT_BYTES_PER_SEC
17 SBT_TAPE DB INCR 62078 2.61T 44.08M 18.10M
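
These figures come from V$RMAN_BACKUP_JOB_DETAILS; a query along the following lines (a sketch using the 10g column names) returns the same information:

SELECT output_device_type,
       input_type,
       elapsed_seconds,
       input_bytes_display,
       input_bytes_per_sec_display,
       output_bytes_per_sec_display
  FROM v$rman_backup_job_details
 ORDER BY start_time DESC;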

The figures above confirm that the full tape backup of this 2.61 TB database took 62078 s, with a read rate of about 44 MB/s and a write rate of about 18 MB/s. We cannot simply conclude that the write side is the bottleneck just because output I/O per second is lower than input I/O per second: reading the datafiles and writing the backup pieces to the backup media form a single pipeline, and CPU, input I/O or output I/O can each become the bottleneck. If reading the datafiles is slow, the write rate drops with it; conversely, if the bandwidth between RMAN and the backup server is the bottleneck, the read rate is forced to slow down as well. To find out which stage is actually the problem we need the other RMAN dynamic performance views, namely:

V$BACKUP_SYNC_IO
Displays rows when the I/O is synchronous to the process (or thread on some platforms) performing the backup.

V$BACKUP_ASYNC_IO
Displays rows when the I/O is asynchronous to the process (or thread on some platforms) performing the backup.

The difference between the two views is that one aggregates the performance statistics of RMAN backup/restore I/O done synchronously, the other of I/O done asynchronously.

Because the customer runs with the default disk_asynch_io=true, we look first at the input I/O statistics, which live in V$backup_async_io. The tape device does not have slave-simulated asynchronous I/O enabled (tape_asynch_io=true but backup_tape_io_slaves is left at its default of false), so the output I/O statistics for the tape device live in v$backup_sync_io.
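
The input-side chart below was assembled from V$BACKUP_ASYNC_IO; a query along these lines (a sketch restricted to the INPUT rows) returns the relevant columns:

SELECT device_type,
       open_time,
       elapsed_time,
       effective_bytes_per_second,
       io_count,
       ready,
       long_waits,
       long_wait_time_total,
       long_wait_time_max
  FROM v$backup_async_io
 WHERE type = 'INPUT'
 ORDER BY open_time;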

DEVICE OPEN_TIME ELAPSED BYTES/S IO_COUNT READY long_waits LONG_WAIT_TIME_TOTAL LONG_WAIT_TIME_MAX
DISK 4/4 388900 372827681 2765564 2074114 90028 231181 53
DISK 4/5 753900 192323498 2765564 2074114 90028 178548 41
DISK 4/6 369000 392934106 2765564 2074114 90028 243507 41
DISK 4/7 405100 357918255 2765564 2074114 90028 268117 73
DISK 4/8 347900 416765407 2765564 2074114 90028 183543 77
DISK 4/9 395800 366328159 2765564 2074114 90028 255399 48
DISK 4/10 428400 338451646 2765564 2074114 90028 268544 45
DISK 4/11 413200 350901949 2765564 2074114 90028 269857 42
DISK 4/12 735400 197161661 2765564 2074114 90028 169016 34
DISK 4/13 410000 353640696 2765564 2074114 90028 283607 60
DISK 4/14 408300 355113116 2765564 2074114 90028 279012 38
DISK 4/15 442700 327519054 2765564 2074114 90028 308744 605
DISK 4/16 393000 368938130 2765564 2074114 90028 251509 205
DISK 4/17 423100 342691291 2765564 2074114 90028 273868 42
DISK 4/18 375600 386029513 2765564 2074114 90028 230859 328
DISK 4/19 721200 201043657 2765564 2074114 90028 191753 162
DISK 4/20 401000 361577769 2765564 2074114 90028 272852 147
DISK 4/21 346600 418328578 2765564 2074114 90028 209569 109
DISK 4/22 400500 362029177 2765564 2074114 90028 265060 112
DISK 4/23 402400 360319794 2765564 2074114 90028 259008 594
DISK 4/24 449600 322492627 2765564 2074114 90028 274202 64
DISK 4/25 413900 350308493 2765564 2074114 90028 269380 230
DISK 4/26 748600 193685126 2765564 2074114 90028 177477 105
DISK 4/27 389900 371871468 2765564 2074114 90028 261200 38
DISK 4/28 383800 377781879 2765564 2074114 90028 241870 158
DISK 4/29 403700 359159488 2765564 2074114 90028 266135 212
DISK 4/30 390600 371205031 2765564 2074114 90028 248161 340
DISK 5/1 463600 312753851 2765564 2074114 90028 271247 39
DISK 5/2 419900 345302894 2765564 2074114 90028 255042 117
DISK 5/3 705700 205459381 2765564 2074114 90028 173043 189
DISK 5/4 418400 346540835 2765564 2074114 90028 291614 47
DISK 5/5 357400 405687424 2765564 2074114 90028 222676 188
DISK 5/6 421400 344073767 2765564 2074114 90028 285860 95
DISK 5/7 405100 357918255 2765564 2074114 90028 263761 38
DISK 5/8 381500 380059463 2765564 2074114 90028 203510 210
DISK 5/9 918400 157875311 2765564 2074114 90028 221293 37
DISK 5/10 3378600 42915020 2765564 2074114 90028 142211 36
DISK 5/11 559900 258961753 2765564 2074114 90028 252911 174
DISK 5/12 622500 232919976 2765564 2074114 90028 241495 40
DISK 5/13 626700 231359000 2765564 2074114 90028 237973 41
DISK 5/14 802000 180788884 2765564 2074114 90028 231283 42
DISK 5/15 601200 241172131 2765564 2074114 90028 209418 40
DISK 5/16 1387800 104476643 2765564 2074114 90028 211878 36

The time-related columns of interest here are ELAPSED (total input I/O time), LONG_WAIT_TIME_TOTAL (total long I/O wait time) and LONG_WAIT_TIME_MAX (longest single I/O wait), all in hundredths of a second; see the Oracle Database Reference for the full definition of this view.
The chart above (only some columns are shown, for width) shows that between 4/4 and 5/16 the total input I/O time (elapsed) occasionally spiked as high as 33786 s, but the other I/O indicators (total I/O count, READY count, number of long I/O waits, total long I/O wait time, longest long I/O wait) barely changed. So we can essentially rule out a bottleneck in reading the datafiles during the backup. To go further we need the I/O statistics of the backup output:

DEVICE date ELAPSED BYTES BYTES/S IO_COUNT IO_TIME_TOTAL IO_TIME_MAX DISCRETE_BYTES_PER_SECOND
SBT_TAPE 4/5 754900 584663433216 77449123 2230314 440365 2600 132767916
SBT_TAPE 4/5 703900 553432907776 78623797 2111179 381135 5800 145206530
SBT_TAPE 4/12 736400 588200542208 79875142 2243807 433298 3400 135749655
SBT_TAPE 4/12 692300 556839731200 80433299 2124175 369237 2600 150808216
SBT_TAPE 4/19 722200 591873179648 81954193 2257817 395904 3400 149499166
SBT_TAPE 4/19 829000 561043210240 67677106 2140210 510746 2801 109847793
SBT_TAPE 4/26 749600 596010598400 79510485 2273600 435940 2600 136718493
SBT_TAPE 4/26 700300 565171191808 80704154 2155957 377019 2800 149905228
SBT_TAPE 5/3 706800 600177377280 84914739 2289495 396965 5800 151191510
SBT_TAPE 5/3 712500 569155518464 79881476 2171156 392324 5800 145072827
SBT_TAPE 5/10 3379700 604452159488 17884787 2305802 3093781 2802 19537652
SBT_TAPE 5/10 2798400 573396746240 20490164 2187335 2524296 2801 22715115
SBT_TAPE 5/17 428095307776 1633054 2216291 5800 19315844
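
The output-side figures above come from V$BACKUP_SYNC_IO and can be pulled the same way (again a sketch, restricted to the OUTPUT rows):

SELECT device_type,
       open_time,
       elapsed_time,
       bytes,
       effective_bytes_per_second,
       io_count,
       io_time_total,
       io_time_max,
       discrete_bytes_per_second
  FROM v$backup_sync_io
 WHERE type = 'OUTPUT'
 ORDER BY open_time;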

The output chart shows that up to 5/3 the total output I/O time was still in a reasonable range, (7068+7125) s, roughly 4 hours. By 5/10 the total output I/O time had reached (33797+27984) s, roughly 17.2 hours. The other statistics tell the same story: the IO_TIME_TOTAL figures for 5/10 add up to (30937+25242) s, where IO_TIME_TOTAL is the total time spent waiting for an I/O, in hundredths of a second; and the DISCRETE_BYTES_PER_SECOND column shows that the average transfer rate of the backup files dropped sharply on 5/10.

Putting these observations together, the main cause of the slow RMAN full backups in this environment is a sharp drop in output I/O performance for the backup files over that period; the bottleneck lies between RMAN and the NBU backup server and has nothing to do with database read performance. Our advice to the customer was to check whether the network bandwidth between the database server and the NBU server, the performance of the NBU server itself, and the write performance of the tape library were all normal.

A follow-up review of the NBU environment found the cause: the storage battery of the virtual tape library (VTL) had expired, so reads and writes went straight to disk without passing through the cache, which slowed down the full database backups. Because the battery-expiry status could only be confirmed by accessing the VTL storage directly over its serial port, it was not visible in the VTL management interface when the problem first appeared.

The lesson of this case is that it is not only RAID controller battery learn cycles or expired storage UPS batteries that can cause severe I/O problems; an expired VTL storage battery can also hurt database backups. Keeping a system running 7*24 means watching a great many indicators; good operational standards matter, and much of the time we inevitably have to deal with hardware.

fast incremental backup failed on standby database

A physical standby database running 11.1.0.7 on Linux raised ORA-19648 when running a fast incremental backup at a level higher than 0. The error record looked like this:

RMAN>  backup incremental level 1 database;

Starting backup at 22-MAR-11
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/standby/oradata/SBDB2/datafile/o1_mf_sysaux_22m6ov92_.dbf
input datafile file number=00003 name=/standby/oradata/SBDB2/datafile/o1_mf_undotbs1_23m6ovap_.dbf
input datafile file number=00006 name=/standby/oradata/SBDB2/datafile/o1_mf_enc_25m6ovba_.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAR-11
channel ORA_DISK_2: starting incremental level 1 datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00001 name=/standby/oradata/SBDB2/datafile/o1_mf_system_21m6ov92_.dbf
input datafile file number=00005 name=/standby/oradata/SBDB2/datafile/o1_mf_example_24m6ovar_.dbf
input datafile file number=00004 name=/standby/oradata/SBDB2/datafile/o1_mf_users_26m6ovba_.dbf
channel ORA_DISK_2: starting piece 1 at 22-MAR-11
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 03/22/2011 20:20:28
ORA-19648: datafile 2: incremental-start SCN equals checkpoint SCN
ORA-19640: datafile checkpoint is SCN 1249353 time 03/09/2011 05:50:27
continuing other job steps, job failed will not be re-run
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_2 channel at 03/22/2011 20:22:33
ORA-19648: datafile 1: incremental-start SCN equals checkpoint SCN
ORA-19640: datafile checkpoint is SCN 1249352 time 03/09/2011 05:50:26

Analysis showed that this ORA-19640/ORA-19648 is caused by a bug in 11.1.0.7, confirmed on MOS as Bug 9288598:

Affects:
  Product (Component):                       Oracle Server (Rdbms)
  Range of versions believed to be affected: Versions BELOW 12.1
  Versions confirmed as being affected:      11.1.0.7
  Platforms affected:                        Generic (all / most platforms affected)

Fixed:
  This issue is fixed in: 12.1 (Future Release), 11.2.0.2 (Server Patch Set)

An RMAN Backup incremental level 1 can fail with ORA-19648 / ORA-19640 if the standby database is open read only rather than just skipping the datafile.
PROBLEM:
--------
Following error occurring during incremental backup:

Starting backup at 02-DEC-2009 20:37:44
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=154 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Veritas NetBackup for Oracle - Release 6.5 
(2008052301)
channel ORA_SBT_TAPE_1: starting incremental level 1 datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00002 
name=/data/ora_data01/KAHCB4P/KAHCB4P_sysaux_01.dbf
input datafile file number=00001 
name=/data/ora_data01/KAHCB4P/KAHCB4P_system_01.dbf
input datafile file number=00003 
name=/data/ora_data01/KAHCB4P/KAHCB4P_undo_01.dbf
input datafile file number=00004 
name=/data/ora_data01/KAHCB4P/KAHCB4P_users_01.dbf
input datafile file number=00005 
name=/data/ora_data01/KAHCB4P/KAHCB4P_tools_01.dbf
input datafile file number=00006 
name=/data/ora_data01/KAHCB4P/KAHCB4P_backup_test_01.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 02-DEC-2009 20:38:01
RMAN-3009: failure of backup command on ORA_SBT_TAPE_1 channel at 12/02/2009 
20:38:02
ORA-19648: datafile 2: incremental-start SCN equals checkpoint SCN
ORA-19640: datafile checkpoint is SCN 3911695 time 11/27/2009 12:36:48
continuing other job steps, job failed will not be re-run
channel ORA_SBT_TAPE_1: starting incremental level 1 datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set

DIAGNOSTIC ANALYSIS:
--------------------
Same problem as Bug 6903819
Requested backport, but this hasn't fixed the problem.
Confirmed that the patch was installed correctly and relinked successfully.

RELEASE NOTES:
]]Backup incremental leve 1 fails with ORA-19648 and ORA-19640 when standby
]]database opened for read only.
*** 02/14/10 02:43 pm *** (ADD: Impact/Symptom->FEATURE UNUSABLE )

REDISCOVERY INFORMATION:
Backup incremental level 1 fails with ORA-19648 and the standby database is
open read-only.
WORKAROUND:
None

The note above shows that this bug is not fixed until 11.2.0.2, so what can be done on 11.1.0.7 to solve or work around the problem?
After testing, I found two workarounds.
The first way around the ORA-19648 error is to stop the media recovery (MRP) process on the physical standby:

SQL>  alter database recover managed standby database cancel;
Database altered.

RMAN> backup incremental level 1 database;

Starting backup at 22-MAR-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=141 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=140 device type=DISK
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/standby/oradata/SBDB2/datafile/o1_mf_sysaux_22m6ov92_.dbf
input datafile file number=00004 name=/standby/oradata/SBDB2/datafile/o1_mf_users_26m6ovba_.dbf
input datafile file number=00006 name=/standby/oradata/SBDB2/datafile/o1_mf_enc_25m6ovba_.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAR-11
channel ORA_DISK_2: starting incremental level 1 datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00001 name=/standby/oradata/SBDB2/datafile/o1_mf_system_21m6ov92_.dbf
input datafile file number=00003 name=/standby/oradata/SBDB2/datafile/o1_mf_undotbs1_23m6ovap_.dbf
input datafile file number=00005 name=/standby/oradata/SBDB2/datafile/o1_mf_example_24m6ovar_.dbf
channel ORA_DISK_2: starting piece 1 at 22-MAR-11
channel ORA_DISK_1: finished piece 1 at 22-MAR-11
piece handle=/standby/backup/2km7suui_1_1.bak tag=TAG20110322T212538 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:33
channel ORA_DISK_2: finished piece 1 at 22-MAR-11
piece handle=/standby/backup/2lm7suv1_1_1.bak tag=TAG20110322T212538 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:27
Finished backup at 22-MAR-11

Starting Control File and SPFILE Autobackup at 22-MAR-11
piece handle=/standby/backup/c-157018592-20110322-01.bak comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAR-11

The ORA-19648 error can also be bypassed by disabling the block change tracking feature on which fast incremental backup relies. Where possible we prefer the previous method, since it keeps the benefit of fast incremental backup:

SQL> alter database recover managed standby database using current logfile disconnect;
Database altered.

SQL> select process,status from v$managed_standby;

PROCESS   STATUS
--------- ------------
ARCH	  CONNECTED
ARCH	  CONNECTED
ARCH	  CONNECTED
ARCH	  CONNECTED
MRP0	  APPLYING_LOG

SQL> select status from v$block_change_tracking;

STATUS
----------
ENABLED


SQL> alter database disable block change tracking;
Database altered.


RMAN>  backup incremental level 2 database;

Starting backup at 22-MAR-11
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting incremental level 2 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/standby/oradata/SBDB2/datafile/o1_mf_sysaux_22m6ov92_.dbf
input datafile file number=00004 name=/standby/oradata/SBDB2/datafile/o1_mf_users_26m6ovba_.dbf
input datafile file number=00006 name=/standby/oradata/SBDB2/datafile/o1_mf_enc_25m6ovba_.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAR-11
channel ORA_DISK_2: starting incremental level 2 datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00001 name=/standby/oradata/SBDB2/datafile/o1_mf_system_21m6ov92_.dbf
input datafile file number=00003 name=/standby/oradata/SBDB2/datafile/o1_mf_undotbs1_23m6ovap_.dbf
input datafile file number=00005 name=/standby/oradata/SBDB2/datafile/o1_mf_example_24m6ovar_.dbf
channel ORA_DISK_2: starting piece 1 at 22-MAR-11
channel ORA_DISK_1: finished piece 1 at 22-MAR-11
piece handle=/standby/backup/2nm7sv8d_1_1.bak tag=TAG20110322T213052 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_2: finished piece 1 at 22-MAR-11
piece handle=/standby/backup/2om7sv8s_1_1.bak tag=TAG20110322T213052 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:15
Finished backup at 22-MAR-11

Starting Control File and SPFILE Autobackup at 22-MAR-11
piece handle=/standby/backup/c-157018592-20110322-02.bak comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAR-11
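
If you later want the fast incremental feature back (for example after patching to a release containing the fix), block change tracking can simply be re-enabled; the tracking file path below is only a placeholder:

SQL> alter database enable block change tracking using file '/standby/oradata/SBDB2/bct.chg';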

Fractured block found during backing up datafile

On a 10.2.0.3 system on AIX, the following entries appeared in the alert log during an RMAN backup:

======================= alert log record ============================
Hex dump of (file 35, block 1087687) in trace file /oracle/product/10.2.0/admin/MS/udump/ms_ora_103548.trc
Corrupt block relative dba: 0x08d098c7 (file 35, block 1087687)
Fractured block found during backing up datafile
Data in bad block:
 type: 6 format: 2 rdba: 0x08d098c7
 last change scn: 0x0006.44443e06 seq: 0x1 flg: 0x06
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x6d910601
 check value in block header: 0xc0b0
 computed block checksum: 0x4286
Reread of blocknum=1087687, file=/dev/vx/rdsk/oradgMS/lv_ms_DB31. found valid data

=========== trace information for process 103548 ========================
Corrupt block relative dba: 0x08d098c7 (file 35, block 1087687)
Fractured block found during backing up datafile
Data in bad block:
 type: 6 format: 2 rdba: 0x08d098c7
 last change scn: 0x0006.44443e06 seq: 0x1 flg: 0x06
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0x6d910601
 check value in block header: 0xc0b0
 computed block checksum: 0x4286
Reread of blocknum=1087687, file=/dev/vx/rdsk/oradgMS/lv_ms_DB31. found valid data

The block's type is 6. The block types that exist as of Oracle 9.2 are listed below:

Type Description
1 KTU UNDO HEADER
2 KTU UNDO BLOCK
3 KTT SAVE UNDO HEADER
4 KTT SAVE UNDO BLOCK
5 DATA SEGMENT HEADER
6 trans data
7 Unknown
8 Unknown
9 Unknown
10 DATA SEGMENT FREE LIST BLOCK
11 Unknown
12 DATA SEGMENT HEADER WITH FREE LIST BLOCKS
13 Compatibility segment
14 KTU UNDO HEADER W/UNLIMITED EXTENTS
15 KTT SAVE UNDO HEADER W/UNLIMITED EXTENTS
16 DATA SEGMENT HEADER – UNLIMITED
17 DATA SEGMENT HEADER WITH FREE LIST BLKS – UNLIMITED
18 EXTENT MAP BLOCK
19 Unknown
20 Unknown
21 Unknown
22 DATA SEGMENT FREE LIST BLOCK WITH FREE BLOCK COUNT
23 BITMAPPED DATA SEGMENT HEADER
24 BITMAPPED DATA SEGMENT FREELIST
25 BITMAP INDEX BLOCK
26 BITMAP BLOCK
27 LOB BLOCK
28 KTU BITMAP UNDO HEADER – LIMITED EXTENTS
29 KTFB Bitmapped File Space Header
30 KTFB Bitmapped File Space Bitmap
31 TEMP INDEX BLOCK
32 FIRST LEVEL BITMAP BLOCK
33 SECOND LEVEL BITMAP BLOCK
34 THIRD LEVEL BITMAP BLOCK
35 PAGETABLE SEGMENT HEADER
36 PAGETABLE EXTENT MAP BLOCK
37 EXTENT MAP BLOCK OF SYSTEM MANAGED UNDO SEGMENT
38 KTU SMU HEADER BLOCK
39 Unknown
40 PAGETABLE MANAGED LOB BLOCK
41 Unknown
42 Unknown
43 Unknown
44 Unknown
45 Unknown
46 Unknown
47 Unknown

Blocks of both tables and indexes can be of type 6, "trans data". In other words, RMAN read this block during the backup and found it fractured on the first read, but a fractured block is not necessarily a corrupted one. The alert log shows that after the first read found the block fractured, a re-read found it intact again (found valid data). So the messages above do not mean the listed block is actually corrupt. Most likely the datafile containing the block was under heavy I/O during the backup and the storage happened to be writing the block just as RMAN read it, so RMAN considered it fractured on the first pass; the subsequent re-read showed the "fracture" was gone, and the "Corrupt block" was only an illusion. To be safe, you can run analyze .. validate on the table or index involved to confirm there is no real corruption.
Such false "Corrupt block" reports are most likely caused by very active I/O on individual datafiles during the RMAN backup (for example, heavy DML), so it is advisable to schedule RMAN backups in periods of low disk activity.
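
To double-check a block reported this way, you can decode the rdba from the trace into file# and block#, map it to its owning segment, and then validate that segment. The sketch below uses the rdba from this case; the final analyze statement should name whatever owner and segment the mapping query returns (use analyze index ... validate structure for an index):

SQL> select dbms_utility.data_block_address_file(to_number('08D098C7','XXXXXXXX')) file#,
            dbms_utility.data_block_address_block(to_number('08D098C7','XXXXXXXX')) block#
       from dual;

SQL> select owner, segment_type, segment_name
       from dba_extents
      where file_id = 35
        and 1087687 between block_id and block_id + blocks - 1;

SQL> analyze table <owner>.<segment_name> validate structure cascade;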

References:

Fractured Block Messages in Alert.log During RMAN Backup of Datafile
    * fact: Oracle Server - Enterprise Edition 8
    * fact: Oracle Server - Enterprise Edition 9
    * fact: Recovery Manager (RMAN)
    * symptom: Fractured block found during backup up datafile
    * symptom: Reread of blocknum found some corrupt data
    * symptom: Analyze table validate structure cascade returns no errors
    * change: NOTE ROLE: The messages are of the form Reread of blocknum=36256, 
       file=/pdscdata/pdsclive/data1/dispatch_data_large2. dbf. 
found same corrupt data *** Corrupt block relative dba: 0xfc008dc0 (file 63, block 36288) 
Fractured block found during backing up datafile Data in bad block - 
type: 0 format: 0 rdba: 0x00000000 last change scn: 0x0000.00000000 seq: 0x0 flg: 0x00 consistency 
value in tail: 0x53494e53 check value in block header: 0x0, block checksum disabled 
spare1: 0x0, spare2: 0x0, spare3: 0x0
    * cause: RMAN backups of datafile are being performed while the datafile is involved in heavy I/O. 
RMAN reads Oracle blocks from disk. If it finds that the block is fractured, which means it is being actively used, 
it performs a reread of the block. If that fails again then the block is assumed to be corrupt. 
By identifying the object that these blocks belong to by following Handling Oracle Block Corruptions in 
Oracle7/8/8i and performing an analyze .. validate structure cascade on the object involved you can 
confirm that the object is not corrupt. 

fix:
Run the backups when the tablespace has less I/O activity.
