Script: Check Whether a Backup Operation Is Currently Running in the Database

The following script can be used to detect whether a backup operation is currently running in the database:

SELECT DECODE(os_backup.backup + rman_backup.backup, 0, 'FALSE', 'TRUE') backup
  FROM (SELECT COUNT(*) backup FROM gv$backup WHERE status = 'ACTIVE') os_backup,
       (SELECT COUNT(*) backup
          FROM gv$session
         WHERE status = 'ACTIVE'
           AND client_info like '%rman%') rman_backup
/
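
If the script returns TRUE, the following drill-down queries, a simple sketch based on the same views, show which datafiles are still in hot-backup mode and which sessions RMAN has opened:

SELECT d.file#, d.name, b.status, b.time
  FROM v$backup b, v$datafile d
 WHERE b.file# = d.file#
   AND b.status = 'ACTIVE';

SELECT inst_id, sid, serial#, status, program, client_info
  FROM gv$session
 WHERE client_info LIKE '%rman%';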

Does Duplicate Target Database Need a Pre-existing DB Backup?

A reader recently asked me whether, in 10g, duplicating a database with RMAN's duplicate target database command requires a full database backup to exist first.

In practice I rarely use duplicate target database in 10g to build a Data Guard standby database, so although I had a vague recollection, I could not answer with full confidence.

After checking the documentation today, it turns out that Active database duplication and Backup-based duplication as two distinct options were only introduced in 11g; in other words, duplication in 10g requires a pre-existing RMAN backup of the database.

For details on these two features see the note <RMAN 'Duplicate Database' Feature in 11G>, quoted below:

RMAN 'Duplicate Database' Feature in 11G

You can create a duplicate database using the RMAN duplicate command.
The duplicate database has a different DBID from the source database and functions
entirely independently. Starting from 11g you can do duplicate database in 2 ways.

1. Active database duplication
2. Backup-based duplication

Active database duplication copies the live target database over the network to the
auxiliary destination and then creates the duplicate database. The only difference is that you
don't need to have pre-existing RMAN backups and copies.

The duplication work is performed by an auxiliary channel.
This channel corresponds to a server session on the auxiliary instance on the auxiliary host.

As part of the duplicating operation, RMAN automates the following steps:

1. Creates a control file for the duplicate database
2. Restarts the auxiliary instance and mounts the duplicate control file
3. Creates the duplicate datafiles and recovers them with incremental backups and archived redo logs.
4. Opens the duplicate database with the RESETLOGS option

For active database duplication, RMAN does one extra step, i.e. it copies the
target database datafiles over the network to the auxiliary instance.

A RAC TARGET database can be duplicated as well. The procedure is the same as below.
If the auxiliary instance needs to be a RAC-database as well,
then start the duplicate procedure to a single instance and convert
the auxiliary to RAC after the duplicate has succeeded.
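
For reference, a minimal active-duplication run under 11g might look like the sketch below; the connect strings, the auxiliary database name dup, and the omission of the SPFILE and file-name conversion clauses are all simplifying assumptions, not a complete procedure:

$ rman target sys/oracle@prod auxiliary sys/oracle@dup

RMAN> DUPLICATE TARGET DATABASE TO dup
2>   FROM ACTIVE DATABASE
3>   NOFILENAMECHECK;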

 

In 10g, by contrast, you not only have to back up the target database, you also have to copy the backup sets manually to the remote (auxiliary) host, which is indeed rather tedious:

 

Oracle 10g RMAN Database Duplication
 If you are using a disk backup solution and duplicate to a
remote node you must first copy the backupsets from the original host's backup
location to the same mount and path on the remote server. Because duplication
uses auxiliary channels the files must be where the IO pipe is allocated. So the
IO will take place on the remote node and disk backups must be locally available.
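
So in 10g the workflow amounts to roughly the following sketch, where the host names, paths and the auxiliary database name are placeholders and the auxiliary instance is assumed to already be started NOMOUNT:

-- 1. On the source host: back up the database and the archived logs
RMAN> backup database plus archivelog;

-- 2. Copy the backup pieces to the SAME mount point and path on the remote host
$ scp /backup/PROD/* remotehost:/backup/PROD/

-- 3. On the remote (auxiliary) host, run the duplication
$ rman target sys/oracle@prod auxiliary /
RMAN> duplicate target database to dupdb nofilenamecheck;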

Script: Collecting Oracle Backup and Recovery Information

When diagnosing Oracle backup/restore problems we always want enough diagnostic information. RDA is generally the best collection tool, but customers are sometimes reluctant to run RDA (out of distrust), so here is a pair of scripts dedicated to collecting Oracle backup and recovery information.

Before running the scripts below, set appropriate ORACLE_HOME and ORACLE_SID environment variables, and also set NLS_DATE_FORMAT, for example:

NLS_DATE_FORMAT="DD-MON-RRRR HH24:MI:SS"
export NLS_DATE_FORMAT
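
For completeness, a full environment setup on Linux/Unix might look like this (the Oracle home path and SID below are placeholders, adjust to your installation):

export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export ORACLE_SID=ORCL
export PATH=$ORACLE_HOME/bin:$PATH
NLS_DATE_FORMAT="DD-MON-RRRR HH24:MI:SS"
export NLS_DATE_FORMAT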

Log in with "rman target /" and run:

spool log to rman_report.log
set echo on
show all;
report schema;
list incarnation;
list backup summary;
list backup;
list copy;
report need backup;
report obsolete;
restore database preview;
spool log off

The following script is executed in SQL*Plus as SYSDBA and requires the database to be at least mounted. Note that the original script is read-only: it only queries the data dictionary and does no harm, but do make sure you trust the source of any script you run! (A sample invocation follows the script.)

spool results01.txt
set echo on feedback on time on timing on pagesize 100 linesize 80 numwidth 13
show user
alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
select * from v$version;
select to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') as current_date from dual;
column name format a30
column value format a49
select name, value from v$parameter where isdefault='FALSE' order by 1;
column parameter format a30
column value format a49
select * from v$nls_parameters order by parameter;
column name format a10
select dbid, name,
       to_char(created, 'DD-MON-YYYY HH24:MI:SS') created,
       open_mode, log_mode,
       to_char(checkpoint_change#, '999999999999999') as checkpoint_change#,
       controlfile_type,
       to_char(controlfile_change#, '999999999999999') as controlfile_change#,
       to_char(controlfile_time, 'DD-MON-YYYY HH24:MI:SS') controlfile_time,
       to_char(resetlogs_change#, '999999999999999') as resetlogs_change#,
       to_char(resetlogs_time, 'DD-MON-YYYY HH24:MI:SS') resetlogs_time
from v$database;
select * from v$instance;
archive log list;
select * from v$thread order by thread#;
select * from v$log order by first_change#;
column member format a45
select * from v$logfile;
column name format a79
select '#' || ts.name || '#' as tablespace_name, ts.ts#,
       '#' || df.name || '#' as filename, df.file#, df.status, df.enabled, df.creation_change#,
       to_char(df.creation_time, 'DD-MON-YYYY HH24:MI:SS') as creation_time,
       to_char(df.checkpoint_change#, '999999999999999') as checkpoint_change#,
       to_char(df.checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
       to_char(df.offline_change#, '999999999999999') as offline_change#,
       to_char(df.online_change#, '999999999999999') as online_change#,
       to_char(df.online_time, 'DD-MON-YYYY HH24:MI:SS') as online_time,
       to_char(df.unrecoverable_change#, '999999999999999') as unrecoverable_change#,
       to_char(df.unrecoverable_time, 'DD-MON-YYYY HH24:MI:SS') as unrecoverable_time,
       to_char(df.bytes, '9,999,999,999,990') as bytes, block_size
from v$datafile df, v$tablespace ts
where ts.ts# = df.ts#
and ( df.status <> 'ONLINE'
or    df.checkpoint_change# <> (select checkpoint_change# from v$database) );
select '#' || ts.name || '#' as tablespace_name, ts.ts#,
       '#' || dh.name || '#' as filename, dh.file#, dh.status, dh.error, dh.fuzzy, dh.creation_change#,
       to_char(dh.creation_time, 'DD-MON-YYYY HH24:MI:SS') as creation_time,
       to_char(dh.checkpoint_change#, '999999999999999') as checkpoint_change#,
       to_char(dh.checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
       to_char(dh.resetlogs_change#, '999999999999999') as resetlogs_change#,
       to_char(dh.resetlogs_time, 'DD-MON-YYYY HH24:MI:SS') as resetlogs_time,
       to_char(dh.bytes, '9,999,999,999,990') as bytes
from v$datafile_header dh, v$tablespace ts
where ts.ts# = dh.ts#
and ( dh.status <> 'ONLINE'
or    dh.checkpoint_change# <> (select checkpoint_change# from v$database) );
select * from v$tempfile;
select HXFIL File_num,substr(HXFNM,1,60) file_name, FHTNM tablespace_name,
       FHTYP type, HXERR validity,
       FHSCN SCN, FHTIM SCN_Time, FHSTA status,
       FHTHR Thread, FHRBA_SEQ Sequence
from X$KCVFH
--where HXERR > 0
order by HXERR, FHSTA, FHSCN, HXFIL;
column error format a15
select error, fuzzy, status, checkpoint_change#,
       to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
       count(*)
from v$datafile_header
group by error, fuzzy, status, checkpoint_change#, checkpoint_time
order by checkpoint_change#, checkpoint_time;
select * from V$INSTANCE_RECOVERY;
select * from v$recover_file order by change#;
select * from dba_tablespaces where status <> 'ONLINE';
SELECT * FROM database_properties order by property_name;
select *
from X$KCCLH, (select min(checkpoint_change#) df_min_scn,
                      max(checkpoint_change#) df_max_scn
               from v$datafile_header
               where status='ONLINE') df
where LHLOS in (select first_change# from v$log)
or df.df_min_scn between LHLOS and LHNXS
or df.df_max_scn between LHLOS and LHNXS;
select * from v$backup where status <> 'NOT ACTIVE';
select ADDR, XIDUSN, XIDSLOT, XIDSQN,
       UBAFIL, UBABLK, UBASQN,
       START_UBAFIL, START_UBABLK, START_UBASQN,
       USED_UBLK, STATUS
from   v$transaction;
select * from v$archive_gap;
select * from v$archive_dest_status where recovery_mode <> 'IDLE';
column USED_GB format 999,990.999
column USED% format 990.99
column RECLAIM_GB format 999,990.999
column RECLAIMABLE% format 990.99
column LIMIT_GB format 999,990.999
select frau.file_type as type,
       frau.percent_space_used/100 * rfd.space_limit /1024/1024/1024 "USED_GB",
       frau.percent_space_used "USED%",
       frau.percent_space_reclaimable "RECLAIMABLE%",
       frau.percent_space_reclaimable/100 * rfd.space_limit /1024/1024/1024 "RECLAIM_GB",
       frau.number_of_files "FILES#"
from   v$flash_recovery_area_usage frau,
       v$recovery_file_dest rfd
order by file_type;
select name,
       space_limit/1024/1024/1024 "LIMIT_GB",
       space_used/1024/1024/1024 "USED_GB",
       space_used/space_limit*100 "USED%",
       space_reclaimable/1024/1024/1024 "RECLAIM_GB",
       number_of_files "FILE#"
from   v$recovery_file_dest;
select * from v$backup_corruption;
select * from v$copy_corruption order by file#, block#;
select * from v$database_block_corruption order by file#, block#;
SELECT f.file#, f.name,
       e.tablespace_name, e.segment_type, e.owner, e.segment_name,
       c.file#, c.block#, c.blocks, c.corruption_change#, c.corruption_type
FROM dba_extents e, V$database_block_corruption c, v$datafile f
WHERE c.file# = f.file#
and   e.file_id = c.file#
and   c.block# between e.block_id AND e.block_id + e.blocks - 1;
select * from v$database_incarnation;
select * from v$rman_configuration;
select s.recid as bs_key, p.recid as bp_key, p.status, p.tag, p.device_type,
       p.handle, p.media, p.completion_time, p.bytes
from   v$backup_piece p, v$backup_set s
where  p.set_stamp = s.set_stamp
and    s.controlfile_included='YES'
order by p.completion_time;
select s.recid as bs_key, p.recid as bp_key, p.status, p.tag, p.device_type,
       p.handle, p.media, p.completion_time, f.absolute_fuzzy_change#, p.bytes
from   v$backup_datafile f, v$backup_piece p, v$backup_set s
where  p.set_stamp = s.set_stamp
and    f.set_stamp = s.set_stamp
and    p.handle is not null
and    f.file# = 1
order by p.completion_time;
SELECT
  session_recid,
  input_bytes_per_sec_display,
  output_bytes_per_sec_display,
  time_taken_display,
  end_time
FROM v$rman_backup_job_details
ORDER BY end_time;
select * from v$filestat;
column EBS_MB format 9,990.99
column TOTAL_MB format 999,990.99
select SID, SERIAL, FILENAME, EFFECTIVE_BYTES_PER_SECOND/1024/1024 as EBS_MB,
      OPEN_TIME, CLOSE_TIME, ELAPSED_TIME, TOTAL_BYTES/1024/1024 as TOTAL_MB,
      STATUS, MAXOPENFILES, buffer_size, buffer_count
from v$backup_async_io
where close_time >= sysdate-3
order by close_time;
select SID, SERIAL, FILENAME, EFFECTIVE_BYTES_PER_SECOND/1024/1024 as EBS_MB,
      OPEN_TIME, CLOSE_TIME, ELAPSED_TIME, TOTAL_BYTES/1024/1024 as TOTAL_MB,
      STATUS, MAXOPENFILES, buffer_size, buffer_count
from v$backup_sync_io
where close_time >= sysdate-3;
select * from v$controlfile_record_section order by type;
select to_char(rownum) || '. ' || output rman_output from v$rman_output;
select * from v$rman_status where trunc(end_time) > trunc(sysdate)-3;
select protection_mode, protection_level from v$database;
select * from v$recovery_progress;
select s.client_info,
       sl.message,
       sl.sid, sl.serial#, p.spid,
       round(sl.sofar/sl.totalwork*100,2) "% Complete"
from   v$session_longops sl, v$session s, v$process p
where  p.addr = s.paddr
and    sl.sid=s.sid
and    sl.serial#=s.serial#
and    opname LIKE 'RMAN%'
and    opname NOT LIKE '%aggregate%'
and    totalwork != 0
and    sofar <> totalwork;
select AL.*,
       DF.min_checkpoint_change#, DF.min_checkpoint_time
from v$archived_log AL,
     (select min(checkpoint_change#) min_checkpoint_change#,
             min(checkpoint_time) min_checkpoint_time
      from v$datafile_header
      where status='ONLINE') DF
where DF.min_checkpoint_change# between AL.first_change# and AL.next_change#
order by AL.first_change#;
select * from v$asm_diskgroup;
select * from v$asm_disk;
select * from v$flashback_database_log;
select * from v$flashback_database_logfile order by first_change# desc;
select * from v$flashback_database_stat order by begin_time desc;
select * from v$restore_point;
select * from v$rollname;
select * from v$undostat;
select * from dba_rollback_segs;
spool off
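
If you prefer to run the SQL*Plus script from a file rather than pasting it, a sample invocation (the file name is arbitrary) is:

$ sqlplus "/ as sysdba" @backup_diag.sql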

Does RMAN Backup Benefit from the Large Pool?

In the course of learning Oracle we all pick up misconceptions, whether through our own misreading or through careless writing in training material. Passed on from person to person, these become long-standing superstitions within the Oracle community, and because they are so deeply rooted they are very hard to correct. It is interesting that superstition can arise even in a high-tech field like IT; it suggests there is something wrong with how we teach and how we think about technology, and that is worth reflecting on.

Here I list a few of the most common superstitions, to start the discussion:

1. Almost every introductory Oracle tutorial describes the large pool like this: "RMAN backups use the large pool as a disk I/O buffer; configuring the large pool helps improve RMAN backup performance."

Truth: unless you have enabled I/O slaves, RMAN does not use the large pool at all.

RMAN I/O falls into three modes:

Asynchronous I/O
  Disk: Most operating systems and disk devices support AIO; disk_asynch_io defaults to TRUE, so asynchronous disk I/O is enabled by default. If the disk device does not support AIO, synchronous I/O is used instead. In disk AIO mode the RMAN I/O buffers are allocated from the PGA, and the I/O performance statistics are recorded in V$BACKUP_ASYNC_IO.
  Tape: Tape devices themselves do not support AIO (tape I/O is always synchronous). Although tape_asynch_io defaults to TRUE, tape can only simulate asynchronous I/O through I/O slaves, so enabling tape "AIO" additionally requires backup_tape_io_slaves=TRUE. In this mode the RMAN I/O buffers are allocated from the shared pool or the large pool, and the statistics are recorded in V$BACKUP_ASYNC_IO.

Synchronous I/O
  Disk: Used when disk_asynch_io is FALSE, or the disk device/OS does not support AIO, and dbwr_io_slaves=0. The RMAN I/O buffers are allocated from the PGA, and the statistics are recorded in V$BACKUP_SYNC_IO.
  Tape: backup_tape_io_slaves defaults to FALSE, so tape backups use synchronous I/O by default. The RMAN I/O buffers are allocated from the PGA, and the statistics are recorded in V$BACKUP_SYNC_IO.

Slaves I/O
  Disk: Enabled by setting disk_asynch_io=FALSE and dbwr_io_slaves>0. The RMAN I/O buffers are allocated from the shared pool or the large pool, and the statistics are recorded in V$BACKUP_ASYNC_IO.
  Tape: Enabled by setting tape_asynch_io=TRUE and backup_tape_io_slaves=TRUE; tape "AIO" is in fact simulated with I/O slaves, so this case is identical to tape AIO above.

When backing up a database with RMAN, whether to disk or to tape, we would normally prefer asynchronous I/O (tape AIO is a special case, see the table above). Using AIO requires appropriate initialization parameters and operating-system support; if the OS does not support AIO we are left with synchronous I/O. That is not the end of the world, because Oracle can simulate AIO with I/O slave processes, albeit as a second-best option. To enable I/O slaves we set backup_tape_io_slaves or dbwr_io_slaves manually: for tape backups set backup_tape_io_slaves=TRUE (with tape_asynch_io left at TRUE); for disk backups set dbwr_io_slaves to a non-zero value (with disk_asynch_io=FALSE). Only with I/O slaves enabled does RMAN allocate and use memory from the large pool. If the large pool is not configured (note that with ASMM enabled Oracle automatically gives the large pool one granule) or is too small, RMAN's buffers are allocated from the shared pool; if Oracle still cannot obtain enough memory, it takes the I/O buffers from local process memory. So when I/O slaves are enabled it is well worth configuring a reasonably sized large pool (60-100 MB is usually enough), so that RMAN's I/O buffers come from the large pool and do not compete with the library cache and other components in the shared pool.

If I/O slaves are used, I/O buffers are obtained from the SGA, or the large pool, if configured. If LARGE_POOL_SIZE is set, then Oracle attempts to get memory from the large pool. If this value is not large enough, then Oracle does not try to get buffers from the shared pool. If Oracle cannot get enough memory, then it obtains I/O buffer memory from local process memory and writes a message to the alert.log file indicating that synchronous I/O is used for this backup.

By default Oracle uses AIO for disk devices (disk_asynch_io=true and dbwr_io_slaves=0 by default) and synchronous I/O for tape devices (tape_asynch_io=true but backup_tape_io_slaves=false by default); in neither case are I/O slaves enabled, so by default RMAN always allocates its buffers from the PGA. In other words, out of the box even a generously sized large pool is of no use to RMAN.
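
A quick way to confirm which mode a particular instance will end up using is to check the four parameters involved, for example:

SQL> show parameter disk_asynch_io
SQL> show parameter tape_asynch_io
SQL> show parameter dbwr_io_slaves
SQL> show parameter backup_tape_io_slaves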

RMAN allocates the tape buffers in the SGA or the PGA, depending on whether I/O slaves are used. If you set the initialization parameter BACKUP_TAPE_IO_SLAVES = true, then RMAN allocates tape buffers from the SGA or the large pool if the LARGE_POOL_SIZE initialization parameter is set. If you set the parameter to false, then RMAN allocates the buffers from the PGA.

The following demo verifies where RMAN's memory buffers are allocated under AIO and under slave I/O:

SQL> select * From v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

SQL> show parameter async

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
disk_asynch_io                       boolean     TRUE
tape_asynch_io                       boolean     TRUE

SQL> select * From v$sgastat where pool='large pool';

POOL         NAME                            BYTES
------------ -------------------------- ----------
large pool   PX msg pool                    903840
large pool   free memory                  15873376

backup as backupset database skip offline;

SQL> select * From v$sgastat where pool='large pool';

POOL         NAME                            BYTES
------------ -------------------------- ----------
large pool   PX msg pool                    903840
large pool   free memory                  15873376

/* In AIO mode the large pool shows no change after a full database backup */

SQL> alter system set disk_asynch_io=false scope=spfile;
System altered.

SQL> alter system set dbwr_io_slaves=2 scope=spfile;
System altered.

/* The above enables the disk I/O slave feature */

SQL> startup force;

[oracle@rh2 ~]$ ps -ef|grep i10|grep -v grep
oracle   20761     1  0 20:44 ?        00:00:00 ora_i101_G10R2
oracle   20763     1  0 20:44 ?        00:00:00 ora_i102_G10R2

/* With I/O slaves enabled, background processes named ora_ixxx_SID appear */

SQL> select * From v$sgastat where pool='large pool';

POOL         NAME                            BYTES
------------ -------------------------- ----------
large pool   PX msg pool                    903840
large pool   free memory                  15873376

RMAN> backup as backupset database skip offline;

SQL> select * From v$sgastat where pool='large pool';

POOL         NAME                            BYTES
------------ -------------------------- ----------
large pool   PX msg pool                    903840
large pool   free memory                  24151392
large pool   KSFQ Buffers                 25276416

SQL> /

POOL         NAME                            BYTES
------------ -------------------------- ----------
large pool   PX msg pool                    903840
large pool   free memory                  41006432
large pool   KSFQ Buffers                  8421376

/* With I/O slaves enabled, running a backup makes KSFQ Buffers appear in the large pool.
    These KSFQ buffers are the buffers RMAN uses; their size is actually governed by
    the hidden parameters _backup_ksfq_bufsz and _backup_ksfq_bufcnt */

SQL> col name for a30
SQL> col describ for a70
SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
  2   FROM SYS.x$ksppi x, SYS.x$ksppcv y
  3   WHERE x.inst_id = USERENV ('Instance')
  4   AND y.inst_id = USERENV ('Instance')
  5   AND x.indx = y.indx
  6  AND x.ksppinm LIKE '%ksfq%';

NAME                           VALUE      DESCRIB
------------------------------ ---------- ----------------------------------------------------------------------
_backup_ksfq_bufsz             0          size of the ksfq buffer used for backup/restore
_backup_ksfq_bufcnt            0          number of the ksfq buffers used for backup/restore

/* In 10g Oracle appears to size these two parameters automatically */

SQL> alter system set "_backup_ksfq_bufsz"=131072;
System altered.

SQL> alter system set "_backup_ksfq_bufcnt"=1;
System altered.

RMAN> backup tablespace data01;

/*  With I/O slaves, the I/O statistics are still recorded in V$backup_async_io,
    not in v$backup_sync_io as you might have expected  */

SQL> select type,buffer_size,buffer_count from v$backup_async_io;

TYPE      BUFFER_SIZE BUFFER_COUNT
--------- ----------- ------------
AGGREGATE           0            0
INPUT          131072            1
OUTPUT        1048576            4

In addition, the large pool usage can be estimated with the following formula:
LARGE_POOL_SIZE =
(4 * {RMAN Channels} * {DB_BLOCK_SIZE} * {DB_DIRECT_IO_COUNT} * {Multiplexing Level})
+
(4 * {RMAN Channels} * {Tape Buffer Size})
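
As a purely illustrative calculation (all input values are assumptions): with 4 channels, an 8 KB block size, a direct I/O count of 64, a multiplexing level of 4 and a 256 KB tape buffer, the estimate works out to:

LARGE_POOL_SIZE = (4 * 4 * 8192 * 64 * 4) + (4 * 4 * 262144)
                = 33,554,432 + 4,194,304
                = 37,748,736 bytes  (about 36 MB)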

In fact, if you have ever probed PGA memory usage you may have noticed "KSFQ heap" entries in a PGA heap dump. Clearly, when I/O slaves are not in use, RMAN allocates the necessary buffers from the KSFQ heap, a subheap of the PGA heap.

Run a backup in disk AIO mode, then find the RMAN-related shadow processes and take a heap dump of one of them to analyze its PGA usage:

SQL> select spid,pga_used_mem,pga_max_mem from v$process where addr in
  2  (select paddr from v$session where program like '%rman%')
  3  order by pga_used_mem desc ;

SPID         PGA_USED_MEM PGA_MAX_MEM
------------ ------------ -----------
24424             5750341    14410829
24425             4717957    12134125
24413             3308341     9626701
24423              435773      993005

SQL> oradebug setospid 24424;
Oracle pid: 25, Unix process pid: 24424, image: oracle@rh2.oracle.com (TNS V1-V3)

SQL> oradebug dump heapdump 536870917;
Statement processed.

SQL> oradebug tracefile_name;
/s01/admin/G10R2/udump/g10r2_ora_24424.trc

==========================heapdump details==============================

FIVE LARGEST SUB HEAPS for heap name="pga heap"   desc=0x68d3ec0
  Subheap ds=0x87c83e8  heap name=       KSFQ heap  size=         4205296
   owner=(nil)  latch=(nil)

******************************************************
HEAP DUMP heap name="KSFQ heap"  desc=0x87c83e8
 extent sz=0x1040 alt=32767 het=32767 rec=0 flg=2 opc=2
 parent=0x68d3ec0 owner=(nil) nex=(nil) xsz=0x20228
EXTENT 0 addr=0x7f86bf788dd8
  Chunk     7f86bf788de8 sz=  1049112    freeable  "KSFQ Buffers   "
EXTENT 1 addr=0x7f86bf988dd8
  Chunk     7f86bf988de8 sz=  1049112    freeable  "KSFQ Buffers   "
EXTENT 2 addr=0x7f86bfb88dd8
  Chunk     7f86bfb88de8 sz=  1049112    freeable  "KSFQ Buffers   "
EXTENT 3 addr=0x7f86bfc98dd8
  Chunk     7f86bfc98de8 sz=  1049112    freeable  "KSFQ Buffers   "
EXTENT 4 addr=0x7f86bfddf358
  Chunk     7f86bfddf368 sz=     5192    freeable  "KSFQ ctx       "
EXTENT 5 addr=0x87c7680
  Chunk        0087c7690 sz=      984    perm      "perm           "  alo=984
  Chunk        0087c7a68 sz=     1944    free      "               "
  Chunk        0087c8200 sz=      464    freeable  "KSFQ buffer poo"
Total heap size    =  4205032
FREE LISTS:
 Bucket 0 size=0
  Chunk        0087c7a68 sz=     1944    free      "               "
Total free space   =     1944
UNPINNED RECREATABLE CHUNKS (lru first):
PERMANENT CHUNKS:
  Chunk        0087c7690 sz=      984    perm      "perm           "  alo=984
Permanent space    =      984

/* The KSFQ heap subheap occupies 4205296 bytes, roughly 4 MB, while this server
    process's pga_used_mem is 5750341 bytes, so KSFQ accounts for about 73% of its PGA.
    Also note that most KSFQ Buffer chunks are freeable; only a small amount is permanent */

The tape I/O buffer size can also be specified when configuring a channel; its default is platform dependent, typically 64 KB. Use the allocate channel command to set it; for best performance the tape I/O buffer can be set to 256 KB or larger, for example:

allocate channel maclean1 device type sbt
parms="blksize=262144,ENV=(NB_ORA_SERV=nas,NB_ORA_POLICY=racdb,NB_ORA_CLIENT=rh2)";

Conclusions:

  1. By default (disk backups using AIO and tape backups using synchronous I/O) an RMAN backup does not benefit from the large pool at all; the necessary I/O buffers are allocated from the KSFQ heap inside the PGA. Setting large_pool_size to around 100 MB is still recommended, since even a commodity server will hardly miss that much memory.
  2. Only when I/O slaves are enabled does an RMAN backup allocate its ksfq buffers (the ksfq buffers used for backup/restore) from the large pool; in 9i/10g there are plenty of cases where an undersized large pool led to ORA-04031. If the large pool size is 0, the ksfq buffers are allocated from the shared pool instead, which both makes ORA-04031 ("shared pool","unknown object","sga heap(1,0)","KSFQ Buffers") far more likely and creates contention between KSFQ and the library cache, hurting performance. In this scenario RMAN backups really do benefit from the large pool, so setting large_pool_size to about 100 MB remains the recommendation (see the check query after this list).
  3. When I/O slaves are not used, RMAN allocates its I/O buffers from the KSFQ heap subheap of the PGA, because without slaves there is no need for the buffers to be shared.
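
Whether the ksfq buffers are in fact being carved out of the large pool (or falling back to the shared pool) can be confirmed with a quick look at v$sgastat, for example:

select pool, name, round(bytes/1024/1024,1) as mb
from   v$sgastat
where  name like 'KSFQ%';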


Oracle Backup and Recovery: A Case of Slow RMAN Backups

A customer recently reported that RMAN full backups of several 10g databases to tape via NBU had become extremely slow. The largest of these databases is 2.61 TB; a month earlier its level-0 full backup took 3-4 hours, but recently it had shot up to about 17 hours. The customer had already raised a service request with Symantec without reaching a conclusion, and asked us to analyze the slowdown from the Oracle side.

We first looked at the backup-job dynamic view V$RMAN_BACKUP_JOB_DETAILS:

OUTPUT_DEVICE INPUT_TYPE ELAPSED_SECONDS INPUT_BYTES_DISPLAY INPUT_BYTES_PER_SEC OUTPUT_BYTES_PER_SEC
17 SBT_TAPE DB INCR 62078 2.61T 44.08M 18.10M

The view confirms that the full tape backup of the 2.61 TB database took 62078 seconds, and shows a read rate of about 44 MB/s and a write rate of about 18 MB/s during the backup. We cannot simply conclude from output rate < input rate that the write side is the bottleneck: reading the datafiles and writing the backup pieces to the backup media form a single pipeline, and CPU, input I/O or output I/O can each become the limiting factor. For example, if reading the datafiles is slow, the corresponding writes slow down as well; conversely, if the bandwidth between RMAN and the backup server is the bottleneck, the reads are forced to slow down too. To find out which link is the problem we have to turn to the other RMAN dynamic performance views, such as:

V$BACKUP_SYNC_IO
Displays rows when the I/O is synchronous to the process (or thread on some platforms) performing the backup.

V$BACKUP_ASYNC_IO
Displays rows when the I/O is asynchronous to the process (or thread on some platforms) performing the backup.

The difference between the two views is that one aggregates performance information for RMAN backup/restore operations performed with synchronous I/O, and the other for asynchronous I/O.

Because the customer uses the default disk_asynch_io=true, we first look at the input I/O statistics, which are in V$BACKUP_ASYNC_IO. The tape device, on the other hand, has no slave-simulated asynchronous I/O (tape_asynch_io=true but backup_tape_io_slaves left at its default of false), so the output I/O statistics for the tape device are in V$BACKUP_SYNC_IO.
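
The per-day figures in the charts below can be pulled with queries along the following lines (a sketch only: the aggregate rows are used, and the column lists in the charts were trimmed for width):

select device_type, open_time, close_time, elapsed_time,
       effective_bytes_per_second, io_count, ready,
       long_waits, long_wait_time_total, long_wait_time_max
from   v$backup_async_io
where  type = 'AGGREGATE'
order by open_time;

select device_type, open_time, close_time, elapsed_time, bytes,
       effective_bytes_per_second, io_count,
       io_time_total, io_time_max, discrete_bytes_per_second
from   v$backup_sync_io
where  type = 'AGGREGATE'
order by open_time;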

DEVICE OPEN_TIME ELAPSED BYTES/S IO_COUNT READY long_waits LONG_WAIT_TIME_TOTAL LONG_WAIT_TIME_MAX
DISK 4/4 388900 372827681 2765564 2074114 90028 231181 53
DISK 4/5 753900 192323498 2765564 2074114 90028 178548 41
DISK 4/6 369000 392934106 2765564 2074114 90028 243507 41
DISK 4/7 405100 357918255 2765564 2074114 90028 268117 73
DISK 4/8 347900 416765407 2765564 2074114 90028 183543 77
DISK 4/9 395800 366328159 2765564 2074114 90028 255399 48
DISK 4/10 428400 338451646 2765564 2074114 90028 268544 45
DISK 4/11 413200 350901949 2765564 2074114 90028 269857 42
DISK 4/12 735400 197161661 2765564 2074114 90028 169016 34
DISK 4/13 410000 353640696 2765564 2074114 90028 283607 60
DISK 4/14 408300 355113116 2765564 2074114 90028 279012 38
DISK 4/15 442700 327519054 2765564 2074114 90028 308744 605
DISK 4/16 393000 368938130 2765564 2074114 90028 251509 205
DISK 4/17 423100 342691291 2765564 2074114 90028 273868 42
DISK 4/18 375600 386029513 2765564 2074114 90028 230859 328
DISK 4/19 721200 201043657 2765564 2074114 90028 191753 162
DISK 4/20 401000 361577769 2765564 2074114 90028 272852 147
DISK 4/21 346600 418328578 2765564 2074114 90028 209569 109
DISK 4/22 400500 362029177 2765564 2074114 90028 265060 112
DISK 4/23 402400 360319794 2765564 2074114 90028 259008 594
DISK 4/24 449600 322492627 2765564 2074114 90028 274202 64
DISK 4/25 413900 350308493 2765564 2074114 90028 269380 230
DISK 4/26 748600 193685126 2765564 2074114 90028 177477 105
DISK 4/27 389900 371871468 2765564 2074114 90028 261200 38
DISK 4/28 383800 377781879 2765564 2074114 90028 241870 158
DISK 4/29 403700 359159488 2765564 2074114 90028 266135 212
DISK 4/30 390600 371205031 2765564 2074114 90028 248161 340
DISK 5/1 463600 312753851 2765564 2074114 90028 271247 39
DISK 5/2 419900 345302894 2765564 2074114 90028 255042 117
DISK 5/3 705700 205459381 2765564 2074114 90028 173043 189
DISK 5/4 418400 346540835 2765564 2074114 90028 291614 47
DISK 5/5 357400 405687424 2765564 2074114 90028 222676 188
DISK 5/6 421400 344073767 2765564 2074114 90028 285860 95
DISK 5/7 405100 357918255 2765564 2074114 90028 263761 38
DISK 5/8 381500 380059463 2765564 2074114 90028 203510 210
DISK 5/9 918400 157875311 2765564 2074114 90028 221293 37
DISK 5/10 3378600 42915020 2765564 2074114 90028 142211 36
DISK 5/11 559900 258961753 2765564 2074114 90028 252911 174
DISK 5/12 622500 232919976 2765564 2074114 90028 241495 40
DISK 5/13 626700 231359000 2765564 2074114 90028 237973 41
DISK 5/14 802000 180788884 2765564 2074114 90028 231283 42
DISK 5/15 601200 241172131 2765564 2074114 90028 209418 40
DISK 5/16 1387800 104476643 2765564 2074114 90028 211878 36

The time-related columns of interest here are ELAPSED (total input I/O time), LONG_WAIT_TIME_TOTAL (total time spent in long I/O waits) and LONG_WAIT_TIME_MAX (the longest single I/O wait), all expressed in hundredths of a second; see the view definition in the Oracle Reference for details.
From the chart above (only some columns are shown, because of width constraints) we can see that between 4/4 and 5/16, although the total input I/O time (ELAPSED) intermittently spiked as high as 33786 s, the other I/O metrics (total I/O count, READY I/O count, number of long I/O waits, total long-wait time, longest long wait) show no significant change. We can therefore essentially rule out a bottleneck in reading the datafiles during the backup; to diagnose further we need to analyze the I/O performance of the backup output:

DEVICE date ELAPSED BYTES BYTES/S IO_COUNT IO_TIME_TOTAL IO_TIME_MAX DISCRETE_BYTES_PER_SECOND
SBT_TAPE 4/5 754900 584663433216 77449123 2230314 440365 2600 132767916
SBT_TAPE 4/5 703900 553432907776 78623797 2111179 381135 5800 145206530
SBT_TAPE 4/12 736400 588200542208 79875142 2243807 433298 3400 135749655
SBT_TAPE 4/12 692300 556839731200 80433299 2124175 369237 2600 150808216
SBT_TAPE 4/19 722200 591873179648 81954193 2257817 395904 3400 149499166
SBT_TAPE 4/19 829000 561043210240 67677106 2140210 510746 2801 109847793
SBT_TAPE 4/26 749600 596010598400 79510485 2273600 435940 2600 136718493
SBT_TAPE 4/26 700300 565171191808 80704154 2155957 377019 2800 149905228
SBT_TAPE 5/3 706800 600177377280 84914739 2289495 396965 5800 151191510
SBT_TAPE 5/3 712500 569155518464 79881476 2171156 392324 5800 145072827
SBT_TAPE 5/10 3379700 604452159488 17884787 2305802 3093781 2802 19537652
SBT_TAPE 5/10 2798400 573396746240 20490164 2187335 2524296 2801 22715115
SBT_TAPE 5/17 428095307776 1633054 2216291 5800 19315844

The chart shows that up to 5/3 the total output I/O time was still within a reasonable range, (7068+7125) s for the two channels, about 4 hours. By 5/10 it had reached (33797+27984) s, roughly 17.2 hours. Looking at the other statistics, the IO_TIME_TOTAL for 5/10 adds up to (30937+25242) s; IO_TIME_TOTAL is the total time spent waiting for an I/O, in hundredths of a second. The DISCRETE_BYTES_PER_SECOND column likewise shows that the average transfer rate of the backup pieces dropped sharply on 5/10.

Putting these observations together, the main cause of the slow RMAN full backups in the customer's environment is a sharp drop in output I/O performance for the backup pieces over that period; the bottleneck lies between RMAN and the NBU backup server and has nothing to do with database read performance. Our advice to the customer was to verify the network bandwidth between the database server and the NBU server, the health of the NBU server itself, and the write performance of the tape library.

A later review of the NBU side found the root cause: the cache battery of the virtual tape library (VTL) storage had expired, so reads and writes went straight to disk, bypassing the cache, which slowed the full backups down. Because the battery-expiry information could only be confirmed by accessing the VTL storage directly over its serial port, it was not visible in the VTL management console when the problem first appeared.

The case reminds us that it is not only RAID-card battery learn cycles or expired storage UPS batteries that can cause severe I/O problems: an expired VTL cache battery can also make trouble for database backups. Keeping a system running 7x24 means watching a great many indicators; good operational standards matter, and much of the time we simply have to deal with hardware.

Illustrated Guide: Installing NetBackup 6.5 to Back Up and Restore an Oracle 10g RAC Database (Revised)

We test on Linux; the OS is Oracle Enterprise Linux 5.5 x86_64:
[root@nas servsoft]# cat /etc/issue
Enterprise Linux Enterprise Linux Server release 5.5 (Carthage)
Kernel \r on an \m

NetBackup 6.0 only supports Linux distributions on the 2.4 kernel; to use NBU on a 2.6-kernel distribution (which includes the mainstream RHEL 4/5, CentOS, etc.) you must install NetBackup 6.5 or later.
We have three hosts: rh1 (RAC node 2), rh2 (RAC node 1) and nas (the NBU server).
The first thing to install is the NetBackup server software. You will of course need the installation media; the latest version can be downloaded from the Veritas website. Once you have the media, unpack it:

[root@nas netbackup]# cp NetBackup_6.5_LinuxRedhat2.6.tar.gz /tmp

[root@nas tmp]# gunzip NetBackup_6.5_LinuxRedhat2.6.tar.gz

[root@nas tmp]# tar -xvf NetBackup_6.5_LinuxRedhat2.6.tar

Before starting the installation, make sure the xinetd service is running:

[root@nas tmp]# service xinetd status

xinetd (pid  2886) is running...

[root@nas NB_65_LinuxR_x86_20070723]# ./install

Do you want to install NetBackup and Media Manager files? [y,n] (y) y

NetBackup and Media Manager are normally installed in /usr/openv.

Is it OK to install in /usr/openv? [y,n] (y) y

Reading NetBackup files from /tmp/NB_65_LinuxR_x86_20070723/linuxR_x86/anb

...................

Enter the full path name to the directory where the

appropriate installics script is located followed by

a <Return> to continue. This script will then install

the package(s).

        OR

Enter q to stop this install and abort.

At this point we need to enter the directory containing the NetBackup ICS (Infrastructure Core Services) software. The package can also be downloaded from the Veritas website; unpack it:

[root@nas tmp]# cp NetBackup_6.5_ICS_LinuxX86.tar.gz /tmp

[root@nas tmp]# cd /tmp

[root@nas tmp]# gunzip NetBackup_6.5_ICS_LinuxX86.tar.gz

[root@nas tmp]# tar -xvf NetBackup_6.5_ICS_LinuxX86.tar

则此时ISC安装介质位于/tmp/NB_65_ICS_1.4.37.0_LinuxX86下,在原终端窗口中输入该目录

Enter q to stop this install and abort.

/tmp/NB_65_ICS_1.4.37.0_LinuxX86

Installing VRTSpbx...

A NetBackup Server or Enterprise Server license key is needed

for installation to continue.

Enter license key:

Continuing the installation, you are now asked for the license key you purchased. If you have not purchased the software but still want to try it, you can try the following string: DEX6-23FJ-T92R-O4O4-O4O4-K777-7777-EPXP-3XO.

Enter license key: DEX6-23FJ-T92R-O4O4-O4O4-K777-7777-EPXP-3XO

DEX6-23FJ-T92R-O4O4-O4O4-K777-7777-EPXP-3XO:

        NetBackup Enterprise Server Base product with all the features enabled

        has been registered.

All additional keys should be added at this time.

Do you want to add additional license keys now? [y,n] (y) n

Use /usr/openv/netbackup/bin/admincmd/get_license_key

to add, delete or list license keys at a later time.

Installing NetBackup Enterprise Server version: 6.5

If this machine will be using a different network interface than the

default (nas), the name of the preferred interface should be used

as the configured server name.  If this machine will be part of a

cluster, the virtual name should be used as the configured server name.

Would you like to use "nas" as the configured

name of the NetBackup server? [y,n] (y) y

Is nas the master server? [y,n] (y) y

Do you have any media servers? [y,n] (n) n

Checking /etc/services for the needed NetBackup and Media Manager services.

Copying original /etc/services file to /etc/services.NBU_062910.14:27:41

Editing /etc/services to update NetBackup and Media Manager services.

/etc/services will be updated to add the following entries for

NetBackup/Media Manager.

bpjobd  13723/tcp       bpjobd

vmd     13701/tcp       vmd

acsd    13702/tcp       acsd

tl8cd   13705/tcp       tl8cd

tldcd   13711/tcp       tldcd

odld    13706/tcp       odld

tl4d    13713/tcp       tl4d

tshd    13715/tcp       tshd

tlmd    13716/tcp       tlmd

tlhcd   13717/tcp       tlhcd

rsmd    13719/tcp       rsmd

...................

The NetBackup server software is now installed on host nas; next we configure the backup policy.

Add /usr/openv/netbackup/bin to your PATH so the NetBackup executables are easy to invoke;

then run the jnbSA command from a client with X11 forwarding (Xmanager, for example).

You may hit the error java.lang.UnsatisfiedLinkError: /usr/openv/java/jre/lib/i386/libawt.so: libXp.so, which is usually caused by the libXp package not being installed (install both the i386 and the x86_64 versions).
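
On OEL/RHEL 5 the PATH change and the missing libraries can be handled roughly as follows (the yum package names are an assumption for RHEL-family systems):

# add the NetBackup binaries to PATH
export PATH=$PATH:/usr/openv/netbackup/bin
# install both the 32-bit and the 64-bit libXp packages
yum install -y libXp.i386 libXp.x86_64
# then launch the admin console from an X11-forwarding session
jnbSA &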

If everything is installed correctly, running jnbSA brings up the following screen:

Next we define a Storage Unit. If, like me, you have no real tape drive, you can define an ordinary Disk-type storage unit: select NetBackup Management -> Storage -> Storage Unit, right-click New Storage Unit in the right-hand pane, give the storage unit a name and enter the storage directory:

Then click NetBackup Management -> Policies to define the backup policy used for the Oracle backups: launch the Backup Policy Configuration Wizard and choose Oracle as the policy type:

In the client list add the two RAC hosts to be backed up, rh1 and rh2; for hardware and operating system select Linux, RedHat2.6.

That completes the server-side configuration. Next we install the client software. Before installing, make sure you have the required media; for NBU 6.5 you need NetBackup_6.5_CLIENTS2.tar.gz and NetBackup_6.5_UnixOptions.tar.gz, the client software and the Oracle agent respectively.

[root@rh2 tmp]# gunzip  NetBackup_6.5_UnixOptions.tar.gz
[root@rh2 tmp]# gunzip  NetBackup_6.5_CLIENTS2.tar.gz
[root@rh2 tmp]# tar -xvf NetBackup_6.5_UnixOptions.tar
[root@rh2 tmp]# tar -xvf NetBackup_6.5_CLIENTS2.tar
[root@rh2 tmp]# cd NB_65_CLIENTS2_20070723/
[root@rh2 NB_65_CLIENTS2_20070723]# ./install

Symantec Installation Script
Copyright 1993 - 2007 Symantec Corporation, All Rights Reserved.

Installing NetBackup Client Software

NOTE:  To install NetBackup Server software, insert the appropriate
NetBackup Server cdrom.

Do you wish to continue? [y,n] (y) y
Do you want to install the NetBackup client software for this client? [y,n] (y) y

This package will install Linux/RedHat2.6 client.

This package will install NetBackup client 6.5.

Enter the name of the NetBackup server : nas

Would you like to use "rh2" as the configured
name of the NetBackup client? [y,n] (y) y
........................
File /usr/openv/tmp/install_trace.10994 contains a trace of this install.
That file can be deleted after you are sure the install was successful.

[root@rh2 tmp]# cd NB_65_UOptions_20070723/
[root@rh2 NB_65_UOptions_20070723]# ./install

Symantec Installation Script
Copyright 1993 - 2007 Symantec Corporation, All Rights Reserved.

Installation Options

1 NetBackup Add-On Product Software
2 NetBackup Database Agent Software

q To quit from this script
Choose an option [default: q]: 2

**********

There are two ways to install database agent software.

1.  Remote Installation:  Loads the software on a server with
the intent of pushing database software out to affected clients.

2.  Local Installation:   Loads and installs the software only to this
local machine.

**********

Do you want to do a local installation? [y,n] (n) y

**********

NetBackup Database Agent Installation

Choose the Database Agents you wish to install
one at a time or select Install All Database Agents.

1)  NetBackup for DB2
2)  NetBackup for Informix
3)  NetBackup for Lotus Notes
4)  NetBackup for Oracle
5)  NetBackup for SAP
6)  NetBackup for Sybase

7)  Install All Database Agents

q)  Done Selecting Agents
x)  Exit from this Script

Choose an option: 4

Choose an option: q

You have chosen to install these Database Agents:

NetBackup for Oracle

Is this list correct? [y,n] (y) y

**********

Of the agents selected, the following are supported
on this platform and will be installed:

Oracle

Loading the Database Agent packages into the
/usr/openv/netbackup/dbext directory and installing.

**********

Installing NetBackup for Oracle

Installing NetBackup for Oracle...
..........................
NetBackup for Oracle installation completed.

With the NBU client and the NetBackup for Oracle agent installed, we still need to link the MML media-management library: run /usr/openv/netbackup/bin/oracle_link as an account in the dba or oinstall group:
[root@rh2 NB_65_UOptions_20070723]# su - maclean
[maclean@rh2 ~]$ cd /usr/openv/netbackup/bin/
[maclean@rh2 bin]$ ./oracle_link
Tue Jun 29 19:22:28 EDT 2010
All Oracle instances should be shutdown before running this script.

Please log into the Unix system as the Oracle owner for running this script

Do you want to continue? (y/n) [n]
[maclean@rh2 bin]$ echo $ORACLE_HOME
/s01/rac10g
[maclean@rh2 bin]$ ./oracle_link
Tue Jun 29 19:22:35 EDT 2010
All Oracle instances should be shutdown before running this script.

Please log into the Unix system as the Oracle owner for running this script

Do you want to continue? (y/n) [n] y

LIBOBK path: /usr/openv/netbackup/bin
ORACLE_HOME: /s01/rac10g
Oracle version: 10.2.0.5.0
Platform type: x86_64
Linking LIBOBK:
ln -s /usr/openv/netbackup/bin/libobk.so64 /s01/rac10g/lib/libobk.so
Done

Next, run a backup test on host rh2:

[maclean@rh2 bin]$ rman target /

Recovery Manager: Release 10.2.0.5.0 - Production on Tue Jun 29 19:26:00 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: RACDB (DBID=720516428)

RMAN> run
2> { allocate channel c1 type sbt parms="ENV=(NB_ORA_SERV=nas,NB_ORA_POLICY=racdb,NB_ORA_CLIENT=rh2)";
3> backup current controlfile;
4> release channel c1;
5> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=136 instance=racdb1 devtype=SBT_TAPE
channel c1: Veritas NetBackup for Oracle - Release 6.5 (2007072323)

Starting backup at 29-JUN-10
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
including current control file in backupset
channel c1: starting piece 1 at 29-JUN-10
channel c1: finished piece 1 at 29-JUN-10
piece handle=03lhfi11_1_1 tag=TAG20100629T192729 comment=API Version 2.0,MMS Version 5.0.0.0
channel c1: backup set complete, elapsed time: 00:00:37
Finished backup at 29-JUN-10

released channel: c1

As shown above, the current control file was backed up successfully. Next, back up all archived logs and delete the originals:

RMAN> run
2> {
3> allocate channel c1 type sbt parms="ENV=(NB_ORA_SERV=nas,NB_ORA_POLICY=racdb)";
4> backup archivelog all delete input;
5> release channel c1;
6> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=136 instance=racdb1 devtype=SBT_TAPE
channel c1: Veritas NetBackup for Oracle - Release 6.5 (2007072323)

Starting backup at 29-JUN-10
channel c1: starting archive log backupset
channel c1: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=2 stamp=722901460
input archive log thread=1 sequence=2 recid=4 stamp=722901476
input archive log thread=1 sequence=3 recid=5 stamp=722901499
input archive log thread=1 sequence=4 recid=6 stamp=722904852
input archive log thread=2 sequence=1 recid=1 stamp=722901426
input archive log thread=2 sequence=2 recid=3 stamp=722901470
input archive log thread=2 sequence=3 recid=7 stamp=722904852
channel c1: starting piece 1 at 29-JUN-10
channel c1: finished piece 1 at 29-JUN-10
piece handle=06lhfjqr_1_1 tag=TAG20100629T195819 comment=API Version 2.0,MMS Version 5.0.0.0
channel c1: backup set complete, elapsed time: 00:00:46
channel c1: deleting archive log(s)
archive log filename=/arch/1_1_722899663.dbf recid=2 stamp=722901460
archive log filename=/arch/1_2_722899663.dbf recid=4 stamp=722901476
archive log filename=/arch/1_3_722899663.dbf recid=5 stamp=722901499
archive log filename=/arch/1_4_722899663.dbf recid=6 stamp=722904852
archive log filename=/arch/2_1_722899663.dbf recid=1 stamp=722901426
archive log filename=/arch/2_2_722899663.dbf recid=3 stamp=722901470
archive log filename=/arch/2_3_722899663.dbf recid=7 stamp=722904852
Finished backup at 29-JUN-10

Starting Control File and SPFILE Autobackup at 29-JUN-10
piece handle=c-720516428-20100629-01 comment=API Version 2.0,MMS Version 5.0.0.0
Finished Control File and SPFILE Autobackup at 29-JUN-10

released channel: c1
