Oracle Database 11g Release 2 released

The long-awaited 11g Release 2 has finally been unveiled. As the vanguard, the Linux platform will no doubt let the largest number of experts and users test the new release most widely.

Oracle even set up a dedicated domain for it: www.oracledatabase11g.com. Using a standalone domain for a new product is fairly unusual, and at the moment the domain's PageRank and Alexa ranking are both very low. At bottom this is probably just a trend: Blizzard's game titles, for example, each have their own domain, so we can guess that oracledatabase12[X].com and oracledatabase13[X].com will appear someday. Below is the banner Flash from the 11g home page.

In "Oracle announces availability of Oracle Database 11g Release 2", Oracle rolls out its usual marketing pitch and lists the following advantages: more mature grid computing to help enterprises cut costs, lower storage costs, less useless redundancy, and better self-tuning and scalability (a more powerful AWR and memory auto-tuning).

Those advantages are of course what we hope for, but reality is often harsh. As 11g matures and moves into wider use, DBAs will need to master its new features, as well as the new release's peculiar quirks; I trust we will get to experience them one by one...

Book recommendation: Secrets of the Oracle Database

The little secrets of the Oracle database, written by Norbert Debes.

Content-wise it is not the most internals-heavy book; in other words it is still useful for day-to-day administration, at least for the expert reader.

On my blog I have translated several of his sections explaining AUDIT_SYSLOG_LEVEL, and I will keep at it.

The book has been out for quite a while, but there is no sign of a Chinese edition; in practice DBA-topic books do not sell in large numbers anyway and remain a niche category.

The content is nonetheless quite valuable, especially the in-depth study of several parameters and the experiments it attempts with Perl.

Cover: (book cover image omitted)

Download link: Oracle Secrets.

For study purposes only...

A detailed look at the cluster_interconnects parameter

The following text is excerpted from a Metalink document:

This note attempts to clarify the cluster_interconnects parameter and the
platforms on which the implementation has been made. A brief explanation on
the workings of the parameter has also been presented in this note.
This is also one of the most frequently asked questions related to cluster and RAC
installations at most sites and forms a part of the prerequisites as well.

ORACLE 9I RAC – Parameter CLUSTER_INTERCONNECTS
———————————————–

FREQUENTLY ASKED QUESTIONS
————————–
November 2002

CONTENTS
——–
1.  What is the parameter CLUSTER_INTERCONNECTS for ?
2.  Is the parameter CLUSTER_INTERCONNECTS available for all platforms ?
3.  How is the Interconnect recognized on Linux ?
4.  Where could I find more information on this parameter ?
5.  How to detect which interconnect is used ?
6.  Cluster_Interconnects is mentioned in the 9i RAC administration
    guide as a Solaris specific parameter, is this the only platform
    where this parameter is available ?
7.  Are there any side effects for this parameter, namely affecting normal
    operations ?
8.  Is the parameter OPS_INTERCONNECTS which was available in 8i similar
    to this parameter ?
9.  Does Cluster_interconnect allow failover from one Interconnect to another
    Interconnect ?
10. Is the size of messages limited on the Interconnect ?
11. How can you see which protocol is being used by the instances ?
12. Can the parameter CLUSTER_INTERCONNECTS be changed dynamically during runtime ?

 
QUESTIONS & ANSWERS
——————-
1. What is the parameter CLUSTER_INTERCONNECTS for ?

Answer
——
This parameter is used to influence the selection of the network interface
for Global Cache Service (GCS) and Global Enqueue Service (GES) processing.

This note does not compare the other elements of 8i OPS with 9i RAC
because of substantial differences in the behaviour of both architectures.
Oracle 9i RAC has certain optimizations which attempt to transfer most of
the information required via the interconnects so that the number of disk
reads are minimized. This behaviour known as Cache fusion phase 2 is summarised
in Note 139436.1
The definition of the interconnect is a private network which
will be used to transfer the cluster traffic and Oracle Resource directory
information and blocks to satisfy queries. The technical term for that is
cache fusion.

The CLUSTER_INTERCONNECTS should be used when
- you want to override the default network selection
- bandwidth of a single interconnect does not meet the bandwidth requirements of
  a Real Application Cluster database

The syntax of the parameter is:

CLUSTER_INTERCONNECTS = if1:if2:…:ifn
Where if<n> is an IP address in standard dotted-decimal format, for example,
144.25.16.214. Subsequent platform implementations may specify interconnects
with different syntaxes.
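For illustration only, a minimal sketch of what the setting might look like when using an spfile (the private addresses and instance names below are hypothetical; per question 12 the parameter is static, so SCOPE=SPFILE is required and the change takes effect at the next restart):

ALTER SYSTEM SET cluster_interconnects = '192.168.10.1' SCOPE=SPFILE SID='RAC1';
ALTER SYSTEM SET cluster_interconnects = '192.168.10.2' SCOPE=SPFILE SID='RAC2';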
2. Is the parameter CLUSTER_INTERCONNECTS available for all platforms ?

Answer
——

This parameter is configurable on most platforms.
On Linux it cannot be used before 9.2.0.8 (see question 3).

The following Matrix shows when the parameter was introduced on which platform:

Operating System    Available since
AIX                   9.2.0
HP/UX                 9.0.1
HP Tru64              9.0.1
HP OpenVMS            9.0.1
Sun Solaris           9.0.1

References
———-
Bug <2119403> ORACLE9I RAC ADMINISTRATION SAYS CLUSTER_INTERCONNECTS IS SOLARIS ONLY.
Bug <2359300> ENHANCE CLUSTER_INTERCONNECTS TO WORK WITH 9I RAC ON IBM
3.  How is the Interconnect recognized on Linux ?

Answer
——
Since Oracle9i 9.2.0.8, CLUSTER_INTERCONNECTS can be used to change the interconnect.
A patch is also available for 9.2.0.7 under Patch 4751660.
Before 9.2.0.8 the Oracle implementation for the interface selection reads the ‘private hostname’
in the cmcfg.ora file and uses the corresponding ip-address for the interconnect.
If no private hostname is available the public hostname will be used.
4.  Where could I find information on this parameter ?

Answer
——

The parameter is documented in the following books:
Oracle9i Database Reference Release 2 (9.2)
Oracle9i Release 1 (9.0.1) New Features in Oracle9i Database Reference –
                   What’s New in Oracle9i Database Reference?
Oracle9i Real Application Clusters Administration Release 2 (9.2)
Oracle9i Real Application Clusters Deployment and Performance Release 2 (9.2)

Also port specific documentation may contain information about the usage of
the cluster_interconnects parameter.

Documentation can be viewed on
    http://tahiti.oracle.com
    http://otn.oracle.com/documentation/content.html
References:
———–
Note 162725.1: OPS/RAC VMS: Using alternate TCP Interconnects on 8i OPS
               and 9i RAC on OpenVMS

Note 151051.1: Init.ora Parameter “CLUSTER_INTERCONNECTS” Reference Note

5. How to detect which interconnect is used ?
    The following commands show which interconnect is used for UDP or TCP:
    sqlplus> connect / as sysdba
             oradebug setmypid
             oradebug ipc
             exit

    The corresponding trace can be found in the user_dump_dest directory and for
    example contains the following information in the last couple of lines:

           SKGXPCTX: 0x32911a8 ctx
           admno 0x12f7150d admport:
           SSKGXPT 0x3291db8 flags SSKGXPT_READPENDING     info for network 0
                 socket no 9     IP 172.16.193.1         UDP 43307
                 sflags SSKGXPT_WRITESSKGXPT_UP
                 info for network 1
                 socket no 0     IP 0.0.0.0      UDP 0
                 sflags SSKGXPT_DOWN
           context timestamp 0x1ca5
                 no ports
   Please note that on some platforms and versions (Oracle9i 9.2.0.1 on Windows)
   you might see an ORA-70 when the command oradebug ipc has not been
   implemented.

   When  other protocols such as LLT, HMP or RDG are used, then the trace file will not
   reveal an IP address.
6.  Cluster_Interconnects is mentioned in the 9i RAC administration
    guide as a Solaris specific parameter, is this the only platform
    where this parameter is available ?

Answer
—– 

The information that this parameter works on Solaris only is incorrect. Please
check the answer to question 2 for the complete list of platforms.

References:
———–
bug <2119403> ORACLE9I RAC ADMINISTRATION SAYS CLUSTER_INTERCONNECTS IS SOLARIS ONLY.
7.  Are there any side effects for this parameter, namely affecting normal
    operations ?

Answer
—–
When you set CLUSTER_INTERCONNECTS in cluster configurations, the
interconnect high availability features are not available. In other words,
an interconnect failure that is normally unnoticeable would instead cause
an Oracle cluster failure as Oracle still attempts to access the network
interface which has gone down. Using this parameter you are explicitly
specifying the interface or list of interfaces to be used.
 

8.  Is the parameter OPS_INTERCONNECTS which was available in 8i similar
    to this parameter ?

Answer
——
Yes, the parameter OPS_INTERCONNECTS was used to influence the network selection
for the Oracle 8i Parallel Server.

Reference
———
Note <120650.1> Init.ora Parameter “OPS_INTERCONNECTS” Reference Note
9.  Does Cluster_interconnect allow failover from one Interconnect to another
    Interconnect ?

Answer
——
Failover capability is not implemented at the Oracle level. In general this
functionality is delivered by hardware and/or Software of the operating system.
For platform details please see Oracle platform specific documentation
and the operating system documentation.
10. Is the size of messages limited on the Interconnect ?

Answer
——
The message size depends on the protocol and platform.
UDP: In Oracle9i Release 2 (9.2.0.1) the message size for UDP was limited to 32K.
     Oracle9i 9.2.0.2 allows bigger UDP message sizes to be used, depending on the
     platform. To increase throughput on an interconnect you have to adjust
     UDP kernel parameters.
TCP: There is no need to set the message size for TCP.
RDG: The recommendations for RDG are documented in
        Oracle9i Administrator’s Reference – Part No. A97297-01
References
———-
Bug <2475236> RAC multiblock read performance issue using UDP IPC
11. How can you see which protocol is being used by the instances ?

Answer
——
Please see the alert-file(s) of your RAC instances. During startup you’ll
   find a message in the alert-file that shows the protocol being used.

      Wed Oct 30 05:28:55 2002
      cluster interconnect IPC version:Oracle UDP/IP with Sun RSM disabled
      IPC Vendor 1 proto 2 Version 1.0
12. Can the parameter CLUSTER_INTERCONNECTS be changed dynamically during runtime ?

Answer
——
    No. Cluster_interconnects is a static parameter and can only be set in the
    spfile or pfile (init.ora)

Using Oracle RMAN Scripts

Why use scripts?

Why use RMAN command scripts? There are two main reasons:

  1. The vast majority of RMAN operations are batch operations and are automated. Backing up a database, for example, is a repetitive task, not something you want to hand-write commands for on every run.
  2. Scripts provide consistency. Ad-hoc, one-off operations such as restoring a database from backup generally do not lend themselves to automation; even so, the operation itself is the same regardless of the environment the DBA is working in.

There are two kinds of scripts in Oracle Database 11g:

  1. Command files: plain text files (flat files) on the file system.
  2. Stored scripts: scripts stored in the Oracle recovery catalog and invoked from the RMAN command line.

Command files

An Oracle command file is an interpreted text file, similar to a shell script on UNIX or a batch (BAT) file on Windows. Listing 1 shows a very simple example that backs up the USERS tablespace. The .rman extension is not required, but it helps make the file's purpose clear.

Listing 1: Script to back up the USERS tablespace

connect target /

connect catalog rman/secretpass@rmancat

run {

allocate channel c1 type disk format '/orabak/%U';

backup tablespace users;

}

You can invoke the script in several ways, for example from the RMAN prompt:

RMAN> @backup_ts_users.rman

Note the @ symbol used for the invocation.

You can also invoke the script from the shell, for example:

rman @backup_ts_users.rman

This form is quite handy; if you would rather not use the @ symbol, use the following instead:

rman cmdfile=backup_ts_users.rman

Note that the CONNECT clauses are placed inside the backup_ts_users.rman command file, so there is no need to supply a username and password again, and there is no risk of exposing the password.

Passing parameters: the backup_ts_users.rman command file works well, but it is too rigid. It writes the backup to one specific directory and backs up only one tablespace (USERS). If you want to back up to another location, or another tablespace, you have to create a new script.

A better strategy is a parameter-driven RMAN script. Compared with hard-coding, passing parameters makes the script far more flexible. Listing 2 shows the modified backup_ts_users.rman script.

Listing 2: Parameter-driven script

connect target /

connect catalog rman/secretpass@rmancat

run {

allocate channel c1 type disk format '&1/%U';

backup tablespace &2;

}

Another shell script, named backup_ts_generic.sh, calls the script above, passing /tmp as the backup directory and USERS as the tablespace to back up.

$ORACLE_HOME/bin/rman <<EOF

@backup_ts_generic.rman "/tmp" USERS

EOF

You can make this even more flexible by changing the second line of backup_ts_generic.sh to:

@backup_ts_generic.rman "/tmp" $1

You can then pass the tablespace name as a parameter to the shell script, with /tmp fixed as the backup directory. For example, to back up the MYTS1 tablespace:

backup_ts_generic.sh MYTS1

Logging: when an RMAN script is run by some automated mechanism, nobody is actually watching the command window, so how do we get at RMAN's output? That output is essential for knowing how the run went, and to capture it we can use the log option:

rman cmdfile=backup_ts_users.rman log=backup_ts_users.log

Now the output of the backup_ts_users.rman script is written to a file named backup_ts_users.log instead of appearing on the screen.
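When such a script is driven from a scheduler, it also helps to date-stamp the log file so earlier runs are not overwritten; a minimal sketch (shell syntax, file names illustrative):

# keep one log per day for the unattended run
rman cmdfile=backup_ts_users.rman log=backup_ts_users_`date +%Y%m%d`.log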

Stored scripts

Command files cover most situations, but they have their drawbacks. They always live on the server, which is not particularly secure: anyone with read access to the file can obtain passwords for accounts such as SYS.

The solution is RMAN stored scripts, which are kept in the Oracle recovery catalog rather than directly on the server. Listing 3 shows how a stored script is invoked and created.

Listing 3: Invoking and creating a stored script

RMAN> run { execute script

backup_ts_users; }

C:\> rman

RMAN> connect target /

RMAN> connect catalog rman/secretpass@rmancat

RMAN> create script backup_ts_users

2> comment 'Tablespace Users Backup'

3> {

4>      allocate channel c1 type disk format 'c:\temp\%U';

5>      backup tablespace users;

6> }

In Listing 3 the stored script backup_ts_users is visible only when you are connected to that target database. Stored scripts can also take parameters, as shown in Listing 4.

Listing 4: Parameter-driven stored script

RMAN> create script backup_ts_any

2> comment 'Any Tablespace Backup'

3> {

4>      allocate channel c1 type disk format 'c:\temp\%U';

5>      backup tablespace &1;

Enter value for 1: users

users;

6> }

7>

To invoke a parameter-driven stored script we use the USING clause. For example, to back up the SYSTEM tablespace with the stored script:

run { execute script

backup_ts_any using 'SYSTEM'; }

Managing stored scripts: RMAN provides the commands needed to manage stored scripts.

The following command lists the stored scripts:

RMAN> list script names;

List of Stored Scripts in Recovery Catalog

Scripts of Target Database ARUPLAP1

Script Name

Description

————

backup_ts_any

Any Tablespace Backup

backup_ts_users

Tablespace Users Backup

The command above lists both local and global script names.

To list only the global names, use:

RMAN> list global script names;

To display the contents of a particular script (here a global script), use:

RMAN> print global script

backup_ts_level1_any;

To delete a stored script:

RMAN> delete global script

backup_ts_level1_any;

You can also create a stored script from a command file, as follows:

RMAN> create script backup_ts_users

from file 'backup_ts_users.rman';

And you can write a stored script back out to a flat command file:

RMAN> print script backup_ts_users

to file 'backup_ts_users.rman';

RMAN backup validation

The "backup validate database" syntax in RMAN scans the database for physical corruption; no backup set is actually produced during the validation.

If more thorough checking is needed, the check logical option of the backup command makes the run perform logical corruption checks as well, for example:

backup validate check logical database;

In this example RMAN only validates the database (including the logical checks) and produces no actual backup set.

Note that if you want the run to keep going as long as the number of corrupt blocks stays within a given limit, you need to set maxcorrupt, as follows:

run {

set maxcorrupt for datafile 1,2,3,4 to 10;

backup validate check logical database;

}
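Any corruption found by a validate run is recorded in the V$DATABASE_BLOCK_CORRUPTION view (10g and later), so a quick follow-up check could look like this (a minimal sketch):

-- list block ranges flagged as corrupt by the validate run
select file#, block#, blocks, corruption_type
from v$database_block_corruption;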

A follow-up note on supplemental logging in Oracle

In the previous post introducing Oracle supplemental logging I left out the relationship between minimal supplemental logging and table-level supplemental logging: table-level supplemental logging only takes effect when minimal supplemental logging is enabled. That is, if the database has never enabled minimal supplemental logging, or it was dropped earlier with drop supplemental log data, then even when table-level supplemental logging is specified, the rows described in the redo stream still record only the ROWID and the affected column values.

The command to enable minimal supplemental logging is:

Alter database add supplemental log data;

Next, as described before, if a table has too many columns (more than 200) you should check the dba_logstdby_not_unique view, which records the tables in the primary database that do not have a primary key or a unique index on NOT NULL columns. For example, with the following SQL:

select owner, table_name, bad_column

from dba_logstdby_not_unique

where table_name not in

(select table_name from dba_logstdby_unsupported);

OWNER     TABLE_NAME    BAD_COLUMN
TSMSYS    SRS$          Y
HTEST     TEST          N
HGET      GETMAXID      N
HGET      HUSER         N
SCOTT     BONUS         N
SCOTT     SALGRADE      N

The bad_column column is the key one. A value of Y means that a column in the table is defined with a large data type such as CLOB or BLOB. SQL Apply will try to maintain such tables, but you must ensure that the remaining columns uniquely identify each row; that is, if two rows are identical except for the LOB column, changes to that table cannot be applied on the logical standby and SQL Apply will stop. A value of N means the table contains enough column information to be maintained on the logical standby.

Tables with many columns (more than 200) that cannot be given a primary key or a unique index on NOT NULL columns, as described earlier, deserve special attention. In practice, though, it is quite difficult to find out how much redo a given segment generates over a period of time (method: check how much redo is generated by one segment).

Known approaches such as the LogMiner tool, dumping redo logs, and oradebug do not provide enough information for such statistics.

The only workable method is to estimate with LogMiner: the rbablk and rbabyte columns in v$logmnr_contents are the block offset (a redo log block is 512 bytes) and the byte offset within the redo log. By computing the differences between successive positions and combining them with the data_obj# column, we can roughly estimate the redo generated against a segment over a period of time:

create table redo_analysis nologging as

select data_obj#,  oper, rbablk*512 + rbabyte curpos,

lead(rbablk*512+rbabyte,1,0) over (order by  rbasqn, rbablk, rbabyte)

nextpos

from

( select distinct data_obj#,  operation oper,

rbasqn, rbablk, rbabyte from v$logmnr_contents

order by rbasqn, rbablk, rbabyte );

select data_obj#, oper, obj_name, sum(redosize) total_redo

from

(

select data_obj#, oper, obj.name obj_name , nextpos-curpos-1 redosize

from redo_analysis redo1, sys.obj$ obj

where (redo1.data_obj# = obj.obj# or  redo1.data_obj# = obj.dataobj#)

and nextpos != 0 -- For the boundary condition

union all

select data_obj#, oper, 'internal ' , nextpos-curpos  redosize

from redo_analysis redo1

where  redo1.data_obj#=0 and  redo1.data_obj# = 0

and nextpos!=0

)

group by data_obj#, oper, obj_name

order by 4

This estimate is not precise; when logs are switched manually (switch logfile) or in other special situations the error can be significant.
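The queries above assume v$logmnr_contents has already been populated. A minimal sketch of loading one archived log with LogMiner follows; the log file path is hypothetical:

begin
  dbms_logmnr.add_logfile(
    logfilename => '/arch/1_100_123456789.dbf',        -- hypothetical archived log
    options     => dbms_logmnr.new);
  dbms_logmnr.start_logmnr(
    options => dbms_logmnr.dict_from_online_catalog);  -- use the online dictionary
end;
/
-- ... run the redo_analysis queries against v$logmnr_contents here ...
exec dbms_logmnr.end_logmnr;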

A brief guide to managing and using the Oracle recovery catalog

I. Using a recovery catalog to store RMAN backup records

  1. Oracle recommends placing the recovery catalog in its own dedicated database. If the catalog is mixed in with other data in some database and that database fails, the catalog is lost with it, which makes recovery extremely difficult.
  2. Recording a database in the recovery catalog is called registration. Multiple target databases can be registered in one catalog: for example, you can register databases prod1, prod2, and prod3 in a single catalog owned by user catowner in a database called catdb. RMAN tells the databases apart by DBID, the database's identity card, and every target registered in the catalog has a unique DBID (see the query sketch after this list).
  3. The recovery catalog mainly holds the following information about RMAN usage:

- Backup sets and backup pieces of data files and archived logs

- Image copies of data files

- Archived logs and their copies

- Tablespaces and data files of the target database

- Stored scripts

- Persistent RMAN configuration settings
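As item 2 notes, RMAN tells registered targets apart by DBID; a quick way to look up a target's DBID before registering it (a minimal sketch):

SQL> select dbid, name from v$database;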

  4. The recovery catalog keeps the important RMAN operational metadata from the target database's control file. Resynchronizing the catalog keeps it in step with the current information in the control file.
  5. Whenever a full resynchronization is needed, RMAN creates a snapshot control file, a temporary control file. The snapshot guarantees a consistent read while RMAN synchronizes. The database server process ensures that only one snapshot control file exists at a time, which is necessary to keep RMAN operations from being disturbed by other processes.
  6. Losing the recovery catalog causes serious recovery problems. The catalog database can be backed up the same way as any ordinary database.
  7. As for recovery catalog compatibility, you can query the rcver table in the catalog owner's schema to see the versions of the clients that have used the catalog, for example:
SQL> SELECT * FROM rcver;

VERSION
------------
08.01.05.00
09.00.01.00
10.02.01.00

As long as everything involved is 8i or later, there are generally no compatibility problems.

II. Managing the recovery catalog

Creating the recovery catalog

Managing target database records in the recovery catalog

Resynchronizing the recovery catalog

Control file management when a recovery catalog is used

Backing up the recovery catalog

Importing and exporting the recovery catalog

Improving recovery catalog availability

Querying recovery catalog views

Upgrading the recovery catalog

Dropping the recovery catalog

  1. Creating the recovery catalog. This is done in three steps:
  • Configure the database that will host the recovery catalog
  • Create the recovery catalog owner
  • Create the recovery catalog itself

Configuring the recovery catalog database

When a recovery catalog is used, RMAN requires a schema to hold it. The catalog is stored in that schema's default tablespace; note that SYS cannot be the catalog owner. We strongly recommend running the catalog database in archivelog mode. You must also allocate enough space to the catalog schema; how much it needs depends on the number of target databases using the catalog, so plan the capacity of the catalog database appropriately, and make sure the catalog database and the target databases do not share the same disks.

Creating the catalog owner

Once the catalog database is properly configured, we create the catalog owner: log in to the catalog database with the SYS account, assume there is a tools tablespace to hold the catalog, and use the temp tablespace as the user's default temporary tablespace. The steps are as follows:

     CONNECT SYS/oracle@catdb AS SYSDBA
 SQL> CREATE USER rman IDENTIFIED BY cat
       TEMPORARY TABLESPACE temp
       DEFAULT TABLESPACE tools
    QUOTA UNLIMITED ON tools;

We also grant the recovery_catalog_owner role to the user; this role carries the privileges needed to create and manage a recovery catalog.

   SQL> GRANT RECOVERY_CATALOG_OWNER TO rman;

Creating the recovery catalog

After the catalog user has been created, build the recovery catalog with RMAN as follows:

$ rman

RMAN> CONNECT CATALOG rman/cat@catdb   -- connect to the catalog database as the catalog owner

RMAN> create catalog                   -- create the recovery catalog

You can of course specify which tablespace to use:

RMAN> create catalog  tablespace users;

Once the catalog has been created successfully, you can query the base tables that now exist in the catalog owner's schema:

SQL> select table_name from user_tables;

2. Managing target database records in the recovery catalog

- Registering a target database in the recovery catalog

- Unregistering a target database from the recovery catalog

- Resetting a database in the recovery catalog

- Removing deleted records from the recovery catalog

Registering a target database in the recovery catalog

First make sure the catalog database is open, then log in from the target database host:

$ rman TARGET / CATALOG rman/cat@catdb

If the target database is not started, first start it in mount mode:

RMAN> STARTUP MOUNT;

Register the target database:

RMAN> REGISTER DATABASE;

RMAN automatically records the target database's information in the recovery catalog, copying the metadata from the target's control file into the catalog. You can confirm the registration with:

RMAN> REPORT SCHEMA;

Report of database schema
File Size(MB)   Tablespace       RB segs Datafile Name
---- ---------- ---------------- ------- -------------------
1        307200 SYSTEM             NO    /oracle/oradata/trgt/system01.dbf
2         20480 UNDOTBS            YES   /oracle/oradata/trgt/undotbs01.dbf
3         10240 CWMLITE            NO    ...

在恢复目录中登记备份文件

若有备份文件未在控制文件或恢复目录中存在对应的记录,则需要登记该文件,此处的(control file 为目标数据库control file)。

示例:

RMAN> CATALOG DATAFILECOPY '/disk1/old_datafiles/01_01_2003/users01.dbf';
RMAN> CATALOG ARCHIVELOG '/disk1/arch_logs/archive1_731.dbf',
     '/disk1/arch_logs/archive1_732.dbf';
RMAN> CATALOG BACKUPPIECE '/disk1/backups/backup_820.bkp';

Registering multiple target databases in one recovery catalog

Multiple target databases can be registered in a single recovery catalog, provided their DBIDs are unique.

Unregistering a target database from the recovery catalog

Use the unregister database command in RMAN to unregister a target database. When a database is unregistered, all of its RMAN records are lost, so proceed with care.
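A minimal sketch of unregistering a target, reusing the connection details from earlier in this article (RMAN prompts for confirmation before removing the records):

% rman TARGET / CATALOG rman/cat@catdb
RMAN> UNREGISTER DATABASE;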

Removing deleted records from the recovery catalog

In 9i and later releases, when RMAN deletes a backup file it also deletes the corresponding record in the recovery catalog; in releases before 9i the record is only marked as DELETED. Those records can be removed by running the prgrmanc.sql script, which lives under $ORACLE_HOME/rdbms/admin. For example:

% sqlplus rman/cat@catdb
SQL> @?/rdbms/admin/prgrmanc.sql    -- purge records of deleted backups

Resynchronizing the recovery catalog

When the recovery catalog lags behind the backup information in the database control file, it needs to be resynchronized. This only happens when the catalog is used for a while and then not used for a while, creating a gap between the two. RMAN resynchronizes automatically during certain operations, such as the backup command, and you can also resynchronize manually with: resync catalog.

Managing the control file

The CONTROL_FILE_RECORD_KEEP_TIME initialization parameter determines the minimum number of days before a record in the control file can be reused, so make sure the recovery catalog is resynchronized within that window; otherwise records may be aged out of the control file and you will have to catalog the backup files manually. Resynchronize regularly within the CONTROL_FILE_RECORD_KEEP_TIME window.
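A minimal sketch of checking the retention window and forcing a manual synchronization (connection details as above):

SQL> show parameter control_file_record_keep_time

% rman TARGET / CATALOG rman/cat@catdb
RMAN> RESYNC CATALOG;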

Backing up the recovery catalog

Backing up the recovery catalog database is very important: if it is lost, all the backup information is lost with it, making recovery very difficult.

Backing up the catalog database is not much different from backing up any other database. Points to note (a sketch putting them together follows this list):

- The catalog database should run in archivelog mode

- Use a backup retention policy with a redundancy greater than one

- Back up to different media

- Do not use a recovery catalog to record these backups

- Use control file autobackup, which RMAN can handle automatically
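A minimal sketch putting those points together, run against the catalog database itself as the RMAN target with no recovery catalog (the password and retention values are illustrative):

% rman TARGET sys/oracle@catdb NOCATALOG

RMAN> configure retention policy to redundancy 2;
RMAN> configure controlfile autobackup on;
RMAN> backup database plus archivelog;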

Structure diagram: (recovery catalog architecture figure omitted)

Upgrading the recovery catalog

If the version of your recovery catalog is lower than that of the RMAN client you are using, you need to upgrade the catalog. For example, if you are using an 8.1 RMAN client against an 8.0 catalog, an upgrade is required. If the catalog version is higher than the client's, UPGRADE CATALOG returns an error. An example upgrade looks like this:

sqlplus> connect sys/oracle@catdb as sysdba;
sqlplus> grant CREATE TYPE to rman;
% rman TARGET / CATALOG rman/cat@catdb
UPGRADE CATALOG;

recovery catalog owner is rman

enter UPGRADE CATALOG command again to confirm catalog upgrade

UPGRADE CATALOG;

recovery catalog upgraded to version 09.02.00
DBMS_RCVMAN package upgraded to version 09.02.00
DBMS_RCVCAT package upgraded to version 09.02.00






Dropping the recovery catalog

When the recovery catalog is no longer needed, its structures and data can be removed completely from the hosting database. Dropping it loses all of the registered backup information, so proceed with care. Example:

% rman TARGET / CATALOG rman/cat@catdb

Issue the DROP CATALOG command twice to confirm:

DROP CATALOG;

recovery catalog owner is rman
enter DROP CATALOG command again to confirm catalog removal

DROP CATALOG;

An introduction to Oracle supplemental logging

Oracle's supplemental logging feature comes in several forms depending on what it covers: minimal, all columns, primary key, unique key, and foreign key. Column types such as LONG, LOB, LONG RAW, and collections cannot benefit from supplemental logging.

With minimal supplemental logging enabled, LogMiner can handle chained rows, clustered tables, and index-organized tables. You can check whether minimal supplemental logging is enabled with the following SQL:

SELECT supplemental_log_data_min FROM v$database;

If the result is YES or IMPLICIT, minimal supplemental logging is enabled. When ALL, PRIMARY KEY, UNIQUE, or FOREIGN KEY supplemental logging is in use, minimal supplemental logging is enabled implicitly (the check then returns IMPLICIT).

Normally we enable primary key and unique key supplemental logging when using a logical standby, but sometimes a table has no primary key, unique key, or unique index. The experiments below summarize how Oracle behaves in that situation.

First create the test table:

alter database add supplemental log data (primary key,unique index) columns ;

create table test (t1 int , t2 int ,t3 int ,t4 int );

alter table test add constraint pk_t1 primary key (t1); -- add the primary key

Then insert a certain amount of data in a loop.

update test set t2=10;       commit;   -- update the data

Analyzing these operations with LogMiner, the SQL recorded in the redo looks like this:

update "SYS"."TEST" set "T2" = '10' where "T1" = '64' and "T2" = '65' and ROWID = 'AAAMiSAABAAAOhiAA/';

The WHERE clause records the primary key value, the old value of the modified column, and the ROWID of the row.

Now let's drop the primary key and observe.

alter table test drop constraint pk_t1 ;

update test set t2=11;       commit;   -- update the data

LogMiner analysis shows the SQL recorded in the redo as follows:

update "SYS"."TEST" set "T2" = '11' where "T1" = '1' and "T2" = '10' and "T3" = '3' and "T4" = '4' and ROWID = 'AAAMiSAABAAAOhiAAA';

With no primary key, the WHERE clause records every column value plus the ROWID.

Next, the behavior when a unique index exists:

create unique index pk_t1 on test(t1);

update test set t2=15; commit;

LogMiner analysis shows the SQL recorded in the redo as follows:

update "SYS"."TEST" set "T2" = '15' where "T1" = '9' and "T2" = '11' and "T3" = '11' and "T4" = '12' and ROWID = 'AAAMiSAABAAAOhiAAI';

That was with a unique index on t1 but no NOT NULL constraint; now let's add the NOT NULL constraint:

alter table test modify t1 not null;

update test set t2=21; commit;

LogMiner analysis shows the SQL recorded in the redo as follows:

update "SYS"."TEST" set "T2" = '21' where "T1" = '2' and "T2" = '15' and ROWID = 'AAAMiSAABAAAOhiAAB';

As the SQL above shows, with only a unique index the WHERE clause still records all column values and the ROWID; with a unique index plus a NOT NULL constraint the behavior matches the primary key case.

When a table has many columns and no primary key or unique index with a NOT NULL constraint, enabling supplemental logging can increase the total volume of redo dramatically.

First create a table with 250 columns:

Drop table test;

create table test (

t1 varchar2(5),

t2 varchar2(5),

t3 varchar2(5),

t4 varchar2(5),  ... t250 varchar2(5));

insert into test values ('TEST','TEST', ...);   commit;   -- populate all 250 columns

alter database drop supplemental log data (primary key,unique index) columns;  -- disable supplemental logging

set autotrace on;

update test set t2='BZZZZ' where t1='TEST'; commit;

The autotrace statistics show that this update generated 516 bytes of redo.

alter database add supplemental log data (primary key,unique index) columns;  -- re-enable supplemental logging

update test set t2='FSDSD' where t1='TEST';

The trace now shows 3044 bytes of redo.
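The redo figures above come from SQL*Plus autotrace; the same number can also be read from the session's own statistics, for example (a minimal sketch):

-- redo generated so far by the current session
select n.name, s.value
from v$mystat s, v$statname n
where s.statistic# = n.statistic#
and n.name = 'redo size';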

Supplemental logging can also be classified by scope: database level and table level. Table-level supplemental logging can in turn be conditional or unconditional. Conditional table-level supplemental logging takes effect only when specific columns are updated; it is rarely used, so we will not discuss it here.

Let's observe how unconditional table-level supplemental logging behaves:

alter database drop supplemental log data (primary key,unique index) columns;

alter table test add supplemental log data (primary key,unique index) columns;

update test set t2='ZZZZZ'; commit;

Viewing the SQL in the redo with LogMiner:
update "SYS"."TEST" set "T2" = 'ZZZZZ' where "T1" = 'TEST' and "T2" = 'AAAAA' and "T3" = 'TEST' ...

The WHERE clause again contains every column value.

delete test; commit;

Viewing the SQL in the redo with LogMiner:

delete from "SYS"."TEST" where "T1" = 'TEST' and "T2" = 'ZZZZZ' and "T3" = 'TEST' and "T4" = 'TEST' and "T5" ...

The DELETE likewise carries every column value in its WHERE clause.

We can instead define a supplemental log group on selected columns to cut down the column values that appear after the WHERE clause.

alter table test drop supplemental log data (primary key,unique index) columns;  -- drop the table's previous supplemental logging

alter table test add supplemental log group test_lgp (t1 ,t2,t3,t4,t5,t6,t12,t250) always; -- create the supplemental log group

update test set t2='XXXXX' ; commit;

Viewing the SQL in the redo with LogMiner:

update "SYS"."TEST" set "T2" = 'XXXXX' where "T1" = 'TEST' and "T2" = 'TEST' and "T3" = 'TEST' and "T4" = 'TEST' and "T5" = 'TEST' and "T6" = 'TEST' and "T12" = 'TEST' and "T250" = 'TEST' and ROWID = 'AAAMieAABAAAOhnAAA';

As shown above, the redo now records exactly the columns specified in the log group for the UPDATE.

delete test;

Viewing the SQL in the redo with LogMiner:

delete from "SYS"."TEST" where "T1" = 'TEST' and "T2" = 'XXXXX' and "T3" = 'TEST' ...

The DELETE still keeps all column values in the redo.

For tables with many columns, if a set of columns can guarantee uniqueness and non-nullness (an application-level primary key, in effect), defining a supplemental log group on those columns reduces the redo generated by UPDATE operations; for DELETE operations it brings no real improvement.
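The log group defined above can be verified from the data dictionary, for example (a minimal sketch; the test table here is owned by SYS, as in the experiments above):

select log_group_name, table_name, always
from dba_log_groups
where owner = 'SYS' and table_name = 'TEST';

select log_group_name, column_name, position
from dba_log_group_columns
where owner = 'SYS' and table_name = 'TEST';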

Script to Collect RAC Diagnostic Information (racdiag.sql)

Script:

-- NAME: RACDIAG.SQL
-- SYS OR INTERNAL USER, CATPARR.SQL ALREADY RUN, PARALLEL QUERY OPTION ON
-- ------------------------------------------------------------------------
-- AUTHOR:
-- Michael Polaski - Oracle Support Services
-- Copyright 2002, Oracle Corporation
-- ------------------------------------------------------------------------
-- PURPOSE:
-- This script is intended to provide a user friendly guide to troubleshoot
-- RAC hung sessions or slow performance scenarios. The script includes
-- information to gather a variety of important debug information to determine
-- the cause of a RAC session level hang. The script will create a file
-- called racdiag_.out in your local directory while dumping hang analyze
-- dumps in the user_dump_dest(s) and background_dump_dest(s) on all nodes.
--
-- ------------------------------------------------------------------------
-- DISCLAIMER:
-- This script is provided for educational purposes only. It is NOT
-- supported by Oracle World Wide Technical Support.
-- The script has been tested and appears to work as intended.
-- You should always run new scripts on a test instance initially.
-- ------------------------------------------------------------------------
-- Script output is as follows:

set echo off
set feedback off
column timecol new_value timestamp
column spool_extension new_value suffix
select to_char(sysdate,'Mondd_hhmi') timecol,
'.out' spool_extension from sys.dual;
column output new_value dbname
select value || '_' output
from v$parameter where name = 'db_name';
spool racdiag_&&dbname&&timestamp&&suffix
set lines 200
set pagesize 35
set trim on
set trims on
alter session set nls_date_format = 'MON-DD-YYYY HH24:MI:SS';
alter session set timed_statistics = true;
set feedback on
select to_char(sysdate) time from dual;

set numwidth 5
column host_name format a20 tru
select inst_id, instance_name, host_name, version, status, startup_time
from gv$instance
order by inst_id;

set echo on

-- Taking Hang Analyze dumps
-- This may take a little while...
oradebug setmypid
oradebug unlimit
oradebug -g all hanganalyze 3
-- This part may take the longest, you can monitor bdump or udump to see if
-- the file is being generated.
oradebug -g all dump systemstate 267

-- WAITING SESSIONS:
-- The entries that are shown at the top are the sessions that have
-- waited the longest amount of time that are waiting for non-idle wait
-- events (event column). You can research and find out what the wait
-- event indicates (along with its parameters) by checking the Oracle
-- Server Reference Manual or look for any known issues or documentation
-- by searching Metalink for the event name in the search bar. Example
-- (include single quotes): [ 'buffer busy due to global cache' ].
-- Metalink and/or the Server Reference Manual should return some useful
-- information on each type of wait event. The inst_id column shows the
-- instance where the session resides and the SID is the unique identifier
-- for the session (gv$session). The p1, p2, and p3 columns will show
-- event specific information that may be important to debug the problem.
-- To find out what the p1, p2, and p3 indicates see the next section.
-- Items with wait_time of anything other than 0 indicate we do not know
-- how long these sessions have been waiting.
--
set numwidth 10
column state format a7 tru
column event format a25 tru
column last_sql format a40 tru
select sw.inst_id, sw.sid, sw.state, sw.event, sw.seconds_in_wait seconds,
sw.p1, sw.p2, sw.p3, sa.sql_text last_sql
from gv$session_wait sw, gv$session s, gv$sqlarea sa
where sw.event not in
('rdbms ipc message','smon timer','pmon timer',
'SQL*Net message from client','lock manager wait for remote message',
'ges remote message', 'gcs remote message', 'gcs for action', 'client message',
'pipe get', 'null event', 'PX Idle Wait', 'single-task message',
'PX Deq: Execution Msg', 'KXFQ: kxfqdeq - normal deqeue',
'listen endpoint status','slave wait','wakeup time manager')
and sw.seconds_in_wait > 0
and (sw.inst_id = s.inst_id and sw.sid = s.sid)
and (s.inst_id = sa.inst_id and s.sql_address = sa.address)
order by seconds desc;

-- EVENT PARAMETER LOOKUP:
-- This section will give a description of the parameter names of the
-- events seen in the last section. p1text is the parameter value for
-- p1 in the WAITING SESSIONS section while p2text is the parameter
-- value for p2 and p3text is the parameter value for p3. The
-- parameter values in the first section can be helpful for debugging
-- the wait event.
--
column event format a30 tru
column p1text format a25 tru
column p2text format a25 tru
column p3text format a25 tru
select distinct event, p1text, p2text, p3text
from gv$session_wait sw
where sw.event not in ('rdbms ipc message','smon timer','pmon timer',
'SQL*Net message from client','lock manager wait for remote message',
'ges remote message', 'gcs remote message', 'gcs for action', 'client message',
'pipe get', 'null event', 'PX Idle Wait', 'single-task message',
'PX Deq: Execution Msg', 'KXFQ: kxfqdeq - normal deqeue',
'listen endpoint status','slave wait','wakeup time manager')
and seconds_in_wait > 0
order by event;

-- GES LOCK BLOCKERS:
-- This section will show us any sessions that are holding locks that
-- are blocking other users. The inst_id will show us the instance that
-- the session resides on while the sid will be a unique identifier for
-- the session. The grant_level will show us how the GES lock is granted to
-- the user. The request_level will show us what status we are trying to
-- obtain.  The lockstate column will show us what status the lock is in.
-- The last column shows how long this session has been waiting.
--
set numwidth 5
column state format a16 tru;
column event format a30 tru;
select dl.inst_id, s.sid, p.spid, dl.resource_name1,
decode(substr(dl.grant_level,1,8),'KJUSERNL','Null','KJUSERCR','Row-S (SS)',
'KJUSERCW','Row-X (SX)','KJUSERPR','Share','KJUSERPW','S/Row-X (SSX)',
'KJUSEREX','Exclusive',request_level) as grant_level,
decode(substr(dl.request_level,1,8),'KJUSERNL','Null','KJUSERCR','Row-S (SS)',
'KJUSERCW','Row-X (SX)','KJUSERPR','Share','KJUSERPW','S/Row-X (SSX)',
'KJUSEREX','Exclusive',request_level) as request_level,
decode(substr(dl.state,1,8),'KJUSERGR','Granted','KJUSEROP','Opening',
'KJUSERCA','Canceling','KJUSERCV','Converting') as state,
s.sid, sw.event, sw.seconds_in_wait sec
from gv$ges_enqueue dl, gv$process p, gv$session s, gv$session_wait sw
where blocker = 1
and (dl.inst_id = p.inst_id and dl.pid = p.spid)
and (p.inst_id = s.inst_id and p.addr = s.paddr)
and (s.inst_id = sw.inst_id and s.sid = sw.sid)
order by sw.seconds_in_wait desc;

-- GES LOCK WAITERS:
-- This section will show us any sessions that are waiting for locks that
-- are blocked by other users. The inst_id will show us the instance that
-- the session resides on while the sid will be a unique identifier for
-- the session. The grant_level will show us how the GES lock is granted to
-- the user. The request_level will show us what status we are trying to
-- obtain.  The lockstate column will show us what status the lock is in.
-- The last column shows how long this session has been waiting.
--
set numwidth 5
column state format a16 tru;
column event format a30 tru;
select dl.inst_id, s.sid, p.spid, dl.resource_name1,
decode(substr(dl.grant_level,1,8),'KJUSERNL','Null','KJUSERCR','Row-S (SS)',
'KJUSERCW','Row-X (SX)','KJUSERPR','Share','KJUSERPW','S/Row-X (SSX)',
'KJUSEREX','Exclusive',request_level) as grant_level,
decode(substr(dl.request_level,1,8),'KJUSERNL','Null','KJUSERCR','Row-S (SS)',
'KJUSERCW','Row-X (SX)','KJUSERPR','Share','KJUSERPW','S/Row-X (SSX)',
'KJUSEREX','Exclusive',request_level) as request_level,
decode(substr(dl.state,1,8),'KJUSERGR','Granted','KJUSEROP','Opening',
'KJUSERCA','Cancelling','KJUSERCV','Converting') as state,
s.sid, sw.event, sw.seconds_in_wait sec
from gv$ges_enqueue dl, gv$process p, gv$session s, gv$session_wait sw
where blocked = 1
and (dl.inst_id = p.inst_id and dl.pid = p.spid)
and (p.inst_id = s.inst_id and p.addr = s.paddr)
and (s.inst_id = sw.inst_id and s.sid = sw.sid)
order by sw.seconds_in_wait desc;

-- LOCAL ENQUEUES:
-- This section will show us if there are any local enqueues. The inst_id will
-- show us the instance that the session resides on while the sid will be a
-- unique identifier for the session. The addr column will show the lock address. The type
-- will show the lock type. The id1 and id2 columns will show specific
-- parameters for the lock type.
--
set numwidth 12
column event format a12 tru
select l.inst_id, l.sid, l.addr, l.type, l.id1, l.id2,
decode(l.block,0,'blocked',1,'blocking',2,'global') block,
sw.event, sw.seconds_in_wait sec
from gv$lock l, gv$session_wait sw
where (l.sid = sw.sid and l.inst_id = sw.inst_id)
and l.block in (0,1)
order by l.type, l.inst_id, l.sid;

-- LATCH HOLDERS:
-- If there is latch contention or 'latch free' wait events in the WAITING
-- SESSIONS section we will need to find out which processes are holding
-- latches. The inst_id will show us the instance that the session resides
-- on while the sid will be a unique identifier for the session. The username column
-- will show the session's username. The os_user column will show the os
-- user that the user logged in as. The name column will show us the type
-- of latch being waited on. You can search Metalink for the latch name in
-- the search bar. Example (include single quotes):
-- [ 'library cache' latch ]. Metalink should return some useful information
-- on the type of latch.
--
set numwidth 5
select distinct lh.inst_id, s.sid, s.username, p.username os_user, lh.name
from gv$latchholder lh, gv$session s, gv$process p
where (lh.sid = s.sid and lh.inst_id = s.inst_id)
and (s.inst_id = p.inst_id and s.paddr = p.addr)
order by lh.inst_id, s.sid;

-- LATCH STATS:
-- This view will show us latches with less than optimal hit ratios
-- The inst_id will show us the instance for the particular latch. The
-- latch_name column will show us the type of latch. You can search Metalink
-- for the latch name in the search bar. Example (include single quotes):
-- [ 'library cache' latch ]. Metalink should return some useful information
-- on the type of latch. The hit_ratio shows the percentage of time we
-- successfully acquired the latch.
--
column latch_name format a30 tru
select inst_id, name latch_name,
round((gets-misses)/decode(gets,0,1,gets),3) hit_ratio,
round(sleeps/decode(misses,0,1,misses),3) "SLEEPS/MISS"
from gv$latch
where round((gets-misses)/decode(gets,0,1,gets),3) < .99
and gets != 0
order by round((gets-misses)/decode(gets,0,1,gets),3);

-- No Wait Latches:
--
select inst_id, name latch_name,
round((immediate_gets/(immediate_gets+immediate_misses)), 3) hit_ratio,
round(sleeps/decode(immediate_misses,0,1,immediate_misses),3) "SLEEPS/MISS"
from gv$latch
where round((immediate_gets/(immediate_gets+immediate_misses)), 3) < .99 and immediate_gets + immediate_misses > 0
order by round((immediate_gets/(immediate_gets+immediate_misses)), 3);

-- GLOBAL CACHE CR PERFORMANCE
-- This shows the average latency of a consistent block request.
-- AVG CR BLOCK RECEIVE TIME should typically be about 15 milliseconds
-- depending on your system configuration and volume, is the average
-- latency of a consistent-read request round-trip from the requesting
-- instance to the holding instance and back to the requesting instance. If
-- your CPU has limited idle time and your system typically processes
-- long-running queries, then the latency may be higher. However, it is
-- possible to have an average latency of less than one millisecond with
-- User-mode IPC. Latency can be influenced by a high value for the
-- DB_MULTI_BLOCK_READ_COUNT parameter. This is because a requesting process
-- can issue more than one request for a block depending on the setting of
-- this parameter. Correspondingly, the requesting process may wait longer.
-- Also check interconnect bandwidth, OS tcp settings, and OS udp settings if
-- AVG CR BLOCK RECEIVE TIME is high.
--
set numwidth 20
column "AVG CR BLOCK RECEIVE TIME (ms)" format 9999999.9
select b1.inst_id, b2.value "GCS CR BLOCKS RECEIVED",
b1.value "GCS CR BLOCK RECEIVE TIME",
((b1.value / b2.value) * 10) "AVG CR BLOCK RECEIVE TIME (ms)"
from gv$sysstat b1, gv$sysstat b2
where b1.name = 'global cache cr block receive time' and
b2.name = 'global cache cr blocks received' and b1.inst_id = b2.inst_id
or b1.name = 'gc cr block receive time' and
b2.name = 'gc cr blocks received' and b1.inst_id = b2.inst_id ;

-- GLOBAL CACHE LOCK PERFORMANCE
-- This shows the average global enqueue get time.
-- Typically AVG GLOBAL LOCK GET TIME should be 20-30 milliseconds. the
-- elapsed time for a get includes the allocation and initialization of a
-- new global enqueue. If the average global enqueue get (global cache
-- get time) or average global enqueue conversion times are excessive,
-- then your system may be experiencing timeouts. See the 'WAITING SESSIONS',
-- 'GES LOCK BLOCKERS', GES LOCK WAITERS', and 'TOP 10 WAIT EVENTS ON SYSTEM'
-- sections if the AVG GLOBAL LOCK GET TIME is high.
--
set numwidth 20
column "AVG GLOBAL LOCK GET TIME (ms)" format 9999999.9
select b1.inst_id, (b1.value + b2.value) "GLOBAL LOCK GETS",
b3.value "GLOBAL LOCK GET TIME",
(b3.value / (b1.value + b2.value) * 10) "AVG GLOBAL LOCK GET TIME (ms)"
from gv$sysstat b1, gv$sysstat b2, gv$sysstat b3
where b1.name = 'global lock sync gets' and
b2.name = 'global lock async gets' and b3.name = 'global lock get time'
and b1.inst_id = b2.inst_id and b2.inst_id = b3.inst_id
or b1.name = 'global enqueue gets sync' and
b2.name = 'global enqueue gets async' and b3.name = 'global enqueue get time'
and b1.inst_id = b2.inst_id and b2.inst_id = b3.inst_id;

-- RESOURCE USAGE
-- This section will show how much of our resources we have used.
--
set numwidth 8
select inst_id, resource_name, current_utilization, max_utilization,
initial_allocation
from gv$resource_limit
where max_utilization > 0
order by inst_id, resource_name;

-- DLM TRAFFIC INFORMATION
-- This section shows how many tickets are available in the DLM. If the
-- TCKT_WAIT columns says "YES" then we have run out of DLM tickets which
-- could cause a DLM hang. Make sure that you also have enough TCKT_AVAIL.
--
set numwidth 5
select * from gv$dlm_traffic_controller
order by TCKT_AVAIL;

-- DLM MISC
--
set numwidth 10
select * from gv$dlm_misc;

-- LOCK CONVERSION DETAIL:
-- This view shows the types of lock conversion being done on each instance.
--
select * from gv$lock_activity;

-- TOP 10 WRITE PINGING/FUSION OBJECTS
-- This view shows the top 10 objects for write pings across instances.
-- The inst_id column shows the node that the block was pinged on. The name
-- column shows the object name of the offending object. The file# shows the
-- offending file number (gc_files_to_locks). The STATUS column will show the
-- current status of the pinged block. The READ_PINGS will show us read
-- converts and the WRITE_PINGS will show us objects with write converts.
-- Any rows that show up are objects that are concurrently accessed across
-- more than 1 instance.
--
set numwidth 8
column name format a20 tru
column kind format a10 tru
select inst_id, name, kind, file#, status, BLOCKS,
READ_PINGS, WRITE_PINGS
from (select p.inst_id, p.name, p.kind, p.file#, p.status,
count(p.block#) BLOCKS, sum(p.forced_reads) READ_PINGS,
sum(p.forced_writes) WRITE_PINGS
from gv$ping p, gv$datafile df
where p.file# = df.file# (+)
group by p.inst_id, p.name, p.kind, p.file#, p.status
order by sum(p.forced_writes) desc)
where rownum < 11
order by WRITE_PINGS desc;

-- TOP 10 READ PINGING/FUSION OBJECTS
-- This view shows the top 10 objects for read pings. The inst_id column shows
-- the node that the block was pinged on. The name column shows the object
-- name of the offending object. The file# shows the offending file number
-- (gc_files_to_locks). The STATUS column will show the current status of the
-- pinged block. The READ_PINGS will show us read converts and the WRITE_PINGS
-- will show us objects with write converts. Any rows that show up are objects
-- that are concurrently accessed across more than 1 instance.
--
set numwidth 8
column name format a20 tru
column kind format a10 tru
select inst_id, name, kind, file#, status, BLOCKS,
READ_PINGS, WRITE_PINGS
from (select p.inst_id, p.name, p.kind, p.file#, p.status,
count(p.block#) BLOCKS, sum(p.forced_reads) READ_PINGS,
sum(p.forced_writes) WRITE_PINGS
from gv$ping p, gv$datafile df
where p.file# = df.file# (+)
group by p.inst_id, p.name, p.kind, p.file#, p.status
order by sum(p.forced_reads) desc)
where rownum < 11
order by READ_PINGS desc;

-- TOP 10 FALSE PINGING OBJECTS
-- This view shows the top 10 objects for false pings. This can be avoided by
-- better gc_files_to_locks configuration. The inst_id column shows the node
-- that the block was pinged on. The name column shows the object name of the
-- offending object. The file# shows the offending file number
-- (gc_files_to_locks). The STATUS column will show the current status of the
-- pinged block. The READ_PINGS will show us read converts and the WRITE_PINGS
-- will show us objects with write converts. Any rows that show up are objects
-- that are concurrently accessed across more than 1 instance.
--
set numwidth 8
column name format a20 tru
column kind format a10 tru
select inst_id, name, kind, file#, status, BLOCKS,
READ_PINGS, WRITE_PINGS
from (select p.inst_id, p.name, p.kind, p.file#, p.status,
count(p.block#) BLOCKS, sum(p.forced_reads) READ_PINGS,
sum(p.forced_writes) WRITE_PINGS
from gv$false_ping p, gv$datafile df
where p.file# = df.file# (+)
group by p.inst_id, p.name, p.kind, p.file#, p.status
order by sum(p.forced_writes) desc)
where rownum < 11
order by WRITE_PINGS desc;

-- INITIALIZATION PARAMETERS:
-- Non-default init parameters for each node.
--
set numwidth 5
column name format a30 tru
column value format a50 wra
column description format a60 tru
select inst_id, name, value, description
from gv$parameter
where isdefault = 'FALSE'
order by inst_id, name;

-- TOP 10 WAIT EVENTS ON SYSTEM
-- This view will provide a summary of the top wait events in the db.
--
set numwidth 10
column event format a25 tru
select inst_id, event, time_waited, total_waits, total_timeouts
from (select inst_id, event, time_waited, total_waits, total_timeouts
from gv$system_event where event not in ('rdbms ipc message','smon timer',
'pmon timer', 'SQL*Net message from client','lock manager wait for remote message',
'ges remote message', 'gcs remote message', 'gcs for action', 'client message',
'pipe get', 'null event', 'PX Idle Wait', 'single-task message',
'PX Deq: Execution Msg', 'KXFQ: kxfqdeq - normal deqeue',
'listen endpoint status','slave wait','wakeup time manager')
order by time_waited desc)
where rownum < 11 order by time_waited desc;

-- SESSION/PROCESS REFERENCE:
-- This section is very important for most of the above sections to find out
-- which user/os_user/process is identified to which session/process.
--
set numwidth 7
column event format a30 tru
column program format a25 tru
column username format a15 tru
select p.inst_id, s.sid, s.serial#, p.pid, p.spid, p.program, s.username,
p.username os_user, sw.event, sw.seconds_in_wait sec
from gv$process p, gv$session s, gv$session_wait sw
where (p.inst_id = s.inst_id and p.addr = s.paddr)
and (s.inst_id = sw.inst_id and s.sid = sw.sid)
order by p.inst_id, s.sid;

-- SYSTEM STATISTICS:
-- All System Stats with values of > 0. These can be referenced in the
-- Server Reference Manual
--
set numwidth 5
column name format a60 tru
column value format 9999999999999999999999999
select inst_id, name, value
from gv$sysstat
where value > 0
order by inst_id, name;

-- CURRENT SQL FOR WAITING SESSIONS:
-- Current SQL for any session in the WAITING SESSIONS list
--
set numwidth 5
column sql format a80 wra
select sw.inst_id, sw.sid, sw.seconds_in_wait sec, sa.sql_text sql
from gv$session_wait sw, gv$session s, gv$sqlarea sa
where sw.sid = s.sid (+)
and sw.inst_id = s.inst_id (+)
and s.sql_address = sa.address
and sw.event not in ('rdbms ipc message','smon timer','pmon timer',
'SQL*Net message from client','lock manager wait for remote message',
'ges remote message', 'gcs remote message', 'gcs for action', 'client message',
'pipe get', 'null event', 'PX Idle Wait', 'single-task message',
'PX Deq: Execution Msg', 'KXFQ: kxfqdeq - normal deqeue',
'listen endpoint status','slave wait','wakeup time manager')
and sw.seconds_in_wait > 0
order by sw.seconds_in_wait desc;

-- Taking Hang Analyze dumps
-- This may take a little while...
oradebug setmypid
oradebug unlimit
oradebug -g all hanganalyze 3
-- This part may take the longest, you can monitor bdump or udump to see
-- if the file is being generated.
oradebug -g all dump systemstate 267

set echo off

select to_char(sysdate) time from dual;

spool off

-- ---------------------------------------------------------------------------
Prompt;
Prompt racdiag output files have been written to:;
Prompt;
host pwd
Prompt alert log and trace files are located in:;
column host_name format a12 tru
column name format a20 tru
column value format a60 tru
select distinct i.host_name, p.name, p.value
from gv$instance i, gv$parameter p
where p.inst_id = i.inst_id (+)
and p.name like '%_dump_dest'
and p.name != 'core_dump_dest';

Sample Output:

TIME
--------------------
AUG-11-2001 12:06:36

1 row selected.

INST_ID INSTANCE_NAME    HOST_NAME            VERSION        STATUS  STARTUP_TIME
------- ---------------- -------------------- -------------- ------- ------------
      1 V9201            opcbsol1             9.2.0.1.0      OPEN    AUG-01-2002
      2 V9202            opcbsol2             9.2.0.1.0      OPEN    JUL-09-2002

2 rows selected.

SQL>
SQL> -- Taking Hanganalyze Dumps
SQL> -- This may take a little while...
SQL> oradebug setmypid
Statement processed.
SQL> oradebug unlimit
Statement processed.
SQL> oradebug setinst all
Statement processed.
SQL> oradebug -g def hanganalyze 3
Hang Analysis in /u02/32bit/app/oracle/admin/V9232/bdump/v92321_diag_29495.trc
SQL>
SQL> -- WAITING SESSIONS:
SQL> -- The entries that are shown at the top are the sessions that have
SQL> -- waited the longest amount of time that are waiting for non-idle wait
SQL> -- events (event column).  You can research and find out what the wait
SQL> -- event indicates (along with its parameters) by checking the Oracle
SQL> -- Server Reference Manual or look for any known issues or documentation
SQL> -- by searching Metalink for the event name in the search bar.  Example
SQL> -- (include single quotes): [ 'buffer busy due to global cache' ].
SQL> -- Metalink and/or the Server Reference Manual should return some useful
SQL> -- information on each type of wait event.  The inst_id column shows the
SQL> -- instance where the session resides and the SID is the unique identifier
SQL> -- for the session (gv$session).  The p1, p2, and p3 columns will show
SQL> -- event specific information that may be important to debug the problem.
SQL> -- To find out what the p1, p2, and p3 indicates see the next section.
SQL> -- Items with wait_time of anything other than 0 indicate we do not know
SQL> -- how long these sessions have been waiting.
SQL> --

SCRIPT TO GENERATE SQL*LOADER CONTROL FILE

This script prepares a SQL*Loader control file for a table that already exists in the database. The script accepts
the table name and automatically creates a file with the table name and extension '.ctl'.  This is especially
useful if you have the DDL statement to create a particular table and a free-format ASCII-delimited file but
have not yet created a SQL*Loader control file for the loading operation.

Default choices for the file are as follows (alter to your needs):

Delimiter:              comma (',')
INFILE file extension:  .dat
DATE format:            ‘MM/DD/YY’

You may define the SQL*Loader data types for the other data types by revising the DECODE function pertaining
to them.

Please note:
The name of the table needs to be provided when the script is executed, as shown below:
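For example, if the script were saved as gen_ctl.sql (the file name is an assumption), generating a control file for the TV table from the sample output would look like:

SQL> @gen_ctl TV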

Script:

set echo off
set heading off
set verify off
set feedback off
set show off
set trim off
set pages 0
set concat on
set lines 300
set trimspool on
set trimout on

spool &1..ctl

select 'LOAD DATA'||chr (10)||
       'INFILE '''||lower (table_name)||'.dat'''||chr (10)||
       'INTO TABLE '||table_name||chr (10)||
       'FIELDS TERMINATED BY '','''||chr (10)||
       'TRAILING NULLCOLS'||chr (10)||'('
from   all_tables
where  table_name = upper ('&1');

select decode (rownum, 1, '   ', ' , ')||
       rpad (column_name, 33, ' ')||
       decode (data_type,
               'VARCHAR2', 'CHAR NULLIF ('||column_name||'=BLANKS)',
               'FLOAT',    'DECIMAL EXTERNAL NULLIF('||column_name||'=BLANKS)',
               'NUMBER',   decode (data_precision, 0,
                           'INTEGER EXTERNAL NULLIF ('||column_name||
                           '=BLANKS)', decode (data_scale, 0,
                           'INTEGER EXTERNAL NULLIF ('||
                           column_name||'=BLANKS)',
                           'DECIMAL EXTERNAL NULLIF ('||
                           column_name||'=BLANKS)')),
               'DATE',     'DATE "MM/DD/YY"  NULLIF ('||column_name||'=BLANKS)',
               null)
from   user_tab_columns
where  table_name = upper ('&1')
order  by column_id;

select ')'
from sys.dual;
spool off

Sample Output:

LOAD DATA
INFILE 'tv.dat'
INTO TABLE TV
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(  T1                               INTEGER EXTERNAL NULLIF (T1=BLANKS)
 , T2                               CHAR NULLIF (T2=BLANKS)
 , T3                               CHAR NULLIF (T3=BLANKS)
)
