Oracle Data Recovery from DROP TABLE with PRM-DUL

An application developer at company D dropped a core application table on an ASM-stored database with no backup of any kind. In this situation, using PRM immediately can recover the vast majority of the dropped table's data. Oracle 10g and later provide the recycle bin feature: first query the DBA_RECYCLEBIN view to check whether the dropped table is still in the recycle bin. If it is, the preferred path is FLASHBACK TABLE ... TO BEFORE DROP; only when the table is gone from the recycle bin as well should you turn to PRM right away.
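A minimal sketch of the recycle-bin path (the view and FLASHBACK syntax are standard from 10g onward; the owner and table name here match this scenario):

```sql
-- Check whether the dropped table is still in the recycle bin
SELECT owner, original_name, object_name, droptime
  FROM dba_recyclebin
 WHERE original_name = 'TORDERDETAIL_HIS';

-- If a row comes back, restore the table directly
FLASHBACK TABLE maclean.TORDERDETAIL_HIS TO BEFORE DROP;
```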

 

The recovery procedure, in brief:

  1. First take the tablespace that contained the dropped table OFFLINE.
  2. Find the DATA_OBJECT_ID of the dropped table by querying the data dictionary or with LogMiner; if this step does not yield the DATA_OBJECT_ID, the table will have to be identified manually in NON-DICT (dictionary-less) mode.
  3. Start PRM, enter NON-DICT mode, add all the data files of the tablespace that held the dropped table, then run SCAN DATABASE followed by SCAN TABLE from Extent MAP.
  4. Locate the table in the expanded object tree by its DATA_OBJECT_ID and use DataBridge mode to insert the data back into the source database.

SQL> select count(*) from "MACLEAN"."TORDERDETAIL_HIS"; 

  COUNT(*)
----------
    984359

SQL> 
SQL> create table maclean.TORDERDETAIL_HIS1 as select * from  maclean.TORDERDETAIL_HIS;

Table created.

SQL> drop table maclean.TORDERDETAIL_HIS;

Table dropped.


The approximate DATA_OBJECT_ID can be obtained through LogMiner, or through the method described in "Recovery Scenario 9". With LogMiner the script looks roughly like this:

 

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/oracle/logs/log2.f', OPTIONS => DBMS_LOGMNR.ADDFILE);

EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);

SELECT * FROM V$LOGMNR_CONTENTS ;

EXECUTE DBMS_LOGMNR.END_LOGMNR;




Even if the DATA_OBJECT_ID cannot be obtained here, when the database does not contain many tables you can still locate the one to recover by manually inspecting the data.
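To narrow V$LOGMNR_CONTENTS down to the DROP statement, a hedged sketch (run it between START_LOGMNR and END_LOGMNR; the table-name filter is specific to this scenario, and in practice the DATA_OBJ# may also surface on the internal dictionary DELETEs surrounding the DDL):

```sql
SELECT timestamp, operation, seg_owner, seg_name, data_obj#, sql_redo
  FROM v$logmnr_contents
 WHERE operation = 'DDL'
   AND sql_redo LIKE '%TORDERDETAIL_HIS%';
```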

 

First, take the tablespace that contained the dropped table OFFLINE:

 

SQL> select tablespace_name from dba_segments where segment_name='TPAYMENT';

TABLESPACE_NAME
------------------------------
USERS

SQL> select file_name from dba_data_files where tablespace_name='USERS';

FILE_NAME
----------------------------------------------------------------
+DATA1/parnassus/datafile/users.263.843694795

SQL> alter tablespace users offline;

Tablespace altered.
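Before scanning with PRM it is worth confirming the tablespace really is offline, for example:

```sql
-- STATUS should read OFFLINE for the tablespace of the dropped table
SELECT tablespace_name, status
  FROM dba_tablespaces
 WHERE tablespace_name = 'USERS';
```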

Start PRM in NON-DICT mode, add the corresponding data files, and choose SCAN DATABASE + SCAN TABLE From Extents:

[Screenshot: prm-dul-drop-table1]

[Screenshot: prm-dul-drop-table2]

Because we are in dictionary-less mode, the required character-set information has to be entered:

 

[Screenshot: prm-dul-drop-table3]

Select only the data files that belong to the dropped table's tablespace (any extra data files can be left unselected), then click SCAN:

 

[Screenshot: prm-dul-drop-table4]

[Screenshot: prm-dul-drop-table5]

Click the generated database name and right-click to choose "scan tables from extents":

[Screenshot: prm-dul-drop-table6]

[Screenshot: prm-dul-drop-table7]

 

Manual inspection shows that the data under DATA_OBJECT_ID=82641 corresponds to the dropped TORDERDETAIL_HIS table; the DataBridge feature then transfers it back into another tablespace of the source database.

[Screenshot: prm-dul-drop-table8]

[Screenshot: prm-dul-drop-table9]

[Screenshot: prm-dul-drop-table10]

 

Dropping a Very Large Table in Oracle

This is a 550 GB table that also contains CLOB and BLOB objects; let's observe exactly how Oracle behaves when dropping such a large table:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

SQL> show parameter recyclebin

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
recyclebin                           string      OFF

/* The recyclebin feature is turned off here so that FLASHBACK TABLE cannot interfere */

SQL> conn maclean/maclean
Connected.

SQL> col segment_name for a20
SQL> select segment_name,bytes/1024/1024/1024 "size in Gbytes"  from user_segments where segment_name='TV';

SEGMENT_NAME         size in Gbytes
-------------------- --------------
TV                           547.25

SQL> select NUM_ROWS,BLOCKS from user_tables where table_name='TV';

  NUM_ROWS     BLOCKS
---------- ----------
 859150100   65649786

SQL> desc tv;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                              VARCHAR2(30)
 OBJECT_NAME                                        VARCHAR2(128)
 SUBOBJECT_NAME                                     VARCHAR2(30)
 OBJECT_ID                                          NUMBER
 DATA_OBJECT_ID                                     NUMBER
 OBJECT_TYPE                                        VARCHAR2(19)
 CREATED                                            DATE
 LAST_DDL_TIME                                      DATE
 TIMESTAMP                                          VARCHAR2(19)
 STATUS                                             VARCHAR2(7)
 TEMPORARY                                          VARCHAR2(1)
 GENERATED                                          VARCHAR2(1)
 SECONDARY                                          VARCHAR2(1)
 SPARE1                                             CLOB
 SPARE2                                             CLOB
 SPARE3                                             CLOB
 SPARE4                                             BLOB

/* This large table contains both CLOB and BLOB large objects, holds 859,150,100 rows
    and occupies 65,649,786 blocks. It resides in a bigfile tablespace with local
    extent management and a uniform extent size of 128 MB */

SQL> col tablespace_name for a2
SQL> select relative_fno,header_block,owner,tablespace_name from dba_segments where segment_name='TV';
RELATIVE_FNO HEADER_BLOCK OWNER                          TA
------------ ------------ ------------------------------ --
        1024           82 MACLEAN                        BF

/* Because bigfile tablespace technology is used, the relative file number of the data segment is 1024 */


SQL> col segment_name for a30
SQL> col owner for a10
SQL> select owner,segment_name,segment_type,header_block from dba_segments where relative_fno=1024;

OWNER      SEGMENT_NAME                   SEGMENT_TYPE       HEADER_BLOCK
---------- ------------------------------ ------------------ ------------
MACLEAN    TV                             TABLE                        82
MACLEAN    SYS_IL0000057409C00014$$       LOBINDEX                  32850
MACLEAN    SYS_IL0000057409C00015$$       LOBINDEX                  65618
MACLEAN    SYS_IL0000057409C00016$$       LOBINDEX                  98386
MACLEAN    SYS_IL0000057409C00017$$       LOBINDEX                 131154
MACLEAN    SYS_LOB0000057409C00014$$      LOBSEGMENT                16466
MACLEAN    SYS_LOB0000057409C00015$$      LOBSEGMENT                49234
MACLEAN    SYS_LOB0000057409C00016$$      LOBSEGMENT                82002
MACLEAN    SYS_LOB0000057409C00017$$      LOBSEGMENT               114770

9 rows selected.

/* As shown, the segments in bigfile data file 1024 comprise the TV table itself plus its LOB segments and LOB indexes */

SQL> select current_scn from v$database;
CURRENT_SCN
-----------
   44989856

/* Record the current SCN in case a flashback database is needed later */
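If a flashback to this point is actually needed later, the commands are roughly as follows (assumes flashback logging is enabled and sufficient flashback logs are retained; the SCN comes from the query above):

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO SCN 44989856;
ALTER DATABASE OPEN RESETLOGS;
```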

SQL> alter session set events '10046 trace name context forever,level 12';
Session altered.

SQL> set timing on;

SQL> drop table tv;
Table dropped.
Elapsed: 00:00:01.21  

/* Even though this is a 550 GB table, the drop took only 1.21 s, proving once more that DROP merely modifies the data dictionary */

The trace file information analyzed with tkprof:
drop table tv

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.12       0.29         28          9      30163           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.12       0.29         28          9      30163           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 64

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                        28        0.00          0.00
  rdbms ipc reply                                10        0.01          0.06
  reliable message                                8        0.00          0.00
  enq: RO - fast object reuse                     8        0.00          0.00
  write complete waits                            9        0.04          0.10
  log file sync                                   1        0.00          0.00
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        7.22          7.22   

The trace shows only a small number of wait events. A DROP first requires an object-level checkpoint on the object being dropped so that its dirty blocks are written out, hence the write complete waits events (a user wants to modify a block that is part of DBWR's current write batch: when DBWR grabs buffers to write, it marks them as 'being written'; all the collected buffers are then written to disk; the wait 'write complete waits' implies we wanted a buffer while this flag was set, and the flags are cleared as each buffer is written). The session must also acquire the enq: RO - fast object reuse enqueue.
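The system-wide totals for these two waits can be inspected as follows (event names as they appear in V$EVENT_NAME):

```sql
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event IN ('enq: RO - fast object reuse',
                 'write complete waits');
```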

The full drop flow also includes the following data-dictionary modifications:

delete from object_usage
 where obj# in (select a.obj#
                  from object_usage a, ind$ b
                 where a.obj# = b.obj#
                   and b.bo# = :1)
/* Delete this table's rows from the object-usage base table (object_usage) */

delete from sys.cache_stats_1$ where dataobj# = :1
/* Jonathan Lewis's Cost-Based Oracle Fundamentals gives this base table only a brief
    mention: it is used to monitor lifetime caching statistics */
delete com$ where obj# = :1
/* Delete the object's comments; com$ is the comment base table */

delete from hist_head$ where obj# = :1
delete from histgrm$ where obj# = :1
delete from dependency$ where d_obj# = :1
delete from source$ where obj# = :1

delete from idl_ub1$
  where obj# = :1
    and part = :2

delete from idl_char$
  where obj# = :1
    and part = :2

delete from idl_ub2$
  where obj# = :1
    and part = :2

delete from ncomp_dll$ where obj# = :1 returning dllname into :2

delete from idl_sb4$
 where obj# = :1
   and part = :2

delete from objauth$ where obj# = :1
delete from col$ where obj# = :1
delete from icol$ where bo# = :1
delete from icoldep$ where obj# in (select obj# from ind$ where bo# = :1)

delete from jijoin$
  where obj# in (select obj#
                   from jijoin$
                  where tab1obj# = :1
                     or tab2obj# = :1)

delete from jirefreshsql$
  where iobj# in (select iobj# from jirefreshsql$ where tobj# = :1)

delete from ccol$ where obj# = :1
delete from ind$ where bo# = :1
delete from cdef$ where obj# = :1
delete from tab$ where obj# = :1
delete coltype$ where obj# = :1
delete from subcoltype$ where obj# = :1
delete ntab$ where obj# = :1
delete lob$ where obj# = :1
delete refcon$ where obj# = :1
delete from opqtype$ where obj# = :1

This is followed by several updates to seg$ (the segment base table):


 update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,extsize=
  :9,extpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL, :13),
  groups=decode(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16, spare1=
  DECODE(:17,0,NULL,:17),scanhint=:18 

/* Substituting back the bind variables recorded in the trace file shows the UPDATE Oracle actually executed */

update seg$
   set type#     = 3,
       blocks    = 16384,
       extents   = 1,
       minexts   = 1,
       maxexts   = 2147483645,
       extsize   = 16384,
       extpct    = 0,
       user#     = 64,
       iniexts   = 16384,
       lists     = decode(0, 65535, NULL, :13),
       groups    = decode(0, 65535, NULL, :14),
       cachehint = :0,
       hwmincr   = 57411,
       spare1    = DECODE(164161, 0, NULL, :17),
       scanhint  = 0
 where ts# = 12
   and file# = 1024
   and block# = 82

/*  The UPDATE above is executed several times, with only the block# bind changing each
     run: 114770, 82002, 49234 ... 82, matching the header_block of each segment in
     relative data file 1024 listed earlier */

/*  So what Oracle does during the drop is modify each segment's row in the seg$ base
     table, changing type# from its original value to 3 (the temporary-segment marker)
     and resetting extents to 1 */

The sql.bsq file documents the meaning of the type# column in the seg$ base table:
/*  1 = UNDO, 2 = SAVE UNDO, 3 = TEMPORARY, 4 = CACHE, 5 = DATA, 6 = INDEX 
     7 = SORT  8 = LOB   9 = Space Header 10 = System Managed Undo      */
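Using that mapping, the seg$ rows for a data file can be decoded into readable segment types, for example (ts#=12 is the tablespace number seen in the trace above):

```sql
SELECT file#, block#, type#, extents,
       DECODE(type#, 1, 'UNDO', 2, 'SAVE UNDO', 3, 'TEMPORARY',
                     4, 'CACHE', 5, 'DATA', 6, 'INDEX', 7, 'SORT',
                     8, 'LOB', 9, 'SPACE HEADER',
                     10, 'SYSTEM MANAGED UNDO') AS segment_type
  FROM sys.seg$
 WHERE ts# = 12;
```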

delete from obj$ where obj# = :1
/*  After each such UPDATE completes, a statement like this deletes the corresponding object row from the obj$ base table */

delete from seg$ where ts#=:1 and file#=:2 and block#=:3
/* If the recyclebin (the FLASHBACK TABLE TO BEFORE DROP feature) is disabled, the
    foreground process finally also deletes from seg$ the rows of the segments just marked
    type#=3; with the recyclebin enabled this DELETE FROM seg$ does not occur */

SQL> select block#,type#,extents from sys.seg$  where file#=1024;
no rows selected

As a DDL statement, dropping a table mainly has to remove all of the table's information from the data dictionary: in the seg$ base table the segment's original type and size are changed to a temporary segment with a single extent, the object's row is then deleted outright from the obj$ base table, and the extent bitmap in the data file header is released.

SQL> alter system flush buffer_cache;
System altered.

SQL> alter system dump datafile '/g01/bf.dbf' block 3;
System altered.

File Space Bitmap Block:
BitMap Control:
RelFno: 1024, BeginBlock: 17, Flag: 0, First: 0, Free: 63488
0000000000000000 0000000000000000 0000000000000000 0000000000000000
