COLD RMAN backup fails: ORA-19602 even after SHUTDOWN IMMEDIATE and STARTUP MOUNT

SYMPTOMS

Symptom 1: The database was shut down cleanly using SHUTDOWN IMMEDIATE.

An attempt to take a COLD backup via RMAN in MOUNT stage fails:

RMAN-03009: failure of backup command on c1 channel at 06/15/2006 22:21:45
ORA-19602: cannot backup or copy active file in NOARCHIVELOG mode

The failure occurs despite the fact that the database was previously shut down with the IMMEDIATE option and restarted using STARTUP MOUNT.

The database is running in NOARCHIVELOG mode.

 

Symptom 2: The database was aborted because SHUTDOWN IMMEDIATE was terminated in its initial phase, or the OS was restarted.

An attempt to take a COLD backup via RMAN in MOUNT stage fails:

RMAN-03009: failure of backup command on c1 channel at 06/15/2006 22:21:45
ORA-19602: cannot backup or copy active file in NOARCHIVELOG mode.

A COLD backup can be taken against a MOUNTED database only when the datafiles are consistent and do not need recovery.

The database is running in NOARCHIVELOG mode.

 

 

CAUSE

There is at least one file that is in need of recovery.
A COLD backup for a database running in NOARCHIVELOG mode has to be a CONSISTENT backup as it cannot be recovered after restore.

To confirm, mount the database and run the following SQL:

SQL> select distinct checkpoint_change# from v$datafile_header;

If this returns more than one row, the datafiles are checkpointed at different points in time and the database is inconsistent.
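For example (the SCN values here are illustrative only):

CHECKPOINT_CHANGE#
------------------
           2083769
           2084112

Two different checkpoint SCNs in the file headers mean at least one file is behind the others.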

 

Or run:

SQL> select * from v$recover_file;

This returns information about files that are offline or in need of recovery.
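A hypothetical example of its output (the file number, SCN, and time are illustrative, not from a real system):

     FILE# ONLINE  ONLINE_STATUS ERROR      CHANGE# TIME
---------- ------- ------------- ------- ---------- ---------
         4 OFFLINE OFFLINE                   2083769 15-JUN-06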

SOLUTION

You will not be able to take a cold backup until the database is consistent.

Solution for symptom 1

Recover the datafile(s) or, if the redo has been lost (because the online redo logs have been recycled) and the file is no longer needed, drop the tablespace.

1. To recover the datafile:

SQL> recover datafile n;
SQL> alter database datafile n online;

2. To drop the datafile:

SQL> alter database datafile n offline drop;
SQL> alter database open;
SQL> drop tablespace X including contents;

If you cannot recover the datafile and need the data content:

a. Use Note 223543.1, How to Recover From a DROP / TRUNCATE / DELETE TABLE with RMAN, to restore the tablespace from its most recent backup to an auxiliary instance where the data can be exported.

b. If you do not have a backup, your only option is to request PRM-DUL consultancy to extract the data prior to dropping and recreating the tablespace.

If you cannot recover the data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638

E-mail: service@parnassusdata.com

 

 

Solution for symptom 2

 

SQL> select controlfile_type from v$database;

It should show CONTROLFILE_TYPE as CURRENT.
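Example (output illustrative; SQL*Plus truncates the heading to the column width):

CONTROL
-------
CURRENT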

Run the query below to find the current redo log information.

Query 1

SQL> select b.member, a.group#, a.sequence# from v$log a, v$logfile b where a.group# = b.group#;
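Hypothetical output (member paths and sequence numbers are illustrative):

MEMBER                               GROUP#  SEQUENCE#
------------------------------------ ------- ----------
/u01/oradata/ORCL/redo01.log               1        121
/u01/oradata/ORCL/redo02.log               2        122
/u01/oradata/ORCL/redo03.log               3        120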

Check whether the datafiles are in fuzzy status by querying v$datafile_header:

SQL> Select status,file#,fuzzy,checkpoint_change# from v$datafile_header;

Example:

STATUS       FILE# FUZ CHECKPOINT_CHANGE#
------- ---------- --- ------------------
ONLINE           1 YES            2083769
ONLINE           2 YES            2083769
ONLINE           3 YES            2083769
ONLINE           4 YES            2083769

 

 

Or:

Example:

SQL> select count(*), fuzzy from v$datafile_header group by fuzzy;

  COUNT(*) FUZ
---------- ---
         4 YES

A YES value indicates that the files are fuzzy and need recovery.

 

If the redo log files (CURRENT/ACTIVE, as shown by query 1) are available:

SQL> recover database;

Re-run:

SQL> select count(*), fuzzy from v$datafile_header group by fuzzy;

  COUNT(*) FUZ
---------- ---
         4 NO

 

 

Now take the backup.
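Once all files report FUZ = NO and a single checkpoint SCN, a minimal sketch of the cold backup itself (the channel and format settings are illustrative; the database stays in MOUNT):

RMAN> connect target /
RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> backup database format '/backup/cold_%U';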

Oracle ORA-15196 INVALID ASM BLOCK HEADER [KFR.C:6086] [ENDIAN_KFBH]

--------
CRS 10.2.0.3
ASM 10.2.0.3
RDBMS 10.2.0.2
I have a customer with a 2-node cluster, and ASM does not mount the diskgroup.

 

Alert.log ASM1
————–
Thu Feb 7 21:52:31 2008
NOTE: starting recovery of thread=1 ckpt=56.6552 group=1
ORA-15196: invalid ASM block header [kfr.c:6086] [endian_kfbh] [3] [6558] [0
!= 1]
NOTE: cache initiating offline of disk 0 group 1
WARNING: offlining disk 0.3916356260 (DATA_0000) with mask 0x3
NOTE: PST update: grp = 1, dsk = 0, mode = 0x6

Thu Feb 7 21:52:31 2008
@Errors in file /opt/oracle/admin/+ASM/bdump/+asm1_b000_17753.trc:
ORA-15001: diskgroup “DATA” does not exist or is not mounted
Alert.log ASM2
————–
Thu Feb 7 21:52:31 2008
@Errors in file /opt/oracle/admin/+ASM/udump/+asm2_ora_4385.trc:
ORA-17503: ksfdopn:DGOpenFile05 Failed to open file
+DATA/dwdev/parameterfile/spfiledwdev.ora
ORA-17503: ksfdopn:2 Failed to open file
+DATA/dwdev/parameterfile/spfiledwdev.ora
ORA-15001: diskgroup “DATA” does not exist or is not mounted
Patch 5554692 was applied to prevent the problem in the future.

The customer does not have a valid backup.
DISK Header
———–
dd if=/dev/sdc1 bs=8192 count=1 | od -c
0000000 001 202 001 001 \0 \0 \0 \0 \0 \0 \0 200 367 354 327 \
0000020 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000040 O R C L D I S K D A T A \0 \0 \0 \0
0000060 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000100 \0 \0 020 \n \0 \0 001 003 D A T A _ 0 0 0
0000120 0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000140 \0 \0 \0 \0 \0 \0 \0 \0 D A T A \0 \0 \0 \0
0000160 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000200 \0 \0 \0 \0 \0 \0 \0 \0 D A T A _ 0 0 0
0000220 0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000240 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
.
.
dd if=/dev/sdi1 bs=8192 count=1 | od -c
1+0 records in
1+0 records out
0000000 001 202 001 001 \0 \0 \0 \0 \0 \0 \0 200 367 354 327 \
0000020 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000040 O R C L D I S K D A T A \0 \0 \0 \0
0000060 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000100 \0 \0 020 \n \0 \0 001 003 D A T A _ 0 0 0
0000120 0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000140 \0 \0 \0 \0 \0 \0 \0 \0 D A T A \0 \0 \0 \0
0000160 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000200 \0 \0 \0 \0 \0 \0 \0 \0 D A T A _ 0 0 0
0000220 0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \

 

 

“alert_ASM1.log-new2.txt”:
.
last time DATA was mounted successfully:
.
Thu Feb 7 19:15:34 2008
NOTE: cache mounting group 2/0x554FBB0C (DATA) succeeded
SUCCESS: diskgroup DATA was mounted
.

.
Reconfiguration complete
Thu Feb 7 19:44:23 2008
Starting background process ASMB
ASMB started with pid=24, OS id=2278
Thu Feb 7 19:44:41 2008
WARNING: cache failed to read fn=453 indblk=0 from disk(s): 0
ORA-15196: invalid ASM block header [kfc.c:7997] [endian_kfbh] [453]
[2147483648] [0 != 1]
NOTE: a corrupted block was dumped to the trace file
System State dumped to trace file
/opt/oracle/admin/+ASM/udump/+asm1_ora_4636.trc
NOTE: cache initiating offline of disk 0 group 2
WARNING: offlining disk 0.3916384773 (DATA_0000) with mask 0x3
NOTE: PST update: grp = 2, dsk = 0, mode = 0x6
Thu Feb 7 19:44:42 2008
ERROR: too many offline disks in PST (grp 2)
.
.
alert_ASM2.log:
.
Thu Feb 7 19:50:04 2008
NOTE: cache mounting group 2/0x457D4099 (DATA) succeeded
SUCCESS: diskgroup DATA was mounted
.
.
Reconfiguration complete
Thu Feb 7 20:17:04 2008
Starting background process ASMB
ASMB started with pid=21, OS id=13439
Thu Feb 7 20:17:15 2008
WARNING: cache failed to read fn=453 indblk=0 from disk(s): 0
ORA-15196: invalid ASM block header [kfc.c:7997] [endian_kfbh] [453]
[2147483648] [0 != 1]
NOTE: a corrupted block was dumped to the trace file
System State dumped to trace file
/opt/oracle/admin/+ASM/udump/+asm2_ora_14194.trc
NOTE: cache initiating offline of disk 0 group 2
WARNING: offlining disk 0.3937251428 (DATA_0000) with mask 0x3
NOTE: PST update: grp = 2, dsk = 0, mode = 0x6
Thu Feb 7 20:17:18 2008
ERROR: too many offline disks in PST (grp 2)
Thu Feb 7 20:17:18 2008
NOTE: halting all I/Os to diskgroup DATA
NOTE: active pin found: 0x0x659fe548
Thu Feb 7 20:17:18 2008
ERROR: PST-initiated MANDATORY DISMOUNT of group DATA
.
.
1). The customer is using external redundancy. Since it is external redundancy, the customer should check with their storage vendor to see if they can restore it. From the Oracle perspective, because we do not store a copy of the data with external redundancy, the only way is to restore the diskgroup.
2). Please upload the bdump/udump of all ASM instances.
3). Please provide the kfed output of disk 0 (DATA_0000).
4). Please provide the first 80 MB of DATA_0000:
dd if=<target disk> of=<file> bs=4096 count=20480

.
[oracle@drac1 bin]$ kfed read /dev/databases |more
kfbh.endian: 1 ; 0x000: 0x01
kfbh.hard: 130 ; 0x001: 0x82
kfbh.type: 1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt: 1 ; 0x003: 0x01
kfbh.block.blk: 0 ; 0x004: T=0 NUMB=0x0
kfbh.block.obj: 2147483648 ; 0x008: TYPE=0x8 NUMB=0x0
kfbh.check: 1557654775 ; 0x00c: 0x5cd7ecf7
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKDATA ; 0x000: length=12
kfdhdb.driver.reserved[0]: 1096040772 ; 0x008: 0x41544144
kfdhdb.driver.reserved[1]: 0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]: 0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]: 0 ; 0x014: 0x00000000

kfdhdb.driver.reserved[4]: 0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]: 0 ; 0x01c: 0x00000000
kfdhdb.compat: 168820736 ; 0x020: 0x0a100000
kfdhdb.dsknum: 0 ; 0x024: 0x0000
kfdhdb.grptyp: 1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts: 3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname: DATA_0000 ; 0x028: length=9
kfdhdb.grpname: DATA ; 0x048: length=4
kfdhdb.fgname: DATA_0000 ; 0x068: length=9
kfdhdb.capname: ; 0x088: length=0
kfdhdb.crestmp.hi: 32900398 ; 0x0a8: HOUR=0xe DAYS=0x9 MNTH=0x1
YEAR=0x7d8
kfdhdb.crestmp.lo: 224448512 ; 0x0ac: USEC=0x0 MSEC=0x34 SECS=0x16
MINS=0x3
kfdhdb.mntstmp.hi: 32901363 ; 0x0b0: HOUR=0x13 DAYS=0x7 MNTH=0x2
YEAR=0x7d8
kfdhdb.mntstmp.lo: 3359888384 ; 0x0b4: USEC=0x0 MSEC=0xf5 SECS=0x4
MINS=0x32
kfdhdb.secsize: 512 ; 0x0b8: 0x0200
kfdhdb.blksize: 4096 ; 0x0ba: 0x1000
kfdhdb.ausize: 1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact: 113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize: 5722043 ; 0x0c4: 0x00574fbb
kfdhdb.pmcnt: 52 ; 0x0c8: 0x00000034
kfdhdb.fstlocn: 1 ; 0x0cc: 0x00000001
kfdhdb.altlocn: 2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn: 2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]: 0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]: 0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]: 0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]: 0 ; 0x0de: 0x0000
kfdhdb.dbcompat: 168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi: 32900398 ; 0x0e4: HOUR=0xe DAYS=0x9 MNTH=0x1
YEAR=0x7d8
kfdhdb.grpstmp.lo: 223643648 ; 0x0e8: USEC=0x0 MSEC=0x122
SECS=0x15 MINS=0x3
kfdhdb.ub4spare[0]: 0 ; 0x0ec: 0x00000000
kfdhdb.ub4spare[1]: 0 ; 0x0f0: 0x00000000
kfdhdb.ub4spare[2]: 0 ; 0x0f4: 0x00000000
kfdhdb.ub4spare[3]: 0 ; 0x0f8: 0x00000000
kfdhdb.ub4spare[4]: 0 ; 0x0fc: 0x00000000
kfdhdb.ub4spare[5]: 0 ; 0x100: 0x00000000
kfdhdb.ub4spare[6]: 0 ; 0x104: 0x00000000
kfdhdb.ub4spare[7]: 0 ; 0x108: 0x00000000
kfdhdb.ub4spare[8]: 0 ; 0x10c: 0x00000000
kfdhdb.ub4spare[9]: 0 ; 0x110: 0x00000000
.
.
[oracle@drac1 bin]$ kfed read /dev/raw/raw1
kfbh.endian: 0 ; 0x000: 0x00
kfbh.hard: 130 ; 0x001: 0x82
kfbh.type: 0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt: 0 ; 0x003: 0x00
kfbh.block.blk: 4290772992 ; 0x004: T=1 NUMB=0x7fc00000
kfbh.block.obj: 0 ; 0x008: TYPE=0x0 NUMB=0x0
kfbh.check: 0 ; 0x00c: 0x00000000
kfbh.fcn.base: 44045 ; 0x010: 0x0000ac0d
kfbh.fcn.wrap: 4096 ; 0x014: 0x00001000
kfbh.spare1: 51147 ; 0x018: 0x0000c7cb
kfbh.spare2: 2054913149 ; 0x01c: 0x7a7b7c7d
[oracle@drac1 bin]$ kfed read /dev/raw/raw2
kfbh.endian: 0 ; 0x000: 0x00
kfbh.hard: 34 ; 0x001: 0x22
kfbh.type: 0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt: 0 ; 0x003: 0x00
kfbh.block.blk: 4290772992 ; 0x004: T=1 NUMB=0x7fc00000
kfbh.block.obj: 0 ; 0x008: TYPE=0x0 NUMB=0x0
kfbh.check: 0 ; 0x00c: 0x00000000
kfbh.fcn.base: 59301 ; 0x010: 0x0000e7a5
kfbh.fcn.wrap: 512 ; 0x014: 0x00000200
kfbh.spare1: 409189 ; 0x018: 0x00063e65
kfbh.spare2: 2054913149 ; 0x01c: 0x7a7b7c7d

alert_+ASM1.log:
.
Thu Feb 7 19:44:23 2008
Starting background process ASMB
ASMB started with pid=24, OS id=2278
Thu Feb 7 19:44:41 2008

WARNING: cache failed to read fn=453 indblk=0 from disk(s): 0
ORA-15196: invalid ASM block header [kfc.c:7997] [endian_kfbh] [453]
[2147483648] [0 != 1]
NOTE: a corrupted block was dumped to the trace file
System State dumped to trace file
/opt/oracle/admin/+ASM/udump/+asm1_ora_4636.trc
NOTE: cache initiating offline of disk 0 group 2
WARNING: offlining disk 0.3916384773 (DATA_0000) with mask 0x3
NOTE: PST update: grp = 2, dsk = 0, mode = 0x6
Thu Feb 7 19:44:42 2008
ERROR: too many offline disks in PST (grp 2)
.
asm1_ora_4636.trc:
.
*** SERVICE NAME:() 2008-02-07 19:44:41.691
*** SESSION ID:(100.102) 2008-02-07 19:44:41.690
OSM metadata block dump:
kfbh.endian: 0 ; 0x000: 0x00
kfbh.hard: 162 ; 0x001: 0xa2
kfbh.type: 0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt: 0 ; 0x003: 0x00
kfbh.block.blk: 114084224 ; 0x004: T=0 NUMB=0x6ccc980
kfbh.block.obj: 0 ; 0x008: TYPE=0x0 NUMB=0x0
kfbh.check: 83951616 ; 0x00c: 0x05010000
kfbh.fcn.base: 26700 ; 0x010: 0x0000684c
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
—– Abridged Call Stack Trace —–
kfcReadBlk()+1181
kfcReadBuffer()+917
kfcGet0()+14974
kfcGet1Priv()+161
kfcGetPriv()+178
kffSendMap()+2050
kffIdentify()+1552
kfnsFileIdentify()+595
kfnDispatch()+86
opiodr()+984
ttcpip()+1235
opitsk()+1298
Cannot find symbol
Cannot find symbol
opiino()+1028
opiodr()+984
opidrv()+547
sou2o()+114
opimai_real()+163
main()+116
Cannot find symbol
<0x3b49e1c3fb>

It seems that we have two block corruptions since the last time the diskgroup was mounted.
.
[32BIT oracle@orcl bdump]$ grep -n '15196' alert_+ASM1.log | sort | uniq
ORA-15196: invalid ASM block header [kfc.c:7997] [endian_kfbh] [453]
[2147483648] [0 != 1]
ORA-15196: invalid ASM block header [kfr.c:6086] [endian_kfbh] [3] [6558] [0
!= 1]
.
Usually this type of corruption requires a restore of the diskgroup if the redundancy is external.
You may want to check if DUL can help in this case.
Check http://www.parnassusdata.com/en/emergency-services

ORACLE ASM DISK HEADER ERASED – NEED TO EXTRACT DATA

I have a customer who ran a dd command on one of his ASM disks.
As no database backup exists, the customer needs help extracting data from the diskgroup.
He tried:
amdu -diskstring '/dev/oracleasm/disks/*' -ausize 1048576 -blksize 4096 -baddisks DGDATA01 -extract 'DGDATA01.xxxx'
but he got:
************************ EXTRACTING FILE DGDATA01.404
************************
AMDU-00208: File directory block not found. Cannot extract file DGDATA01.404
** FILE DIRECTORY BLOCK NOT FOUND. CANNOT EXTRACT FILE DGDATA01.404 **
******************************* END OF REPORT
********************************
DIAGNOSTIC ANALYSIS:
——————–
—————————– DISK REPORT N0001
——————————
Disk Path: /dev/oracleasm/disks/DGDATA01
Unique Disk ID:
Disk Label:
Physical Sector Size: 512 bytes
Disk Size: 2097146 megabytes
Group Name: DGDATA01
Disk Name:
Failure Group Name:
Disk Number: 0
Header Status: 0
Disk Creation Time: 0000/00/00 00:00:00.000000
Last Mount Time: 0000/00/00 00:00:00.000000
Compatibility Version: 0x00000000(0)
Disk Sector Size: 512 bytes
Disk size in AUs: 2097146 AUs
Group Redundancy: 2
Metadata Block Size: 4096 bytes
AU Size: 1048576 bytes
Stride: 0 AUs

 

Group Creation Time: 0000/00/00 00:00:00.000000
File 1 Block 1 location: AU 0
OCR Present: NO
** NOT A VALID ASM DISK HEADER. BAD VALUE IN FIELD blksize_kfdhdb **

 

Bug 5061821 OS UTILITIES CAN DESTROY ASM DISK HEADER is fixed in 11.2.0.1, 11.1.0.7, 10.2.0.5 and higher.

From that bug one learns: in recent versions there is an extra copy of the ASM disk header.

This copy can be used to fix the real header using kfed with the repair
option.

This copy is stored as the last block of the PST. That means it is in the last block of allocation unit 1 (the original is block 0 of AU 0).

The default allocation unit size is 1 MB and the default metadata block size is 4 KB, meaning 256 blocks in each AU. So typically the copy is in AU 1, block 254. (ASM counts from zero; the original is in allocation unit 0, block 0.)
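Before repairing, you can sanity-check the backup copy with kfed; a sketch assuming the default 1 MB AU and 4 KB block size (the device name is illustrative):

$ kfed read /dev/sdc1 aun=1 blkn=254 | grep kfbh.type
kfbh.type: 1 ; 0x002: KFBTYP_DISKHEAD

If this block does not show KFBTYP_DISKHEAD either, the backup copy is damaged too and kfed repair will not help.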

Provided you have established that the only problem is the lost/corrupt disk header, the fix is as simple as:

$ kfed repair <disk name>
If the AU size is non-standard, the above will fail with something like:

KFED-00320: Invalid block num1 = [3], num2 = [1], error = [type_kfbh]

But that is expected and no harm is done.
All you need to do is specify the correct AU size.

E.g. for a 4 MB AU the command would be:

$ kfed repair <disk name> ausz=4194304

If you cannot recover the data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638

E-mail: service@parnassusdata.com

 

Oracle RECOVER A DATAFILE WITH MISSING ARCHIVELOGS

Symptoms

The database cannot be opened, because a datafile checkpoint is lagging behind the rest of the datafiles.

Cause A datafile was restored from a previous backup, but the archivelogs required to recover that datafile are missing.

Solution There are 3 options available, as shown below:

 

Option#1: Restore the database from the same backupset, and then recover it by applying archivelogs up to the last one available to roll it forward. Any updates made to the database after that point in time will be lost.
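A minimal sketch of Option#1 in SQL*Plus (assuming the database and controlfile were restored from the backupset; if the controlfile is current, omit USING BACKUP CONTROLFILE):

SQL> startup mount
SQL> recover database until cancel using backup controlfile;
(apply each available archivelog when prompted, then type CANCEL at the first missing log)
SQL> alter database open resetlogs;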

 

Option#2: Force open the database by setting _ALLOW_RESETLOGS_CORRUPTION=TRUE in the init.ora. There is no 100% guarantee that the database will open. However, once the database is opened, you must immediately rebuild it. Rebuilding the database means: (1) perform a full-database export, (2) create a new and separate database, and finally (3) import the recent export dump. Note: When exporting after opening the database with the allow-corruption parameter, you should set event 10231 before the export to extract data from all non-corrupt blocks, so that the data can be imported successfully. For additional information about event 10231, please review Note 21205.1: EVENT: 10231 "skip corrupted blocks on _table_scans_".
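A sketch of setting event 10231 in the init.ora before the export, per Note 21205.1 (remove it again after the rebuild):

event="10231 trace name context forever, level 10"

It can also be set at session level before a session-driven export:

SQL> alter session set events '10231 trace name context forever, level 10';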

 

Option#3: Manually extract the data using Oracle's Data Unloader (DUL).

If you cannot recover the data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

Data Salvage Using Oracle DUL

If you cannot recover the data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

 

Service Hotline:  +86 13764045638

E-mail: service@parnassusdata.com

 

DUL Introduction

  • Developed by Bernard Van Duijnen
  • Is a stand-alone disaster recovery utility, and requires no other Oracle software to run
  • Unloads data from data files of a crashed database
  • Creates SQL*Loader control files and data files

A useful utility has been developed in the Netherlands, called the Data Recovery Unloader (DUL). DUL is a stand-alone C program that directly retrieves rows from tables in data files. The RDBMS is not necessary.

DUL is intended to retrieve data from the database that cannot otherwise be retrieved. The database may be corrupted, but an individual data block to be used by DUL must be 100% correct.

DUL unloads data from files containing table and cluster data. It creates no scripts for triggers, procedures, tables, and views; it can only read the definitions from the data dictionary tables.

DUL can create SQL*Loader files or export files.

When to Use DUL

You can use DUL in disaster scenarios to provide your customer more current data than the last backup or export.

Before using DUL, you will need to learn more about the ACID principle (Atomicity, Consistency, Isolation, and Durability), Oracle’s transaction internals, block structures (headers and footers), file structures, and OS file systems. These topics are discussed in more detail in the DSI402 and DSI402e courses.

 

Also, make sure that your database is really unrecoverable. Recovery is discussed in the DSI403 and DSI403e courses. Maybe your database can also be rescued by using utilities like BBED and ORAPATCH, as discussed in DSI401.

DUL Requirements

You need the most current data files, preferably all of them from the same time frame. You need a knowledgeable resource to discuss OS, database, and schema details. You also need a new instance or location to put everything back together, with a lot of free disk space (up to three times the size of your database.)

Good Things to Have

  • Data file listing
  • DDL scripts to re-create objects
  • Any export of the instance; ancient is acceptable
  • Idea of how much data is to be retrieved
  • Patience and luck

DUL Features

 

DUL Features

 

 

  • Available on all Oracle versions since version 6, on many platforms
  • Not an alternative to standard recovery methods
  • Not a supported utility
  • Used as a last resort only, as the ultimate solution for an otherwise dead-end situation
  • Reads data dictionary if the files are available, but can also work without dictionary files
  • Can unload individual tables and schemas, or the complete database
  • Supports cross-platform unloading

DUL version 3 is implemented and tested for Oracle7. The most recent version is DUL version 9, which is implemented for Oracle9i Release 9.0.1.

DUL is available on the following platforms: Sequent/ptx, VAX/VMS, Alpha VMS, MVS, HP9000/8xx, IBM AIX, SCO Unix, Linux, Alpha OSF/1, Intel Windows NT, and Windows95.

DUL is not a supported utility, that is, there is no guarantee for bug fixes.

DUL should be used as a last resort after all other recovery alternatives have been exhausted. This includes the use of destructive events. However, to use these destructive events, the database must be in reasonably good shape to be opened and to remain open while an export is being performed. There are examples of databases that were restored from two disparate backups taken two weeks apart. In such situations it is often very difficult to successfully export, even if the database can be opened.

If the system tablespace files are available, DUL reads the data dictionary information and unloads the user$, obj$, tab$, and col$ tables. Old system tablespace backup files from the same database are better than nothing, and can also be used.

If the system data files are not available, DUL scans the data files for segments and extents, and scans the found segments for rows. Then you unload the found segments with the UNLOAD TABLE command. To do this, you must have a good knowledge of the database structure, such as column and table storage information.

Databases can be unloaded on platforms other than the DUL host platform. Simply copy the data files, and modify the configuration files appropriately.

DUL Support

 

The  following standard constructs are supported:

  • Chained or migrated rows
  • Hash or index clusters
  • NULLs, trailing NULL columns
  • LONGs, RAWs, DATEs, NUMBERs, ROWIDs (extended and restricted)
  • Multiple free list groups
  • Unlimited extents
  • Partitioned tables, index-organized tables
  • Multibyte character sets
  • LOBs

DUL supports the standard data types for Oracle9i, including features such as index-organized tables and partitioned tables.

Multibyte character sets are supported, but with workarounds for the DUL command parser. In particular, it is problematic to specify table names and user names in a multibyte character set. DUL is essentially a single-byte application. The command parser does not understand multibyte characters.

The current version of DUL also supports LOBs, internal ones (CLOBs, BLOBs) better than external ones (BFILEs, CFILEs).

Note: Internal LOBs are only supported with the SQL*Loader format.

One of the most problematic scenarios is having LOBs without a system tablespace.

 

DUL Restrictions and Dangers

  • The database can be corrupt, but individual blocks used must be 100% correct.
  • DUL unloads LONG RAWs, but LONG RAW data cannot be loaded using SQL*Loader.
  • There is no support for:
    • MLSLABELs
    • VARRAYs, objects, and nested tables
  • DUL performs dirty reads; no data consistency or validity is guaranteed.

The database may be corrupted, but an individual data block that is used must be correct. During all unloading, blocks are checked to verify that they are not corrupted and they belong to the correct segment. If a corrupted block is detected, an error message is generated to the loader file.

There is no suitable format in SQL*Loader to preserve all LONG RAWs. Use the export mode instead, or write a Pro*C program to load the data.

Multilevel security labels (MLSLABELs) are not supported. VARRAYs, objects, and nested tables are not yet supported.

DUL performs dirty reads from the data files. Data will likely be logically inconsistent, and therefore will have to be revalidated.

DUL Expectations

It is important to understand what the limitations and dangers of DUL are. Be sure to set clients’ expectations about what the service is, and what their role is. Make sure they can get back to recovery operations, preferably semi-production status.

“If you are there, it is a bad situation already, so your help is their best hope.”

Before you start using DUL, understand what brought you to this procedure. Be aware that all transactional integrity is thrown out the window; file and consistency checks are ignored.

Look at the current backup and recovery strategies, and find out how they failed.

 

 

Configuring DUL

Two configuration files are necessary:

  • init.dul
  • control.dul

There are two configuration files for DUL:

  • init.dul contains all configuration parameters, such as size of caches, details of header layout, Oracle block size, and output file format.
  • In the control.dul file, the data file names and the Oracle file numbers must be specified.

Note: The following slides explain the usage of DUL version 9.0.1.0.0.

 

init.dul Parameters

 

Database  configuration parameters:

  • db_block_size
  • compatible

Dictionary cache information:

  • dc_columns
  • dc_tables
  • dc_objects
  • dc_users

Database Configuration Parameters

The init.dul file contains database configuration parameters, such as

db_block_size and compatible.

The compatible parameter identifies the database version; valid values are 6, 7, 8, or 9. This parameter must be specified. Most other DUL parameters have default values.

Dictionary Cache Information

The DUL cache must be large enough to hold all entries from the col$, tab$, obj$, and user$ dictionary tables. If the DUL cache was configured too small, earlier releases of DUL did not start, and reported the following error:

  • DUL: Error: parameter dc_xxxxxx too low

This meant that the dc_xxxxxx parameter had to be increased and DUL restarted. However, the current release of DUL automatically raises the dc_xxxxxx parameter in such cases, gives a warning, and continues operation.

Note: To give the dc_xxxxxx parameters a historical perspective: Oracle version 6 had more than twenty different dc_ initialization parameters. These parameters were replaced in Oracle7 by the new shared_pool_size parameter.

 

init.dul Parameters

OS-specific parameters:

  • osd_big_endian_flag
  • osd_dba_file_bits
  • osd_c_struct_alignment
  • osd_file_leader_size
  • osd_word_size

Other common parameters:

  • control_file
  • ldr_enclose_char
  • ldr_phys_rec_size
  • buffer
  • export_mode
  • filesize_in_mb

OS-Specific Parameters

osd_big_endian_flag (Boolean)

Determines whether or not the platform is byte-swapped. Current versions of DUL set the default according to the machine it is running on. HP, Sun, and mainframes generally are big endian (true); Intel and DEC are little endian (false).

osd_dba_file_bits

The number of bits in a dba used for the low order part of the file number. For example, you should set osd_dba_file_bits on Sun Sparc Solaris to 10 (Oracle8i and higher).
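As a worked illustration (assuming the usual 32-bit DBA layout, where the file number occupies the top osd_dba_file_bits bits): with osd_dba_file_bits = 10, the remaining 32 - 10 = 22 low-order bits hold the block number, so the DBA for file 4, block 5 would be 4 * 2^22 + 5 = 16777221.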

osd_c_struct_alignment

The C structure member alignment (0, 16, or 32); it must be set to 32 for most ports

osd_file_leader_size

Contains the number of bytes/blocks added before the real Oracle file header block

osd_word_size

The size of a machine word; always 32, except for MS-DOS (16)

 

Other Common Parameters

 

control_file

The name of the DUL control file (default: control.dul)

ldr_enclose_char

Contains the character to enclose fields in SQL*Loader mode

ldr_phys_rec_size

The physical record size for the generated loader data file:

0: No fixed records, each record is terminated with a newline

>2: Fixed record size

buffer

Indicates the row output buffer size (default: 64 KB)

export_mode

This boolean parameter determines whether the output mode is the export format or SQL*Loader format (default: FALSE; that is, SQL*Loader mode).

filesize_in_mb

This parameter specifies the maximum output file size. Output files will be split by DUL if necessary (default value is 0; that is, the output file will not be split).

 

 

init.dul Example
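A minimal init.dul sketch for a little-endian Linux host and an Oracle9i database (all values are illustrative and must match your own database and platform; the parameters are those described above):

compatible = 9
db_block_size = 8192
osd_big_endian_flag = false
osd_dba_file_bits = 10
osd_c_struct_alignment = 32
osd_word_size = 32
dc_columns = 200000
dc_tables = 10000
dc_objects = 10000
dc_users = 40
control_file = control.dul
export_mode = false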

 

 

control.dul

 

  • This file contains the mapping of file numbers to file names (see v$datafile).
  • The format for Oracle8i and Oracle9i files is:

ts#  relative_file#  data_file_name [optional extra leader offset] [startblock block#] [endblock block#]

 

  • Each entry is located on a separate line and can specify a part of a data file using the startblock and endblock keywords.

 

 

A DUL control file (default name: control.dul) is used to translate the file numbers to file names. The order is important. The first three fields (tablespace number, relative file number, and data file name) are mandatory; the other three fields are optional. The optional extra leader offset is an extra byte offset added to all lseek() operations for that data file. This makes it possible, for example, to skip over the extra block for AIX on raw devices, or to unload from fragments of a data file.

You can use the following script [03_control_dul.sql] to generate a valid DUL control file:

SQL> connect / as sysdba

SQL> startup mount

SQL> set trimspool on pagesize 0 linesize 256 feedback off

SQL> column name format a200

SQL> spool control.dul

SQL> select ts#, rfile#, name from v$datafile;

SQL> spool off

Example

 

0 1 D:\ORACLE\ORADATA\LHAAN\SYSTEM01.DBF
1 2 D:\ORACLE\ORADATA\LHAAN\UNDOTBS01.DBF
2 3 D:\ORACLE\ORADATA\LHAAN\CWMLITE01.DBF
3 4 D:\ORACLE\ORADATA\LHAAN\DRSYS01.DBF
4 5 D:\ORACLE\ORADATA\LHAAN\EXAMPLE01.DBF
5 6 D:\ORACLE\ORADATA\LHAAN\INDX01.DBF
7 7 D:\ORACLE\ORADATA\LHAAN\TOOLS01.DBF
8 8 D:\ORACLE\ORADATA\LHAAN\USERS01.DBF

 

 

Using DUL

 

 

  1. Create appropriate init.dul and control.dul files.
  2. Invoke DUL to unload the dictionary.
  3. Unload the data.
  4. Rebuild the database.
  5. Validate the data.

 

During startup, DUL goes through the following steps:

  • The init.dul parameter file is processed.
  • The DUL control file (default control.dul) is processed.
  • DUL tries to load dumps of the user$, obj$, tab$, and col$ tables (if available) into its data dictionary.
  • DUL tries to load SEG.dat and EXT.dat.
  • DUL accepts DDL statements, or runs the DDL script specified as the first argument.

Note: DUL assumes all files to be located in the current (working) directory.

DUL Output

The DUL output depends on the export_mode parameter in the init.dul file. If you have the DDL for the objects, you should use it to re-create the objects in your new instance. Note that the export format produces more compact files, which might be important in the case of huge output files.

SYSTEM Tablespace

When unloading data from the wrecked data files, you have two options:

  • Available system tablespace data file
  • No system tablespace data file

Note: An old system tablespace data file is better than none at all!

 

With System Tablespace Files

  1. Create init.dul and control.dul files.
  2. Start DUL and unload the dictionary:

$ dul

DUL> bootstrap;

  3. Unload the objects, for example:

DUL> unload table <username.table>;
DUL> unload user <username>;
DUL> unload database;

 

Commonly Used DUL Commands
In case you have experience with earlier releases of DUL, you should know that the current version of DUL automatically generates and runs the dictv7.ddl and dictv8.ddl dictionary scripts when you issue the bootstrap command, so you no longer need to run those scripts.
The most commonly used DUL commands are:

DUL> unload table hr.employees;
. unloaded  107  rows

DUL> unload user hr;
About to unload HR’s tables …
. unloading table REGIONS 4 rows unloaded
. unloading (index organized) table COUNTRIES 25 rows unloaded
. unloading table LOCATIONS 23 rows unloaded
. unloading table DEPARTMENTS 27 rows unloaded
. unloading table JOBS 19 rows unloaded

DUL> unload database;
This unloads the entire database.

 

Without System Tablespace Files

 

1. Create init.dul and control.dul files.

2. Start DUL and scan the database for segment headers and extents:

DUL> scan database;

3. Restart DUL and scan the found tables for column statistics:

DUL> scan tables;

4. Identify the scanned tables.

5. Unload the identified tables.

In this case, you need in-depth knowledge about the application and the application tables; the unloaded data will be meaningless if you do not know which table it came from. These are some potential challenges:

  • Data types can be guessed by DUL, but table and column names are lost; the guessed column types can be wrong.
  • DUL does not find trailing columns that contain only NULL values, because trailing NULL columns are not stored in the database.
  • Tables that have been dropped can be seen, because when a table is dropped, only its definition is removed from the data dictionary.
  • Empty tables (without rows) will go unnoticed.

Step 2

DUL> scan database;

DUL: Warning: Recreating file "EXT.dat"
DUL: Warning: Recreating file "SEG.dat"
DUL: Warning: Recreating file "COMPATSEG.dat"
tablespace 0, data file 1: 83199 blocks scanned
tablespace 1, data file 2: 51199 blocks scanned
tablespace 2, data file 3: 5119 blocks scanned
tablespace 3, data file 4: 5119 blocks scanned
tablespace 4, data file 5: 39039 blocks scanned
tablespace 5, data file 6: 6399 blocks scanned
tablespace 7, data file 7: 2559 blocks scanned
tablespace 8, data file 8: 11199 blocks scanned
Loaded 3131 entries from EXT.dat
Sorted 3131 entries
Loaded 2170 entries from SEG.dat
Loaded 1 entries from COMPATSEG.dat

As you can see, the following two files are generated by the scan database command:

  • SEG.dat: Information about found segment headers
  • EXT.dat: Information about contiguous table or cluster data blocks

Step 3

The third step tells DUL that it should use its own generated extent map rather than the segment header information, and scan all tables in all data segments using the commands:

DUL> alter session set use_scanned_extent_map=true;
Parameter altered

DUL> scan tables;

You can also use:

DUL> scan extents;

Step 4

Hopefully, the output from the previous step looks familiar. Use this information and your knowledge of the data to identify the lost tables. Note that this is the most difficult (and mainly manual) part of using DUL in the absence of the SYSTEM tablespace data files.

Note: at the end of this lesson, you will see a link pointing to two fully documented DUL cases, including all DUL screen output and corresponding explanations.

Step 5

Unload the identified tables, for example, with the following command:

DUL> unload table departments
( department_id number
, department_name varchar2(30)
, manager_id number
, location_id number )
storage (objno 31444);

. unloading table DEPARTMENTS    27 rows unloaded

 

 

Rebuilding the Database

  • Rebuild phase:
    • Load the database by using Import.
    • Load the database by using SQL*Loader.
  • Validation phase:
    • Data consistency check
    • Back up your database
    • Root cause analysis: what happened, and how to prevent it from happening again

 

To rebuild the database, load the unloaded information with the Import utility or the SQL*Loader utility.

Load the Database by Using Import

If you use the export mode to unload the data, DUL generates a separate import file for each table with minimal information in it. That is, grants, storage clauses, or triggers are not included. The output file name generated by DUL is ownername_tablename.dmp; for example, HR_EMPLOYEES.dmp.

Load the Database by Using SQL*Loader

For SQL*Loader output formats, the columns are separated by spaces and enclosed in double quotes. The output file names generated by DUL are:

ownername_tablename.dat for the data file

ownername_tablename.ctl for the control file
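A hypothetical sketch of what a generated control file (for example, HR_EMPLOYEES.ctl) might contain; the column names and layout here are illustrative, not DUL's exact output:

LOAD DATA
INFILE 'HR_EMPLOYEES.dat'
INSERT INTO TABLE HR.EMPLOYEES
FIELDS TERMINATED BY WHITESPACE OPTIONALLY ENCLOSED BY '"'
( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, SALARY )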

Now What?

When you have finished, these are typically the remaining steps:

  • Data should be checked by the customer for consistency.
  • Once the data is validated, it should be backed up.
  • Root cause analysis is sometimes requested. A clear description of the environment and the facts behind the event is needed.

 

DUL Case Studies

Click the link below to visit two case studies about using DUL:

  • In the presence of system tablespace files
  • In the absence of system tablespace files

Note: to open the case studies in a different browser window, right-click and select the appropriate choice.

 

DUL Download: Oracle Data Unloader

If you cannot recover the data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

ORACLE PRM-DUL Download: http://zcdn.parnassusdata.com/DUL5108.zip

DUL Allows recovery of data from raw datafiles

Download DataUnLoader_Toolkit.zip

Download DataUnLoader_Labs.zip

Download executable for your desired platform

Best Practice: If going on-site, download all platforms ahead of time in case internal Oracle access is later prohibited.

Download current exe zip/gz file for platform

DUL Utility (Note: date effective copy protection)

DUL Manual

Oracle DUL Data Unloader data recovery tool information summary

 

Oracle DUL is Oracle's internal database recovery tool, developed by Bernard van Duijnen of Oracle Support in the Netherlands:

  • DUL is not a product of Oracle
  • DUL is not supported by Oracle
  • DUL is strictly limited to internal use by Oracle Support's after-sales support department
  • Using DUL outside Oracle must go through Oracle's internal approval; before using it, you need to purchase Oracle's standard PS service first, otherwise you are not even eligible to use DUL
  • One reason DUL is strictly controlled is its use of the Oracle source code

 

Starting from DUL 9, Bernard van Duijnen added a software time lock to DUL to prevent its external use. He periodically compiles DUL on different platforms (DUL is based on the C language) and uploads the binaries to the internal DUL workspace (based on a stbeehive space), from which Oracle Support downloads them over the internal VPN. For example, if Bernard van Duijnen released a version on October 1 and the date lock is 30 days, that version basically becomes invalid by November 1. DUL does not read the OS time, so changing the OS time is useless; it actually reads a current time recorded in Oracle's data files, and normal users will not change that time just to use DUL.

 

Note that Bernard van Duijnen does not provide DUL on HP-UX, so there is no DUL version for HP-UX.

 

Also note that earlier versions of DUL cannot be used against current 10g, 11g, or 12c databases, because they are too old. Use of DUL in the United States is restricted. In China, basically only the Oracle ACS customer service department offers it externally, and the price of ORACLE ACS field services is fairly expensive.

 

The attached document introduces the DUL service provided by Oracle ACS (of course, the original field service is relatively expensive, and it is available only if the user has purchased the annual PS standard support; otherwise you cannot even buy ACS Advanced Field Service):

DUL – DATA RECOVERY UNLOADER DataSheet

https://www.askmac.cn/wp-content/uploads/2014/01/DUL.pdf

 

 

DUL 10 English version of user manual:

DUL User’s and Configuration Guide V10.2.4.27

https://www.askmac.cn/wp-content/uploads/2014/01/DUL-Users-and-Confi…

 

The following are the download links for DUL 10, but because of the time lock, they expire regularly.

DUL FOR LINUX platform (updated to PRM-DUL)

DUL FOR Windows platform (updated to PRM-DUL)

 

 

DUL can extract data from a severely damaged database. DUL can directly scan Oracle datafiles, identify the header block and the segment header, access the extent information, and read data from the actual rows. It can then generate SQL*Loader input files or EXP-style DMP files.

 

If the SYSTEM tablespace datafiles exist, DUL reads the Oracle data dictionary. Otherwise DUL reads the actual rows directly and determines the field types and lengths using an internal algorithm.

 

DUL can process essentially all common row types, including regular rows, migrated rows, chained rows, multiple extents, and clustered tables, without additional manual intervention. Cross-platform extraction is also supported. DUL extracts data directly from Oracle datafiles, without needing an Oracle database instance. It performs dirty reads, assuming that every transaction has already committed. DUL does not check whether media recovery has been done; even damaged data blocks can be read out. Both DMT and LMT tablespaces are supported. Because of its dirty reads, it is generally recommended to have the application verify the data after a DUL recovery.

 

In terms of compatibility, DUL can process data files copied from different operating systems. It supports most database structures: chained rows, migrated rows, hash/index clusters, LONG, RAW, ROWID, DATE, NUMBER, multiple freelists, high-water mark, NULLs, and so on. DUL is compatible with Oracle 6, 7, 8, 9, 10g, 11g, and 12c.

 

 

 

 

 

Parnassusdata Software (where Maclean works) has developed a similar product, PRM-DUL. The product is built on DUL and adds a graphical user interface (GUI) and DataBridge (data does not need to go through SQLLDR files; it can be transferred directly to the target database over a DBLINK), among other functions. And because PRM-DUL is written in Java, it covers all platforms, including HP-UX.

PRM-DUL free version download:

http://parnassusdata.com/sites/default/files/ParnassusData_PRMForOracle_…

PRM-DUL’s manual

http://www.parnassusdata.com/sites/default/files/ParnassusData%20Recover… 8B% E5% 86% 8C% 20v0.3.pdf

 

The free version of PRM-DUL can, by default, extract only one million rows of data from each table. If your database's tables have no more than ten thousand rows each, you can use the free PRM-DUL directly. If your database is large and contains important data, you can consider buying the Enterprise Edition of PRM-DUL. PRM-DUL Enterprise provides a license per set of databases; each license costs 7500 yuan (including 17% VAT).

Meanwhile, PRM-DUL also provides some free licenses:

Several PRM-DUL Enterprise Edition License Keys have been released for free

 

 

 

 

If your Oracle database recovery case still fails after using DUL, you may consider the recovery service:

Parnassusdata Software now handles almost all Oracle recovery cases, including databases that cannot open; tables mistakenly dropped, truncated, or deleted; ASM diskgroups that cannot be mounted; and so on.

Oracle critical problem (system.dbf crash, no backup, no metadata)

Sometimes one of our customers' databases crashes for some reason while I am working as a DBA.

I know Oracle DUL very well, but I am not an Oracle Korea employee or engineer.

So I cannot get an Oracle DUL binary whenever I want.

 

There was a critical problem (system.dbf crashed, no backup, no metadata) on Monday (25 May) at one of my big customers' instances.
So I have been using DUL without the SYSTEM datafile and unloading until now.
As I said, I am not an Oracle employee, so it is very hard to ask someone for the latest DUL binary,
and it took a lot of time.
There are another two questions:
1. Is there any distributor or partner in Korea selling your product?
2. Can I buy your product to support our customers' database recovery?
Answer:
Thanks for your interest. Until now, we have no partner in Korea. We sell PRM as a product, but we also provide a recovery service for cases like yours. If system.dbf is not lost but just corrupted, we can fix the problem and open the database without PRM or DUL. The recovery service can be delivered by remote control or onsite. This way, we can recover from the trouble as soon as possible, because we just patch the database; extracting and restoring the data is no longer needed. The service fee depends on the case. The PRM price is 1500 US dollars for an Enterprise License per database; anyone can buy it and pay via credit card or PayPal.

Oracle DUL Help in Reconstructing R12 11g database catalog due to corruption in the database

I have a customer whose database has been corrupted, and they need support from experts. Please find the bullet points below.

Can we use GoldenGate in the case below? Please advise.

 

At this time, Support is unable to offer a solution; the only alternative is to extract the data from the database and migrate it into a fresh 12.1.3 install, which is similar to an implementation project.

 

Answer:

The best utility to use with a badly damaged database is DUL.
DUL can get data out of pretty much anything as far as I recall.

http://www.parnassusdata.com/en/node/270

Oracle DUL : Help for extracting a table from an incomplete EXPDP dump

We have a customer who is trying all means to get one very critical table (its data) from an incomplete Data Pump export. Below is the scenario:

EXPDP failed after exporting 56 tables, due to space crunch issues.

Per the log file, the master table was not unloaded cleanly.

Now, while importing one table, it fails with:

ORA-39246: cannot locate master table within provided dump files

The table they are trying to import back is one of the tables that was fully exported in the failed expdp above.
Is there a possibility to use DUL (UNEXP/UNPUMP) to unload only that table's data from the dump file, with the below considerations:
  OS Solaris 5.10, DB Oracle 10.2.0.3
  The table to be restored is a partitioned table
  The expdp was taken with parallel/multiple dump files
I have downloaded the latest user guide, but it does not have much detail on UNPUMP. Could you please let us know the steps/procedure needed for UNPUMP, if feasible in the above scenario.

 

This is a complicated and new scenario. Maybe we need some new tricks, especially for support of parallel unpump.
The best explanation of the unpump command is in the wiki:

http://www.parnassusdata.com/en/node/270

You would typically start with an "unpump scan" to locate the stream headers, then continue with "unpump stream header" for each header.
Is it possible to get a copy of those files on the Oracle network? Then I can inspect them myself, and that could save us some round trips.

How to estimate the unload time of Oracle DUL

I have a customer who wants to use DUL to unload data from a corrupted database of about 1.4 TB. How much time will it take to finish the whole process?

 

 

This database is corrupted due to a storage problem and cannot start up, but all datafiles are at the same SCN, so I believe the data dictionary is usable and we do not need to scan the database/segments/extents; an easy "unload database" may work.

 

But we are not sure how much time it will take to unload a 1.4 TB database.

Can anyone provide similar experience?

 

Answer:

There is no simple rule; typically most time is spent setting up the new database and loading the extracted data into it. Extraction is single-threaded and not really optimized for speed, but it is seldom the biggest bottleneck.

You do not need RAC to run multiple DUL sessions concurrently; you can do that on any system with sufficient bandwidth, as long as you run them from different directories. But typically the unloading is not the biggest bottleneck.

 
