I have a customer who wants to unload data from a corrupted database of about 1.4TB using DUL. How long will
the whole process take?
This database is corrupted due to a storage problem and cannot start up, but all datafiles are at the same SCN, so I
believe the data dictionary is usable and we don't need to scan the database/segments/extents; a simple "unload
database" should work.
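Something like the following is what I have in mind (a minimal sketch only; the bootstrap and "unload database" commands follow the DUL documentation, but the script name and invocation style are assumptions and vary by DUL version and platform):

    # init.dul / control.dul are assumed to be set up already for
    # this platform and to describe all the datafiles.
    cat > unload.ddl <<'EOF'
    bootstrap;        -- build the data dictionary from the SYSTEM datafile
    unload database;  -- unload all user tables via the dictionary
    EOF
    ./dul unload.ddl  # invocation style may differ per DUL build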
But we are not sure how much time it will take to unload 1.4TB of data.
Can anyone share a similar experience?
Answer:
There is no simple rule. Typically most of the time is spent setting up the new database and loading the extracted data into it. Extraction is single-threaded and not really optimized for speed, but it is seldom the biggest bottleneck.
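If DUL was run in SQL*Loader mode, the reload itself can be parallelized across the generated files. A rough sketch, with hypothetical control-file names and credentials:

    # Direct-path load of the DUL-generated files, several tables
    # at a time; file names and the connect string are placeholders.
    for ctl in EMP.ctl DEPT.ctl ORDERS.ctl; do
        sqlldr userid=scott/tiger control="$ctl" direct=true &
    done
    wait   # continue once all loads have finished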
You do not need RAC to run multiple DUL sessions concurrently; you can do that on any system with sufficient I/O bandwidth, as long as you run them from different directories. But typically the unloading is not the biggest bottleneck.
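For example, something along these lines, assuming one prepared working directory per session, each with its own init.dul, control.dul, and an unload script covering its own subset of the data (all names here are placeholders):

    # Run several independent DUL sessions, one per directory.
    for d in sess1 sess2 sess3; do
        ( cd "$d" && ../dul unload_part.ddl ) &   # each unloads its own subset
    done
    wait   # continue once every session has finished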