Know about Oracle High Water Mark

Note that there is no HWM for datafiles; for a datafile the term is only used informally to describe the last block containing data, which is the minimum size to which the datafile can be resized down.

This article provides a SQL script to find tables that are fragmented (i.e. whose actual data volume is much lower than the high water mark), so that those segments (tables) can be targeted for recreation.

Software Requirements/Prerequisites

Execution Environment : SQL, SQL*Plus

Access Privileges     : Requires DBA privileges; the script is to be run as the owner SYS or SYSTEM

Prerequisites         : Run ANALYZE with COMPUTE STATISTICS on all tables present in the user's schema,
                        i.e. ANALYZE TABLE <table_name> COMPUTE STATISTICS;

Usage                 : sqlplus <username>/<password>

                        SQL> @fragment.sql

Advisory              : Will not work on compressed tables; may return negative numbers.

 

Configuring the Script

1. The user needs DBA privileges to access DBA_TABLES.

2. Statistics need to be gathered on all tables of the input schema with the
COMPUTE STATISTICS option before fragment.sql is run.

Running the Script

Step 1:- Copy this script to a file named fragment.sql.

Step 2:- Connect as user SYS or SYSTEM.

Step 3:- Run ANALYZE on all the tables present in the schema for which you want to find the fragmented tables.

SQL> Analyze table <table_name> compute statistics ;

Step 4:- Execute the fragment.sql script. Note that the script will prompt for the schema name.

SQL> @fragment.sql

 

Caution

This script is provided for educational purposes only and not supported by Oracle Support Services. It has been tested internally, however, and works as documented. We do not guarantee that it will work for you, so be sure to test it in your environment before relying on it. Proofread this script before using it! Due to the differences in the way text editors, e-mail packages and operating systems handle text formatting (spaces, tabs and carriage returns), this script may not be in an executable state when you first receive it. Check over the script to ensure that errors of this type are corrected.

Script

REM This is an example SQL*Plus script to find tables fragmented below the high water mark

set heading off verify off echo off
spool fragment.lst

REM The query below gives the size of the table with respect to the high water mark.
REM Note that BLOCKS*8192 is BLOCKS times an 8192-byte block size; substitute your DB block size.
REM SELECT BLOCKS*8192/1024/1024 FROM DBA_TABLES WHERE TABLE_NAME='<TABLE_NAME>' AND OWNER='<OWNER>';
REM The query below gives the actual size in MB used by the table in terms of data.
REM SELECT NUM_ROWS*AVG_ROW_LEN/1024/1024 FROM DBA_TABLES WHERE TABLE_NAME='<TABLE_NAME>' AND OWNER='<OWNER>';
REM
REM The difference between the two queries above identifies tables that are
REM fragmented below the high water mark.
PROMPT Please enter the schema name

SELECT TABLE_NAME,
       (BLOCKS*8192/1024/1024) - (NUM_ROWS*AVG_ROW_LEN/1024/1024) "Data lower than HWM in MB"
  FROM DBA_TABLES
 WHERE UPPER(OWNER) = UPPER('&OWNER')
 ORDER BY 2 DESC;

spool off
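
Once the report identifies a heavily fragmented table, one common way to reclaim the space below the high water mark is to recreate the segment. A minimal sketch follows; the schema, table, index and tablespace names are only placeholders for illustration, and indexes must be rebuilt because a move changes all ROWIDs:

SQL> ALTER TABLE scott.big_table MOVE TABLESPACE users;
SQL> ALTER INDEX scott.big_table_pk REBUILD;
SQL> ANALYZE TABLE scott.big_table COMPUTE STATISTICS;

Re-running fragment.sql afterwards should show a much smaller gap between the data volume and the high water mark for that table.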

Goal

This article explains, with examples, how to view the high water mark and when the high water mark is reset. The queries given in this article apply when the segment whose high water mark is to be determined resides in a single datafile and is not spread across multiple datafiles.

Solution

The high water mark is the boundary between used and unused space in a segment. As requests for new free blocks that cannot be satisfied by existing free lists are received, the block to which the high water mark points to becomes a used block, and the high water mark is advanced to the next block. In other words, the segment space to the left of the high water mark is used, and the space to the right of it is unused.

The high water mark is also the level above which blocks have never been formatted to receive data.

When a table is created in a tablespace, some initial number of blocks / extents are allocated to the table. Later, as the number of rows inserted increases, extents are allocated accordingly.

To find out how many blocks / extents are allocated to the table, query DBA_SEGMENTS for ‘blocks’ and ‘extents’.

For example:

SQL> create table test1 (num number) tablespace tbsp1;

Table created.

SQL> select blocks, extents from dba_segments where segment_name='TEST1';

    BLOCKS    EXTENTS
---------- ----------
         8          1

Now, to view the high water mark, analyze the table:

SQL> analyze table test1 compute statistics;

Querying DBA_TABLES for BLOCKS and EMPTY_BLOCKS should give the high water mark.

BLOCKS --> number of blocks that have been formatted to receive data
EMPTY_BLOCKS --> among the allocated blocks, the blocks that were never used

SQL> select blocks, empty_blocks, num_rows from dba_tables where table_name='TEST1';

    BLOCKS EMPTY_BLOCKS   NUM_ROWS
---------- ------------ ----------
         0            7          0

If you insert some rows, the output of the above query becomes:

    BLOCKS EMPTY_BLOCKS   NUM_ROWS
---------- ------------ ----------
         1            6          8

BLOCKS + EMPTY_BLOCKS = 1 + 6 = 7 (not 8) because one block is reserved for the segment header.

Insert some more rows into table 'TEST1' to increase the number of extents allocated, so that
DBA_SEGMENTS shows:

    BLOCKS    EXTENTS
---------- ----------
        32          4

And DBA_TABLES (after another ANALYZE TABLE) shows:

    BLOCKS EMPTY_BLOCKS   NUM_ROWS
---------- ------------ ----------
        28            3      14338

Deleting records doesn't lower the high water mark, and therefore doesn't raise EMPTY_BLOCKS. After deleting the records, a query against DBA_SEGMENTS or DBA_TABLES shows no change. Even an 'ALTER TABLE test1 DEALLOCATE UNUSED;' will not bring the high water mark down.

To determine the exact number of blocks that contain data, i.e. the space used by the table below the high water mark, query the ROWIDs and extract the block numbers from them.

SQL> select count(distinct dbms_rowid.rowid_block_number(rowid)) "used blocks" from TEST1;

This works fine if only one datafile is used for the segment. If there are more files, we need to include the file number as well, for instance:

SQL> select count(distinct dbms_rowid.rowid_block_number(rowid)||'-'||dbms_rowid.rowid_relative_fno(rowid)) "used blocks" from TEST1;

used blocks
-----------
         22

From this we can conclude that for table ‘TEST1’, 32 blocks are allocated out of which 28 blocks are formatted to receive data but only 22 blocks contain the actual data.
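
To see all three figures side by side for a single table, a query along the following lines can be used (a sketch only; it assumes the table has been analyzed and that you substitute your own owner and table name):

SQL> SELECT s.blocks  "Allocated",
            t.blocks  "Formatted",
            (SELECT COUNT(DISTINCT dbms_rowid.rowid_relative_fno(rowid) || '.' ||
                                   dbms_rowid.rowid_block_number(rowid))
               FROM scott.test1) "Containing data"
       FROM dba_segments s, dba_tables t
      WHERE s.owner = 'SCOTT' AND s.segment_name = 'TEST1'
        AND t.owner = s.owner AND t.table_name = s.segment_name;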

The high water mark can be reset by a TRUNCATE TABLE or by moving the table to another tablespace. Additionally, 10g introduced the segment shrink option to reset the high water mark, e.g. ALTER TABLE <tablename> SHRINK SPACE;

When a table is created with CTAS from another table, the high water mark of the source table is not carried over to the new table. The high water mark is also reset if the table is moved back into the same tablespace. In this case, query obj# and dataobj# in OBJ$: obj# stays the same but dataobj# changes.
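
A minimal sketch of the 10g shrink approach (the segment must be in an ASSM tablespace, row movement has to be enabled first, and the table name is illustrative):

SQL> ALTER TABLE test1 ENABLE ROW MOVEMENT;
SQL> ALTER TABLE test1 SHRINK SPACE;          -- compacts the rows and resets the HWM
SQL> ALTER TABLE test1 SHRINK SPACE COMPACT;  -- alternative: compact only, leave the HWM in place

After the shrink, DBA_TABLES.BLOCKS (following a fresh ANALYZE or DBMS_STATS gather) should drop back close to the number of blocks that actually contain data.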

 

PURPOSE
This article describes how to find out how many blocks are really being
used within a table, i.e. are not empty. Please note that this article does
not cover what to do when row chaining is taking place.

SCOPE & APPLICATION
For DBAs needing to determine how many blocks within a table are empty.

How many blocks contain data (are not empty)
——————————————–
Each row in a table has a pseudocolumn called ROWID. This pseudocolumn
contains information about the physical location of the row in the format
block_number.row.file.

If the table is stored in a tablespace which has one datafile, all we have
to do is count the DISTINCT block_number values taken from the ROWID column
of this table.

But if the table is stored in a tablespace with more than one datafile,
the same block_number can occur in different datafiles, so we have to count
the DISTINCT block_number+file combinations taken from ROWID.

The SELECT statements which give us the number of "really used" blocks are
below. They are different for ORACLE 7 and ORACLE 8 because of the different
structure of the ROWID column in these versions.

For ORACLE 7:

SELECT COUNT(DISTINCT SUBSTR(rowid,15,4)||
             SUBSTR(rowid,1,8)) "Used"
FROM schema.table;

For ORACLE 8+:

SELECT COUNT(DISTINCT
       DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)||
       DBMS_ROWID.ROWID_RELATIVE_FNO(rowid)) "Used"
FROM schema.table;

or

SELECT COUNT(DISTINCT SUBSTR(rowid,1,15)) "Used"
FROM schema.table;

You might ask why the above information cannot be determined by using the
ANALYZE TABLE command. The ANALYZE TABLE command only identifies the number
of 'ever used' blocks, that is, the high water mark for the table.

What is the High Water Mark?
—————————-
All Oracle segments have an upper boundary containing the data within
the segment. This upper boundary is called the "high water mark" or HWM.
The high water mark is an indicator that marks blocks that are allocated
to a segment but are not used yet. The high water mark typically moves up
in increments of 5 data blocks at a time. It is reset to "zero" (positioned
at the start of the segment) when a TRUNCATE command is issued. So you can
have empty blocks below the high water mark, but that means the block has
been used (and was probably emptied by deletes). Oracle does not move the
HWM, nor does it *shrink* tables, as a result of deletes. This is also
true of Oracle8. Full table scans typically read up to the high water mark.

Data files do not have a high water mark; only segments have one.

How to determine the high water mark
————————————
To view the high water mark of a particular table:

ANALYZE TABLE <table_name> ESTIMATE/COMPUTE STATISTICS;

This will update the table statistics. After generating the statistics,
to determine the high water mark:

SELECT blocks, empty_blocks, num_rows
FROM user_tables
WHERE table_name = '<TABLE_NAME>';

BLOCKS represents the number of blocks 'ever' used by the segment.
EMPTY_BLOCKS represents only the number of blocks above the 'HIGH WATER MARK'.
Deleting records doesn’t lower the high water mark. Therefore, deleting
records doesn’t raise the EMPTY_BLOCKS figure.

Let us take the following example based on table BIG_EMP1 which
has 28672 rows (Oracle 8.0.6):

SQL> connect system/manager
Connected.

SQL> SELECT segment_name, segment_type, blocks, extents
  2> FROM dba_segments
  3> WHERE segment_name='BIG_EMP1';

SEGMENT_NAME     SEGMENT_TYPE      BLOCKS    EXTENTS
---------------- ------------- ---------- ----------
BIG_EMP1         TABLE               1024          2
1 row selected.

SQL> connect scott/tiger

SQL> ANALYZE TABLE big_emp1 ESTIMATE STATISTICS;
Statement processed.

SQL> SELECT table_name, num_rows, blocks, empty_blocks
  2> FROM user_tables
  3> WHERE table_name='BIG_EMP1';

TABLE_NAME       NUM_ROWS     BLOCKS EMPTY_BLOCKS
-------------- ---------- ---------- ------------
BIG_EMP1            28672        700          323
1 row selected.

Note: BLOCKS + EMPTY_BLOCKS (700+323=1023) is one block less than
DBA_SEGMENTS.BLOCKS. This is because one block is reserved for the
segment header. DBA_SEGMENTS.BLOCKS holds the total number of blocks
allocated to the table. USER_TABLES.BLOCKS holds the total number of
blocks allocated for data.

SQL> SELECT COUNT (DISTINCT
  2> DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)||
  3> DBMS_ROWID.ROWID_RELATIVE_FNO(rowid)) "Used"
  4> FROM big_emp1;

      Used
----------
       700
1 row selected.

SQL> DELETE from big_emp1;
28672 rows processed.

SQL> commit;
Statement processed.

SQL> ANALYZE TABLE big_emp1 ESTIMATE STATISTICS;
Statement processed.

SQL> SELECT table_name, num_rows, blocks, empty_blocks
  2> FROM user_tables
  3> WHERE table_name='BIG_EMP1';

TABLE_NAME       NUM_ROWS     BLOCKS EMPTY_BLOCKS
-------------- ---------- ---------- ------------
BIG_EMP1                0        700          323
1 row selected.

SQL> SELECT COUNT (DISTINCT
  2> DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)||
  3> DBMS_ROWID.ROWID_RELATIVE_FNO(rowid)) "Used"
  4> FROM big_emp1;

      Used
----------
         0
1 row selected.

SQL> TRUNCATE TABLE big_emp1;
Statement processed.

SQL> ANALYZE TABLE big_emp1 ESTIMATE STATISTICS;
Statement processed.

SQL> SELECT table_name, num_rows, blocks, empty_blocks
  2> FROM user_tables
  3> WHERE table_name='BIG_EMP1';

TABLE_NAME       NUM_ROWS     BLOCKS EMPTY_BLOCKS
-------------- ---------- ---------- ------------
BIG_EMP1                0          0          511
1 row selected.

SQL> connect system/manager
Connected.

SQL> SELECT segment_name, segment_type, blocks, extents
  2> FROM dba_segments
  3> WHERE segment_name='BIG_EMP1';

SEGMENT_NAME     SEGMENT_TYPE      BLOCKS    EXTENTS
---------------- ------------- ---------- ----------
BIG_EMP1         TABLE                512          1
1 row selected.

Note: TRUNCATE has also deallocated the space from the deleted rows.
To retain the space from the deleted rows allocated to the table, use:
TRUNCATE TABLE big_emp1 REUSE STORAGE;

PGA Usage Larger than PGA_AGGREGATE_TARGET setting?

pga_aggregate_target is a target, as opposed to a hard limit, so it isn't unusual to go above it.
13G above it, now that's unusual though! There IS an enhancement request in
to make a hard-limit setting, but that does not currently exist.
There is a known bug in 10.2.0.3 with certain statements burning up memory (bug 5947623); however,
the 10.2.0.3/AIX version of this patch is 64-bit, and the SR header says you are on 32-bit, so that isn't
an option... and 10.2.0.3 is old enough that I can't get a new version of the patch made.

As I was unable to see any errors (e.g. ORA-4030), there does not seem to be any problem with the operation of the database.

PGA_AGGREGATE_TARGET does not set a hard limit on pga usage. It is only a target value used to dynamically size the process work areas. It also does not affect other areas of the pga that are allowed to grow beyond this limit.

There are certain areas of pga that cannot be controlled by initialization parameters. Such areas include pl/sql memory collections such as pl/sql tables and varrays.

Depending on the programming code and the amount of data being handled, these areas can grow very large (up to a 20G internal limit in 10g) and can consume large amounts of memory. This memory growth can be controlled by good programming practices; as an example, use the LIMIT clause with BULK COLLECT, as in the sketch below.
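
As a sketch of the LIMIT technique (the table name big_emp1 is only an example), fetching in fixed-size batches keeps the collection, and therefore the PGA, bounded:

DECLARE
  CURSOR c IS SELECT * FROM big_emp1;
  TYPE t_rows IS TABLE OF big_emp1%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 500;  -- at most 500 rows in memory at a time
    EXIT WHEN l_rows.COUNT = 0;
    FOR i IN 1 .. l_rows.COUNT LOOP
      NULL;  -- process each row here
    END LOOP;
  END LOOP;
  CLOSE c;
END;
/

Without the LIMIT clause the whole result set would be loaded into the collection in a single call, which is exactly the kind of unbounded PGA growth described above.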

Additionally, programming mistakes can also lead to excessive memory usage.

You can take steps to control the size of a process. However, from within the database framework you cannot place a hard limit on the size of a process by setting any initialization parameters or database configuration.

You can limit the size of a process from the OS side by setting kernel limits or user shell limits, but this leads to ORA-4030 errors and will cause transaction rollback.

As noted in bug 7279150, “… this is not a hard limit and that we will exceed it when it is undersized and the workload increases rapidly, such as when they start the workload for their testing or when they spawn a new set of sessions from their application servers.”

As the DBA, you need to get confirmation from your operating system administrator as to whether the amount of memory reported as in use by a process includes shared memory. If shared memory is included in the value displayed by the operating system utility, then the shared pool size must be deducted from that value to know how much private memory the process is actually using.

See note 174555.1 “UNIX Determining the Size of an Oracle Process”.

If an RDBMS user process is using more private memory than expected, then the DBA has three options:

– Do nothing.
– Monitor the RDBMS user session to find out what SQL statements it is performing or was performing. The SQL*Trace functionality of the database would normally be used if it cannot be determined directly from the end user what they were doing when the memory usage grew higher than expected, or what they are doing right now.
– Kill that RDBMS user session.
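
To see which sessions are actually consuming the PGA, the following sketch against V$PROCESS and V$SESSION is often a good starting point (the values are reported in bytes, converted to MB here):

SQL> SELECT s.sid, s.username, p.spid,
            ROUND(p.pga_used_mem  / 1024 / 1024) used_mb,
            ROUND(p.pga_alloc_mem / 1024 / 1024) alloc_mb,
            ROUND(p.pga_max_mem   / 1024 / 1024) max_mb
       FROM v$session s, v$process p
      WHERE s.paddr = p.addr
      ORDER BY p.pga_alloc_mem DESC;

Comparing the totals with the figures in V$PGASTAT shows how far the instance is above or below PGA_AGGREGATE_TARGET.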

Gather DBMS_STATS Default parameter

What are the default parameter values ?

   select dbms_stats.get_param('cascade') from dual;
   select dbms_stats.get_param('degree') from dual;
   select dbms_stats.get_param('estimate_percent') from dual;
   select dbms_stats.get_param('method_opt') from dual;
   select dbms_stats.get_param('no_invalidate') from dual;
   select dbms_stats.get_param('granularity') from dual;


PARAMETER          DEFAULT
cascade            DBMS_STATS.AUTO_CASCADE
degree             NULL
estimate_percent   DBMS_STATS.AUTO_SAMPLE_SIZE
method_opt         FOR ALL COLUMNS SIZE AUTO
no_invalidate      DBMS_STATS.AUTO_INVALIDATE
granularity        AUTO
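
If a default needs to be changed, the same package provides setters; a minimal sketch (the 10 percent sample size is just an example value):

SQL> exec dbms_stats.set_param('ESTIMATE_PERCENT', '10');
SQL> -- from 11g onwards the preferred call is:
SQL> -- exec dbms_stats.set_global_prefs('ESTIMATE_PERCENT', '10');
SQL> select dbms_stats.get_param('estimate_percent') from dual;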

A List of Commonly Used Oracle Diagnostic Events

Event / Description / Example
Event 10013 – Monitor Transaction Recovery: traces transaction recovery at startup. ALTER SESSION SET EVENTS '10013 trace name context forever, level 1';
Event 10015 – Dump Undo Segment Headers: dumps undo segment header information after transaction recovery. ALTER SESSION SET EVENTS '10015 trace name context forever, level 1';
Event 10032 – Dump Sort Statistics: dumps sort statistics. ALTER SESSION SET EVENTS '10032 trace name context forever, level 10';
Event 10033 – Dump Sort Intermediate Run Statistics: shows the interaction between the in-memory sort area and the temporary tablespace during a sort. ALTER SESSION SET EVENTS '10033 trace name context forever, level 10';
Event 10045 – Trace Free List Management Operations: traces FREELIST management operations. ALTER SESSION SET EVENTS '10045 trace name context forever, level 1';
Event 10046 – Enable SQL Statement Trace: traces SQL with execution plans, bind variables and wait statistics; level 12 is the most detailed. ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

The levels are defined as follows:

1: SQL statements, execution plans and execution statistics

4: level 1 plus bind variable information

8: level 1 plus wait event information

12: 1 + 4 + 8
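
A typical 10046 tracing session therefore looks like the sketch below (the V$DIAG_INFO query assumes 11g or later; on earlier releases the trace file is found under user_dump_dest):

SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
SQL> -- run the statements to be traced
SQL> ALTER SESSION SET EVENTS '10046 trace name context off';
SQL> SELECT value FROM v$diag_info WHERE name = 'Default Trace File';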

Event 10053 – Dump Optimizer Decisions: dumps the choices made by the optimizer while parsing a SQL statement; level 1 is the most detailed. ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

The levels are defined as follows:

1: state and cost-estimation information

2: cost-estimation information only

Event 10060 – Dump Predicates: dumps the predicate information of SQL statements. The following table must first be created in the schema of the user to be dumped:

CREATE TABLE kkoipt_table
(c1 INTEGER,
 c2 VARCHAR2(80));

The predicate information is written into this table.

ALTER SESSION SET EVENTS '10060 trace name context forever, level 1';
Event 10065 – Restrict Library Cache Dump Output for State Object Dumps: restricts the level of detail of the LIBRARY CACHE information in state object dumps.

1  Address of library object only

2  As level 1 plus library object lock details

3  As level 2 plus library object handle and library object

The default is level 3.

ALTER SESSION SET EVENTS '10065 trace name context forever, level <level>';
Event 10079 – Dump SQL*Net Statistics: dumps SQL*Net statistics. ALTER SESSION SET EVENTS '10079 trace name context forever, level 2';
Event 10081 – Trace High Water Mark Changes: traces HWM changes. ALTER SESSION SET EVENTS '10081 trace name context forever, level 1';
Event 10104 – Dump Hash Join Statistics: dumps HASH JOIN statistics. ALTER SESSION SET EVENTS '10104 trace name context forever, level 10';
Event 10128 – Dump Partition Pruning Information: dumps partition pruning information. ALTER SESSION SET EVENTS '10128 trace name context forever, level <level>';

Level values:

0x0001 Dump pruning descriptor for each partitioned object

0x0002 Dump partition iterators

0x0004 Dump optimizer decisions about partition-wise joins

0x0008 Dump ROWID range scan pruning information

In 9.0.1 and later versions, level 2 and above also require the following table to be created:

CREATE TABLE kkpap_pruning
(
  partition_count  NUMBER,
  iterator         VARCHAR2(32),
  partition_level  VARCHAR2(32),
  order_pt         VARCHAR2(12),
  call_time        VARCHAR2(12),
  part#            NUMBER,
  subp#            NUMBER,
  abs#             NUMBER
);

Event 10200 – Dump Consistent Reads: dumps consistent read information. ALTER SESSION SET EVENTS '10200 trace name context forever, level 1';
Event 10201 – Dump Consistent Read Undo Application: dumps the undo applied for consistent reads. ALTER SESSION SET EVENTS '10201 trace name context forever, level 1';
Event 10220 – Dump Changes to Undo Header: dumps changes to undo segment headers. ALTER SESSION SET EVENTS '10220 trace name context forever, level 1';
Event 10221 – Dump Undo Changes: dumps undo changes. ALTER SESSION SET EVENTS '10221 trace name context forever, level 7';
Event 10224 – Dump Index Block Splits / Deletes: dumps index block split and delete information. ALTER SESSION SET EVENTS '10224 trace name context forever, level 1';
Event 10225 – Dump Changes to Dictionary Managed Extents: dumps changes to dictionary-managed extents. ALTER SESSION SET EVENTS '10225 trace name context forever, level 1';
Event 10231: skips corrupt blocks during full table scans; very useful for salvaging data when corrupt blocks are present. ALTER SYSTEM SET EVENTS '10231 trace name context forever, level 10';
Event 10241 – Dump Remote SQL Execution: dumps remote SQL execution information. ALTER SESSION SET EVENTS '10241 trace name context forever, level 1';
Event 10246 – Trace PMON Process: traces the PMON process. Can only be set through the initialization parameter, not with ALTER SYSTEM:

event = "10246 trace name context forever, level 1"

Event 10248 – Trace Dispatcher Processes: traces dispatcher activity. event = "10248 trace name context forever, level 10"
Event 10249 – Trace Shared Server (MTS) Processes: traces shared server activity. event = "10249 trace name context forever, level 10"
Event 10270 – Debug Shared Cursors: traces shared cursor activity. event = "10270 trace name context forever, level 10"
Event 10299 – Debug Prefetching: traces prefetching of table and index data blocks. event = "10299 trace name context forever, level 1"
Event 10357 – Debug Direct Path: ALTER SESSION SET EVENTS '10357 trace name context forever, level 1';
Event 10390 – Dump Parallel Execution Slave Statistics: traces the state of the slaves in parallel operations. ALTER SESSION SET EVENTS '10390 trace name context forever, level 1';
Event 10391 – Dump Parallel Execution Granule Allocation: traces parallel execution granule allocation. ALTER SESSION SET EVENTS '10391 trace name context forever, level 2';
Event 10393 – Dump Parallel Execution Statistics: traces the status of parallel operations (each slave listed separately). ALTER SESSION SET EVENTS '10393 trace name context forever, level 1';
Event 10500 – Trace SMON Process: traces the SMON process. event = "10500 trace name context forever, level 1"
Event 10608 – Trace Bitmap Index Creation: traces bitmap index creation in detail. ALTER SESSION SET EVENTS '10608 trace name context forever, level 10';
Event 10704 – Trace Enqueues: traces enqueue (lock) usage. ALTER SESSION SET EVENTS '10704 trace name context forever, level 1';
Event 10706 – Trace Global Enqueue Manipulation: traces global enqueue usage. ALTER SESSION SET EVENTS '10706 trace name context forever, level 1';
Event 10708 – Trace RAC Buffer Cache: traces the buffer cache in a RAC environment. ALTER SESSION SET EVENTS '10708 trace name context forever, level 10';
Event 10710 – Trace Bitmap Index Access: traces bitmap index access. ALTER SESSION SET EVENTS '10710 trace name context forever, level 1';
Event 10711 – Trace Bitmap Index Merge Operation: traces bitmap index merge operations. ALTER SESSION SET EVENTS '10711 trace name context forever, level 1';
Event 10712 – Trace Bitmap Index OR Operation: traces bitmap index OR operations. ALTER SESSION SET EVENTS '10712 trace name context forever, level 1';
Event 10713 – Trace Bitmap Index AND Operation: traces bitmap index AND operations. ALTER SESSION SET EVENTS '10713 trace name context forever, level 1';
Event 10714 – Trace Bitmap Index MINUS Operation: traces bitmap index MINUS operations. ALTER SESSION SET EVENTS '10714 trace name context forever, level 1';
Event 10715 – Trace Bitmap Index Conversion to ROWIDs Operation: traces bitmap-index-to-ROWID conversion operations. ALTER SESSION SET EVENTS '10715 trace name context forever, level 1';
Event 10716 – Trace Bitmap Index Compress/Decompress: traces bitmap index compression and decompression. ALTER SESSION SET EVENTS '10716 trace name context forever, level 1';
Event 10717 – Trace Bitmap Index Compaction: ALTER SESSION SET EVENTS '10717 trace name context forever, level 1';
Event 10719 – Trace Bitmap Index DML: traces DML against bitmap-indexed columns (DML that modifies bitmap indexes). ALTER SESSION SET EVENTS '10719 trace name context forever, level 1';
Event 10730 – Trace Fine Grained Access Predicates: traces fine-grained access control predicates. ALTER SESSION SET EVENTS '10730 trace name context forever, level 1';
Event 10731 – Trace CURSOR Statements: traces cursor statements. ALTER SESSION SET EVENTS '10731 trace name context forever, level <level>';

The levels are defined as follows:

1  Print parent query and subquery

2  Print subquery only

Event 10928 – Trace PL/SQL Execution: traces PL/SQL execution. ALTER SESSION SET EVENTS '10928 trace name context forever, level 1';
Event 10938 – Dump PL/SQL Execution Statistics: traces PL/SQL execution statistics; run rdbms/admin/tracetab.sql before use. ALTER SESSION SET EVENTS '10938 trace name context forever, level 1';
flush_cache: flushes the BUFFER CACHE. ALTER SESSION SET EVENTS 'immediate trace name flush_cache';
DROP_SEGMENTS: manually drops temporary segments; useful when they cannot be cleaned up automatically. alter session set events 'immediate trace name DROP_SEGMENTS level ts#+1';

Here ts# is the ts# of the tablespace whose temporary segments are to be dropped.
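
The ts# for a given tablespace can be looked up first, so that the level can be computed as ts# + 1; for example (TEMP01 is a hypothetical tablespace name):

SQL> SELECT ts#, name FROM v$tablespace WHERE name = 'TEMP01';
SQL> -- if the query returns ts# = 5, then:
SQL> alter session set events 'immediate trace name DROP_SEGMENTS level 6';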

 

Script:Datafile Report

The following script lists the status of the datafiles (and tempfiles) in an Oracle database:

REM Datafile Report

set linesize 120 pagesize 1400;

SELECT t.tablespace_name,
       'Datafile' file_type,
       t.status tablespace_status,
       d.status file_status,
       ROUND((d.bytes - NVL(f.sum_bytes, 0)) / 1048576) used_mb,
       ROUND(NVL(f.sum_bytes, 0) / 1048576) free_mb,
       t.initial_extent,
       t.next_extent,
       t.min_extents,
       t.max_extents,
       t.pct_increase,
       d.file_name,
       d.file_id,
       d.autoextensible,
       d.maxblocks,
       d.maxbytes,
       nvl(d.increment_by, 0) increment_by,
       t.block_size
  FROM (SELECT tablespace_name, file_id, SUM(bytes) sum_bytes
          FROM DBA_FREE_SPACE
         GROUP BY tablespace_name, file_id) f,
       DBA_DATA_FILES d,
       DBA_TABLESPACES t
 WHERE t.tablespace_name = d.tablespace_name
   AND f.tablespace_name(+) = d.tablespace_name
   AND f.file_id(+) = d.file_id
 GROUP BY t.tablespace_name,
          d.file_name,
          d.file_id,
          t.initial_extent,
          t.next_extent,
          t.min_extents,
          t.max_extents,
          t.pct_increase,
          t.status,
          d.bytes,
          f.sum_bytes,
          d.status,
          d.AutoExtensible,
          d.maxblocks,
          d.maxbytes,
          d.increment_by,
          t.block_size
UNION ALL
SELECT h.tablespace_name,
       'Tempfile',
       ts.status,
       t.status,
       ROUND(SUM(NVL(p.bytes_used, 0)) / 1048576),
       ROUND(SUM((h.bytes_free + h.bytes_used) - NVL(p.bytes_used, 0)) /
             1048576),
       -1, -- initial extent
       -1, -- initial extent
       -1, -- min extents
       -1, -- max extents
       -1, -- pct increase
       t.file_name,
       t.file_id,
       t.autoextensible,
       t.maxblocks,
       t.maxbytes,
       nvl(t.increment_by, 0) increment_by,
       ts.block_size
  FROM sys.V_$TEMP_SPACE_HEADER h,
       sys.V_$TEMP_EXTENT_POOL  p,
       sys.DBA_TEMP_FILES       t,
       sys.dba_tablespaces      ts
 WHERE p.file_id(+) = h.file_id
   AND p.tablespace_name(+) = h.tablespace_name
   AND h.file_id = t.file_id
   AND h.tablespace_name = t.tablespace_name
   and ts.tablespace_name = h.tablespace_name
 GROUP BY h.tablespace_name,
          t.status,
          t.file_name,
          t.file_id,
          ts.status,
          t.autoextensible,
          t.maxblocks,
          t.maxbytes,
          t.increment_by,
          ts.block_size
 ORDER BY 1, 5 DESC
/

Know Oracle Lock Mode

Value   Name(s)                    Table method (TM lock)
    0   No lock                    n/a

    1   Null lock (NL)             Used during some parallel DML operations (e.g. update) by
                                   the pX slaves while the QC is holding an exclusive lock.

    2   Sub-share (SS)             Until 9.2.0.5/6 "select for update"
        Row-share (RS)             Since 9.2.0.1/2 used at opposite end of RI during DML
                                   Lock table in row share mode
                                   Lock table in share update mode

    3   Sub-exclusive(SX)          Update (also "select for update" from 9.2.0.5/6)
        Row-exclusive(RX)          Lock table in row exclusive mode
                                   Since 11.1 used at opposite end of RI during DML

    4   Share (S)                  Lock table in share mode
                                   Can appear during parallel DML with id2 = 1, in the PX slave sessions
                                   Common symptom of "foreign key locking" (missing index) problem

    5   share sub exclusive (SSX)  Lock table in share row exclusive mode
        share row exclusive (SRX)  Less common symptom of "foreign key locking" but likely to be more
                                   frequent if the FK constraint is defined with "on delete cascade."

    6   Exclusive (X)              Lock table in exclusive mode

Summary of Locks Obtained by DML Statements

The RS / RX / S / SRX / X columns show whether another transaction can still obtain that table lock mode concurrently.

SQL Statement                        Row Locks   Table Lock Mode   RS   RX   S    SRX  X
SELECT ... FROM table                none        none              Y    Y    Y    Y    Y
INSERT INTO table                    Yes         SX                Y    Y    N    N    N
UPDATE table                         Yes         SX                Y*   Y*   N    N    N
MERGE INTO table                     Yes         SX                Y    Y    N    N    N
DELETE FROM table                    Yes         SX                Y*   Y*   N    N    N
SELECT ... FROM table FOR UPDATE OF  Yes         SX                Y*   Y*   N    N    N
LOCK TABLE table IN
  ROW SHARE MODE                     none        SS                Y    Y    Y    Y    N
  ROW EXCLUSIVE MODE                 none        SX                Y    Y    N    N    N
  SHARE MODE                         none        S                 Y    N    Y    N    N
  SHARE ROW EXCLUSIVE MODE           none        SSX               Y    N    N    N    N
  EXCLUSIVE MODE                     none        X                 N    N    N    N    N

* Yes, if no conflicting row locks are held by another transaction. Otherwise, waits occur.
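
To see which of these TM modes are currently held in a database, a sketch such as the following against V$LOCK can be used (it assumes access to the V$ views and DBA_OBJECTS):

SQL> SELECT s.sid, s.username, o.owner, o.object_name,
            DECODE(l.lmode, 0, 'None', 1, 'Null (NL)', 2, 'Row-S (SS)',
                            3, 'Row-X (SX)', 4, 'Share (S)',
                            5, 'S/Row-X (SSX)', 6, 'Exclusive (X)') held_mode
       FROM v$lock l, v$session s, dba_objects o
      WHERE l.type = 'TM'
        AND l.sid  = s.sid
        AND l.id1  = o.object_id;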

 

mode 1: NL Null N
mode 2: SS RS Row-S Row Share(d) SubShare Intended Share (IS) L
mode 3: SX RX Row-X Row Exclusive SubExclusive Intended Exclusive (IX) R
mode 4: S Share S
mode 5: SSX SRX S/Row-X Share(d) Row Exclusive Share-SubExclusive C
mode 6: X Exclusive X



compatible?   SS,RS   SX,RX   S     SSX,SRX   X
SS,RS         yes     yes     yes   yes       no
SX,RX         yes     yes     no    no        no
S             yes     no      yes   no        no
SSX,SRX       yes     no      no    no        no
X             no      no      no    no        no

GES (Global Enqueue Service) enqueues use different values for the lock mode:

#define KJUSERNL 0          /* no permissions */    (Null)
#define KJUSERCR 1          /* concurrent read */   (Row-S (SS))
#define KJUSERCW 2          /* concurrent write */  (Row-X (SX))
#define KJUSERPR 3          /* protected read */    (Share)
#define KJUSERPW 4          /* protected write */   (S/Row-X (SSX))
#define KJUSEREX 5          /* exclusive access */  (Exclusive)

Global Wait-For-Graph(WFG) at ddTS[0.db] :
BLOCKED 0xd876a630 5 wq 2 cvtops x1 TX 0x70015.0x81e(ext 0x2,0x0)[2B000-0001-0000057A] inst 1
BLOCKER 0xd8767a10 5 wq 1 cvtops x28 TX 0x70015.0x81e(ext 0x2,0x0)[2E000-0001-00000347] inst 1
BLOCKED 0xd876ab70 5 wq 2 cvtops x1 TX 0x40008.0x7d9(ext 0x2,0x0)[2E000-0001-00000347] inst 1
BLOCKER 0xd876a7f0 5 wq 1 cvtops x28 TX 0x40008.0x7d9(ext 0x2,0x0)[2B000-0001-0000057A] inst 1

Here 5 means KJUSEREX, i.e. cross-instance "TX mode 6" locks.

Fixed X$ Tables in ASM

From Vinod Haval's "Inside Overview of ASM Metadata".
These views help in understanding the following:

  • Physical mapping
  • Provides undocumented information
  • 18 X$ tables (may be more)
TABLE NAME       DESCRIPTION
X$KFALS          Details about aliases created in ASM
X$KFCBH          Similar to X$KFBH and has the same number of rows as X$KFBH
X$KFCCE          Helps to locate a particular block
X$KFBH           Gives more physical block-level information
X$KFDSK_STAT     Usage metrics data which can be used for performance analysis
X$KFGRP          Disk group information in ASM
X$KFGRP_STAT     Usage metrics data for all the disk groups within ASM
X$KFGMG          Details about ASM operations
X$KFKID          Information about ASM disks
X$KFNCL          Similar to X$KFBH and has the same number of rows as X$KFBH
X$KFTMTA         Information about DB instances connected to the ASM instance
X$KFFIL          Gives more physical block-level information
X$KFFXP          Physical extent allocation mapping within ASM files
X$KFDAT          (no description given)
X$KFDPARTNER     (no description given)
X$KFCLLE         (no description given)

VIEW: X$KCCRS - Controlfile Record Section directory (8.0 - 8.1)

View:   X$KCCRS
          [K]ernel [C]ache [C]ontrolfile management
             controlfile [R]ecord [S]ection directory

  Column      Type           Description
  --------    ----           -----------
  ADDR        RAW(4)         address of this row/entry in the SGA

  INDX        NUMBER         control file record type
    The following are the non-circular-reuse record types:
       KCCDEDBI     0             DataBase Info record
       KCCDECKP     1             Checkpoint progress
       KCCDERTH     2             Redo THread record
       KCCDELOG     3             LOgFile record
       KCCDEDBF     4             DataBase File record
       KCCDENAM     5             file NAMe record
       KCCDETBS     6     8.x     TaBleSpace record
       KCCDERS1     7     8.0     reserved for future use. non-circular re-use
       KCCDETFL     7     8.1     Temporary File record
       KCCDERS2     8     8.x     reserved for future use. non-circular re-use
       KCCDERMC     8     9.x     RMan Configuration record

    The following are the circular-Reuse record types:
       KCCDELHR     9     8.x     Log History Record
       KCCDEORR    10     8.x     Offline Range Record
       KCCDEALR    11     8.x     Archived Log Record
       KCCDEBSR    12     8.x     Backup Set Record
       KCCDEBPR    13     8.x     Backup Piece Record
       KCCDEBFR    14     8.x     Backup dataFile Record
       KCCDEBLR    15     8.x     Backup redoLog Record
       KCCDEDCR    16     8.x     Datafile Copy Record
       KCCDEFCR    17     8.x     backup dataFile Corruption Record
       KCCDECCR    18     8.x     datafile Copy Corruption Record
       KCCDEDLR    19     8.x     DeLeted object Record
       KCCDERS3    20     8.0      reserved for future use. circular re-use.
       KCCDEPCR    20     8.1     proxy copy record
       KCCDERS4    21     8.x     reserved for future use. circular re-use.
       KCCDENEN     6     7.3     actual # entry types in control file
       KCCDEMEN    10     7.3     max possible # entry types in control file
       KCCDEMNR     9     8.x     MiNimum circular-Reuse record type
       KCCDEMXR    21     8.x     MaXimum circular-Reuse record type
       KCCDEMAX    22     8.x     MAX # record types in current format

  INST_ID     NUMBER         oracle instance number
  RSLBN       NUMBER         Logical Blk Number (base 1) of section start
  RSRSZ       NUMBER         Record SiZe in bytes
  RSNUM       NUMBER         NUMber of usable record slots in section
  RSNUS       NUMBER         circ-reuse: Number of in-USe slots in section
                             non-circ-reuse: highest USed slot Number
  RSIOL       NUMBER         circ-reuse: Index (base 1) of OLdest (init 0)
  RSILW       NUMBER         circ-reuse: Index of Last Written    (init 0)
  RSRLW       NUMBER         circ-reuse: Recid of Last Written    (init 0)
                             non-circ-reuse: incr'd by kccicr()   (init 0)
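
The same information is exposed through the public view V$CONTROLFILE_RECORD_SECTION, so a quick sketch for checking how full each section is (the X$ table itself can only be queried as SYS):

SQL> SELECT type, record_size, records_total, records_used
       FROM v$controlfile_record_section
      ORDER BY type;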

EVENT 10051:"trace OPI calls"

Error:  ORA 10051
Text:   trace OPI calls
——————————————————————————-
Explanation:
This is NOT an error but is a special EVENT code.
It should *NOT* be used unless explicitly requested by RD support.

Event 10051 allows you to track OPI calls on the server side.
This can be useful to home in on what sequence of events leads
to a problem. It complements SQL*Net trace and <Event:10046>
trace. You can quickly see where FAST UPI etc. is in use.

Levels:    The event is just either on or off.

Output: The output is simply of the form:

OPI CALL: type= 2 argc= 2 cursor=  0 name=OPEN

where:    type     = the OPI call type (program interface function call)
argc     = Argument count
cursor     = the cursor number the call is being made against
name       = description of the program interface function call.
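
A sketch of turning the trace on and off for the current session (since the event has no meaningful levels, level 1 is used):

SQL> ALTER SESSION SET EVENTS '10051 trace name context forever, level 1';
SQL> -- reproduce the behaviour of interest, then:
SQL> ALTER SESSION SET EVENTS '10051 trace name context off';

The OPI CALL lines are written to the session's trace file in the user dump destination.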

Articles:
Interpreting DUMP LOGFILE Output                      <Note:29726.1>

EVENT 10235:"check memory manager internal structures"

Event:10235
~~~~~~~~~~~

Version/Use:
  7.0 - 10.1.X  "Check memory manager internal structures"

  NOTE: Events should NEVER be set by customers unless advised to do so by
        Oracle Support Services. Read [NOTE:75713.1] before setting any event.

Summary Syntax:
~~~~~~~~~~~~~~~
  EVENT="10235 trace name context forever, level LL"

  (Always comment exactly when and why this event is being set.)

  ** IMPORTANT: Do **NOT** use ALTER SESSION SET EVENTS or ORADEBUG syntax to
     set this event in sessions. This can cause lots of ORA-600 errors against
     SGA heaps, as not all sessions using the SGA heaps will be using the same
     event level. This applies to ALL levels except level 65536.
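
Since the event has to be set instance-wide through the EVENT initialization parameter, a minimal sketch of doing so with an spfile follows (level 1 is only an illustration; use whatever level Oracle Support requests, and restart the instance for the event to take effect):

SQL> ALTER SYSTEM SET EVENT = '10235 trace name context forever, level 1' SCOPE=SPFILE;
SQL> -- with a pfile, add the line:  event = "10235 trace name context forever, level 1"
SQL> -- to remove it again later:    ALTER SYSTEM RESET EVENT SCOPE=SPFILE SID='*';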

Levels:
~~~~~~~
  The event being set at all causes certain heap checks to be performed.

  ***  WARNING  ***********************************************************
  ***  This event should only EVER be set at the request of Oracle Support.
  ***  It can impact performance on most types of system.
  ***  Level 2 and above can impact latch contention.
  ***  Level 3 and above can have a *SEVERE* impact on performance.
  *************************************************************************

  The bottom 3 bits of the level cause the following checks to occur:

     Level         Description
     ~~~~~         ~~~~~~~~~~~~
        1          Fast check on heap free (kghfrh)
        2          Do 1 AND fill memory with junk on alloc / free
        3          Do 2 AND ensure the chunk belongs to the given heap on free
        4          Do 3 AND make permanent chunks freeable so they can also be
                   checked. This level can give rise to increased memory use
                   and can trigger false ORA-4030 and false ORA-4031 errors.

 

  Oracle 9.2.0.5 onwards only:

    65536          Introduced by the diagnostic enhancement in bug 3293155.
                   It is a totally independent bit setting which has minimal
                   impact on performance (unless ORed with other levels).
                   When this is set, Oracle tries to keep comments with
                   "permanent" memory allocations, which can be useful for
                   memory leak problems if the leaked memory appears to be a
                   leak of "perm" memory. This level can be set/unset
                   dynamically but will only store comments in "perm" memory
                   allocated while the event is set.

  There are additional values which Oracle Support can use.

 

Description/Steps:
~~~~~~~~~~~~~~~~~~
  This event may be used to try to catch heap corruption problems closer to
  when they occur. Typically level 12 is required to get close to the
  corruption, but this can impact performance too much to be useful.

Example Output / Interpreting Output:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  The event should cause an ORA-600 and a heap dump to be produced if an
  error is detected.

Related:
~~~~~~~~

 
