How to make Oracle trace files publicly readable?

The hidden parameter _trace_files_public determines whether the trace files generated by Oracle are publicly readable. Its default value is FALSE, which means users who are not in the DBA/OINSTALL group have no permission to read the trace files the database produces. In some situations we want users outside the DBA group to be able to read trace files as well, and this can be achieved by changing the parameter. Consider the following example:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

SQL> col name for a20
SQL> col value for a20
SQL> col describ for a40
SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
  2   FROM SYS.x$ksppi x, SYS.x$ksppcv y
  3   WHERE x.inst_id = USERENV ('Instance')
  4   AND y.inst_id = USERENV ('Instance')
  5   AND x.indx = y.indx
  6  AND x.ksppinm LIKE '%_trace_files_public%'
  7  order by x.ksppinm;

NAME                 VALUE                DESCRIB
-------------------- -------------------- ----------------------------------------
_trace_files_public  FALSE                Create publicly accessible trace files

SQL> oradebug setmypid;
Statement processed.
SQL> oradebug ipc;
Information written to trace file.
SQL> oradebug tracefile_name;
/s01/10gdb/admin/YOUYUS/udump/youyus_ora_10268.trc
SQL> !ls -l /s01/10gdb/admin/YOUYUS/udump/youyus_ora_10268.trc
-rw-r----- 1 maclean oinstall 4206 Aug 11 20:51 /s01/10gdb/admin/YOUYUS/udump/youyus_ora_10268.trc
/* The trace file is created with permission 640, so users outside the oinstall group cannot read it */

SQL> alter system set "_trace_files_public"=true;
alter system set "_trace_files_public"=true
                 *
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified
/* Changing this parameter requires an instance restart */

SQL> alter system set "_trace_files_public"=true scope=spfile;

System altered.

SQL> startup force;
ORACLE instance started.

Total System Global Area 1577058304 bytes
Fixed Size                  2084264 bytes
Variable Size             922747480 bytes
Database Buffers          637534208 bytes
Redo Buffers               14692352 bytes
Database mounted.
Database opened.

SQL> oradebug setmypid;
Statement processed.
SQL> oradebug ipc;
Information written to trace file.
SQL> oradebug tracefile_name;
/s01/10gdb/admin/YOUYUS/udump/youyus_ora_10430.trc
SQL> ! ls -l /s01/10gdb/admin/YOUYUS/udump/youyus_ora_10430.trc
-rw-r--r-- 1 maclean oinstall 5471 Aug 11 20:54 /s01/10gdb/admin/YOUYUS/udump/youyus_ora_10430.trc
/* Users in the "other" group now have read permission as well */

SQL> ! ls -l /s01/10gdb/admin/YOUYUS/
total 24
drwxr-x--- 2 maclean oinstall 4096 Aug 11 20:56 adump
drwxr-x--- 2 maclean oinstall 4096 Aug 11 20:54 bdump
drwxr-x--- 2 maclean oinstall 4096 Aug  5 21:35 cdump
drwxr-x--- 2 maclean oinstall 4096 Aug  5 21:36 dpdump
drwxr-x--- 2 maclean oinstall 4096 Aug  5 21:37 pfile
drwxr-x--- 2 maclean oinstall 4096 Aug 11 20:54 udump
/* Note that setting _trace_files_public to true does not change the permissions of the directories that hold the trace files. Oracle creates the bdump/udump trace directories with permission 750 by default, so users in the "other" group cannot enter them; the directory permissions must be changed to 755, i.e. o+r+x */

SQL> ! chmod o+r+x /s01/10gdb/admin/YOUYUS/*dump

SQL>  ! ls -l /s01/10gdb/admin/YOUYUS/
total 24
drwxr-xr-x 2 maclean oinstall 4096 Aug 11 20:56 adump
drwxr-xr-x 2 maclean oinstall 4096 Aug 11 20:54 bdump
drwxr-xr-x 2 maclean oinstall 4096 Aug  5 21:35 cdump
drwxr-xr-x 2 maclean oinstall 4096 Aug  5 21:36 dpdump
drwxr-x--- 2 maclean oinstall 4096 Aug  5 21:37 pfile
drwxr-xr-x 2 maclean oinstall 4096 Aug 11 20:54 udump

/* Another point worth noting is that changing _trace_files_public does not change the permissions of trace files that already exist; the alert log is a typical example */
[maclean@rh2 bdump]$ ls -l
total 20
-rw-r----- 1 maclean oinstall 12971 Aug 11 21:17 alert_YOUYUS.log
-rw-r--r-- 1 maclean oinstall   690 Aug 11 21:12 youyus_lgwr_10514.trc

SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
  2   FROM SYS.x$ksppi x, SYS.x$ksppcv y
  3   WHERE x.inst_id = USERENV ('Instance')
  4   AND y.inst_id = USERENV ('Instance')
  5   AND x.indx = y.indx
  6  AND x.ksppinm LIKE '%_trace_files_public%'
  7  order by x.ksppinm;

NAME                 VALUE                DESCRIB
-------------------- -------------------- ----------------------------------------
_trace_files_public  FALSE                Create publicly accessible trace files

SQL> alter system set "_trace_files_public"=true scope=spfile;

System altered.

SQL> startup force;
ORACLE instance started.

Total System Global Area 1577058304 bytes
Fixed Size                  2084264 bytes
Variable Size             922747480 bytes
Database Buffers          637534208 bytes
Redo Buffers               14692352 bytes
Database mounted.
Database opened.
SQL> !ls -l
total 32
-rw-r----- 1 maclean oinstall 21189 Aug 11 21:20 alert_YOUYUS.log
-rw-r--r-- 1 maclean oinstall   690 Aug 11 21:12 youyus_lgwr_10514.trc
-rw-r--r-- 1 maclean oinstall   690 Aug 11 21:20 youyus_lgwr_11136.trc

New AWR snapshot views in 11g

The DBA_HIST_IOSTAT_DETAIL view records I/O statistics broken down by file type and by component function. Its data comes from snapshots of two dynamic views, V$IOSTAT_FILE and V$IOSTAT_FUNCTION. V$IOSTAT_FILE deserves special mention because it is a dynamic performance view newly introduced in 11g:

SQL> select filetype_name, asynch_io, access_method, retries_on_error
  2    from v$iostat_file;

FILETYPE_NAME                ASYNCH_IO ACCESS_METH RETRIES_ON_ERROR
---------------------------- --------- ----------- ----------------
Other                        ASYNC_OFF OS_LIB                     0
Control File                 ASYNC_OFF                            0
Log File                     ASYNC_OFF                            0
Archive Log                  ASYNC_OFF                            0
Data File Backup             ASYNC_OFF                            0
Data File Incremental Backup ASYNC_OFF                            0
Archive Log Backup           ASYNC_OFF                            0
Data File Copy               ASYNC_OFF                            0
Flashback Log                ASYNC_OFF                            0
Data Pump Dump File          ASYNC_OFF                            0
Data File                    ASYNC_ON  OS_LIB                     0
/* The ASYNCH_IO column is a handy indicator of whether Oracle has enabled asynchronous I/O for that file type, so we no longer need to trace system calls to find out;
   RETRIES_ON_ERROR accumulates the number of failed physical reads, which can help in spotting bad disk sectors. The current 11.2.0.1 official Reference has no documentation for the ACCESS_METHOD column; presumably it distinguishes cases such as OS_LIB, ASM and Exadata, but this remains to be verified. */
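As a quick cross-check, here is a minimal sketch (assuming an 11g or later instance) that lists the file types still doing synchronous I/O, which can then be compared with the filesystemio_options and disk_asynch_io settings:

select filetype_name, asynch_io
  from v$iostat_file
 where asynch_io = 'ASYNC_OFF'
 group by filetype_name, asynch_io
 order by filetype_name;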

Let us now return to the DBA_HIST_IOSTAT_DETAIL view!

SQL> desc dba_hist_iostat_detail;
Name                  Type         Nullable Default Comments
--------------------- ------------ -------- ------- --------
SNAP_ID               NUMBER
DBID                  NUMBER
INSTANCE_NUMBER       NUMBER
FUNCTION_ID           NUMBER
FUNCTION_NAME         VARCHAR2(30)
FILETYPE_ID           NUMBER
FILETYPE_NAME         VARCHAR2(30)
SMALL_READ_MEGABYTES  NUMBER
SMALL_WRITE_MEGABYTES NUMBER
LARGE_READ_MEGABYTES  NUMBER
LARGE_WRITE_MEGABYTES NUMBER
SMALL_READ_REQS       NUMBER
SMALL_WRITE_REQS      NUMBER
LARGE_READ_REQS       NUMBER
LARGE_WRITE_REQS      NUMBER
NUMBER_OF_WAITS       NUMBER
WAIT_TIME             NUMBER

SQL> col SMALL_READ_MEGABYTES for 999999;
SQL> col SMALL_WRITE_MEGABYTES for 999999;
SQL> col LARGE_READ_MEGABYTES for 999999;
SQL> col LARGE_WRITE_MEGABYTES for 999999;
SQL> col FILETYPE_NAME for a25;
SQL> select snap_id,
  2         function_name,
  3         filetype_name,
  4         SMALL_READ_MEGABYTES,
  5         SMALL_WRITE_MEGABYTES,
  6         LARGE_READ_MEGABYTES,
  7         LARGE_WRITE_MEGABYTES,
  8         WAIT_TIME
  9    from dba_hist_iostat_detail
 10   where rownum < 16;

   SNAP_ID FUNCTION_NAME                  FILETYPE_NAME             SMALL_ SMALL_ LARGE_ LARGE_  WAIT_TIME
---------- ------------------------------ ------------------------- ------ ------ ------ ------ ----------
         1 DBWR                           Control File                   1      0      0      0        310
         1 LGWR                           Control File                   0      0      0      0         94
         1 Others                         Control File                 122     37      0      0      71635
         2 DBWR                           Control File                   1      0      0      0        310
         2 LGWR                           Control File                   0      0      0      0         94
         2 Others                         Control File                 251     77      0      0     159025
         3 Others                         Control File                  14      7      0      0      10339
         4 Others                         Control File                  96     60      0      0      87516
         5 Others                         Control File                 151     96      0      0     139796
         6 DBWR                           Control File                   0      0      0      0         15
         6 Others                         Control File                 210    135      0      0     189114
         7 DBWR                           Control File                   0      0      0      0         15
         7 Others                         Control File                 269    174      0      0     239640
         8 DBWR                           Control File                   0      0      0      0         15
         8 Others                         Control File                 328    213      0      0     288425

15 rows selected

The SMALL_READ/WRITE columns represent single-block reads and writes, the LARGE_READ/WRITE columns represent multiblock reads and writes, and WAIT_TIME is expressed in thousandths of a second (milliseconds).
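Building on this, a small sketch (column names as in the DESC output above) that ranks the stored snapshots by accumulated wait time, which helps spot the component dominating I/O waits:

select *
  from (select snap_id,
               function_name,
               filetype_name,
               small_read_reqs + small_write_reqs +
               large_read_reqs + large_write_reqs total_reqs,
               wait_time
          from dba_hist_iostat_detail
         order by wait_time desc)
 where rownum <= 10;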

SQL> select snap_id,
  2         function_name,
  3         filetype_name,
  4         SMALL_READ_MEGABYTES,
  5         SMALL_WRITE_MEGABYTES,
  6         LARGE_READ_MEGABYTES,
  7         LARGE_WRITE_MEGABYTES,
  8         WAIT_TIME
  9    from dba_hist_iostat_detail
 10   where filetype_name = 'Control File'
 11     and (LARGE_READ_REQS > 0 or LARGE_WRITE_REQS > 0);

no rows selected
/* Oracle performs only single-block reads and writes against the control files */

SQL> col FILETYPE_NAME for a15;
SQL> col function_name for a10;
SQL> select function_name,
  2         filetype_name,
  3         small_read_reqs,
  4         small_write_reqs,
  5         large_read_reqs,
  6         large_write_reqs
  7    from dba_hist_iostat_detail
  8   where filetype_name = 'Log File'
  9  and (SMALL_READ_REQS > 0 or LARGE_READ_REQS > 0);

FUNCTION_N FILETYPE_NAME   SMALL_READ_REQS SMALL_WRITE_REQS LARGE_READ_REQS LARGE_WRITE_REQS
---------- --------------- --------------- ---------------- --------------- ----------------
LGWR       Log File                      4             9140               0               86
LGWR       Log File                      4             2762               0              218
LGWR       Log File                      8             3512               0              222
LGWR       Log File                      8             4304               0              226
LGWR       Log File                      4             1996               0              210
LGWR       Log File                      8             5296               0              252
LGWR       Log File                      8             6016               0              254
LGWR       Log File                      8             7224               0              274
LGWR       Log File                      4            11536               0              232
LGWR       Log File                      8            13320               0              256

SQL> select snap_id,
  2         function_name,
  3         filetype_name,
  4         SMALL_READ_MEGABYTES,
  5         SMALL_WRITE_MEGABYTES,
  6         LARGE_READ_MEGABYTES,
  7         LARGE_WRITE_MEGABYTES,
  8         WAIT_TIME
  9    from dba_hist_iostat_detail
 10   where filetype_name = 'Log File';

   SNAP_ID FUNCTION_NAME                  FILETYPE_NAME             SMALL_ SMALL_ LARGE_ LARGE_  WAIT_TIME
---------- ------------------------------ ------------------------- ------ ------ ------ ------ ----------
         2 LGWR                           Log File                       0     60      0     27         30
        35 LGWR                           Log File                       0     23      0     48          0
         6 LGWR                           Log File                       0     12      0      8          0
         7 LGWR                           Log File                       0     15      0     10          0
         8 LGWR                           Log File                       0     18      0     12          0
        13 LGWR                           Log File                       0      5      0      4          0
        15 LGWR                           Log File                       0      6      0      2          0
        17 LGWR                           Log File                       0      4      0      3          0
        19 LGWR                           Log File                       0     11      0      7          0
        20 LGWR                           Log File                       0     14      0      8          0
        37 LGWR                           Log File                       0     13      0     27          0
        38 LGWR                           Log File                       0     15      0     29          0
        41 LGWR                           Log File                       0     13      0    152         46
        42 LGWR                           Log File                       0     16      0    153         61
        43 LGWR                           Log File                       0     19      0    155         61
        49 LGWR                           Log File                       0      2      0      0          0
         3 LGWR                           Log File                       0      1      0      0          0
         4 LGWR                           Log File                       0      7      0      4          0
         5 LGWR                           Log File                       0      9      0      6          0
        12 LGWR                           Log File                       0      1      0      0          0
/* Redo log writes are accompanied by a small number of single-block log reads; log file writes consist of both single-block and multiblock writes */
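The same breakdown can also be checked in real time, rather than from the AWR history; a minimal sketch, assuming the 11g view v$iostat_function exposes the same request counters:

select function_name,
       small_read_reqs,
       small_write_reqs,
       large_read_reqs,
       large_write_reqs
  from v$iostat_function
 where function_name = 'LGWR';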

Also, you are probably like me in not being very fond of the OEM interface, but one very handy OEM feature is the chart of Average Active Sessions (AAS) over a recent interval. What, you have never seen such a thing? The screenshot below should jog your memory:


Back in the 10g days someone already wrote a script to display the AAS over a recent interval; in 11g that script can be simplified:

set echo off;
set verify off;
alter session set nls_date_format='HH24:MI';
select *
  from (select nvl(wait_class, 'CPU') activity,
               trunc(sample_time, 'MI') time
          from v$active_session_history) v
 pivot(count(*) for activity in('CPU' as "CPU",
                                'Concurrency' as "Concurrency",
                                'System I/O' as "System I/O",
                                'User I/O' as "User I/O",
                                'Administrative' as "Administrative",
                                'Configuration' as "Configuration",
                                'Application' as "Application",
                                'Network' as "Network",
                                'Commit' as "Commit",
                                'Scheduler' as "Scheduler",
                                'Cluster' as "Cluster",
                                'Queueing' as "Queueing",
                                'Other' as "Other"))
 where time > sysdate - interval '&last_min' minute
 order by time;
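For comparison, here is a rough sketch of how the same idea can be written without PIVOT, in the spirit of the older 10g scripts mentioned above (one DECODE-driven column per wait class; only a few classes are shown, and the &last_min substitution variable is reused):

select trunc(sample_time, 'MI') time,
       count(decode(nvl(wait_class, 'CPU'), 'CPU', 1)) "CPU",
       count(decode(wait_class, 'User I/O', 1)) "User I/O",
       count(decode(wait_class, 'System I/O', 1)) "System I/O",
       count(decode(wait_class, 'Concurrency', 1)) "Concurrency",
       count(decode(wait_class, 'Other', 1)) "Other"
  from v$active_session_history
 where sample_time > sysdate - interval '&last_min' minute
 group by trunc(sample_time, 'MI')
 order by 1;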

We can also use ASH to find the TOP SESSIONs and TOP SQL in Oracle:

/* Find the sql_id and activity history of recent TOP SQL */
select ash.SQL_ID,
       sum(decode(ash.session_state, 'ON CPU', 1, 0)) "CPU",
       sum(decode(ash.session_state, 'WAITING', 1, 0)) -
       sum(decode(ash.session_state,
                  'WAITING',
                  decode(en.wait_class, 'User I/O', 1, 0),
                  0)) "WAIT",
       sum(decode(ash.session_state,
                  'WAITING',
                  decode(en.wait_class, 'User I/O', 1, 0),
                  0)) "IO",
       sum(decode(ash.session_state, 'ON CPU', 1, 1)) "TOTAL"
  from v$active_session_history ash, v$event_name en
 where SQL_ID is not NULL
   and en.event# = ash.event#
 group by sql_id
 order by sum(decode(session_state, 'ON CPU', 1, 1)) desc;

SQL_ID               CPU       WAIT         IO      TOTAL
------------- ---------- ---------- ---------- ----------
a01hp0psv0rrh          0          2          7          9
24g90qj2b7ywk          0          5          1          6
2amsp6skc6tjv          0          0          5          5
46quk68k7akpa          0          3          1          4
2ufrf9vk4kcwj          0          0          3          3
1w8m6dwy66ttn          0          0          3          3
8uxr3scz9bmxd          0          0          3          3
6htq3p9j91y0s          0          0          3          3
cvn54b7yz0s8u          0          0          3          3
92f47aa2q2rmd          0          2          1          3

/* Find the TOP CPU SESSIONs within the number of minutes given by the ivl variable */
Select session_id, count(*)
  from v$active_session_history
 where session_state = 'ON CPU'
   and SAMPLE_TIME > sysdate -(&ivl/(24 * 60))
 group by session_id
 order by count(*) desc;
Enter value for ivl:  10
old   4:    and SAMPLE_TIME > sysdate -(&ivl/(24 * 60))
new   4:    and SAMPLE_TIME > sysdate -(10/(24 * 60))

SESSION_ID   COUNT(*)
---------- ----------
       136          4

/* Find the TOP WAITING SESSIONs within the number of minutes given by the ivl variable */
Select session_id, count(*)
  from v$active_session_history
 where session_state = 'WAITING'
   and SAMPLE_TIME > SYSDATE - (&ivl / (24 * 60))
 group by session_id
 order by count(*) desc;

Enter value for ivl:  10
old   4:    and SAMPLE_TIME > SYSDATE - (&ivl / (24 * 60))
new   4:    and SAMPLE_TIME > SYSDATE - (10 / (24 * 60))

SESSION_ID   COUNT(*)
---------- ----------
         3         11

/* Find recent TOP SESSIONs and their activity history */
select ash.session_id,
       ash.session_serial#,
       ash.user_id,
       ash.program,
       sum(decode(ash.session_state, 'ON CPU', 1, 0)) "CPU",
       sum(decode(ash.session_state, 'WAITING', 1, 0)) -
       sum(decode(ash.session_state,
                  'WAITING',
                  decode(en.wait_class, 'User I/O', 1, 0),
                  0)) "WAITING",
       sum(decode(ash.session_state,
                  'WAITING',
                  decode(en.wait_class, 'User I/O', 1, 0),
                  0)) "IO",
       sum(decode(session_state, 'ON CPU', 1, 1)) "TOTAL"
  from v$active_session_history ash, v$event_name en
 where en.event# = ash.event#
 group by session_id, user_id, session_serial#, program
 order by sum(decode(session_state, 'ON CPU', 1, 1));

The scripts above can be replaced entirely by the ashrpt report, but they remain very handy for quick, targeted checks over a short interval.
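For reference, the ashrpt report mentioned here is generated with the script shipped under the Oracle home, for example:

SQL> @?/rdbms/admin/ashrpt.sql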

UNION ALL returning wrong results?

Application staff reported that an 11.2.0.1 database on Linux was returning incorrect result sets from UNION ALL queries; let us analyze one of the affected statements in detail:

SELECT MTL_SECONDARY_INVENTORIES.SECONDARY_INVENTORY_NAME,
       MTL_SECONDARY_INVENTORIES.ORGANIZATION_ID,
       MTL_SECONDARY_INVENTORIES.DESCRIPTION,
       MTL_SECONDARY_INVENTORIES.AVAILABILITY_TYPE,
       MTL_SECONDARY_INVENTORIES.MATERIAL_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.MATERIAL_OVERHEAD_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.RESOURCE_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.OVERHEAD_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.OUTSIDE_PROCESSING_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.ASSET_INVENTORY,
       MTL_SECONDARY_INVENTORIES.EXPENSE_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.ENCUMBRANCE_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.ATTRIBUTE3,
       MTL_SECONDARY_INVENTORIES.ATTRIBUTE5,
       WORKFLOW_START_TIMES.WORKFLOW_START_TIME
  FROM REPEMEAERP.MTL_SECONDARY_INVENTORIES,
       REPEMEAERP.WORKFLOW_START_TIMES
 WHERE MTL_SECONDARY_INVENTORIES.DW_UPDATE_DT >
       TO_DATE('01/01/1900 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
   AND MTL_SECONDARY_INVENTORIES.DW_UPDATE_DT <=
       WORKFLOW_START_TIMES.WORKFLOW_START_TIME
   AND WORKFLOW_START_TIMES.WORKFLOW_NAME =
       LTRIM(RTRIM('w_int_FreqBatch_EMEA'))
/* the above is QUERY A */
UNION ALL
/* below is QUERY B */
SELECT DISTINCT 'WORKORDERS',
                MTL_SECONDARY_INVENTORIES.ORGANIZATION_ID,
                'WORK ORDERS WITH WIP AS CATEGORY VALUE',
                1,
                0,
                0,
                0,
                0,
                0,
                1,
                0,
                0,
                'MOI',
                '0',
                WORKFLOW_START_TIMES.WORKFLOW_START_TIME
  FROM REPEMEAERP.MTL_SECONDARY_INVENTORIES, EIMMAINT.WORKFLOW_START_TIMES
 WHERE MTL_SECONDARY_INVENTORIES.DW_UPDATE_DT >
       TO_DATE('01/01/1900 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
   AND MTL_SECONDARY_INVENTORIES.DW_UPDATE_DT <=
       WORKFLOW_START_TIMES.WORKFLOW_START_TIME
   AND WORKFLOW_START_TIMES.WORKFLOW_NAME =
       LTRIM(RTRIM('w_int_FreqBatch_EMEA'))
/
138 rows selected.

In the statement above, QUERY A (the SELECT before the UNION ALL) returns 69 rows when run on its own and QUERY B returns 15 rows, yet the UNION ALL returns 138 rows instead of the expected 84. This system had in fact only recently been migrated from 10g to 11gR2, and the same application never showed this problem on 10g, so we may suspect that some feature newly introduced in 11g carries a bug capable of producing wrong results.

We now have a general direction, but the key to the problem is still unclear; let us look at the execution plan of this SQL:

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name                          | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                               |     7 |  2443 |    52   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                   |                               |     7 |  2443 |    52   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL             | WORKFLOW_START_TIMES          |     1 |    29 |    48   (0)| 00:00:01 |
|   3 |   VIEW                          | VW_JF_SET$9BAED2EA            |     1 |   320 |     4   (0)| 00:00:01 |
|   4 |    UNION ALL PUSHED PREDICATE   |                               |       |       |            |          |
|*  5 |     FILTER                      |                               |       |       |            |          |
|   6 |      TABLE ACCESS BY INDEX ROWID| MTL_SECONDARY_INVENTORIES     |     3 |   336 |     2   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN          | IDX_MTL_SECONDARY_INVENTORIES |     1 |       |     1   (0)| 00:00:01 |
|*  8 |     FILTER                      |                               |       |       |            |          |
|   9 |      TABLE ACCESS BY INDEX ROWID| MTL_SECONDARY_INVENTORIES     |     3 |    36 |     2   (0)| 00:00:01 |
|* 10 |       INDEX RANGE SCAN          | IDX_MTL_SECONDARY_INVENTORIES |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("WORKFLOW_START_TIMES"."WORKFLOW_NAME"='w_int_FreqBatch_EMEA')
   5 - filter(TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')<"WORKFLOW_START_TIMES"."WORKFLOW_START_TIME")
   7 - access("MTL_SECONDARY_INVENTORIES"."DW_UPDATE_DT">TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "MTL_SECONDARY_INVENTORIES"."DW_UPDATE_DT"<="WORKFLOW_START_TIMES"."WORKFLOW_START_TIME")
   8 - filter(TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')<"WORKFLOW_START_TIMES"."WORKFLOW_START_TIME")
  10 - access("MTL_SECONDARY_INVENTORIES"."DW_UPDATE_DT">TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "MTL_SECONDARY_INVENTORIES"."DW_UPDATE_DT"<="WORKFLOW_START_TIMES"."WORKFLOW_START_TIME")

You may have spotted two quite unfamiliar items in the plan above: UNION ALL PUSHED PREDICATE and VW_JF_SET$. What are they?

Let us start with JF. JF is short for join factorization, which you can think of as factoring a common join out of a union. If you have studied discrete mathematics or database theory, this cost-based transformation, newly introduced in 11.2.0.1, will not feel unfamiliar. Expressed as a formula it looks roughly like this:

YYA, YYB and YYC are three related data objects, or three related result sets;
(YYA JOIN YYB) UNION [ALL] (YYA JOIN YYC)
can be transformed into:
YYA JOIN (YYB UNION [ALL] YYC)

This way the YYA part only needs to be read once, and one join is saved; sounds rather good, doesn't it?
Now let us look at a very simple example of Oracle applying join factorization:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> drop table yya;
drop table yya
           *
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> drop table yyb;
drop table yyb
           *
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> create table yya as select rownum id1,rownum id2,rownum id3 from dual connect by level<=20000;
Table created.
SQL> create table yyb as select rownum id1,rownum id2,rownum id3 from dual connect by level<=20000;
Table created.

SQL> explain plan for
2  select * from yya ,yyb where yya.id1=yyb.id1
3  union all
4  select * from yya, yyb where yya.id1=yyb.id1;

Explained.

SQL> set linesize 100 pagesize 1400;

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 744914999

-------------------------------------------------------------------------------------------
| Id  | Operation            | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                    | 40000 |  2500K|    49   (3)| 00:00:01 |
|*  1 |  HASH JOIN           |                    | 40000 |  2500K|    49   (3)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | YYA                | 20000 |   234K|    16   (0)| 00:00:01 |
|   3 |   VIEW               | VW_JF_SET$6E3F6682 | 40000 |  2031K|    32   (0)| 00:00:01 |
|   4 |    UNION-ALL         |                    |       |       |            |          |
|   5 |     TABLE ACCESS FULL| YYB                | 20000 |   761K|    16   (0)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| YYB                | 20000 |   761K|    16   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("YYA"."ID1"="ITEM_1")

/* The view VW_JF_SET$6E3F6682 appears in the execution plan: Oracle chose join factorization, and the total cost of this plan is 49 */

SQL> alter session set "_optimizer_join_factorization"=false;

Session altered.

/* The hidden parameter _optimizer_join_factorization determines whether the optimizer may choose join factorization; let us now disable it */
SQL> explain plan for
  2  select * from yya join yyb on yya.id1=yyb.id1
  3  union all
  4  select * from yya join yyb on yya.id1=yyb.id1;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 3439541885

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 40000 |  1992K|    66  (52)| 00:00:01 |
|   1 |  UNION-ALL          |      |       |       |            |          |
|*  2 |   HASH JOIN         |      | 20000 |   996K|    33   (4)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| YYA  | 20000 |   234K|    16   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| YYB  | 20000 |   761K|    16   (0)| 00:00:01 |
|*  5 |   HASH JOIN         |      | 20000 |   996K|    33   (4)| 00:00:01 |
|   6 |    TABLE ACCESS FULL| YYA  | 20000 |   234K|    16   (0)| 00:00:01 |
|   7 |    TABLE ACCESS FULL| YYB  | 20000 |   761K|    16   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("YYA"."ID1"="YYB"."ID1")
   5 - access("YYA"."ID1"="YYB"."ID1")
/* With join factorization disabled, Oracle falls back to the conventional "brute force" approach and the plan cost rises to 66 */

/* The following test is the interesting one */

SQL> alter session set "_optimizer_join_factorization"=true;

Session altered.

SQL> create table yyc as select * from yyb;

Table created.

SQL> explain plan for
  2  select * from yya,yyc where yya.id1=yyc.id1
  3  union all
  4  select * from yya,yyb where yya.id1=yyb.id1;

Explained.

SQL>  select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 4240055274

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 40000 |  1992K|    66  (52)| 00:00:01 |
|   1 |  UNION-ALL          |      |       |       |            |          |
|*  2 |   HASH JOIN         |      | 20000 |   996K|    33   (4)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| YYA  | 20000 |   234K|    16   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| YYC  | 20000 |   761K|    16   (0)| 00:00:01 |
|*  5 |   HASH JOIN         |      | 20000 |   996K|    33   (4)| 00:00:01 |
|   6 |    TABLE ACCESS FULL| YYA  | 20000 |   234K|    16   (0)| 00:00:01 |
|   7 |    TABLE ACCESS FULL| YYB  | 20000 |   761K|    16   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("YYA"."ID1"="YYC"."ID1")
   5 - access("YYA"."ID1"="YYB"."ID1")
/* Confusing: what reason could Oracle have for not using join factorization here? It looks like the practical applicability of join factorization still needs some scrutiny in the short term */

/* Can a 10053 trace explain this? */
SQL> alter system flush shared_pool;

System altered.

SQL> oradebug setmypid;
Statement processed.
SQL> oradebug event 10053 trace name context forever,level 1;
Statement processed.
SQL> explain plan for
  2  select * from yya join yyb on yya.id1=yyb.id1
  3  union all
  4  select * from yya join yyc on yya.id1=yyc.id1;

Explained.

SQL> oradebug event 10053 trace name context off;
Statement processed.
SQL> oradebug tracefile_name;
/home/maclean/app/maclean/diag/rdbms/prod/PROD/trace/PROD_ora_7907.trc

view /home/maclean/app/maclean/diag/rdbms/prod/PROD/trace/PROD_ora_7907.trc
***********************************
Cost-Based Join Factorization
***********************************
Join-Factorization on query block SET$1 (#1)
JF: Using search type: exhaustive
JF: Generate basic transformation units
Validating JF unit: (branch: {2, 3} table: {YYA, YYA})
  rejected: join predicates do not match

JF: Generate transformation units from basic units
JF: No state generated.
/* The optimizer decided the join predicates do not qualify for join factorization, so the JF proposal was rejected. The case remains unsolved! */

Join factorization is a great new technique, no argument there, but new techniques often turn out to be horrible as well (a word I have been using a lot lately). Is our problem caused by this newcomer? Searching MOS for the keywords join factorization turns up Bug 9504322, filed in March of this year (2010); quote:

Hdr: 9504322 11.2.0.1 RDBMS 11.2.0.1 QRY OPTIMIZER PRODID-5 PORTID-226
Abstract: WRONG RESULTS WITH UNION_ALL AND INLINE VIEWS

*** 03/24/10 05:38 am ***

PROBLEM:
--------
Wrong results on 11.2 for queries of type:

SELECT * FROM
(
SELECT ... FROM view, table WHERE ...
UNION ALL
SELECT ... FROM view, table WHERE NOT ...
);

DIAGNOSTIC ANALYSIS:
--------------------
Problem seen between 10.2.0.4 and 11.2.0.1.
If we remove the use of inline view the correct results are returned.

WORKAROUND:
-----------
N/A

RELATED BUGS:
-------------

REPRODUCIBILITY:
----------------
It is reproducing on generic 11.2.0.1

Well, it looks like we are getting somewhere, but practice is the sole criterion of truth:


SQL> alter session set "_optimizer_join_factorization"=true;

Session altered.

SELECT MTL_SECONDARY_INVENTORIES.SECONDARY_INVENTORY_NAME,
       MTL_SECONDARY_INVENTORIES.ORGANIZATION_ID,
       MTL_SECONDARY_INVENTORIES.DESCRIPTION,
       MTL_SECONDARY_INVENTORIES.AVAILABILITY_TYPE,
       MTL_SECONDARY_INVENTORIES.MATERIAL_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.MATERIAL_OVERHEAD_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.RESOURCE_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.OVERHEAD_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.OUTSIDE_PROCESSING_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.ASSET_INVENTORY,
       MTL_SECONDARY_INVENTORIES.EXPENSE_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.ENCUMBRANCE_ACCOUNT,
       MTL_SECONDARY_INVENTORIES.ATTRIBUTE3,
       MTL_SECONDARY_INVENTORIES.ATTRIBUTE5,
       WORKFLOW_START_TIMES.WORKFLOW_START_TIME
  FROM REPEMEAERP.MTL_SECONDARY_INVENTORIES,
       REPEMEAERP.WORKFLOW_START_TIMES
 WHERE MTL_SECONDARY_INVENTORIES.DW_UPDATE_DT >
       TO_DATE('01/01/1900 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
   AND MTL_SECONDARY_INVENTORIES.DW_UPDATE_DT <=
       WORKFLOW_START_TIMES.WORKFLOW_START_TIME
   AND WORKFLOW_START_TIMES.WORKFLOW_NAME =
       LTRIM(RTRIM('w_int_FreqBatch_EMEA'))
/* the above is QUERY A */
UNION ALL
/* below is QUERY B */
SELECT DISTINCT 'WORKORDERS',
                MTL_SECONDARY_INVENTORIES.ORGANIZATION_ID,
                'WORK ORDERS WITH WIP AS CATEGORY VALUE',
                1,
                0,
                0,
                0,
                0,
                0,
                1,
                0,
                0,
                'MOI',
                '0',
                WORKFLOW_START_TIMES.WORKFLOW_START_TIME
  FROM REPEMEAERP.MTL_SECONDARY_INVENTORIES, EIMMAINT.WORKFLOW_START_TIMES
 WHERE MTL_SECONDARY_INVENTORIES.DW_UPDATE_DT >
       TO_DATE('01/01/1900 00:00:00', 'MM/DD/YYYY HH24:MI:SS')
   AND MTL_SECONDARY_INVENTORIES.DW_UPDATE_DT <=
       WORKFLOW_START_TIMES.WORKFLOW_START_TIME
   AND WORKFLOW_START_TIMES.WORKFLOW_NAME =
       LTRIM(RTRIM('w_int_FreqBatch_EMEA'))
/

138 rows selected.

The outcome is quite different from what we guessed: join factorization is not the culprit, and with no end in sight we are back at the starting point.
At this point UNION ALL PUSHED PREDICATE becomes the prime suspect. So what is PUSH PREDICATE? I call it pushing the predicate down into the view. It first showed up back in 10g and has been a source of trouble ever since. What kind of operation is it exactly? Let us look at the example below:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> create table youyus (t1 int,t2 varchar2(20));

Table created.

SQL> alter table youyus add primary key(t1);

Table altered.

SQL> explain plan for
  2  select *
  3    from youyus
  4  union all
  5  select * from youyus;

Explained.
/* this subquery will be used in the statements that follow */
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 1959159425

-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     2 |    50 |     4  (50)| 00:00:01 |
|   1 |  UNION-ALL         |        |       |       |            |          |
|   2 |   TABLE ACCESS FULL| YOUYUS |     1 |    25 |     2   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| YOUYUS |     1 |    25 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------
/* this subquery will be used in the statements that follow; its "original" execution plan here is extremely simple */

SQL> explain plan for
  2  select v2.t1, v2.t2
  3    from (select t1 from youyus where rownum=1) v1,
  4         (select *
  5            from youyus
  6          union all
  7          select * from youyus) v2
  8   where v1.t1 = v2.t1;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 2456530141

-----------------------------------------------------------------------------------------------
| Id  | Operation                      | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |              |     1 |    27 |     1   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                  |              |     1 |    27 |     1   (0)| 00:00:01 |
|   2 |   VIEW                         |              |     1 |    13 |     1   (0)| 00:00:01 |
|*  3 |    COUNT STOPKEY               |              |       |       |            |          |
|   4 |     INDEX FULL SCAN            | SYS_C0010819 |     1 |    13 |     1   (0)| 00:00:01 |
|   5 |   VIEW                         |              |     1 |    14 |     0   (0)| 00:00:01 |
|   6 |    UNION ALL PUSHED PREDICATE  |              |       |       |            |          |
|   7 |     TABLE ACCESS BY INDEX ROWID| YOUYUS       |     1 |    25 |     0   (0)| 00:00:01 |
|*  8 |      INDEX UNIQUE SCAN         | SYS_C0010819 |     1 |       |     0   (0)| 00:00:01 |
|   9 |     TABLE ACCESS BY INDEX ROWID| YOUYUS       |     1 |    25 |     0   (0)| 00:00:01 |
|* 10 |      INDEX UNIQUE SCAN         | SYS_C0010819 |     1 |       |     0   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(ROWNUM=1)
   8 - access("YOUYUS"."T1"="V1"."T1")
  10 - access("YOUYUS"."T1"="V1"."T1")
/* PUSHED PREDICATE pushes the join predicate down into the UNION ALL branches; its advantage is that full table scans can be avoided in favour of index access */

SQL> set linesize 100 pagesize 1400;
SQL>
SQL> explain plan for
  2  select /*+ no_push_pred(v2) */ v2.t1, v2.t2
  3    from (select t1 from youyus where rownum=1) v1,
  4         (select *
  5            from youyus
  6          union all
  7          select * from youyus) v2
  8   where v1.t1 = v2.t1;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 2769827061

-------------------------------------------------------------------------------------
| Id  | Operation            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |              |     1 |    38 |     6  (17)| 00:00:01 |
|*  1 |  HASH JOIN           |              |     1 |    38 |     6  (17)| 00:00:01 |
|   2 |   VIEW               |              |     1 |    13 |     1   (0)| 00:00:01 |
|*  3 |    COUNT STOPKEY     |              |       |       |            |          |
|   4 |     INDEX FULL SCAN  | SYS_C0010819 |     1 |    13 |     1   (0)| 00:00:01 |
|   5 |   VIEW               |              |     2 |    50 |     4   (0)| 00:00:01 |
|   6 |    UNION-ALL         |              |       |       |            |          |
|   7 |     TABLE ACCESS FULL| YOUYUS       |     1 |    25 |     2   (0)| 00:00:01 |
|   8 |     TABLE ACCESS FULL| YOUYUS       |     1 |    25 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("V1"."T1"="V2"."T1")
   3 - filter(ROWNUM=1)
/* The no_push_pred hint makes Oracle give up PUSHED PREDICATE; with the conventional UNION-ALL operation the branch subqueries revert to full table scans and the cost of the whole plan goes up */

Several ways to purge cursors from the shared pool in Oracle

Bind peeking was introduced in 9i, but the feature is often a mixed blessing, which is why 11g added adaptive cursor sharing. Apart from problems caused by bind peeking, incorrect statistics can also lead to bad execution plans; in such cases we may need to purge the cached information of a specific cursor so that it is re-parsed.

Below are several ways to purge a cached cursor, offered as a starting point:

1. alter system flush shared_pool;              /* The simplest and crudest method: it flushes the entire cursor cache and may cause a burst of parsing in the short term; not recommended */

2. The dbms_shared_pool package has existed for a long time, but its purge procedure only appeared in 10.2.0.4. Bug 5614566, raised back in 2006, first described the need for an interface to flush individual cursors:

Hdr: 5614566 10.2.0.2 RDBMS 10.2.0.2 DICTIONARY PRODID-5 PORTID-176
Abstract: WE NEED A FLUSH CURSOR INTERFACE
*** 10/20/06 07:48 am ***

Note that in 10.2.0.4 the procedure cannot be used by default: it has to be enabled by setting event 5614566 or by applying patch 5614566. The event is set as follows:

alter system set events '5614566 trace name context forever';

The arguments of the procedure are as follows:

PROCEDURE PURGE
Argument Name                  Type                    In/Out Default?
------------------------------ ----------------------- ------ --------
NAME                           VARCHAR2                IN
FLAG                           CHAR                    IN     DEFAULT
HEAPS                          NUMBER                  IN     DEFAULT

NAME identifies the object to be purged, and there are two cases: PL/SQL objects, triggers, sequences, types and Java objects are identified by their names, while SQL cursors are identified by the combination of the SQL's address and hash_value. FLAG indicates the object type; if this argument is omitted, Oracle assumes the NAME passed in belongs to the package/procedure/function namespace. Note that the argument is case sensitive. The possible values are:

FLAG value   Object type
P            package/procedure/function
Q            sequence
R            trigger
T            type
JS           Java source
JC           Java class
JR           Java resource
JD           Java shared data
C            cursor

The HEAPS argument specifies which heaps of the object to purge. Take a SQL cursor as an example: its most important information lives in heap 0 and heap 6. Heap 0 holds most of the cursor's own metadata, while heap 6 holds the cursor's execution plans. To purge both heap 0 and heap 6, compute 2^0 + 2^6 = 1 + 64 = 65 and pass 65 for HEAPS; to purge only the execution plan, purge heap 6 alone by passing 2^6 = 64. The default value is 1, and purging heap 0 removes the entire cached object.
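As a convenience, the ADDRESS,HASH_VALUE string that purge expects for a cursor can be assembled straight from v$sqlarea; a sketch using the sql_id from the demonstration below:

select address || ',' || hash_value purge_name
  from v$sqlarea
 where sql_id = '25asu5a76nqmn';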

Let us now demonstrate how to purge a cached SQL statement with this procedure:

SQL> alter system flush shared_pool;

System altered.

SQL> select /* cache_me */  count(*)  from youyus;

  COUNT(*)
----------
         9

SQL> select sql_id,
 2         address,
 3         hash_value,
 4         executions,
 5         loads,
 6         version_count,
 7         invalidations,
 8         parse_calls
 9    from v$sqlarea
 10   where sql_text like '%cache_me%'
 11     and sql_text not like '%v$sqlarea%';

SQL_ID        ADDRESS  HASH_VALUE EXECUTIONS      LOADS VERSION_COUNT INVALIDATIONS PARSE_CALLS
------------- -------- ---------- ---------- ---------- ------------- ------------- -----------
25asu5a76nqmn 2F51508C 2389334644          3          1             1             0           3

SQL> select address, plan_hash_value
 2    from v$sql_plan
 3   where sql_id = '25asu5a76nqmn';

ADDRESS  PLAN_HASH_VALUE
-------- ---------------
2F51508C      2542806819
2F51508C      2542806819
2F51508C      2542806819

SQL> exec dbms_shared_pool.purge('2F51508C,2389334644','C',64);

PL/SQL procedure successfully completed.

SQL> select sql_id,
  2         address,
  3         hash_value,
  4         executions,
  5         loads,
  6         version_count,
  7         invalidations,
  8         parse_calls,
  9         plan_hash_value
 10    from v$sqlarea
 11   where sql_text like '%cache_me%'
 12     and sql_text not like '%v$sqlarea%';

SQL_ID        ADDRESS  HASH_VALUE EXECUTIONS      LOADS VERSION_COUNT INVALIDATIONS PARSE_CALLS PLAN_HASH_VALUE
------------- -------- ---------- ---------- ---------- ------------- ------------- ----------- ---------------
25asu5a76nqmn 2F51508C 2389334644          4          1             1             0           4      2542806819

SQL> select * from v$sql_plan where plan_hash_value= 2542806819;
no rows selected

/* The execution plan is gone while the cursor itself is still cached */
SQL> select /* cache_me */  count(*)  from youyus;

 COUNT(*)
----------
 9

SQL> select sql_id,
 2         address,
 3         hash_value,
 4         executions,
 5         loads,
 6         version_count,
 7         invalidations,
 8         parse_calls,
 9         plan_hash_value
 10    from v$sqlarea
 11   where sql_text like '%cache_me%'
 12     and sql_text not like '%v$sqlarea%';

SQL_ID        ADDRESS  HASH_VALUE EXECUTIONS      LOADS VERSION_COUNT INVALIDATIONS PARSE_CALLS PLAN_HASH_VALUE
------------- -------- ---------- ---------- ---------- ------------- ------------- ----------- ---------------
25asu5a76nqmn 2F51508C 2389334644          5          1             1             0           5      2542806819
/* The extra parse call added here is a hard parse */

SQL>  select address,operation from v$sql_plan where plan_hash_value= 2542806819;

ADDRESS  OPERATION
-------- ------------------------------------------------------------
2F51508C SELECT STATEMENT
2F51508C SORT
2F51508C TABLE ACCESS

SQL> exec dbms_shared_pool.purge('2F51508C,2389334644','C',1);

PL/SQL procedure successfully completed.

SQL> select sql_id,
 2         address,
 3         hash_value,
 4         executions,
 5         loads,
 6         version_count,
 7         invalidations,
 8         parse_calls,
 9         plan_hash_value
 10    from v$sqlarea
 11   where sql_text like '%cache_me%'
 12     and sql_text not like '%v$sqlarea%';

no rows selected
SQL> select address,operation from v$sql_plan where plan_hash_value= 2542806819;

no rows selected
SQL> select /* cache_me */  count(*)  from youyus;

 COUNT(*)
----------
 9

SQL> select sql_id,
 2         address,
 3         hash_value,
 4         executions,
 5         loads,
 6         version_count,
 7         invalidations,
 8         parse_calls,
 9         plan_hash_value
 10    from v$sqlarea
 11   where sql_text like '%cache_me%'
 12     and sql_text not like '%v$sqlarea%';

SQL_ID        ADDRESS  HASH_VALUE EXECUTIONS      LOADS VERSION_COUNT INVALIDATIONS PARSE_CALLS PLAN_HASH_VALUE
------------- -------- ---------- ---------- ---------- ------------- ------------- ----------- ---------------
25asu5a76nqmn 2F51508C 2389334644          1          2             1             1           1      2542806819

/* After purging heap 0 of the cursor, all of its information, including the execution plan and even the entries in the simulator, has been removed */

3. If dbms_shared_pool.purge happens to be unavailable in your environment, there are also workarounds that purge a cursor indirectly, for example an otherwise harmless grant/revoke; note, however, that this invalidates the execution plans of every SQL statement that references the granted/revoked object:

SQL> alter system flush shared_pool;

System altered.

SQL> select /* cache_me */  count(*)  from youyus;

  COUNT(*)
----------
         9

SQL> select sql_id,
  2         address,
  3         hash_value,
  4         executions,
  5         loads,
  6         version_count,
  7         invalidations,
  8         parse_calls,
  9         plan_hash_value
 10    from v$sqlarea
 11   where sql_text like '%cache_me%'
 12     and sql_text not like '%v$sqlarea%';

SQL_ID        ADDRESS  HASH_VALUE EXECUTIONS      LOADS VERSION_COUNT INVALIDATIONS PARSE_CALLS PLAN_HASH_VALUE
------------- -------- ---------- ---------- ---------- ------------- ------------- ----------- ---------------
25asu5a76nqmn 2F540EA0 2389334644          1          1             1             0           1      2542806819

SQL> select address,operation,to_char(timestamp,'HH24:MI:SS') from v$sql_plan where plan_hash_value= 2542806819;

ADDRESS  OPERATION                                                    TO_CHAR(
-------- ------------------------------------------------------------ --------
2F540EA0 SELECT STATEMENT                                             13:39:28
2F540EA0 SORT                                                         13:39:28
2F540EA0 TABLE ACCESS                                                 13:39:28

SQL> revoke select on youyus from scott;

Revoke succeeded.

SQL> select sql_id,
  2         address,
  3         hash_value,
  4         executions,
  5         loads,
  6         version_count,
  7         invalidations,
  8         parse_calls,
  9         plan_hash_value
 10    from v$sqlarea
 11   where sql_text like '%cache_me%'
 12     and sql_text not like '%v$sqlarea%';

SQL_ID        ADDRESS  HASH_VALUE EXECUTIONS      LOADS VERSION_COUNT INVALIDATIONS PARSE_CALLS PLAN_HASH_VALUE
------------- -------- ---------- ---------- ---------- ------------- ------------- ----------- ---------------
25asu5a76nqmn 2F540EA0 2389334644          1          1             1             1           1      2542806819

/* A grant/revoke invalidates the execution plan; INVALIDATIONS rises to 1 here */

SQL> select /* cache_me */  count(*)  from youyus;

  COUNT(*)
----------
         9
/* Re-running the SQL will trigger a hard parse */
SQL> select address,operation,to_char(timestamp,'HH24:MI:SS') from v$sql_plan where plan_hash_value= 2542806819;

ADDRESS  OPERATION                                                    TO_CHAR(
-------- ------------------------------------------------------------ --------
2F540EA0 SELECT STATEMENT                                             13:40:23
2F540EA0 SORT                                                         13:40:23
2F540EA0 TABLE ACCESS                                                 13:40:23

/* The timestamp of the execution plan has changed, so the cursor was re-parsed as intended */

4. Perhaps you are not a powerful DBA and cannot issue grant/revoke commands, but if you can gather statistics on the objects the cursor references, or run some other DDL against them, the same goal can be achieved:

SQL> alter system flush shared_pool;

System altered.

SQL>
SQL> select /* cache_me */  count(*)  from youyus;

  COUNT(*)
----------
         9

SQL> select sql_id,
  2         address,
  3         hash_value,
  4         executions,
  5         loads,
  6         version_count,
  7         invalidations,
  8         parse_calls,
  9         plan_hash_value
 10    from v$sqlarea
 11   where sql_text like '%cache_me%'
 12     and sql_text not like '%v$sqlarea%';

SQL_ID        ADDRESS  HASH_VALUE EXECUTIONS      LOADS VERSION_COUNT INVALIDATIONS PARSE_CALLS PLAN_HASH_VALUE
------------- -------- ---------- ---------- ---------- ------------- ------------- ----------- ---------------
25asu5a76nqmn 2F540EA0 2389334644          1          1             1             0           1      2542806819

SQL> analyze table  youyus compute statistics;

Table analyzed.

SQL> select sql_id,
  2         address,
  3         hash_value,
  4         executions,
  5         loads,
  6         version_count,
  7         invalidations,
  8         parse_calls,
  9         plan_hash_value
 10    from v$sqlarea
 11   where sql_text like '%cache_me%'
 12     and sql_text not like '%v$sqlarea%';

SQL_ID        ADDRESS  HASH_VALUE EXECUTIONS      LOADS VERSION_COUNT INVALIDATIONS PARSE_CALLS PLAN_HASH_VALUE
------------- -------- ---------- ---------- ---------- ------------- ------------- ----------- ---------------
25asu5a76nqmn 2F540EA0 2389334644          1          1             1             1           1      2542806819
/* Refreshing the statistics invalidated the cursor */

SQL>    create index ind_youyus on youyus(t1);

Index created.

SQL> alter system flush shared_pool;

System altered.

SQL> select /* cache_me */  count(*)  from youyus;

  COUNT(*)
----------
         9

SQL> select sql_id,
  2         address,
  3         hash_value,
  4         executions,
  5         loads,
  6         version_count,
  7         invalidations,
  8         parse_calls,
  9         plan_hash_value
 10    from v$sqlarea
 11   where sql_text like '%cache_me%'
 12     and sql_text not like '%v$sqlarea%';

SQL_ID        ADDRESS  HASH_VALUE EXECUTIONS      LOADS VERSION_COUNT INVALIDATIONS PARSE_CALLS PLAN_HASH_VALUE
------------- -------- ---------- ---------- ---------- ------------- ------------- ----------- ---------------
25asu5a76nqmn 2F464EA0 2389334644          1          1             1             0           1      2542806819

SQL> alter index ind_youyus rebuild online;

Index altered.

SQL> select sql_id,
  2         address,
  3         hash_value,
  4         executions,
  5         loads,
  6         version_count,
  7         invalidations,
  8         parse_calls,
  9         plan_hash_value
 10    from v$sqlarea
 11   where sql_text like '%cache_me%'
 12     and sql_text not like '%v$sqlarea%';

SQL_ID        ADDRESS  HASH_VALUE EXECUTIONS      LOADS VERSION_COUNT INVALIDATIONS PARSE_CALLS PLAN_HASH_VALUE
------------- -------- ---------- ---------- ---------- ------------- ------------- ----------- ---------------
25asu5a76nqmn 2F464EA0 2389334644          1          1             1             1           1      2542806819
/* Rebuilding an index online achieves the same effect */

That's Great!

Upgrading to RAC 11g R2: What you should know

How to upgrade?
Recommended: “Swing Kit”
Set up a new cluster on separate hardware
Install Grid Infrastructure at your leisure
Avoid time pressure
Create a physical standby on the new cluster
At a pre-defined outage window, perform a switchover
Remember: you don’t need to upgrade the database at the same time as you upgrade to Grid Infrastructure
The trick is to keep other standby databases in sync

Other upgrade options
If you don’t have Swing Kit…
Upgrade the existing software stack
Grid Infrastructure is an out of place upgrade
New Oracle home
Pre 11.2 Clusterware home and ASM home made redundant after successful upgrade
Out of place upgrade should make downgrade easier
Have not tested downgrade personally
Usually only one in the audience did

Slides: https://www.askmac.cn/wp-content/uploads/2011/08/072811_96317_ppt.ppt

Where can you find material for studying Oracle in depth?

Those who subscribe to the oracle-l@freelists.org mailing list (by the way, subscribing to this list is itself a good way to learn Oracle: it is completely free and many of the "masters" of the Oracle world can be found there, although you must pass moderation before you can post; you can find the entry point on freelists.org) may have received, a few days ago, a message from Guillermo Alan Bort asking where to find material on how Oracle works internally.
Enthusiastic list members gave a wide variety of answers; let us sort through these sources for the benefit of those who come later.

The famous lair of Julian Dyke; he has produced many original deep-dive presentations, and I have benefited greatly from his work http://juliandyke.com
oaktable.net gathers the latest technical articles blogged by a large group of experts http://www.oaktable.net/members
Dedicated to digging into RAC technology; arguably the best site for studying RAC http://www.oracleracsig.org
The site of the author of Oracle8i Internal Services; it has not been updated for years, yet it is a lesson you should not miss http://www.ixora.com.au/
Tanel Poder, possibly the world's first true OCM, has written a great deal about internals and performance http://blog.tanelpoder.com/
The wordpress blog of Jonathan Lewis, author of the famous Cost Based Oracle Fundamentals http://jonathanlewis.wordpress.com/
This is like a library dedicated to Oracle material http://www.morganslibrary.org/library.html
Pete Finnigan is an expert on Oracle security and has published a large number of articles on hacking Oracle http://www.petefinnigan.com
And the entry point of the mailing list we mentioned above! http://www.freelists.org/webpage/oracle-l

to be continued ………..

Temporary tablespace exhaustion caused by HASH GROUP BY in 10g

This morning application staff reported that a CTAS script which used to complete without problems on 9i kept failing with "ORA-1652: unable to extend temp segment by 128 in tablespace TS_HQY1_TEMP" after being migrated to 10g. They said the data volume involved had hardly changed since the migration, and, puzzlingly, the temporary tablespace on the new 10g database is over 40 GB, far larger than on the old 9i database. Clearly the problem is not an undersized temporary tablespace; the cause is much more likely to lie in Oracle behaving differently after the migration.
The script runs once a month to aggregate data; one of the tables involved has nearly 400 million rows, so the GROUP BY operates on a huge data set. Let us look at the SQL in detail:

create table gprs_bill.zou_201007_cell_id as
select /* g_all_cdr01,60 */
 calling_num mobile_number,
 lac,
 lpad(cell_id, 5, '0') cell_id,
 count(*) c,
 sum(call_duration) call_duration,
 sum(decode(record_type, '00', 1, 0) * call_duration) moc_call_duration,
 sum(decode(record_type, '01', 1, 0) * call_duration) mtc_call_duarion
  from gprs_bill.g_all_cdr01
 where substr(calling_num, 1, 7) in
       (select mobile_prefix from gprs_bill.zou_mobile_prefix)
 group by calling_num, lac, lpad(cell_id, 5, '0');

SQL> set autotrace traceonly exp
SQL> select /* g_all_cdr01,60 */
  2  calling_num mobile_number,
  3  lac,
  4  lpad(cell_id,5,'0') cell_id,
  5  count(*) c,
  6  sum(call_duration) call_duration,
  7  sum(decode(record_type,'00',1,0)*call_duration) moc_call_duration,
  8  sum(decode(record_type,'01',1,0)*call_duration) mtc_call_duarion
  9  from  gprs_bill.g_all_cdr01
 10  where substr(calling_num,1,7) in (select mobile_prefix from gprs_bill.zou_mobile_prefix)
 11  group by
 12  calling_num ,
 13  lac,
 14  lpad(cell_id,5,'0');

Execution Plan
----------------------------------------------------------
Plan hash value: 212866585

--------------------------------------------------------------------------------------------------
| Id  | Operation             | Name              | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                   |   229K|  9880K|       |  1033K  (3)| 03:26:41 |
|   1 |  HASH GROUP BY        |                   |   229K|  9880K|    22M|  1033K  (3)| 03:26:41 |
|*  2 |   HASH JOIN RIGHT SEMI|                   |   229K|  9880K|       |  1030K  (3)| 03:26:10 |
|   3 |    TABLE ACCESS FULL  | ZOU_MOBILE_PREFIX |  1692 | 13536 |       |    11   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL  | G_ALL_CDR01       |   388M|    13G|       |  1026K  (2)| 03:25:21 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("MOBILE_PREFIX"=SUBSTR("CALLING_NUM",1,7))

We can see that Oracle used the HASH GROUP BY algorithm to group the data; the hash algorithm is a new grouping algorithm introduced in 10g.
Let us look in more detail at how grouping changed in 10g:
In 10g a GROUP BY can still trigger a sort, but 10g introduces new algorithms, and none of them guarantee that the returned rows are ordered. In 10g, if you want the rows returned by a GROUP BY to be ordered, you must add an explicit ORDER BY clause; this is fundamentally different from 9i. Without an ORDER BY clause there is no guarantee the result is correctly sorted.
In 10g a GROUP BY prefers a hash-based algorithm to the older sort-based algorithm, because the hash algorithm has a lower CPU cost. Neither algorithm guarantees sorted output in 10g; when the sort algorithm is used the rows may merely happen to come back sorted.
MOS suggests that if the migration brings many unfavourable changes, the old algorithm can be kept by adjusting parameters. Note, however, that even with the settings below there is no guarantee that a GROUP BY in 10g and later returns rows ordered as in 9i; what you really need to do is add an explicit ORDER BY clause so that Oracle guarantees the ordering for you.

alter session set "_gby_hash_aggregation_enabled" = false;
alter session set optimizer_features_enable="9.2.0";
or
alter session set optimizer_features_enable="8.1.7";

其中_gby_hash_aggregation_enabled隐式参数决定了Oracle是否可以启用新的HASH算法来进行数据分组(也适用于distinct等操作)。
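除了在会话级别修改参数外,也可以考虑仅在单条语句上控制分组算法。下面是一个示意写法,使用了未公开的NO_USE_HASH_AGGREGATION提示(tab1/col1为假设的示例对象,该hint并非官方文档公开内容,实际效果请自行测试验证):

-- 示意: 在语句级禁用HASH分组,回退到SORT GROUP BY
-- (tab1/col1为假设对象; NO_USE_HASH_AGGREGATION为未公开hint,使用前请自行验证)
select /*+ no_use_hash_aggregation */
       col1, count(*)
  from tab1
 group by col1
 order by col1;   -- 如需有序结果,仍应显式指定order by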

对于以上说法我们通过实验进一步验证:

在11g中的测试如下:
SQL> select  * from v$version;

BANNER
----------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

SQL> select  *  from youyus;

T1                 T2
---------- ----------
A                  10
B                  10
F                  30
G                  30
H                  40
I                  40
J                  40
L                  20
M                  20

已选择9行。
SQL>  analyze table youyus compute statistics for all columns;

表已分析。

SQL> set autotrace on;

SQL>  select t2,count(*) from youyus group by t2;

        T2   COUNT(*)
---------- ----------
        30          2
        20          2
        40          3
        10          2


执行计划
----------------------------------------------------------
Plan hash value: 2940504347

-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     4 |     8 |     3  (34)| 00:00:01 |
|   1 |  HASH GROUP BY     |        |     4 |     8 |     3  (34)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| YOUYUS |     9 |    18 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------
/*可以看到使用了hash算法,且返回结果未按t2列大小顺序排列*/

SQL> select t2,count(*) from youyus group by t2 order by t2;

        T2   COUNT(*)
---------- ----------
        10          2
        20          2
        30          2
        40          3


执行计划
----------------------------------------------------------
Plan hash value: 1349668650

-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     4 |     8 |     3  (34)| 00:00:01 |
|   1 |  SORT GROUP BY     |        |     4 |     8 |     3  (34)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| YOUYUS |     9 |    18 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------
/*加入order by子句后,又变回了SORT算法,而且正常排序*/
SQL> alter session set "_gby_hash_aggregation_enabled" = false;

会话已更改。
SQL> alter session set optimizer_features_enable="9.2.0";

会话已更改。
SQL> select t2,count(*) from youyus group by t2;

        T2   COUNT(*)
---------- ----------
        10          2
        20          2
        30          2
        40          3


执行计划
----------------------------------------------------------
Plan hash value: 1349668650

-------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost  |
-------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     4 |     8 |    11 |
|   1 |  SORT GROUP BY     |        |     4 |     8 |    11 |
|   2 |   TABLE ACCESS FULL| YOUYUS |     9 |    18 |     2 |
-------------------------------------------------------------

Note
-----
   - cpu costing is off (consider enabling it)
/*optimizer_features_enable设置为9.2.0后cpu cost被off了;返回数据正确排序,但我们要记住这是"侥幸"*/

SQL> alter session set optimizer_features_enable="10.2.0.5";

会话已更改。
SQL> select t2,count(*) from youyus group by t2;

        T2   COUNT(*)
---------- ----------
        10          2
        20          2
        30          2
        40          3


执行计划
----------------------------------------------------------
Plan hash value: 1349668650

-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     4 |     8 |     3  (34)| 00:00:01 |
|   1 |  SORT GROUP BY     |        |     4 |     8 |     3  (34)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| YOUYUS |     9 |    18 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------
/*optimizer_features_enable设为10.2.0.5 一切正常*/
SQL> alter session set optimizer_features_enable="11.2.0.1";

会话已更改。

SQL> select t2,count(*) from youyus group by t2;

        T2   COUNT(*)
---------- ----------
        10          2
        20          2
        30          2
        40          3


执行计划
----------------------------------------------------------
Plan hash value: 1349668650

-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     4 |     8 |     3  (34)| 00:00:01 |
|   1 |  SORT GROUP BY     |        |     4 |     8 |     3  (34)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| YOUYUS |     9 |    18 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------
/*11.2.0.1中没有变化*/
SQL> alter session set optimizer_features_enable="8.1.7";

会话已更改。

SQL> alter session set "_gby_hash_aggregation_enabled" =true;

会话已更改。
/*看看optimizer_features_enable设为8.1.7,而_gby_hash_aggregation_enabled为true,这种"矛盾"情况下的表现*/
SQL> select t2,count(*) from youyus group by t2;

        T2   COUNT(*)
---------- ----------
        30          2
        20          2
        40          3
        10          2


执行计划
----------------------------------------------------------
Plan hash value: 2940504347

-------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost  |
-------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     4 |     8 |    10 |
|   1 |  HASH GROUP BY     |        |     4 |     8 |    10 |
|   2 |   TABLE ACCESS FULL| YOUYUS |     9 |    18 |     1 |
-------------------------------------------------------------

Note
-----
   - cpu costing is off (consider enabling it)
/*居然仍采用了HASH GROUP BY,看起来类似_gby_hash_aggregation_enabled这类参数优先级要高于optimizer_features_enable*/

9i上的表现如下:
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
PL/SQL Release 9.2.0.4.0 - Production
CORE    9.2.0.3.0       Production
TNS for Linux: Version 9.2.0.4.0 - Production
NLSRTL Version 9.2.0.4.0 - Production

SQL> analyze table youyus_9i compute statistics for all columns;

Table analyzed.

SQL> select * from youyus_9i;

T1         T2
-- ----------
A          10
B          10
F          30
G          30
H          40
I          40
J          40
L          20
M          20

9 rows selected.

SQL> alter session set optimizer_mode=ALL_ROWS;

Session altered.

SQL> select t2,count(*) from youyus_9i group by t2;

        T2   COUNT(*)
---------- ----------
        10          2
        20          2
        30          2
        40          3


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=4 Bytes=8)
   1    0   SORT (GROUP BY) (Cost=4 Card=4 Bytes=8)
   2    1     TABLE ACCESS (FULL) OF 'YOUYUS_9I' (Cost=2 Card=21 Bytes
          =42)
/*9i下虽然没有指定order by,但我们可以放心返回的数据总是排序的;*/

SQL> alter session set "_gby_hash_aggregation_enabled" =true;
alter session set "_gby_hash_aggregation_enabled" =true
                  *
ERROR at line 1:
ORA-02248: invalid option for ALTER SESSION
/*9i下不存在_gby_hash_aggregation_enabled隐式参数*/

That's great!

应用脚本对返回数据并没有必须有序的强制要求,但使用HASH GROUP BY算法后临时表空间的使用量大幅上升,远大于之前在9i上的用量,最终导致语句无法顺利完成。首先想到的当然是通过修改_gby_hash_aggregation_enabled参数恢复到原先的SORT算法,并观察其临时表空间使用量:

SQL> alter session set "_gby_hash_aggregation_enabled"=false;
Session altered.

SQL> select /* g_all_cdr01,60 */
  2  calling_num mobile_number,
  3  lac,
  4  lpad(cell_id,5,'0') cell_id,
  5  count(*) c,
  6  sum(call_duration) call_duration,
  7  sum(decode(record_type,'00',1,0)*call_duration) moc_call_duration,
  8  sum(decode(record_type,'01',1,0)*call_duration) mtc_call_duarion
  9  from  gprs_bill.g_all_cdr01
 10  where substr(calling_num,1,7) in (select mobile_prefix from gprs_bill.zou_mobile_prefix)
 11  group by
 12  calling_num ,
 13  lac,
 14  lpad(cell_id,5,'0');

Execution Plan
----------------------------------------------------------
Plan hash value: 4013005149

---------------------------------------------------------------------------------------------------
| Id  | Operation             | Name              | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                   |   229K|  9880K|       |  1033K  (3)| 03:26:41 |
|   1 |  SORT GROUP BY        |                   |   229K|  9880K|    22M|  1033K  (3)| 03:26:41 |
|*  2 |   HASH JOIN RIGHT SEMI|                   |   229K|  9880K|       |  1030K  (3)| 03:26:10 |
|   3 |    TABLE ACCESS FULL  | ZOU_MOBILE_PREFIX |  1692 | 13536 |       |    11   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL  | G_ALL_CDR01       |   388M|    13G|       |  1026K  (2)| 03:25:21 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("MOBILE_PREFIX"=SUBSTR("CALLING_NUM",1,7))

/*重新执行出现问题的脚本*/
create table gprs_bill.zou_201007_cell_id as
    select /* g_all_cdr01,60 */
    calling_num mobile_number,
    lac,
    lpad(cell_id,5,'0') cell_id,
    count(*) c,
    sum(call_duration) call_duration,
    sum(decode(record_type,'00',1,0)*call_duration) moc_call_duration,
    sum(decode(record_type,'01',1,0)*call_duration) mtc_call_duarion
    from  gprs_bill.g_all_cdr01
    where substr(calling_num,1,7) in (select mobile_prefix from gprs_bill.zou_mobile_prefix)
    group by
    calling_num ,
    lac,
    lpad(cell_id,5,'0');

可以看到在会话级别设置_gby_hash_aggregation_enabled为false后,Oracle不再采用10g中的HASH分组算法;因为该CTAS SQL脚本运行时间较长,我们通过动态视图V$SORT_USAGE来观察其运行期间的排序段使用量:

SQL> set time   on;
14:30:59 SQL> select tablespace,contents,segtype,blocks*8/1024 from v$sort_usage where username='GPRS_BILL';

TABLESPACE                      CONTENTS  SEGTYPE   BLOCKS*8/1024
------------------------------- --------- --------- -------------
TS_HQY1_TEMP                    TEMPORARY SORT               9349

14:35:59 SQL> /

TABLESPACE                      CONTENTS  SEGTYPE   BLOCKS*8/1024
------------------------------- --------- --------- -------------
TS_HQY1_TEMP                    TEMPORARY SORT              10011

/*5分钟内排序段增长了10011-9349=662MB临时空间*/
15:02:46 SQL> select target ,totalwork,sofar,time_remaining,elapsed_seconds from v$session_longops where sofar!=totalwork;

TARGET                                                            TOTALWORK      SOFAR TIME_REMAINING ELAPSED_SECONDS
---------------------------------------------------------------- ---------- ---------- -------------- ---------------
GPRS_BILL.G_ALL_CDR01                                               5575890    5435796            143            5557

15:05:10 SQL> select target ,totalwork,sofar,time_remaining,elapsed_seconds from v$session_longops where sofar!=totalwork;

TARGET                                                            TOTALWORK      SOFAR TIME_REMAINING ELAPSED_SECONDS
---------------------------------------------------------------- ---------- ---------- -------------- ---------------
GPRS_BILL.G_ALL_CDR01                                               5575890    5562082             14            5692

15:05:13 SQL> select tablespace,contents,segtype,blocks*8/1024 from v$sort_usage where username='GPRS_BILL';

TABLESPACE                      CONTENTS  SEGTYPE   BLOCKS*8/1024
------------------------------- --------- --------- -------------
TS_HQY1_TEMP                    TEMPORARY SORT              13835

15:12:22 SQL> select tablespace,contents,segtype,blocks*8/1024 from v$sort_usage where username='GPRS_BILL';

TABLESPACE                      CONTENTS  SEGTYPE   BLOCKS*8/1024
------------------------------- --------- --------- -------------
TS_HQY1_TEMP                    TEMPORARY SORT              13922

/* 排序已经完成,排序段不再增长*/

该分组操作最后排序段使用量为13922MB,在客户可以接受的范围内。看起来新引入的HASH算法虽然有CPU成本低于SORT算法的优势,但可能消耗大量临时空间,可谓有得有失。

dbms_stats收集模式在9i和10g上的区别

大约2个月前,一位业内人士问我为什么9i CBO迁移到10g后会出现许多由执行计划改变导致的性能问题,他当然是为了能考考我;实际上我接触过的环境大多在8i/9i下没有使用CBO优化模式,从8i/9i的RBO模式跨越到10g上较为成熟的CBO优化模式,这当中出现执行计划讹误可以说是情理之中的;而9i CBO到10g CBO的问题也不少,我首先想到的是统计信息收集上存在区别,但具体是什么区别却又说不上来。那位业内人士听了我的回答,笑,笑而不语。

Oracle十分博大,博大到可以称为Oracle的世界,很多东西长期不用就会遭人淡忘;我们来复习下9i和10g上统计信息收集的一些改动。

在9i中收集统计信息时其默认的METHOD_OPT模式为'FOR ALL COLUMNS SIZE 1',使用这种模式时Oracle只收集所有列上最基础的统计信息,包括最小/最大值、distinct值等,但不会收集列上的直方图。对那些数据分布均匀以及没有出现在SQL语句where子句中作为条件的列来说,这样的统计信息完全足够了。然而如果列上的数据分布并不均匀,就可能导致CBO的执行计划成本计算不准确,这时我们需要手动收集这些列上的直方图。
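下面给出一个在9i默认设置之外手工收集某列直方图的示意(OWNER1.TAB1与列SKEWED_COL均为假设的示例对象):

-- 示意: 针对数据分布不均匀的列手工收集直方图
-- (OWNER1.TAB1及SKEWED_COL为假设对象,254为直方图桶数的上限)
begin
  dbms_stats.gather_table_stats(
    ownname    => 'OWNER1',
    tabname    => 'TAB1',
    method_opt => 'FOR COLUMNS SKEWED_COL SIZE 254',
    cascade    => true);
end;
/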

10g上对dbms_stats包中默认的METHOD_OPT模式做了修正,这显然是9i CBO迁移到10g CBO后容易出现执行计划变化的一个重要因素,也是那位业内人士所要问的题眼。

新的默认METHOD_OPT值为"FOR ALL COLUMNS SIZE AUTO",这意味着Oracle将通过内部算法自动决定哪些列上需要收集直方图,哪些列上不需要。是否收集直方图取决于列上数据的分布情况以及与对应表相关的工作负载,这种工作负载可以理解为数据库中存在某些需要参考这些列的详细信息来计算执行成本的SQL语句。
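作为参考,下面的示意展示了如何查看以及在数据库级别修改dbms_stats的默认METHOD_OPT(set_param/get_param是10g中提供的接口,11g以后一般改用set_global_prefs;是否回退默认值请结合自身环境评估):

-- 示意: 查看并修改dbms_stats默认的METHOD_OPT (10g)
select dbms_stats.get_param('METHOD_OPT') from dual;
-- 默认返回: FOR ALL COLUMNS SIZE AUTO

-- 如需恢复为9i风格的默认行为(不自动收集直方图):
exec dbms_stats.set_param('METHOD_OPT','FOR ALL COLUMNS SIZE 1');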

这种方式听上去十分理想,似乎Oracle可以默默无闻地为我们抓取所有急需的统计信息。

然而问题是在许多环境中Oracle没有做出是否需要收集列上直方图的正确决定。实践证明Oracle有可能收集许许多多不必要的直方图,同时又放弃了许多需要收集的直方图。

在轻量级的应用环境中这种直方图收集不当的问题造成的影响大多数时间不为人们所察觉,相反在performance critical或已经形成性能瓶颈的环境中则可能是一场不大不小的麻烦。

此外Oracle还改变了列上密度(density)信息的计算方式。该值常被Oracle用来确定谓词的选择性,当突然出现额外不必要的直方图时,可能造成广泛而显著的性能影响(当然好的影响也可能出现,只是概率上……)。

显然这些莫名出现的不速之客也会给共享池造成影响,library cache与row cache相关的闩可能短期内车水马龙,如果您的应用数据表上有成百上千的列那么情况可能更糟(所以说开发要遵循范式,没有规矩的最后结果往往是应用不可用,项目失败。别告诉我你的应用苟且地活着,那同样意味着项目失败)!
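排查这类问题时,可以先确认表上实际收集到了哪些直方图,下面是一个示意查询(OWNER1/TAB1为假设的示例对象):

-- 示意查询: 列出某张表上收集了直方图的列 (10g)
select column_name, num_distinct, density, num_buckets, histogram
  from dba_tab_col_statistics
 where owner = 'OWNER1'
   and table_name = 'TAB1'
   and histogram <> 'NONE'
 order by column_name;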

DataGuard Managed recovery hang

Our team deleted some archivelogs by mistake. We rolled the database forward to an SCN with an RMAN incremental backup and did a manual recovery to sync it with the primary. Managed recovery is now failing:
alter database recover managed standby database disconnect

Alert log has :

Fri Jan 22 13:50:22 2010
Attempt to start background Managed Standby Recovery process
MRP0 started with pid=12
MRP0: Background Managed Standby Recovery process started
Media Recovery Waiting for thread 1 seq# 193389
Fetching gap sequence for thread 1, gap sequence 193389-193391
Trying FAL server: ITS
Fri Jan 22 13:50:28 2010
Completed: alter database recover managed standby database d
Fri Jan 22 13:53:25 2010
Failed to request gap sequence. Thread #: 1, gap sequence: 193389-193391
All FAL server has been attempted.

Managed recovery was working earlier today after the RMAN incremental and resolved two gaps automatically, but it now appears hung, with the standby falling behind the primary.
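Before digging into the FAL settings, a quick sanity check on the standby side can confirm the gap the MRP is complaining about (a sketch; the sequence range comes from the alert log above):

-- On the standby: report any detected archive gap
select thread#, low_sequence#, high_sequence# from v$archive_gap;

-- Confirm which sequences around the gap have actually arrived / been applied
select thread#, sequence#, applied
  from v$archived_log
 where thread# = 1
   and sequence# between 193385 and 193395
 order by sequence#;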

SQL> show parameter fal

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fal_client string ITS_STBY
fal_server string ITS

[v08k608:ITS:oracle]$ tnsping ITS_STBY

TNS Ping Utility for Solaris: Version 9.2.0.7.0 - Production on 22-JAN-2010 15:01:17

Copyright (c) 1997 Oracle Corporation. All rights reserved.

Used parameter files:
/oracle/product/9.2.0/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL= TCP)(Host= v08k608.am.mot.com)(Port= 1526)) (CONNECT_DATA = (SID = ITS)))
OK (10 msec)
[v08k608:ITS:oracle]$ tnsping ITS

TNS Ping Utility for Solaris: Version 9.2.0.7.0 - Production on 22-JAN-2010 15:01:27

Copyright (c) 1997 Oracle Corporation. All rights reserved.

Used parameter files:
/oracle/product/9.2.0/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL= TCP)(Host= 187.10.68.75)(Port= 1526)) (CONNECT_DATA = (SID = ITS)))
OK (320 msec)

Primary has :
SQL> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      SERVICE=DRITS_V08K608 reopen=60
                                                 max_failure=10 net_timeout=180
                                                 LGWR ASYNC=20480 OPTIONAL
log_archive_dest_state_2             string      ENABLE
[ITS]/its15/oradata/ITS/arch> tnsping DRITS_V08K608
TNS Ping Utility for Solaris: Version 9.2.0.7.0 - Production on 22-JAN-2010 15:03:24
Copyright (c) 1997 Oracle Corporation. All rights reserved.
Used parameter files:
/oracle/product/9.2.0/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL= TCP)(Host= 10.177.13.57)(Port= 1526)) (CONNECT_DATA = (SID = ITS)))
OK (330 msec)

The ARCH process on the primary database might hang due to the bug below, so that it could not ship the missing archive log files to the standby database.

BUG 6113783 ARC PROCESSES CAN HANG INDEFINITELY ON NETWORK
[ Not published so not viewable in My Oracle Support ]
Fixed 11.2, 10.2.0.5 patchset

We could work around the issue by killing the ARCH processes on the primary site; they will be respawned automatically and immediately without harming the primary database.

[maclean@rh2 ~]$ ps -ef|grep arc
maclean   8231     1  0 22:24 ?        00:00:00 ora_arc0_PROD
maclean   8233     1  0 22:24 ?        00:00:00 ora_arc1_PROD
maclean   8350  8167  0 22:24 pts/0    00:00:00 grep arc
[maclean@rh2 ~]$ kill -9 8231 8233
[maclean@rh2 ~]$ ps -ef|grep arc
maclean   8389     1  0 22:25 ?        00:00:00 ora_arc0_PROD
maclean   8391     1  1 22:25 ?        00:00:00 ora_arc1_PROD
maclean   8393  8167  0 22:25 pts/0    00:00:00 grep arc

and alert log will have:

Fri Jul 30 22:25:27 EDT 2010
ARCH: Detected ARCH process failure
ARCH: Detected ARCH process failure
ARCH: STARTING ARCH PROCESSES
ARC0 started with pid=26, OS id=8389
Fri Jul 30 22:25:27 EDT 2010
ARC0: Archival started
ARC1: Archival started
ARCH: STARTING ARCH PROCESSES COMPLETE
ARC1 started with pid=27, OS id=8391
Fri Jul 30 22:25:27 EDT 2010
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
Fri Jul 30 22:25:27 EDT 2010
ARC1: Becoming the heartbeat ARCH

Actually, as long as we do not kill any fatal background process, in 10g Oracle will respawn all non-fatal processes automatically.
For example:

[maclean@rh2 ~]$ ps -ef|grep ora_|grep -v grep
maclean  14264     1  0 23:16 ?        00:00:00 ora_pmon_PROD
maclean  14266     1  0 23:16 ?        00:00:00 ora_psp0_PROD
maclean  14268     1  0 23:16 ?        00:00:00 ora_mman_PROD
maclean  14270     1  0 23:16 ?        00:00:00 ora_dbw0_PROD
maclean  14272     1  0 23:16 ?        00:00:00 ora_lgwr_PROD
maclean  14274     1  0 23:16 ?        00:00:00 ora_ckpt_PROD
maclean  14276     1  0 23:16 ?        00:00:00 ora_smon_PROD
maclean  14278     1  0 23:16 ?        00:00:00 ora_reco_PROD
maclean  14338     1  0 23:16 ?        00:00:00 ora_arc0_PROD
maclean  14340     1  0 23:16 ?        00:00:00 ora_arc1_PROD
maclean  14452     1  0 23:17 ?        00:00:00 ora_s000_PROD
maclean  14454     1  0 23:17 ?        00:00:00 ora_d000_PROD
maclean  14456     1  0 23:17 ?        00:00:00 ora_cjq0_PROD
maclean  14458     1  0 23:17 ?        00:00:00 ora_qmnc_PROD
maclean  14460     1  0 23:17 ?        00:00:00 ora_mmon_PROD
maclean  14462     1  0 23:17 ?        00:00:00 ora_mmnl_PROD
maclean  14467     1  0 23:17 ?        00:00:00 ora_q000_PROD
maclean  14568     1  0 23:18 ?        00:00:00 ora_q001_PROD

[maclean@rh2 ~]$ ps -ef|grep ora_|grep -v pmon|grep -v ckpt |grep -v lgwr|grep -v smon|grep -v grep|grep -v dbw|grep -v psp|grep -v mman |grep -v rec|awk '{print $2}'|xargs kill -9

and alert log will have:
Fri Jul 30 23:20:58 EDT 2010
ARCH: Detected ARCH process failure
ARCH: Detected ARCH process failure
ARCH: STARTING ARCH PROCESSES
ARC0 started with pid=20, OS id=14959
Fri Jul 30 23:20:58 EDT 2010
ARC0: Archival started
ARC1: Archival started
ARCH: STARTING ARCH PROCESSES COMPLETE
Fri Jul 30 23:20:58 EDT 2010
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
ARC1 started with pid=21, OS id=14961
ARC1: Becoming the heartbeat ARCH
Fri Jul 30 23:21:29 EDT 2010
found dead shared server 'S000', pid = (10, 3)
found dead dispatcher 'D000', pid = (11, 3)
Fri Jul 30 23:22:29 EDT 2010
Restarting dead background process CJQ0
Restarting dead background process QMNC
CJQ0 started with pid=12, OS id=15124
Fri Jul 30 23:22:29 EDT 2010
Restarting dead background process MMON
QMNC started with pid=13, OS id=15126
Fri Jul 30 23:22:29 EDT 2010
Restarting dead background process MMNL
MMON started with pid=14, OS id=15128
MMNL started with pid=16, OS id=15132

That's all right!

重做日志浪费(redo wastage)

Oracle中联机日志文件(online redo log)在大多数平台上以512字节为一个标准块。

(HPUX、Tru64 Unix上是1024 bytes,SCO UNIX、Reliant UNIX上是2048 bytes,而MVS、MPE/ix上是4096 bytes;虽然以上许多UNIX平台已经不再流行,实际的日志块大小可以通过下面的语句查询到)

select max(l.lebsz) log_block_size_kccle
  from sys.x$kccle l
 where l.inst_id = userenv('Instance');

LGWR后台进程写出REDO时未必能填满最后的当前日志块。举例而言,假设redo buffer中有1025字节的内容需要写出,则1025=512+512+1,共占用三个重做日志标准块,前两个标准块被填满,而第三个标准块只使用了1个字节。LGWR在完成写出前需要释放"redo allocation"闩,在此之前SGA中指示"redo buffer"写入位置的变量会被指向未填满块之后的下一个重做块;换而言之,该块中剩余的511字节空间被LGWR跳过了,这就是我们说的redo wastage。我们可以通过分析v$sysstat动态视图中的redo wastage统计信息了解实例生命周期中的重做浪费总量。

SQL> col name for a25
SQL> select name,value from v$sysstat where name like '%wastage%';

NAME                           VALUE
------------------------- ----------
redo wastage                  132032

(此处原有一张redo wastage的示意图)

为什么要浪费这部分空间呢?实际上,这种做法十分有益于LGWR的串行I/O模式。redo wastage并不是问题或者Bug,而是Oracle故意为之的。当然过量的redo wastage也不是什么好事,一般是LGWR写出过于频繁的症状表现。9i以后很少有因为隐式参数_log_io_size过小而导致的LGWR过载了,如果在您的系统中发现redo wastage的问题不小,那么无限制地滥用commit操作往往是引起问题的罪魁祸首,减少不必要的commit语句,把commit从循环中移除都将利于减少redo wastage。
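如果希望进一步定位是哪些会话产生了较多的redo wastage,可以参考下面的示意查询(基于v$sesstat/v$statname/v$session,仅作排查思路示例):

-- 示意查询: 按会话统计redo size与redo wastage
select s.sid,
       s.username,
       max(decode(n.name, 'redo size',    st.value, 0)) redo_size,
       max(decode(n.name, 'redo wastage', st.value, 0)) redo_wastage
  from v$sesstat st, v$statname n, v$session s
 where st.statistic# = n.statistic#
   and st.sid = s.sid
   and n.name in ('redo size', 'redo wastage')
 group by s.sid, s.username
 order by redo_wastage desc;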

我们来看一下关于redo wastage的演示:

SQL> select distinct bytes/1024/1024 from v$log;

BYTES/1024/1024
---------------
             50                          /*确认联机日志文件大小为50MB*/
SQL> archive log list;                 /*确认数据库处于归档状态*/
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /s01/arch
SQL> set time on;
19:49:45 SQL> alter system switch logfile;           /*切换日志,清理现场*/
System altered.
19:51:07 SQL> col name for a25
19:51:16 SQL> select name,value from v$sysstat where name in ('redo size','redo wastage');

NAME                           VALUE
------------------------- ----------
redo size                 1418793324
redo wastage                88286544               /*演示开始时的基础统计值*/
19:51:19 SQL> begin
19:52:10   2  for i in 1..550000 loop
19:52:10   3  insert into tv values(1,'a');
19:52:10   4   commit;
19:52:10   5   end loop;
19:52:10   6   end;
19:52:11   7  /
/*匿名块中commit操作位于loop循环内,将导致大量redo wastage*/
PL/SQL procedure successfully completed.

19:53:07 SQL> select name,value from v$sysstat where name in ('redo size','redo wastage');

NAME                           VALUE
------------------------- ----------
redo size                 1689225404
redo wastage               112011352
/*频繁提交的匿名块产生了 1689225404-1418793324=257MB的redo,其中存在112011352-88286544=22MB的redo wastage*/

19:53:14 SQL>  begin
19:53:33   2  for i in 1..550000 loop
19:53:33   3  insert into tv values(1,'a');
19:53:33   4  end loop;
19:53:33   5    commit;
19:53:33   6   end;
19:53:34   7  /
/* 此匿名块中commit操作被移除loop循环中,批量修改数据后仅在最后提交一次*/
PL/SQL procedure successfully completed.

19:53:59 SQL> select name,value from v$sysstat where name in ('redo size','redo wastage');

NAME                           VALUE
------------------------- ----------
redo size                 1828546240
redo wastage               112061296
/*稀疏提交的匿名块最后产生了1828546240-1689225404=132MB的重做,而redo wastage为112061296-112011352=48k*/

可能您会很奇怪前者不是只比后者多出22MB的redo浪费吗,为什么总的redo量差了那么多?

我们需要注意到commit本身也是要产生redo的,而且其产生量并不少!就以上演示来看,频繁提交的过程中commit所占用的redo空间约为(257-132-22)/257≈40%,而每次commit的平均redo量约为(257-132-22)*1024*1024/550000≈196 bytes。

commit操作是事务ACID的基础之一,合理运用commit可以帮我们构建健壮可靠的应用,而滥用它必将是另一场灾难!
