As an Oracle DBA, you may have come across situations where you are left with no option other than pushing hints through outlines, SQL patches, or SQL profiles because application dependencies rule out tuning the query itself. I had to deal with an interesting situation where pushing the hints became quite tricky.
First of all, as a performance-tuning DBA, you should always ask yourself why you have to use hints at all: why can't the Oracle optimizer pick the optimal execution plan in the first place? But there are always cases where the optimizer simply can't pick the right driving table, join order, or index due to various constraints. In my case, I was dealing with nearly 2,000 lines of code, with 40 WITH-clause subquery blocks, each joining 14 tables. Due to business urgency, a code change was not an option. In such a situation, the first thing to verify is that your statistics are accurate, you have histograms on the columns containing skewed data, you have extended statistics on the column combinations identified by the dbms_stats.seed_col_usage() procedure, and you have selective indexes.
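As a refresher, the column-group workflow with dbms_stats looks roughly like this (a sketch; MY_TABLE is an illustrative name):
exec dbms_stats.seed_col_usage(null, null, 300);
-- run the representative workload during those 300 seconds, then review what was recorded:
select dbms_stats.report_col_usage(user, 'MY_TABLE') from dual;
-- create column-group extensions for the recorded combinations:
select dbms_stats.create_extended_stats(user, 'MY_TABLE') from dual;
-- regather statistics so the new extensions are populated:
exec dbms_stats.gather_table_stats(user, 'MY_TABLE');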
Even after that, with a large number of tables and temporary table transformations due to subquery factoring, the optimizer may end up picking an undesired driving table or an unselective index. Things get even worse when, due to the presence of hierarchies, the optimizer has to guess cardinalities, and those guesses can be inaccurate. I will discuss these topics in more detail in separate posts.
So the challenge here was to determine the query blocks where the optimizer didn't pick the right driving table and ended up doing a partition range all. Thankfully, you can use format=>'OUTLINE' in the dbms_xplan display functions to see the optimizer's outline hints. From there you can check the LEADING hints and find out which block is not using the desired driving table. Once you have these, you can arrange the tables in the desired order in the LEADING hint, and all that is left is to create a SQL profile or SQL patch.
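As a quick illustration, the outline for a cached cursor can be pulled like this (a sketch; substitute the actual sql_id):
SQL> select * from table(dbms_xplan.display_cursor('&sql_id', null, 'OUTLINE'));
Here is the code I used to tune the required queries -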
DECLARE
sql_text clob ;
description varchar2(1000):='Temporary sql patch for BI report';
name varchar2(1000) :='Sql_Patch';
output varchar2(1000);
sqlpro_attr SYS.SQLPROF_ATTR;
l_count number:=0;
begin
select outline_hints
bulk collect
into
sqlpro_attr
from
(select 'LEADING(@"SEL$1" "T597999"@"SEL$1" "T1841902"@"SEL$1" "T543491"@"SEL$1" "T572337"@"SEL$1" "T1841904"@"SEL$1" "T547300"@"SEL$1")' as outline_hints from dual
union all
select 'LEADING(@"SEL$2" "T597999"@"SEL$2" "T1841902"@"SEL$2" "T543491"@"SEL$2" "T572337"@"SEL$2" "T1841904"@"SEL$2" "T547300"@"SEL$2")' as outline_hints from dual
union all
select 'LEADING(@"SEL$3" "T597999"@"SEL$3" "T1841902"@"SEL$3" "T543491"@"SEL$3" "T572337"@"SEL$3" "T1841904"@"SEL$3" "T547300"@"SEL$3")' as outline_hints from dual
union all
select 'LEADING(@"SEL$4" "T597999"@"SEL$4" "T1841902"@"SEL$4" "T543491"@"SEL$4" "T572337"@"SEL$4" "T1841904"@"SEL$4" "T547300"@"SEL$4")' as outline_hints from dual);
for l1 in
(select a.sql_id, c.sql_text, a.FORCE_MATCHING_SIGNATURE from sys.wrh$_sqlstat A, dba_hist_snapshot B, sys.wrh$_sqltext C
where B.BEGIN_INTERVAL_TIME >= trunc(sysdate) - 7 AND
A.SNAP_ID = B.SNAP_ID and A.instance_number = B.instance_number and
A.PARSING_SCHEMA_NAME='XXRPT_READONLY' and
C.SQL_ID=A.SQL_ID and
((ROUND(((a.ELAPSED_TIME_DELTA/1000000)/a.EXECUTIONS_DELTA),5) > 60 and a.EXECUTIONS_DELTA <> 0) OR
(ROUND((a.ELAPSED_TIME_DELTA/1000000),5) > 60 and a.EXECUTIONS_DELTA = 0))
) loop
if DBMS_LOB.INSTR(l1.SQL_TEXT, 'T547300.ACCT_PERIOD_END_DT_WID = 10120140831000.0', 1, 1) > 0 or
DBMS_LOB.INSTR(l1.SQL_TEXT, 't547300.acct_period_end_dt_wid = t597999.row_wid', 1, 1) > 0 or
DBMS_LOB.INSTR(l1.SQL_TEXT, 't547300.acct_period_end_dt_wid = 10120140831000.0', 1, 1) > 0 or
DBMS_LOB.INSTR(l1.SQL_TEXT, 'T547300.ACCT_PERIOD_END_DT_WID = T597999.ROW_WID', 1, 1) > 0
then
dbms_output.put_line(l1.sql_id ||'-'|| l1.FORCE_MATCHING_SIGNATURE);
begin
l_count:= l_count + 1;
output := SYS.DBMS_SQLTUNE_INTERNAL.I_CREATE_SQL_PROFILE(
SQL_TEXT => l1.sql_text,
PROFILE_XML => DBMS_SMB_INTERNAL.VARR_TO_HINTS_XML(sqlpro_attr),
NAME => name || l_count,
DESCRIPTION => description,
CATEGORY => 'DEFAULT',
CREATOR => 'SYS',
VALIDATE => TRUE,
TYPE => 'PATCH',
FORCE_MATCH => TRUE,
IS_PATCH => TRUE);
dbms_output.put_line(output);
exception when others then
dbms_output.put_line('SQLERRM-' || sqlerrm);
end;
end if;
end loop;
end;
/
In the above code, I am passing the collection of hints via sqlpro_attr (a SYS.SQLPROF_ATTR collection), converting the set of hints into XML format, and passing it to DBMS_SQLTUNE_INTERNAL.I_CREATE_SQL_PROFILE.
To find the problematic SQL, I use dbms_lob.instr to check for matching text with an elapsed time of more than 60 seconds, and I create the SQL profile on that text. As you can see, four subquery blocks (SEL$1, SEL$2, SEL$3, SEL$4) resulted in poor performance, so all my LEADING hints refer to those blocks with the desired order of tables to be joined.
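Once the block completes, you can confirm what was created. A sketch (with IS_PATCH => TRUE, depending on the version the result shows up in dba_sql_patches rather than dba_sql_profiles):
SQL> select name, category, status from dba_sql_patches where name like 'Sql_Patch%';
SQL> select name, category, status, force_matching from dba_sql_profiles where name like 'Sql_Patch%';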
And that's it. No matter how big the query is, as long as you know the optimal join order and the subquery blocks having issues, you can pass as many hints as needed using the method above to create SQL profiles on the hinted SQL text.
Let me know your thoughts or if you have any questions.
Monday, March 30, 2015
Ask me Oracle
Just opening this thread for addressing day-to-day Oracle performance issues or any questions you have. Next time I will do a better job of organizing the content, and will probably have a separate website altogether.
Monday, March 2, 2015
Tuning SQL query without changing code
In this article, I will discuss various methods of tuning a SQL statement without rewriting the code. You can use these methods under certain circumstances, such as vendor packages where a code change is not possible, or when you need to restore the performance of a SQL query immediately after it starts performing poorly due to a plan change.
I am using the example below, where I created a simple table MDSDBA.TEST_REWRITE_EQ from the data of dba_users in a test environment. The examples are purely for demonstration purposes and should not be confused with any other performance problems.
SQL> explain plan for select username from MDSDBA.TEST_REWRITE_EQ TEST_REWRITE_EQ;
Explained.
SQL> select * from table(dbms_xplan.display());
Plan hash value: 3048021686
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 495 | 4950 | 1 (0)| 00:00:01 |
| 1 | INDEX FULL SCAN | TEST_REWRITE_EQ_NM_IDX | 495 | 4950 | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------
8 rows selected.
In the above example you can see an INDEX FULL SCAN, which is good. Now, just for the sake of demonstration, I want to force a full scan of the table TEST_REWRITE_EQ. I can use the three methods below -
1.) Using the procedure dbms_advanced_rewrite.declare_rewrite_equivalence.
2.) Using SQL plan baselines or stored outlines.
3.) Using a SQL patch.
METHOD I: DBMS_ADVANCED_REWRITE
SQL> begin
2 sys.dbms_advanced_rewrite.
3 declare_rewrite_equivalence
4 (
5 name => 'TEST_REWRITE_EQUIV',
6 source_stmt => 'select username from MDSDBA.TEST_REWRITE_EQ TEST_REWRITE_EQ',
7 destination_stmt => 'select /*+ FULL(TEST_REWRITE_EQ) */ username from MDSDBA.TEST_REWRITE_EQ TEST_REWRITE_EQ where 1=1',
8 validate => TRUE
9 /*rewrite_mode => 'TEXT_MATCH'*/
10 );
11 end;
12 /
PL/SQL procedure successfully completed.
SQL> explain plan for select username from MDSDBA.TEST_REWRITE_EQ TEST_REWRITE_EQ;
Explained.
SQL> select * from table(dbms_xplan.display());
Plan hash value: 3048021686
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 495 | 4950 | 1 (0)| 00:00:01 |
| 1 | INDEX FULL SCAN | TEST_REWRITE_EQ_NM_IDX | 495 | 4950 | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------
8 rows selected.
As you can see, the rewrite equivalence isn't used in our case, the reason being that query_rewrite_enabled is set to FALSE.
SQL> sho parameter rewrite
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
query_rewrite_enabled string FALSE
query_rewrite_integrity string ENFORCED
SQL> alter session set query_rewrite_enabled=TRUE;
Session altered.
SQL> alter session set query_rewrite_integrity=TRUSTED;
Session altered.
After setting query_rewrite_enabled to TRUE and query_rewrite_integrity to TRUSTED, you can see that Oracle now does a full table scan instead of an index full scan.
SQL> explain plan for select username from MDSDBA.TEST_REWRITE_EQ TEST_REWRITE_EQ;
Explained.
SQL> select * from table(dbms_xplan.display());
Plan hash value: 3419488459
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 495 | 4950 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| TEST_REWRITE_EQ | 495 | 4950 | 4 (0)| 00:00:01 |
-------------------------------------------------------------------------------------
8 rows selected.
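To back this change out later, the equivalence can be dropped again (a sketch using the corresponding API):
begin
sys.dbms_advanced_rewrite.drop_rewrite_equivalence(name => 'TEST_REWRITE_EQUIV');
end;
/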
#####################################################################################################
METHOD II: SQL Plan Baselines -
NON HINTED SQL details:
SQL_ID:1n1hcgypncsh4, PLAN_HASH_VALUE:3048021686
HINTED SQL details:
SQL_ID:3j238umtfht7y, PLAN_HASH_VALUE:3419488459
Create a SQL plan baseline on the non-hinted original SQL:
SQL> variable cnt number;
SQL> EXECUTE :cnt := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id=>'1n1hcgypncsh4');
PL/SQL procedure successfully completed.
SQL> SELECT sql_handle, sql_text, plan_name, enabled FROM dba_sql_plan_baselines;
SQL_HANDLE SQL_TEXT PLAN_NAME ENA
------------------------------ -------------------------------------------------------------------------------- ------------------------------ ---
SQL_4666a5f125a71327 select username from MDSDBA.TEST_REWRITE_EQ TEST_REWRITE_EQ SQL_PLAN_4ctp5y4kuf4t7ecfcc08f YES
Disable the existing plan baseline using the code below:
variable cnt number;
exec :cnt :=DBMS_SPM.ALTER_SQL_PLAN_BASELINE(SQL_HANDLE => 'SQL_4666a5f125a71327',PLAN_NAME=> 'SQL_PLAN_4ctp5y4kuf4t7ecfcc08f',ATTRIBUTE_NAME => 'enabled',ATTRIBUTE_VALUE => 'NO');
Pin the execution plan of the hinted SQL to the original SQL handle -
exec :cnt:=dbms_spm.load_plans_from_cursor_cache(sql_id => '3j238umtfht7y',plan_hash_value => 3419488459,sql_handle => 'SQL_4666a5f125a71327');
SQL> select * from table(dbms_xplan.display());
Plan hash value: 3419488459
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 250 | 2500 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS STORAGE FULL| TEST_REWRITE_EQ | 250 | 2500 | 4 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
Note
- SQL plan baseline "SQL_PLAN_4ctp5y4kuf4t7047d5bcc" used for this statement
12 rows selected.
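At this point you can also verify that the hinted plan is attached to the original SQL handle and accepted (a sketch):
SQL> select sql_handle, plan_name, enabled, accepted from dba_sql_plan_baselines where sql_handle = 'SQL_4666a5f125a71327';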
#####################################################################################################################################
METHOD III: SQL Patch -
Drop the SQL plan baseline created earlier using the package below -
SQL> exec :cnt:=dbms_spm.drop_SQL_PLAN_BASELINE(SQL_HANDLE => 'SQL_4666a5f125a71327');
PL/SQL procedure successfully completed.
SQL> explain plan for select username from MDSDBA.TEST_REWRITE_EQ TEST_REWRITE_EQ;
Explained.
SQL> select * from table(dbms_xplan.display());
Plan hash value: 3048021686
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 250 | 2500 | 1 (0)| 00:00:01 |
| 1 | INDEX FULL SCAN | TEST_REWRITE_EQ_NM_IDX | 250 | 2500 | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------
8 rows selected.
BEGIN
SYS.DBMS_SQLDIAG_INTERNAL.i_create_patch(sql_text => 'select username from MDSDBA.TEST_REWRITE_EQ TEST_REWRITE_EQ',
hint_text => 'FULL(TEST_REWRITE_EQ)',
name => 'test_sql_patch');
END;
/
SQL> select * from table(dbms_xplan.display());
Plan hash value: 3419488459
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 250 | 2500 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS STORAGE FULL| TEST_REWRITE_EQ | 250 | 2500 | 4 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
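If the patch is no longer needed, it can be removed with the documented drop procedure (a sketch):
SQL> exec dbms_sqldiag.drop_sql_patch(name => 'test_sql_patch');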
Another use of a SQL patch is to fix SQL errors as part of the SQL Repair Advisor.
declare
l_task varchar2(1000);
begin
l_task := SYS.DBMS_SQLDIAG.create_diagnosis_task(
sql_id => '3j238umtfht7y',
task_name => 'sql_repair_task',
problem_type => DBMS_SQLDIAG.PROBLEM_TYPE_PERFORMANCE);
end;
/
where problem_type can take any of the values below -
PROBLEM_TYPE_PERFORMANCE - The SQL is performing badly.
PROBLEM_TYPE_WRONG_RESULTS - The query appears to be giving inconsistent results.
PROBLEM_TYPE_COMPILATION_ERROR - The query fails to compile, even though it should be valid.
PROBLEM_TYPE_EXECUTION_ERROR - The query compiles, but results in a runtime error, probably due to a bad execution plan, or an execution plan that encounters a bug.
PROBLEM_TYPE_ALT_PLAN_GEN - The user wants the advisor to look for alternative plans.
exec SYS.DBMS_SQLDIAG.execute_diagnosis_task(task_name => 'sql_repair_task');
SQL> set serveroutput on size 1000000
declare
l_report clob;
begin
l_report := SYS.DBMS_SQLDIAG.report_diagnosis_task(task_name => 'sql_repair_task' );
dbms_output.put_line(l_report);
end;
/
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name : sql_repair_task
Tuning Task Owner : SYS
Workload Type : Single SQL Statement
Scope : COMPREHENSIVE
Time Limit(seconds): 1800
Completion Status : COMPLETED
Started at : 03/02/2015 15:19:06
Completed at : 03/02/2015 15:19:07
-------------------------------------------------------------------------------
Schema Name: MDSDBA
SQL ID : 3j238umtfht7y
SQL Text : select /*+ FULL(TEST_REWRITE_EQ) */ username from
MDSDBA.TEST_REWRITE_EQ TEST_REWRITE_EQ
-------------------------------------------------------------------------------
SQL patch was found to resolve the problem.
BEGIN
SYS.DBMS_SQLDIAG.accept_sql_patch(
task_name => 'sql_repair_task',
task_owner => 'SYS',
replace => TRUE);
END;
/
-------------------------------------------------------------------------------
If you are okay with the findings, then accept the SQL patch -
BEGIN
SYS.DBMS_SQLDIAG.accept_sql_patch(
task_name => 'sql_repair_task',
task_owner => 'SYS',
replace => TRUE);
END;
/
Wednesday, January 21, 2015
Index Joins in Oracle database
In a normal scenario, if you have multiple indexes on a table's columns, the Oracle optimizer will prefer one of the available indexes based on the clustering factor of the index and other factors, such as the use of an ORDER BY clause on the indexed column. I will show an example where I used the index join method to eliminate table access by index ROWID and was able to retrieve the data from the indexes alone.
The table has five columns, and there are two separate indexes, XYZ_I1 and XYZ_I2, on COL1 and COL2 respectively. COL1 and COL2 contain sequence values, and table XYZ has around 10,000 records.
SQL> desc xyz
Name Null? Type
------------------------------------------------------------------- -------- ---------------------------------------------
COL1 NUMBER
COL2 NUMBER
COL3 NUMBER
COL4 NUMBER
COL5 NUMBER
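For reference, the demo objects can be rebuilt roughly like this (a sketch consistent with the description above):
create table xyz (col1 number, col2 number, col3 number, col4 number, col5 number);
insert into xyz select level, level, level, level, level from dual connect by level <= 10000;
create index xyz_i1 on xyz(col1);
create index xyz_i2 on xyz(col2);
exec dbms_stats.gather_table_stats(user, 'XYZ');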
Now if I execute the query (select col1, col2 from xyz where col1=1 and col2=2 order by col1, col2), Oracle can pick either of the two paths below, and it cannot use both indexes -
a.) Table access by index ROWID on XYZ using an index range scan of XYZ_I1, or
b.) Table access by index ROWID on XYZ using an index range scan of XYZ_I2.
SQL> explain plan for select col1, col2 from xyz where col1=1 and col2=2 order by col1, col2;
Explained.
SQL> select * from table(dbms_xplan.display());
Plan hash value: 3398653176
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID| XYZ | 1 | 8 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | XYZ_I1 | 1 | | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------
But if you look closely at the WHERE clause and the columns being fetched, you will find that we are fetching only col1 and col2, whose data already exists in the indexes themselves. So is there a way to join these two indexes and retrieve the data from the indexes alone? Yes, there is, and this is what is called an index join.
SQL> explain plan for select /*+INDEX_JOIN(xyz XYZ_I1 XYZ_I2)*/ col1, col2 from xyz where col1=1 and col2=2 order by col1, col2;
Explained.
SQL> select * from table(dbms_xplan.display());
Plan hash value: 364115606
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 3 (34)| 00:00:01 |
|* 1 | VIEW | index$_join$_001 | 1 | 8 | 3 (34)| 00:00:01 |
|* 2 | HASH JOIN | | | | | |
|* 3 | INDEX RANGE SCAN| XYZ_I1 | 1 | 8 | 1 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN| XYZ_I2 | 1 | 8 | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
As you can clearly see from the explain plan, Oracle was able to fetch the required data by joining the two indexes, with no table access by index ROWID.
Another efficient approach is to create a composite index on (col1, col2) and fetch the data from the leaf blocks of a single index.
SQL> create index XYZ_I3 on XYZ(COL1, COL2);
Index created.
SQL> explain plan for select col1, col2 from xyz where col1=1 and col2=2 order by col1, col2;
Explained.
SQL> select * from table(dbms_xplan.display());
Plan hash value: 3398653124
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 1 (0)| 00:00:01 |
|* 1 | INDEX RANGE SCAN | XYZ_I3 | 1 | | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------
I hope this explains the concept of index joins and that you find this article useful. Please let me know if you have any questions.
Monday, December 8, 2014
Insert slows down with 'cell single block physical read' and 'read by other session' events
Recently I got a chance to work on a performance tuning issue with an application whose performance suddenly degraded on a database running on an Exadata machine. On investigation I found a lot of concurrent application sessions trying to insert into the same table, but spending most of their time on 'read by other session' and 'cell single block physical read' events. The strange thing was that the insert was a simple statement with no underlying select query and no triggers on the table being inserted into. So what could have caused an insert that earlier ran in fractions of a second to suddenly perform poorly?
I compared AWR snapshots for periods of good and bad performance. One thing was clear: on the day the performance issue started, the application was doing many more concurrent inserts on the same table. But should Oracle really limit the concurrency of an application? Yes, under certain scenarios.
1.) If you have a primary key on the table and the application tries to insert the same record, Oracle holds a row lock until the blocking session commits, after which the blocked session immediately gets a unique-constraint violation. This was not the case in our issue, as there were no symptoms of such row lock contention.
2.) An index leaf block becomes hot due to a right-hand growing index. This normally happens when you have an index on a sequence-populated column; such contention can be reduced by hash partitioning the index, since the hash algorithm divides the index into multiple separate trees and minimizes the contention. Even that was not our case, as there were no such symptoms.
3.) In an Exadata environment you would also check whether the storage cell disks are performing fine. We examined this, and there was no issue with the Exadata storage.
4.) GC buffer busy waits in a RAC environment can cause severe performance degradation when multiple sessions from different nodes request the same block. In RAC, if an instance doesn't find the buffer in its local SGA, the block has to be transferred from the holding instance's SGA through cache fusion. If the instance holding the block is busy, or the block is already in use, the requesting instance has to wait and 'gc buffer busy' shows up. I did not find any such high waits during the problem period.
Now we need to drill down into how Oracle works when a session requests to insert a record into a data block. Each time a session tries to insert a record, it has to find a block with enough free space to fit that record. Oracle maintains the list of free blocks in bitmap format in ASSM (freelists in MSSM) in the datafile header blocks. To speed up the search for free blocks, Oracle also maintains this information in the space search cache.
But what if a block marked as free doesn't contain enough free space to hold the record being inserted? The user process then spends time finding the right block, and every other session inserting into the same table ends up scanning the same blocks. If a block is not already in the buffer cache, they have to perform unnecessary physical reads; and if a block is currently being read into the buffer cache by another session, they have to wait until the read completes, which shows up as the 'read by other session' wait. In our case we finally found that the sporadic issue matched 'INSERT slow on ASSM (Doc ID 1263496.1)'.
By performing the steps below, I was able to establish that the issue occurred because of a bulk load, after which inserts from multiple sessions slowed down since they had to spend a long time finding free blocks with enough space.
1.) Find all the database sessions and instances for which the insert is slow. We need to enable trace for 2-3 such sessions to collect more diagnostic information and rule out any possibility of an Oracle bug. All the commands below need to be executed as the SYS user. Such sessions can be found with the query below, or the same can be retrieved using OEM:
SELECT sess.inst_id, sid, serial#, sess.sql_id, last_call_et,
       to_char(logon_time,'YYYY-MM-DD HH24:MI:SS') logon_since,
       event, status, blocking_session, blocking_instance,
       sess.program, sess.module, sess.machine
FROM gv$session sess, gv$sql sq
WHERE status = 'ACTIVE'
  AND username = '<owner>'
  AND sess.sql_id = sq.sql_id
  AND sess.inst_id = sq.inst_id
  AND upper(sq.sql_text) LIKE '%<OBJECT_NAME>%'
  AND sess.last_call_et > 1;
2.) Enable trace for one of the sessions undergoing poor performance (having lots of 'read by other session' and 'cell single block physical read' events):
select spid from gv$process where addr in
  (select paddr from gv$session where sid = <session_id> and inst_id = <instance_number>);
Log in as sysdba to the node where the above session is currently running and enable the ASSM trace using the commands below:
ORADEBUG SETOSPID <spid value from above query>;
ORADEBUG UNLIMIT;
ORADEBUG EVENT 10320 TRACE NAME CONTEXT FOREVER, LEVEL 3;
ORADEBUG EVENT 10612 TRACE NAME CONTEXT FOREVER, LEVEL 1;
ORADEBUG TRACEFILE_NAME;
Keep it active for around 2-3 minutes and then disable the trace using the commands below:
ORADEBUG EVENT 10320 TRACE NAME CONTEXT OFF;
ORADEBUG EVENT 10612 TRACE NAME CONTEXT OFF;
Exit;
Download the trace file whose name was shown by oradebug tracefile_name.
3.) Also collect an error stack for the session for which the insert is running slow. Log in as sysdba to the instance where the insert is slow and run the commands below:
sqlplus '/ as sysdba'
oradebug setospid <spid value>
oradebug unlimit
oradebug dump errorstack 3
!sleep 30
oradebug dump errorstack 1
!sleep 30
oradebug dump errorstack 1
!sleep 30
oradebug dump errorstack 1
!sleep 30
oradebug dump errorstack 1
!sleep 30
oradebug dump errorstack 1
!sleep 30
oradebug dump errorstack 1
!sleep 30
oradebug dump errorstack 3
!sleep 30
oradebug dump errorstack 1
!sleep 30
oradebug dump errorstack 1
!sleep 30
oradebug dump errorstack 1
!sleep 30
oradebug dump errorstack 1
!sleep 30
oradebug dump errorstack 1
!sleep 30
oradebug dump errorstack 1
!sleep 30
oradebug dump errorstack 3
oradebug tracefile_name;
quit;
4.) Run the block below as sysdba from any instance and capture the output:
set serveroutput on size unlimited
declare
  total_bl number;
  total_by number;
  unused_bl number;
  unused_by number;
  last_used_extent_file_id number;
  last_used_extent_block_id number;
  last_used_block number;
  unformatted_blocks number;
  unformatted_bytes number;
  fs1_bl number;
  fs1_by number;
  fs2_bl number;
  fs2_by number;
  fs3_bl number;
  fs3_by number;
  fs4_bl number;
  fs4_by number;
  full_bl number;
  full_by number;
begin
  dbms_space.unused_space('PDA01PRD', 'DM_SYSOBJECT_S', 'TABLE',
    total_bl, total_by, unused_bl, unused_by,
    last_used_extent_file_id, last_used_extent_block_id, last_used_block);
  dbms_space.space_usage('PDA01PRD', 'DM_SYSOBJECT_S', 'TABLE',
    unformatted_blocks, unformatted_bytes,
    fs1_bl, fs1_by, fs2_bl, fs2_by, fs3_bl, fs3_by, fs4_bl, fs4_by,
    full_bl, full_by);
  dbms_output.put_line('| total_bl| total_by| unused_bl| unused_by| last_used_extent_file_id| last_used_extent_block_id| last_used_block');
  dbms_output.put_line('| '||lpad(total_bl,8)||'|'||
    lpad(total_by,9)||'|'||
    lpad(unused_bl,10)||'|'||
    lpad(unused_by,10)||'|'||
    lpad(last_used_extent_file_id,25)||'|'||
    lpad(last_used_extent_block_id,26)||'|'||
    lpad(last_used_block,16));
  dbms_output.put_line(' .');
  dbms_output.put_line('| unformatted_blocks| unformatted_bytes| fs1_bl| fs1_by| fs2_bl| fs2_by| fs3_bl| fs3_by| fs4_bl| fs4_by| full_bl| full_by');
  dbms_output.put_line('| '||lpad(unformatted_blocks,18)||'|'||
    lpad(unformatted_bytes,18)||'|'||
    lpad(fs1_bl,7)||'|'||
    lpad(fs1_by,7)||'|'||
    lpad(fs2_bl,7)||'|'||
    lpad(fs2_by,7)||'|'||
    lpad(fs3_bl,7)||'|'||
    lpad(fs3_by,7)||'|'||
    lpad(fs4_bl,7)||'|'||
    lpad(fs4_by,7)||'|'||
    lpad(full_bl,8)||'|'||
    lpad(full_by,8));
end;
/
prompt FS1 means 0-25% free space within a block
prompt FS2 means 25-50% free space within a block
prompt FS3 means 50-75% free space within a block
prompt FS4 means 75-100% free space within a block
5.) If the value of FS1 is higher than FS2, FS3, and FS4, then the reason for the slowness could be a large number of blocks without enough space to accommodate the inserts. In this case, explicitly marking those blocks as full and increasing PCTFREE should resolve the issue. Increasing PCTFREE causes blocks to be marked as FULL at a higher rate, but it can also make the segment grow faster and waste space, so increase it to an optimum value after weighing performance against space usage.
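For example, PCTFREE could be raised like this (20 is just an illustrative value, and it only affects how blocks are filled going forward), before marking the existing blocks full with DBMS_REPAIR as shown next:
SQL> alter table PDA01PRD.DM_SYSOBJECT_S pctfree 20;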
EXECUTE DBMS_REPAIR.SEGMENT_FIX_STATUS('PDA01PRD','DM_SYSOBJECT_S');
The same can also be confirmed by examining the ASSM trace file, where blocks moving from the FS1 category to FULL can be clearly seen:
[ktspsrch] adddba dba:0x0100636b curl1:0x01006280 curl2:0x01002601
ktspfsrch: Returns: BlockDBA:0x0100637b
kdt_bseg_srch_cbk: examine dba=4.0x0100637b
kdt_bseg_srch_cbk: failed dba=4.0x0100637b avs=837 afs=0 tosp=837 full=1
ktspfupdst: state change tsn 4, objd 96400, objn 71401, blksz 8192
ktspfupdst: oldstate 2, newstate 1, delta bytes 1065
ktspfupdst: fdba: 0x01006280 offset: 251 nblks: 1 newstate: 1
kdt_bseg_srch_cbk: mark full   <-- the block is marked as full
ktspfsrch:Cbk didnot like 0x010063
6.) If the above is not the case, then it could be an issue with the space search cache, which can be determined by examining the stack trace taken in step 3.
Short stack dump:
ksedsts()+461<-ksdxfstk()+32<-ksdxcb()+1876<-sspuser()+112<-__sighandler()<-__poll()+47<-ssskgxp_poll()+40<-sskgxp_select()+263<-skgxpiwait()+3901<-skgxpwaiti()+1832<-skgxpwait()+162<-ossnet_wait_all()+241<-ossnet_wait()+146<-oss_wait()+991<-kfk_wait_on_oss_io()+406<-kfk_submit_oss_io_wtevt()+285<-kfk_submit_oss_io()+415<-kfk_submit_io()+105<-kfk_io1()+1068<-kfkRequest()+14<-kfk_transitIO()+1696<-kfioSubmitIO()+4777<-kfioRequestPriv()+199<-kfioRequest()+706<-ksfd_kfioRequest()+649<-ksfd_osmio()+1114<-ksfd_io()+2761<-ksfdread()+581<-kcfrbd1()+609<-kcbzib()+1888<-kcbgcur()+8274<-ktbgcur()+123<-ktspfsrch()+2569<-ktspscan_bmb()+898<-ktspgsp_main()+1817<-kdtgsp()+1576<-kdtgsph()+107<-kdtgrs()+1013<-kdtSimpleInsRow()+320<-qerltcSimpleSingleInsRowCBK()+46<-qerltcSingleRowLoad()+279<-qerltcFetch()+379<-insexe()+682<-opiexe()+5622<-kpoal8()+2076<-opiodr()+916<-ttcpip()+2210<-opitsk()+1717<-opiino()+966<-opiodr()+916<-opidrv()+570<-sou2o()+103<-opimai_real()+133<-ssthrdmain()+252<-main()+201<-__libc_start_main()+244<-_start()+36
If you see the above waits on the space search cache, then try disabling the space search cache and see if performance improves. This action should be taken only after consulting Oracle Support.
alter system set event="10262 trace name context forever, level 1";
Hope you find this article useful. Please let me know if you have any questions or suggestions.
References: https://support.oracle.com
Thursday, October 30, 2014
Migrate and upgrade an Oracle database using RMAN backup sets
You can use various methods to move and upgrade a database depending on its size and other constraints. If there are downtime constraints, you may want to consider Data Guard rolling upgrade methods, where a physical standby can be created in advance and the database upgraded using the transient logical standby method. I have explained this here on my blog:
http://azharkoracle.blogspot.com/2014/08/dataguard-rolling-upgrade-using.html
For very large databases, you may also consider using Business Continuity Volumes (snapshot backups) or split-mirror techniques for one-time instantiation and then configure GoldenGate or Oracle Streams for replication. Refer to my blog for configuring Oracle Streams replication:
http://azharkoracle.blogspot.com/2014/09/configuring-oracle-streams-replication.html
Another move-and-upgrade method that DBAs often consider for medium-sized databases is a cold backup, followed by upgrading the database on the destination server after starting it in upgrade mode. This time I tried using RMAN backup sets rather than relying on a cold backup, and it worked absolutely fine. I moved the RMAN backup sets of the 11.2.0.1 database TSTU and cloned it to the TST_CFG database on version 11.2.0.3.
Here are the high-level steps for upgrading and cloning the database from one Oracle version to another -
1.) Move the backup pieces of the source database to the desired location (/oracle/data/ora01/TST_CFG/exports) on the destination server.
2.) Create an instance with the same name as the source (TSTU in our case) and start it in nomount mode.
3.) Restore the control file from the backup piece, providing the exact location of the backup piece containing the control file (steps 2-6 are sketched after this list).
4.) Mount the database.
5.) Rename the redo files using alter database rename file commands.
E.g. alter database rename file '/oracle/data/ora01/TSTU/ora_log_01_01.rdo' to '/oracle/data/redo01/TST_CFG/ora_log_01_01.rdo';
6.) Catalog the backup pieces using catalog start with <backup_location>.
7.) Restore and recover after setting the file names to the new locations -
run {
allocate channel t1 type disk;
allocate channel t2 type disk;
set newname for tempfile 1 to '/oracle/data/ora02/TST_CFG/temp_t01_01.dbf';
set newname for tempfile 2 to '/oracle/data/ora02/TST_CFG/temporary_data_t01_01.dbf';
set newname for datafile 1 to '/oracle/data/ora02/TST_CFG/system_01.dbf';
set newname for datafile 2 to '/oracle/data/ora02/TST_CFG/sysaux_01.dbf';
set newname for datafile 3 to '/oracle/data/ora02/TST_CFG/undo_t01_01.dbf';
set newname for datafile 4 to '/oracle/data/ora02/TST_CFG/tools_t01_01.dbf';
set newname for datafile 5 to '/oracle/data/ora02/TST_CFG/users_t01_01.dbf';
set newname for datafile 6 to '/oracle/data/ora02/TST_CFG/USER_DATA_01.dbf';
set newname for datafile 7 to '/oracle/data/ora02/TST_CFG/USER_DATA_02.dbf';
set newname for datafile 8 to '/oracle/data/ora02/TST_CFG/INDEX_DATA_01.dbf';
set newname for datafile 9 to '/oracle/data/ora02/TST_CFG/USER_DATA_03.dbf';
set newname for datafile 10 to '/oracle/data/ora02/TST_CFG/lob_data_t01_01.dbf';
restore database;
switch datafile all;
recover database;
release channel t1;
release channel t2;
}
8.) Open the database with resetlogs in upgrade mode. Rename and recreate the tempfiles if they are not accessible.
9.) Run catupgrd.sql.
10.) Run utlu112s.sql.
11.) Run catuppst.sql.
12.) Shut down the database and change the database name using the DBNEWID (nid) utility. Follow the post-change steps, and voila, you have just finished cloning from an RMAN backup of the 11.2.0.1 database (TSTU) to the 11.2.0.3 database (TST_CFG).
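As referenced in step 3, here is a minimal sketch of steps 2-6 (the backup piece name is hypothetical):
export ORACLE_SID=TSTU
rman target /
RMAN> startup nomount;
RMAN> restore controlfile from '/oracle/data/ora01/TST_CFG/exports/ctl_TSTU.bkp';
RMAN> alter database mount;
RMAN> catalog start with '/oracle/data/ora01/TST_CFG/exports' noprompt;
And for step 12, the rename itself looks roughly like this, run against the database started in mount mode:
nid target=sys dbname=TST_CFG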
Thursday, September 11, 2014
Configuring Oracle streams replication
Oracle Streams can be considered an alternative to GoldenGate for configuring data replication. It can save you a considerable amount of downtime on huge database migrations. You can create a new database in advance and configure replication using Oracle Streams, even for cross-platform migrations. Once both databases are in sync, downtime can be coordinated with the business for re-pointing the applications. Here are the high-level steps for configuring Oracle Streams:
1.) Set up the Streams user in both the source and target databases with the required permissions. In our case the source database is STRM_SRC and the target database is STRM_TRG.
create table STREAMTST.STREAM_TST as select 1 dummy from dual;
create tablespace STREAMS_TBS
datafile '/oracle/data/ora04/STRM_TRG/streams_tbs_01.dbf' size 500M
LOGGING
ONLINE
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;
CREATE USER STRMADMIN IDENTIFIED BY Summerend2014
DEFAULT TABLESPACE STREAMS_TBS TEMPORARY TABLESPACE TEMP_T01;
alter user STRMADMIN quota unlimited on STREAMS_TBS;
alter user STRMADMIN default role all;
grant appl_connect to STRMADMIN;
grant dba to STRMADMIN;
EXEC DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
CREATE DIRECTORY STRMDIR AS '/exports/pmxexpdp';
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
GRANTEE => 'STRMADMIN',
FILE_NAME =>'streams_admin_privs.sql',
DIRECTORY_NAME => 'STRMDIR');
END;
/
2.) Set up the database link and queue table in the source database STRM_SRC after logging in as the STRMADMIN user -
CREATE DATABASE LINK STRM_TRG CONNECT TO STRMADMIN IDENTIFIED BY Summerend2014 USING 'STRM_TRG';
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(QUEUE_NAME=>'STREAMS_QUEUE',
QUEUE_TABLE=>'STREAMS_QUEUE_TABLE',
QUEUE_USER=>'STRMADMIN',
STORAGE_CLAUSE=>'TABLESPACE STREAMS_TBS');
END;
/
3.) Set up rules to capture transactions from the redo log for the particular schema -
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES (
SCHEMA_NAME =>'STREAMTST',
STREAMS_TYPE =>'CAPTURE',
STREAMS_NAME=>'STRMADMIN_CAPTURE',
QUEUE_NAME=>'STRMADMIN.STREAMS_QUEUE',
INCLUDE_DML=>TRUE,
INCLUDE_DDL=>TRUE);
END;
/
4.) Add propagation rules using the Oracle-provided PL/SQL procedure DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES:
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES (
SCHEMA_NAME=>'STREAMTST',
STREAMS_NAME=>'STRMADMIN_PROPAGATE',
SOURCE_QUEUE_NAME=>'STRMADMIN.STREAMS_QUEUE',
DESTINATION_QUEUE_NAME=> 'STRMADMIN.STREAMS_QUEUE@STRM_TRG',
SOURCE_DATABASE=>'STRM_SRC',
INCLUDE_DML=>TRUE,
INCLUDE_DDL=>TRUE,
QUEUE_TO_QUEUE=>TRUE);
END;
/
5.) Log in to the target database STRM_TRG as the Streams administrator and create the Streams apply queue:
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(QUEUE_NAME=>'STREAMS_QUEUE',
QUEUE_TABLE=>'STREAMS_QUEUE_TABLE',
QUEUE_USER=>'STRMADMIN',
STORAGE_CLAUSE=>'TABLESPACE STREAMS_TBS');
END;
/
6.) Connect as the Streams administrator, and add the schema apply rules:
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES (
SCHEMA_NAME =>'STREAMTST',
STREAMS_TYPE =>'APPLY',
STREAMS_NAME=>'STRMADMIN_APPLY',
QUEUE_NAME=>'STRMADMIN.STREAMS_QUEUE',
INCLUDE_DML=>TRUE,
INCLUDE_DDL=>TRUE);
END;
/
7.) Obtain the current SCN from the source database. To ensure that no data is lost, this step must be performed before the target copy is made. This SCN will be used as the starting point for the capture process to begin streaming.
set numwidth 18
SELECT TABLE_OWNER, TABLE_NAME, SCN from DBA_CAPTURE_PREPARED_TABLES;
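If DBA_CAPTURE_PREPARED_TABLES is still empty at this point, the current SCN can also be taken directly, and the same value is what you would feed to the consistent schema copy. A sketch assuming Data Pump is used for the copy (the dump file name is illustrative; STRMDIR is the directory created in step 1):
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
expdp strmadmin DIRECTORY=STRMDIR DUMPFILE=streamtst.dmp SCHEMAS=STREAMTST FLASHBACK_SCN=<scn>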
8.) After the data has been propagated, it needs to be applied to the appropriate schema. Identify the user in the target database that will apply the changes:
EXEC DBMS_APPLY_ADM.ALTER_APPLY(apply_name=>'STRMADMIN_APPLY',apply_user=>'STRMADMIN');
9.) Set the SCN on the target database so that the apply process starts schema- and table-level instantiation at this SCN.
BEGIN
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN (
source_schema_name=> 'STREAMTST',
source_database_name=>'STRM_SRC',
instantiation_scn=> 12115252424484);
END;
/
BEGIN DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name=> 'STREAMTST.STREAM_TST',
source_database_name=>'STRM_SRC',
instantiation_SCN=>12115252424484);
END;
/
10.) Start the apply process using the following PL/SQL call:
EXEC DBMS_APPLY_ADM.START_APPLY( apply_name => 'STRMADMIN_APPLY');
11.) Enable the propagation schedule at the source database. Connect as the Streams administrator and execute the following PL/SQL procedure:
EXEC DBMS_PROPAGATION_ADM.START_PROPAGATION('STRMADMIN_PROPAGATE') ;
12.) The following procedure starts the capture process, which mines the redo logs and enqueues the mined redo information into the associated queue:
EXEC DBMS_CAPTURE_ADM.START_CAPTURE(capture_name=>'STRMADMIN_CAPTURE');
Now test whether the changes are being replicated. First, perform some transactions in the source database:
STRM_SRC> truncate table STREAMTST.stream_tst;
Table truncated.
STRM_SRC>
STRM_SRC> create table STREAMTST.stream_tst01 as select 1 dummy from dual;
Table created.
STRM_SRC> insert into STREAMTST.stream_tst (select * from STREAMTST.stream_tst);
0 rows created.
STRM_SRC> insert into STREAMTST.stream_tst01 (select * from STREAMTST.stream_tst01);
1 row created.
STRM_SRC> /
2 rows created.
STRM_SRC> /
4 rows created.
STRM_SRC> /
8 rows created.
STRM_SRC> /
16 rows created.
STRM_SRC> /
32 rows created.
STRM_SRC> commit;
Commit complete.
STRM_SRC> insert into STREAMTST.stream_tst01 (select * from STREAMTST.stream_tst01);
64 rows created.
STRM_SRC> /
128 rows created.
STRM_SRC> /
256 rows created.
STRM_SRC> /
512 rows created.
STRM_SRC> commit;
Commit complete.
STRM_SRC> select count(1) from STREAMTST.stream_tst01;
COUNT(1)
----------
1024
STRM_SRC> select count(1) from STREAMTST.stream_tst;
COUNT(1)
----------
0
Now check in the target database:
SQL> select count(1) from STREAMTST.STREAM_TST;
COUNT(1)
----------
8192
SQL> /
COUNT(1)
----------
0
SQL> /
COUNT(1)
----------
0
SQL> select count(1) from STREAMTST.STREAM_TST01;
COUNT(1)
----------
1
SQL>
SQL> /
COUNT(1)
----------
64
SQL> /
COUNT(1)
----------
1024
SQL>
Successive counts on the target converge to the source values as the capture, propagation, and apply processes catch up.
Here are some useful monitoring scripts which can be used for day-to-day Streams troubleshooting:
To check which rules are implemented for Oracle Streams replication on the source database, query the following dictionary views:
select * from dba_streams_rules;
select * from dba_streams_schema_rules;
select * from dba_streams_table_rules;
--Check the state of the apply process and make sure it is enabled:
SELECT apply_name, apply_captured, status, rule_set_name
FROM DBA_APPLY;
--Check whether the apply process is dequeuing events:
SELECT APPLY_NAME, STATE, TOTAL_MESSAGES_DEQUEUED
FROM V$STREAMS_APPLY_READER;
--Verify the destination queue is correct
SELECT PROPAGATION_NAME, DESTINATION_QUEUE_OWNER ||'.'||
DESTINATION_QUEUE_NAME ||'@'|| DESTINATION_DBLINK "Destination Queue" ,ERROR_MESSAGE,ERROR_DATE
FROM DBA_PROPAGATION;
--Check the propagation schedule time and check for failures
SELECT TO_CHAR(s.START_DATE, 'HH24:MI:SS MM/DD/YY') START_DATE,
s.PROPAGATION_WINDOW, s.NEXT_TIME, s.LATENCY,
DECODE(s.SCHEDULE_DISABLED,'Y','Disabled','N','Enabled') SCHEDULE_DISABLED,
s.PROCESS_NAME, s.FAILURES
FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
WHERE p.DESTINATION_DBLINK = s.DESTINATION
AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
AND s.QNAME = p.SOURCE_QUEUE_NAME;
--Query the total number of events propagated:
SELECT SCHEMA, QNAME, DESTINATION, TOTAL_NUMBER
FROM DBA_QUEUE_SCHEDULES;
--Check that the proper rules are created at global, schema, and table level for each process (capture, propagation, apply):
SELECT rule_name,rule_set_rule_enabled
FROM dba_rule_set_rules
WHERE rule_set_name = '<rule_set_name>'
and rule_name = '<rule_name>';
SELECT rule_name,rule_condition
FROM dba_rules
WHERE rule_name = '<rule_name>';
-- Check whether the apply process is dequeuing events:
SELECT APPLY_NAME, STATE, TOTAL_MESSAGES_DEQUEUED
FROM V$STREAMS_APPLY_READER;
--Query the total number of events propagated:
SELECT SCHEMA, QNAME, DESTINATION, TOTAL_NUMBER
FROM DBA_QUEUE_SCHEDULES;
--Check the apply process latency:
SELECT (hwm_time-hwm_message_create_time)* 86400 "Latency in Seconds",
hwm_message_create_time "Event Creation",
hwm_time "Apply Time",
hwm_message_number "Applied Message #"
FROM v$STREAMS_APPLY_COORDINATOR;
--Check the Error Queue:
SELECT apply_name, source_database,local_transaction_id, error_message
FROM DBA_APPLY_ERROR;
--Check for any defined conflict resolution:
COLUMN object_owner FORMAT a15
COLUMN object_name FORMAT a15
COLUMN method_name FORMAT a20
COLUMN column_name FORMAT a15
COLUMN resolution_column FORMAT a60
SELECT object_owner,object_name,method_name,resolution_column,column_name
FROM dba_apply_conflict_columns;
--Check object instantiated SCN:
SELECT * FROM dba_apply_instantiated_objects;
EXEC DBMS_APPLY_ADM.SET_KEY_COLUMNS('<owner>.<table_name>','<col1>,<col2>,....');
EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS;
EXEC DBMS_APPLY_ADM.START_APPLY('<apply name>');
--- stream capture status
select
capture_name,
capture_message_number,
enqueue_message_number,
state,
total_prefilter_discarded,
total_prefilter_kept,
total_messages_captured,
total_messages_enqueued
from gv$streams_capture;
set pagesize 1000
col first_scn format 999999999999999999
col next_scn format 999999999999999999
alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
select source_database,thread#,sequence#,name,modified_time,first_scn,next_scn,dictionary_begin,dictionary_end
from dba_registered_archived_log where &scn between first_scn and next_scn;
--stream apply status - correlated with the above query
select apply_name,state,
total_messages_dequeued,
total_messages_spilled,
dequeued_message_number
from gv$streams_apply_reader@orclstream;
------- *************** streams monitor script ***********************
COLUMN CAPTURE_NAME FORMAT a15
COLUMN QUEUE_NAME FORMAT a15
COLUMN PROPAGATION_NAME FORMAT a15
COLUMN STATUS FORMAT a15
COLUMN apply_name FORMAT a15
COLUMN error_message FORMAT a35
prompt Redo Log Scanning Latency for Each Capture Process
SELECT CAPTURE_NAME,
((SYSDATE - CAPTURE_MESSAGE_CREATE_TIME)*86400) LATENCY_SECONDS,
((SYSDATE - CAPTURE_TIME)*86400) LAST_STATUS,
TO_CHAR(CAPTURE_TIME, 'HH24:MI:SS MM/DD/YY') CAPTURE_TIME,
TO_CHAR(CAPTURE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_TIME
FROM V$STREAMS_CAPTURE;
prompt "Capture process that are disabled"
SELECT CAPTURE_NAME, QUEUE_NAME, RULE_SET_NAME, NEGATIVE_RULE_SET_NAME, STATUS
FROM DBA_CAPTURE where capture_name='' and status<>'ENABLED';
prompt "Stream capture process that are not runing normally (state is not ok)"
select state from V$STREAMS_CAPTURE where
state in ('SHUTTING DOWN','INITIALIZING','ABORTING','PAUSED FOR FLOW
CONTROL','DICTIONARY INITIALIZATION');
prompt "if a subscriber has a high count
for total_spilled_msg, then that subscriber is not dequeuing messages
fast enough from the queue buffer. Spilling messages to disk has a
negative impact on the performance of your Streams environment."
--prompt "check scn"
--select * from DBA_APPLY_INSTANTIATED_OBJECTS@ORCLSTREAM;
prompt "Errors in apply? Is the apply process running? Remember to re-execute pending errors before restarting the apply"
--First check sequence
select apply_name,queue_name,status,error_number,error_message
from dba_apply@orclstream
where status<>'ENABLED' OR error_number is not null ;
-- if there are errors in the above query, check the query below for details
--select * from dba_apply_error@orclstream ;
--execute DBMS_APPLY_ADM.EXECUTE_All_ERRORS;
--select * from dba_apply_error
--execute DBMS_APPLY_ADM.START_APPLY(apply_name => 'APPLY_MTS');
--select * from DBA_APPLY_PROGRESS
prompt "errors in capture process or disabled capture processes"
select CAPTURE_NAME,QUEUE_NAME,STATUS,CAPTURED_SCN,APPLIED_SCN,ERROR_NUMBER,ERROR_MESSAGE
FROM DBA_CAPTURE
where status<>'ENABLED' OR CAPTURED_SCN-APPLIED_SCN<>0 OR ERROR_NUMBER IS NOT NULL;
---- setting trace on for the capture process:
-----execute dbms_capture_adm.set_parameter('&CAPTURE_NAME','trace_level','2');
prompt "Displaying Information About Propagations that Send Buffered Messages"
SELECT p.PROPAGATION_NAME,
s.QUEUE_SCHEMA,
s.QUEUE_NAME,
s.DBLINK,
s.SCHEDULE_STATUS,
P.STATUS,
P.ERROR_MESSAGE,
P.ERROR_DATE
FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
WHERE p.DESTINATION_DBLINK = s.DBLINK AND
p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
p.SOURCE_QUEUE_NAME = s.QUEUE_NAME
and (p.status<>'ENABLED' OR
s.schedule_status<>'SCHEDULE ENABLED' OR ERROR_MESSAGE IS NOT NULL);
--
prompt "Displaying the Schedule for a Propagation Job"
SELECT DISTINCT
p.PROPAGATION_NAME,
TO_CHAR(s.START_DATE, 'HH24:MI:SS MM/DD/YY') START_DATE,
s.PROPAGATION_WINDOW,
s.NEXT_TIME,
s.LATENCY,
DECODE(s.SCHEDULE_DISABLED,
'Y', 'Disabled',
'N', 'Enabled') SCHEDULE_DISABLED,
s.PROCESS_NAME,
s.FAILURES
FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
WHERE
p.DESTINATION_DBLINK = s.DESTINATION
AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
AND s.QNAME = p.SOURCE_QUEUE_NAME
and (s.SCHEDULE_DISABLED='Y'
or S.FAILURES>0);
prompt "Apply process errors:"
SELECT s.SUBSCRIBER_NAME,
q.QUEUE_SCHEMA,
q.QUEUE_NAME,
s.LAST_DEQUEUED_SEQ,
s.NUM_MSGS,
s.TOTAL_SPILLED_MSG ,
a.error_number,
a.error_message ,
a.status
FROM V$BUFFERED_QUEUES@orclstream q, V$BUFFERED_SUBSCRIBERS@orclstream s, DBA_APPLY@orclstream a
WHERE q.QUEUE_ID = s.QUEUE_ID AND
s.SUBSCRIBER_ADDRESS IS NULL AND
s.SUBSCRIBER_NAME = a.APPLY_NAME
and (s.total_spilled_msg>0 or A.status<>'ENABLED' OR A.ERROR_NUMBER IS NOT NULL);
prompt "Propagations Dequeuing Messages from Each Buffered Queue"
SELECT p.PROPAGATION_NAME,
s.SUBSCRIBER_ADDRESS,
s.CURRENT_ENQ_SEQ,
s.LAST_BROWSED_SEQ,
s.LAST_DEQUEUED_SEQ,
s.NUM_MSGS,
s.TOTAL_SPILLED_MSG
FROM DBA_PROPAGATION p, V$BUFFERED_SUBSCRIBERS s, V$BUFFERED_QUEUES q
WHERE q.QUEUE_ID = s.QUEUE_ID AND
p.SOURCE_QUEUE_OWNER = q.QUEUE_SCHEMA AND
p.SOURCE_QUEUE_NAME = q.QUEUE_NAME AND
p.DESTINATION_DBLINK = s.SUBSCRIBER_ADDRESS
and s.TOTAL_SPILLED_MSG>0;
prompt "The next 2 queries should return the same number of enqueue_message and dequeued_message (respectively)"
select
capture_name, state,
capture_message_number,
enqueue_message_number,
total_prefilter_discarded,
total_prefilter_kept,
total_messages_captured,
total_messages_enqueued
from gv$streams_capture;
--stream apply status - correlated with the above query
select apply_name,state,
total_messages_dequeued,
total_messages_spilled,
dequeued_message_number
from gv$streams_apply_reader@orclstream;
------- *************** end streams monitor script ***********************
SELECT * FROM DBA_LOGMNR_PURGED_LOG;
SELECT * FROM DBA_CAPTURE_PARAMETERS;
SELECT * FROM DBA_CAPTURE_EXTRA_ATTRIBUTES;
SELECT * FROM DBA_QUEUE_TABLES order by owner, queue_table;
SELECT * FROM DBA_QUEUES order by owner;
SELECT * FROM DBA_APPLY_DML_HANDLERS;
select * from DBA_QUEUE_SCHEDULES;
select * from DBA_STREAMS_COLUMNS;
select * from DBA_STREAMS_ADMINISTRATOR;
select * from DBA_STREAMS_RULES;
select * from DBA_STREAMS_TABLE_RULES;
select * from SYS.DBA_STREAMS_UNSUPPORTED;
-- join all the rule and propagation info on the capture (source) database
SELECT * FROM DBA_STREAMS_RULES dsr
inner join dba_propagation dp on dp.rule_set_name=dsr.rule_set_name;
SELECT distinct
destination_queue_name,replace(object_name,'PRS_DEPENDECY','PRS_DEPENDENCY')
object_name,replace(streams_name,'PROPAGATION','APPLY') STREAM_NAME
FROM DBA_STREAMS_RULES dsr
inner join dba_propagation dp on dp.rule_set_name=dsr.rule_set_name
INNER JOIN DBA_CAPTURE_PREPARED_TABLES DCPT ON dcpt.table_name=replace(object_name,'PRS_DEPENDECY','PRS_DEPENDENCY');
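-- start (or restart) every propagation in one shot, e.g. after maintenance: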
begin
for rc in (
select propagation_name from dba_propagation)
loop
DBMS_PROPAGATION_ADM.START_PROPAGATION(rc.propagation_name);
end loop;
end;
/
/************ Deleting invalid Oracle Streams rules ****************/
DBMS_STREAMS_ADM.REMOVE_RULE(
rule_name IN VARCHAR2,
streams_type IN VARCHAR2,
streams_name IN VARCHAR2,
drop_unused_rule IN BOOLEAN DEFAULT TRUE,
inclusion_rule IN BOOLEAN DEFAULT TRUE);
begin
dbms_streams_adm.remove_rule(
rule_name => '<rule_name>',
streams_type => '<streams_type>',
streams_name => '<streams_name>');
end;
/
/********************************************************/
Other useful scripts:
View Message Notification:
COLUMN STREAMS_NAME HEADING 'Messaging|Client' FORMAT A10
COLUMN QUEUE_OWNER HEADING 'Queue|Owner' FORMAT A5
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A20
COLUMN NOTIFICATION_TYPE HEADING 'Notification|Type' FORMAT A15
COLUMN NOTIFICATION_ACTION HEADING 'Notification|Action' FORMAT A25
SELECT STREAMS_NAME,
QUEUE_OWNER,
QUEUE_NAME,
NOTIFICATION_TYPE,
NOTIFICATION_ACTION
FROM DBA_STREAMS_MESSAGE_CONSUMERS
WHERE NOTIFICATION_TYPE IS NOT NULL;
Determine the consumer of each message in a persistent queue:
COLUMN MSG_ID HEADING 'Message ID' FORMAT 9999
COLUMN MSG_STATE HEADING 'Message State' FORMAT A13
COLUMN CONSUMER_NAME HEADING 'Consumer' FORMAT A30
SELECT MSG_ID, MSG_STATE, CONSUMER_NAME FROM AQ$OE_Q_TABLE_ANY;
View the contents of messages in a persistent queue:
SELECT qt.user_data.AccessNumber() "Numbers in Queue"
FROM strmadmin.oe_q_table_any qt;
SELECT qt.user_data.AccessVarchar2() "Varchar2s in Queue"
FROM strmadmin.oe_q_table_any qt;
Monitoring buffered queues:
Buffered queues enable Oracle databases to optimize message handling by storing messages in the SGA instead of always storing them in a queue table. Captured LCRs are always stored in buffered queues, but other types of messages can be stored in buffered queues or persistently in queue tables. Messages in a buffered queue can spill from memory if they have been staged in the buffered queue for a period of time without being dequeued, or if there is not enough space in memory to hold all of the messages. Messages that spill from memory are stored in the appropriate queue table.
Determine the number of messages in each buffered queue:
COLUMN QUEUE_SCHEMA HEADING 'Queue Owner' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A15
COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999
SELECT QUEUE_SCHEMA,
QUEUE_NAME,
(NUM_MSGS - SPILL_MSGS) MEM_MSG,
SPILL_MSGS,
NUM_MSGS
FROM V$BUFFERED_QUEUES;
View the capture process publishing LCRs into each buffered queue:
COLUMN SENDER_NAME HEADING 'Capture|Process' FORMAT A10
COLUMN SENDER_ADDRESS HEADING 'Sender Queue' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue Name' FORMAT A10
COLUMN CNUM_MSGS HEADING 'Number|of LCRs|Enqueued' FORMAT 99999999
COLUMN LAST_ENQUEUED_MSG HEADING 'Last|Enqueued|LCR' FORMAT 9999999999
COLUMN MEMORY_USAGE HEADING 'Percent|Streams|Pool|Used' FORMAT 999
COLUMN PUBLISHER_STATE HEADING 'Publisher|State' FORMAT A10
SELECT SENDER_NAME,
SENDER_ADDRESS,
QUEUE_NAME,
CNUM_MSGS,
LAST_ENQUEUED_MSG,
MEMORY_USAGE,
PUBLISHER_STATE
FROM V$BUFFERED_PUBLISHERS;
Display information about propagations that send buffered messages:
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A10
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A10
COLUMN SCHEDULE_STATUS HEADING 'Schedule Status' FORMAT A20
SELECT p.PROPAGATION_NAME,
s.QUEUE_SCHEMA,
s.QUEUE_NAME,
s.DBLINK,
s.SCHEDULE_STATUS
FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
WHERE p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
p.SOURCE_QUEUE_NAME = s.QUEUE_NAME AND
p.DESTINATION_QUEUE_OWNER = s.DST_QUEUE_SCHEMA AND
p.DESTINATION_QUEUE_NAME = s.DST_QUEUE_NAME;
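Display the total number of messages and bytes sent by each propagation: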
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A15
COLUMN DBLINK HEADING 'Database|Link' FORMAT A20
COLUMN TOTAL_MSGS HEADING 'Total|Messages' FORMAT 99999999
COLUMN TOTAL_BYTES HEADING 'Total|Bytes' FORMAT 999999999999
SELECT p.PROPAGATION_NAME,
s.QUEUE_NAME,
s.DBLINK,
s.TOTAL_MSGS,
s.TOTAL_BYTES
FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
WHERE p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
p.SOURCE_QUEUE_NAME = s.QUEUE_NAME AND
p.DESTINATION_QUEUE_OWNER = s.DST_QUEUE_SCHEMA AND
p.DESTINATION_QUEUE_NAME = s.DST_QUEUE_NAME;
Displaying Performance Statistics for Propagations that Send Buffered Messages:
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A15
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A13
COLUMN DBLINK HEADING 'Database|Link' FORMAT A9
COLUMN ELAPSED_DEQUEUE_TIME HEADING 'Dequeue|Time' FORMAT 99999999.99
COLUMN ELAPSED_PICKLE_TIME HEADING 'Pickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_PROPAGATION_TIME HEADING 'Propagation|Time' FORMAT 99999999.99
SELECT p.PROPAGATION_NAME,
s.QUEUE_NAME,
s.DBLINK,
(s.ELAPSED_DEQUEUE_TIME / 100) ELAPSED_DEQUEUE_TIME,
(s.ELAPSED_PICKLE_TIME / 100) ELAPSED_PICKLE_TIME,
(s.ELAPSED_PROPAGATION_TIME / 100) ELAPSED_PROPAGATION_TIME
FROM DBA_PROPAGATION p, V$PROPAGATION_SENDER s
WHERE p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
p.SOURCE_QUEUE_NAME = s.QUEUE_NAME AND
p.DESTINATION_QUEUE_OWNER = s.DST_QUEUE_SCHEMA AND
p.DESTINATION_QUEUE_NAME = s.DST_QUEUE_NAME;
Viewing the Propagations Dequeuing Messages from Each Buffered Queue:
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A11
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A5
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A5
COLUMN SUBSCRIBER_ADDRESS HEADING 'Subscriber|Address' FORMAT A15
COLUMN STARTUP_TIME HEADING 'Startup|Time' FORMAT A9
COLUMN CNUM_MSGS HEADING 'Cumulative|Messages' FORMAT 99999999
COLUMN TOTAL_DEQUEUED_MSG HEADING 'Total|Messages' FORMAT 99999999
COLUMN LAST_DEQUEUED_NUM HEADING 'Last|Dequeued|Message|Number' FORMAT 99999999
SELECT p.PROPAGATION_NAME,
s.QUEUE_SCHEMA,
s.QUEUE_NAME,
s.SUBSCRIBER_ADDRESS,
s.STARTUP_TIME,
s.CNUM_MSGS,
s.TOTAL_DEQUEUED_MSG,
s.LAST_DEQUEUED_NUM
FROM DBA_PROPAGATION p, V$BUFFERED_SUBSCRIBERS s
WHERE p.SOURCE_QUEUE_OWNER = s.QUEUE_SCHEMA AND
p.SOURCE_QUEUE_NAME = s.QUEUE_NAME AND
p.PROPAGATION_NAME = s.SUBSCRIBER_NAME AND
s.SUBSCRIBER_ADDRESS LIKE '%' || p.DESTINATION_DBLINK;
Displaying Performance Statistics for Propagations That Receive Buffered Messages:
COLUMN SRC_QUEUE_NAME HEADING 'Source|Queue|Name' FORMAT A20
COLUMN SRC_DBNAME HEADING 'Source|Database' FORMAT A20
COLUMN ELAPSED_UNPICKLE_TIME HEADING 'Unpickle|Time' FORMAT 99999999.99
COLUMN ELAPSED_RULE_TIME HEADING 'Rule|Evaluation|Time' FORMAT 99999999.99
COLUMN ELAPSED_ENQUEUE_TIME HEADING 'Enqueue|Time' FORMAT 99999999.99
SELECT SRC_QUEUE_NAME,
SRC_DBNAME,
(ELAPSED_UNPICKLE_TIME / 100) ELAPSED_UNPICKLE_TIME,
(ELAPSED_RULE_TIME / 100) ELAPSED_RULE_TIME,
(ELAPSED_ENQUEUE_TIME / 100) ELAPSED_ENQUEUE_TIME
FROM V$PROPAGATION_RECEIVER;
Viewing the Apply Processes Dequeuing Messages from Each Buffered Queue:
COLUMN SUBSCRIBER_NAME HEADING 'Apply Process' FORMAT A16
COLUMN QUEUE_SCHEMA HEADING 'Queue|Owner' FORMAT A5
COLUMN QUEUE_NAME HEADING 'Queue|Name' FORMAT A5
COLUMN STARTUP_TIME HEADING 'Startup|Time' FORMAT A9
COLUMN CNUM_MSGS HEADING 'Cumulative|Messages' FORMAT 99999999
COLUMN TOTAL_DEQUEUED_MSG HEADING 'Number of|Dequeued|Messages'
FORMAT 99999999
COLUMN LAST_DEQUEUED_NUM HEADING 'Last|Dequeued|Message|Number' FORMAT 99999999
SELECT s.SUBSCRIBER_NAME,
q.QUEUE_SCHEMA,
q.QUEUE_NAME,
s.STARTUP_TIME,
s.CNUM_MSGS,
s.TOTAL_DEQUEUED_MSG,
s.LAST_DEQUEUED_NUM
FROM V$BUFFERED_QUEUES q, V$BUFFERED_SUBSCRIBERS s, DBA_APPLY a
WHERE q.QUEUE_ID = s.QUEUE_ID AND
s.SUBSCRIBER_ADDRESS IS NULL AND
s.SUBSCRIBER_NAME = a.APPLY_NAME;