Analyzing a case of improving SQL performance by adding a local partitioned index
The SQL is as follows:
select /*+ parallel(src, 8) */ distinct
src.systemname as systemname
, src.databasename as databasename
, src.tablename as tablename
, src.username as username
from meta_dbql_table_usage_exp_hst src
inner join dr_qry_log_exp_hst rl on
src.acctstringdate = rl.acctstringdate
and src.queryid = rl.queryid
and src.systemname = rl.systemname
and src.acctstringdate > sysdate - 30
and rl.acctstringdate > sysdate - 30
inner join meta_dr_qry_log_tgt_all_hst tgt on
upper(tgt.systemname) = upper('mozart')
and upper(tgt.databasename) = upper('gdw_tables')
and upper(tgt.tablename) = upper('ssa_slng_lstg_mtrc_sd')
and src.acctstringdate = tgt.acctstringdate
and rl.statement_id = tgt.statement_id
and rl.systemname = tgt.systemname
and tgt.acctstringdate > sysdate - 30
and not(
upper(tgt.systemname)=upper(src.systemname)
and
upper(tgt.databasename) = upper(src.databasename)
and
upper(tgt.tablename) = upper(src.tablename)
)
and tgt.systemname is not null
and tgt.databasename is not null
and tgt.tablename is not null
A brief analysis of the SQL
Overall, this SQL is just an inner join of three tables (meta_dbql_table_usage_exp_hst, dr_qry_log_exp_hst, meta_dr_qry_log_tgt_all_hst). All three tables hold data on the order of millions of rows, and all are partitioned tables with acctstringdate as the partition key. The execution plan is as follows:
------------------------------------------------------------------------------------------------------------------------
| id | operation | name | rows | bytes | cost | pstart| pstop |
------------------------------------------------------------------------------------------------------------------------
| 0 | select statement | | 1 | 159 | 8654 | | |
| 1 | px coordinator | | | | | | |
| 2 | px send qc (random) | :tq10002 | 1 | 159 | 8654 | | |
| 3 | sort unique | | 1 | 159 | 8654 | | |
| 4 | px receive | | 1 | 36 | 3 | | |
| 5 | px send hash | :tq10001 | 1 | 36 | 3 | | |
|* 6 | table access by local index rowid| dr_qry_log_exp_hst | 1 | 36 | 3 | | |
| 7 | nested loops | | 1 | 159 | 8633 | | |
| 8 | nested loops | | 8959 | 1076k| 4900 | | |
| 9 | buffer sort | | | | | | |
| 10 | px receive | | | | | | |
| 11 | px send broadcast | :tq10000 | | | | | |
| 12 | partition range iterator | | 1 | 56 | 4746 | key | 14 |
|* 13 | table access full | meta_dr_qry_log_tgt_all_hst | 1 | 56 | 4746 | key | 14 |
| 14 | px block iterator | | 8959 | 586k| 154 | key | key |
|* 15 | table access full | meta_dbql_table_usage_exp_hst | 8959 | 586k| 154 | key | key |
| 16 | partition range iterator | | 1 | | 2 | key | key |
|* 17 | index range scan | dr_qry_log_exp_hst_idx | 1 | | 2 | key | key |
------------------------------------------------------------------------------------------------------------------------
predicate information (identified by operation id):
---------------------------------------------------
6 - filter("rl"."statement_id"="tgt"."statement_id" and "rl"."systemname"="tgt"."systemname" and "src"."systemname"="rl"."systemname")
13 - filter(upper("tgt"."systemname")='mozart' and upper("tgt"."databasename")='gdw_tables' and
upper("tgt"."tablename")='ssa_slng_lstg_mtrc_sd' and "tgt"."acctstringdate">sysdate@!-30 and "tgt"."systemname" is not null
"tgt"."databasename" is not null and "tgt"."tablename" is not null)
15 - filter("src"."acctstringdate"="tgt"."acctstringdate" and (upper("tgt"."systemname")<>upper("src"."systemname") or
upper("tgt"."databasename")<>upper("src"."databasename") or upper("tgt"."tablename")<>upper("src"."tablename")) and
"src"."acctstringdate">sysdate@!-30)
17 - access("src"."queryid"="rl"."queryid" and "src"."acctstringdate"="rl"."acctstringdate")
filter("rl"."acctstringdate">sysdate@!-30)
Locating the problem
From the join methods in the execution plan above, we can see that these three tables are joined with two nested loops, and the problem lies in the innermost nested loop, where both tables are accessed with full table scans. Since the tables hold millions of rows (and the data volume is still considerable even after filtering), the performance problem is that the inner table (i.e. the driven table), meta_dbql_table_usage_exp_hst, is full-scanned far too many times. If that full table scan can be converted into an index access, performance can improve dramatically.
Here is a brief introduction to nested loops:
A nested loops join divides the data sets to be processed into an outer loop (the driving row source) and an inner loop (the driven row source). The outer loop executes only once; the inner loop executes once for each row returned by the outer loop.
The advantage of this join method is that it uses very little memory.
This join method is efficient only when the driving row source is small and the driven table has a suitable index on the join columns.
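To make this concrete, below is a rough PL/SQL-style sketch of what a nested loops join does conceptually. It is not the optimizer's actual implementation, and the join columns are simplified from the query above:
begin
  -- outer loop: runs once over the driving row source
  for outer_row in (select statement_id, acctstringdate
                    from meta_dr_qry_log_tgt_all_hst
                    where acctstringdate > sysdate - 30) loop
    -- inner loop: probed once per driving row; cheap only if an index on
    -- (statement_id, acctstringdate) supports the lookup, otherwise each
    -- probe has to scan the driven table in full
    for inner_row in (select queryid
                      from dr_qry_log_exp_hst
                      where statement_id = outer_row.statement_id
                      and acctstringdate = outer_row.acctstringdate) loop
      null; -- each matching pair is passed up to the parent operation
    end loop;
  end loop;
end;
/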
Here are the indexes on these three tables:
sql> select index_name, table_name from user_indexes where table_name in ('dr_qry_log_exp_hst', upper('meta_dbql_table_usage_exp_hst'), upper('meta_dr_qry_log_tgt_all_hst'));
index_name table_name
------------------------------------------------------------ ------------------------------------------------------------
meta_dr_qry_log_tgt_all_idx meta_dr_qry_log_tgt_all_hst
meta_dbql_tusage_ehst_idx meta_dbql_table_usage_exp_hst
dr_qry_log_exp_hst_idx dr_qry_log_exp_hst
create index "gv"."meta_dr_qry_log_tgt_all_idx" on "gv"."meta_dr_qry_log_tgt_all_hst" ("statement_id", "acctstringdate")
create index "gv"."meta_dbql_tusage_ehst_idx" on "gv"."meta_dbql_table_usage_exp_hst" ("queryid", "acctstringdate")
create index "gv"."dr_qry_log_exp_hst_idx" on "gv"."dr_qry_log_exp_hst" ("queryid", "acctstringdate")
All three indexes are local partitioned indexes (each includes the partition key acctstringdate). Clearly, the dr_qry_log_exp_hst table is missing an index: it joins to meta_dr_qry_log_tgt_all_hst on statement_id, so a local partitioned index should also be created on its statement_id column, as follows:
create index dr_qry_log_exp_hst_idx2 on gv.dr_qry_log_exp_hst (statement_id,acctstringdate) local;
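After creating the index, it is usually worth refreshing optimizer statistics so the new access path is costed properly. A minimal sketch with dbms_stats (the owner and index names assume standard upper-case identifier storage; adjust the case if the objects were created with quoted lower-case names):
exec dbms_stats.gather_index_stats(ownname => 'GV', indname => 'DR_QRY_LOG_EXP_HST_IDX2');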
Performance comparison
The new execution plan is as follows:
------------------------------------------------------------------------------------------------------------------------
| id | operation | name | rows | bytes | cost | pstart| pstop |
------------------------------------------------------------------------------------------------------------------------
| 0 | select statement | | 1 | 159 | 4838 | | |
| 1 | sort unique | | 1 | 159 | 4838 | | |
|* 2 | table access by local index rowid | meta_dbql_table_usage_exp_hst | 1 | 67 | 3 | | |
| 3 | nested loops | | 1 | 159 | 4816 | | |
| 4 | nested loops | | 18 | 1656 | 4762 | | |
| 5 | partition range iterator | | 1 | 56 | 4746 | key | 14 |
|* 6 | table access full | meta_dr_qry_log_tgt_all_hst | 1 | 56 | 4746 | key | 14 |
| 7 | partition range iterator | | 18 | 648 | 16 | key | 14 |
|* 8 | table access by local index rowid| dr_qry_log_exp_hst | 18 | 648 | 16 | key | 14 |
|* 9 | index range scan | dr_qry_log_exp_hst_idx2 | 31 | | 15 | key | 14 |
| 10 | partition range iterator | | 1 | | 2 | key | key |
|* 11 | index range scan | meta_dbql_tusage_ehst_idx | 1 | | 2 | key | key |
------------------------------------------------------------------------------------------------------------------------
predicate information (identified by operation id):
---------------------------------------------------
2 - filter((upper("tgt"."systemname")<>upper("src"."systemname") or
upper("tgt"."databasename")<>upper("src"."databasename") or upper("tgt"."tablename")<>upper("src"."tablename"))
and "src"."systemname"="rl"."systemname")
6 - filter(upper("tgt"."systemname")='mozart' and upper("tgt"."databasename")='gdw_tables' and
upper("tgt"."tablename")='ssa_slng_lstg_mtrc_sd' and "tgt"."acctstringdate">sysdate@!-30 and "tgt"."systemname"
is not null and "tgt"."databasename" is not null and "tgt"."tablename" is not null)
8 - filter("rl"."systemname"="tgt"."systemname")
9 - access("rl"."statement_id"="tgt"."statement_id" and "rl"."acctstringdate">sysdate@!-30 and
"rl"."acctstringdate" is not null)
11 - access("src"."queryid"="rl"."queryid" and "src"."acctstringdate"="rl"."acctstringdate")
filter("src"."acctstringdate"="tgt"."acctstringdate" and "src"."acctstringdate">sysdate@!-30)
From the new execution plan we can see that its first nested loop does indeed use the newly created index.
Here is the execution time:
Elapsed: 00: 00: 02.16
Done in two seconds, far better than the 5s he was hoping for :)
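The elapsed-time line above is SQL*Plus timing output; when reproducing the test it can be switched on with:
set timing on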
Summary
Conditions for an efficient nested loops join: the driving row source is small, and the driven table has a suitable index on the join columns.
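If the optimizer ever stops choosing this plan on its own, the same access path can also be requested explicitly with hints. An illustrative, cut-down example on the join that mattered here (use_nl and index are standard Oracle hints; the statement is a stand-in, not the article's full query):
select /*+ use_nl(rl) index(rl dr_qry_log_exp_hst_idx2) */ count(*)
from meta_dr_qry_log_tgt_all_hst tgt
inner join dr_qry_log_exp_hst rl
on rl.statement_id = tgt.statement_id
and rl.acctstringdate = tgt.acctstringdate
where tgt.acctstringdate > sysdate - 30;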