Practical Case Analysis of MySQL Index Optimization
order by desc/asc limit m is a pattern I run into all the time when tuning MySQL SQL. The optimization principle is simple: exploit the ordering of an index. The optimizer scans along the index in order and stops as soon as it has found m qualifying rows. It looks trivial, yet I keep seeing badly performing SQL that does not take advantage of it. The following real cases illustrate the idea:
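To make the principle concrete before the real cases, here is a minimal sketch with a hypothetical table (t_orders and its columns are made up purely for illustration, they are not part of the cases below):

-- Hypothetical table, only to illustrate the principle.
create table t_orders (
    id         bigint      not null auto_increment primary key,
    gmt_create datetime    not null,
    title      varchar(64) not null,
    key ind_gmt_create (gmt_create)
) engine = innodb;

-- The order by matches the index order, so the optimizer can walk
-- ind_gmt_create backwards and stop after producing 20 + 10 entries,
-- instead of sorting every row ("using filesort" disappears from explain).
explain select * from t_orders
order by gmt_create desc
limit 20, 10;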
Case 1:
A SQL statement runs very slowly. Its execution time:
root@test 02:00:44> select * from test_order_desc where end_time > now()
    order by gmt_create desc, count_num desc limit 12, 12;
...(12 result rows omitted)...
12 rows in set (0.49 sec)
Its execution plan:
root@test_db 01:53:23> explain select * from test_order_desc where end_time > now()
    order by gmt_create desc, count_num desc limit 12, 12;
+----+-------------+-----------------+-------+-----------------+-----------------+---------+------+--------+-----------------------------+
| id | select_type | table           | type  | possible_keys   | key             | key_len | ref  | rows   | extra                       |
+----+-------------+-----------------+-------+-----------------+-----------------+---------+------+--------+-----------------------------+
|  1 | simple      | test_order_desc | range | ind_hot_endtime | ind_hot_endtime | 9       | null | 113549 | using where; using filesort |
+----+-------------+-----------------+-------+-----------------+-----------------+---------+------+--------+-----------------------------+
The ind_hot_endtime index is:
root@test_db 01:52:45> show index from test_order_desc;
ind_hot_endtime(end_time, count_num)
Note that 113,549 rows satisfy the filter end_time > now(), and the query also contains an order by, so the result set that has to be sorted is very large and the execution is expensive. Since the SQL ends with an order by ... desc limit, we can add an index that matches the sort order; because limit caps the result set, the scan can stop as soon as enough qualifying rows have been read. Let's look at the effect of this optimization:
Add the index:
root@test 02:01:06> alter table test_order_desc add index ind_gmt_create(gmt_create, count_num);
Query OK, 211945 rows affected (6.71 sec)
Records: 211945  Duplicates: 0  Warnings: 0
Run the SQL again and check its execution time:
root@test 02:01:35> select * from test_order_desc where end_time > now()
    order by gmt_create desc, count_num desc limit 12, 12;
...(12 result rows omitted)...
12 rows in set (0.00 sec)
The execution time has dropped to the millisecond level (displayed as 0.00 sec). Check the execution plan:
root@test 02:01:42> explain select * from test_order_desc where end_time > now()
    order by gmt_create desc, count_num desc limit 12, 12;
+----+-------------+-----------------+-------+-----------------+----------------+---------+------+------+-------------+
| id | select_type | table           | type  | possible_keys   | key            | key_len | ref  | rows | extra       |
+----+-------------+-----------------+-------+-----------------+----------------+---------+------+------+-------------+
|  1 | simple      | test_order_desc | index | ind_hot_endtime | ind_gmt_create | 14      | null |   48 | using where |
+----+-------------+-----------------+-------+-----------------+----------------+---------+------+------+-------------+
The optimizer now chooses an ordered scan on ind_gmt_create, which avoids sorting the result set entirely, and it estimates that only 48 rows need to be scanned before enough rows satisfying end_time > now() are found. This execution plan is exactly what we want.
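A quick way to confirm that the sort really disappeared (a generic MySQL check, not part of the original trace) is to compare the session sort counters around the query:

-- Before: note the current values.
show session status like 'Sort%';

select * from test_order_desc
where end_time > now()
order by gmt_create desc, count_num desc
limit 12, 12;

-- After: with the ind_gmt_create plan, the Sort_rows / Sort_scan counters
-- should not grow, because rows come back in index order without a filesort.
show session status like 'Sort%';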
Case 2:
root@127.0.0.1 : test_db 16:05:15> explain select b.*, a.*, k.*
    from instance b
    left outer join image a on b.image_id = a.image_id
    left outer join key_pair k on b.key_pair_id = k.key_pair_id
    left outer join region_alias r_a on r_a.region_no = b.region_no
    where b.status in (1, 8) and b.user_id = 21 and r_a.big_region_no = 'regeion_xx'
    order by b.instance_no asc limit 37300, 50;
The idx_uid_stat_inid index on table b covers (user_id, status, instance_no).
From the execution plan, the join order is b -> r_a -> a -> k, and the first row of the plan shows about 49,212 rows to scan on b. Because status is filtered with in, instance_no cannot be used for ordering even though it is in the index: the entries for status = 1 and status = 8 form two separately ordered ranges, so their union still has to be sorted, which is why the sort spills to a temporary table and the SQL is slow. The final order by b.instance_no asc limit 37300, 50 is where the optimization opportunity lies, so we adjust the indexes to satisfy the sort on table b:
root@127.0.0.1 : test_db 16:05:04> alter table instance add index ind_user_id(user_id, instance_no);
Query OK, 0 rows affected (0.56 sec)
Check the execution plan after adjusting the index:
root@127.0.0.1 : test_db 16:09:42> explain select b.*, a.*, k.*
    from instance b
    left outer join image a on b.image_id = a.image_id
    left outer join key_pair k on b.key_pair_id = k.key_pair_id
    left outer join region_alias r_a on r_a.region_no = b.region_no
    where b.status in (1, 8) and b.user_id = 21 and r_a.big_region_no = 'regeion_xx'
    order by b.instance_no asc limit 37300, 50;
We add force index to force the query down the newly added index:
root@127.0.0.1 : test_db 16:10:24> explain select b.*, a.*, k.*
    from instance b force index (ind_user_id)
    left outer join image a on b.image_id = a.image_id
    left outer join key_pair k on b.key_pair_id = k.key_pair_id
    left outer join region_alias r_a on r_a.region_no = b.region_no
    where b.status in (1, 8) and b.user_id = 21 and r_a.big_region_no = 'regeion_xx'
    order by b.instance_no asc limit 37300, 50;
With the hint in place, the new index is used and the plan estimates 54,580 rows to scan. The execution time:
root@127.0.0.1 : test_db 16:10:30> select b.*, a.*, k.*
    from instance b force index (ind_user_id)
    left outer join image a on b.image_id = a.image_id
    left outer join key_pair k on b.key_pair_id = k.key_pair_id
    left outer join region_alias r_a on r_a.region_no = b.region_no
    where b.status in (1, 8) and b.user_id = 21 and r_a.big_region_no = 'regeion_xx'
    order by b.instance_no asc limit 37300, 50;
(0.49 sec)
For comparison, the original execution time:
root@127.0.0.1 : test_db 16:10:51> select b.*, a.*, k.*
    from instance b
    left outer join image a on b.image_id = a.image_id
    left outer join key_pair k on b.key_pair_id = k.key_pair_id
    left outer join region_alias r_a on r_a.region_no = b.region_no
    where b.status in (1, 8) and b.user_id = 21 and r_a.big_region_no = 'regeion_xx'
    order by b.instance_no asc limit 37300, 50;
(1.28 sec)
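If you want to see why the optimizer preferred the original access path over ind_user_id even though it is slower here, the optimizer trace can be inspected. This is a generic diagnostic step, not part of the original article, and it assumes MySQL 5.6 or later:

-- Enable the trace for this session, run the statement under investigation,
-- then read the trace from information_schema (\G is the mysql client's
-- vertical output terminator).
set session optimizer_trace = 'enabled=on';

select b.*, a.*, k.*
from instance b
left outer join image a on b.image_id = a.image_id
left outer join key_pair k on b.key_pair_id = k.key_pair_id
left outer join region_alias r_a on r_a.region_no = b.region_no
where b.status in (1, 8) and b.user_id = 21 and r_a.big_region_no = 'regeion_xx'
order by b.instance_no asc limit 37300, 50;

-- The trace lists each candidate index with its estimated cost and row count,
-- including the comparison that led the optimizer away from ind_user_id.
select trace from information_schema.optimizer_trace\G

set session optimizer_trace = 'enabled=off';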
Summary:
The order by desc/asc limit technique can deliver surprisingly large gains when you cannot build a good filtering index, but it has its limits: the optimizer will not necessarily follow the index path you had in mind, because it weighs the filtering power of the query columns against the length of the limit. When the filter columns are highly selective, the matching set is small and sorting it is cheap; when their selectivity is low, letting the scan follow the order by index and stop at the limit is the effective technique.
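As a rough way to judge which side of that trade-off a query falls on, the fraction of rows passing the filter can be estimated directly. A minimal sketch against the table from case 1 (the interpretation thresholds are only a rule of thumb, not from the original article):

-- Fraction of rows that satisfy the filter end_time > now().
-- A small fraction (strong filter): sorting the few matching rows is cheap,
--   so an index on the filter column plus a filesort is usually fine.
-- A large fraction (weak filter): most rows survive the filter, so an index
--   that matches order by ... limit and lets the scan stop early pays off.
select sum(end_time > now()) / count(*) as matching_fraction
from test_order_desc;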