30 optimization skills about mysql, super practical
2022-07-05 22:37:00 【It's a Samoye】
1. Try to avoid using the != or <> operators in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan.
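A minimal sketch of one possible rewrite, assuming a hypothetical table t with an indexed integer column num:

    -- the <> comparison tends to force a full table scan
    select id from t where num <> 10;
    -- splitting it into two range predicates lets each branch use the index on num
    select id from t where num < 10
    union all
    select id from t where num > 10;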
2. To optimize a query, try to avoid full table scans; first consider creating indexes on the columns involved in WHERE and ORDER BY.
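For example, if queries filter on num and sort by createdate (both hypothetical columns), an index covering both can avoid the full scan and the extra sort:

    -- composite index on the WHERE column followed by the ORDER BY column
    create index idx_t_num_createdate on t (num, createdate);
    select id from t where num = 10 order by createdate;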
3. Try to avoid NULL checks on a column in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan. For example: select id from t where num is null. Instead, give num a default value of 0, make sure the num column never contains NULL, and then query like this: select id from t where num=0
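A sketch of the suggested change, assuming MySQL syntax and that the existing NULLs are backfilled first:

    -- backfill existing NULLs, then forbid them with a default of 0
    update t set num = 0 where num is null;
    alter table t modify num int not null default 0;
    -- the NULL check becomes an ordinary, indexable equality
    select id from t where num = 0;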
4. Try to avoid using OR to join conditions in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan. For example: select id from t where num=10 or num=20 can be rewritten as: select id from t where num=10 union all select id from t where num=20
5. The following query also results in a full table scan (a leading percent sign prevents index use): select id from t where name like '%c%'. To use the index, write: select id from t where name like 'c%'. To improve efficiency further, consider full-text search.
6. Use IN and NOT IN with caution, otherwise they can result in a full table scan. For example: select id from t where num in(1,2,3). For continuous values, use BETWEEN instead of IN: select id from t where num between 1 and 3
7. Using a parameter in the WHERE clause can also lead to a full table scan. SQL resolves local variables only at run time, but the optimizer cannot defer the choice of access plan until run time; it must choose at compile time. When the access plan is built at compile time the value of the variable is still unknown, so it cannot be used to select an index. The following statement scans the whole table: select id from t where num=@num. You can force the query to use an index instead: select id from t with(index(index_name)) where num=@num
8. Try to avoid performing expression operations on a column in the WHERE clause; this causes the engine to abandon the index and perform a full table scan. For example: select id from t where num/2=100 should be changed to: select id from t where num=100*2
9. Try to avoid applying functions to a column in the WHERE clause; this causes the engine to abandon the index and perform a full table scan. For example: select id from t where substring(name,1,3)='abc' --ids whose name starts with abc
select id from t where datediff(day,createdate,'2005-11-30')=0 --ids generated on '2005-11-30'. These should be changed to: select id from t where name like 'abc%' and select id from t where createdate>='2005-11-30' and createdate<'2005-12-1'
10. Do not place functions, arithmetic operations, or other expressions to the left of the "=" in the WHERE clause; otherwise the system may not be able to use the index correctly.
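For instance, keep the indexed column bare on the left side and move the arithmetic to the constant side (hypothetical column num):

    -- expression on the left: the index on num is usually ignored
    select id from t where num + 1 = 11;
    -- bare column on the left: the index can be used
    select id from t where num = 11 - 1;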
11. When using an indexed column as a condition, if the index is a composite index, the first column of the index must appear in the condition for the index to be used; otherwise the index will not be used. Also try to keep the order of the condition columns consistent with the order of the index columns.
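A sketch with a hypothetical composite index on (col1, col2), illustrating the leftmost-prefix rule:

    create index idx_t_col1_col2 on t (col1, col2);
    -- includes the leading column col1: the composite index can be used
    select id from t where col1 = 1 and col2 = 2;
    -- skips the leading column: the composite index is normally not used
    select id from t where col2 = 2;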
12. Do not write meaningless queries. For example, to generate an empty table structure: select col1,col2 into #t from t where 1=0. This kind of code returns no result set but still consumes system resources; it should be changed to: create table #t(…)
13. In many cases, using EXISTS instead of IN is a good choice: select num from a where num in(select num from b) can be replaced with: select num from a where exists(select 1 from b where num=a.num)
14. Not every index helps a query. SQL query optimization is based on the data in the table; when an indexed column contains a large number of duplicate values, the query may not use the index at all. For example, if a table has a sex column where male and female each make up roughly half the rows, an index on sex does nothing for query efficiency.
15. More indexes are not always better. Indexes certainly improve the efficiency of the corresponding SELECTs, but they also reduce the efficiency of INSERT and UPDATE, because an insert or update may have to rebuild the index. So how to build indexes needs careful, case-by-case consideration. A table should preferably have no more than 6 indexes; if it has more, consider whether indexes on rarely used columns are really necessary.
16. Avoid updating clustered index columns whenever possible, because the order of the clustered index columns is the physical storage order of the table rows. Once a value in such a column changes, the order of the whole table's rows has to be adjusted, which costs considerable resources. If the application needs to update clustered index columns frequently, reconsider whether that index should be clustered.
17. Use numeric fields whenever possible. If a field contains only numeric information, try not to design it as a character type; this degrades query and join performance and increases storage overhead. The engine compares strings character by character, while a numeric type needs only a single comparison.
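As an illustration, a status code stored as a number compares in a single step, whereas the string version is compared character by character (hypothetical table definitions):

    -- character type: larger rows, character-by-character comparison
    create table order_char (id int primary key, status_code varchar(10));
    -- numeric type: smaller rows, single numeric comparison
    create table order_num  (id int primary key, status_code tinyint);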
18. Use varchar/nvarchar instead of char/nchar whenever possible. Variable-length fields take less storage, which saves space, and for queries, searching within a smaller field is clearly more efficient.
19. Do not use select * from t anywhere; replace "*" with a specific column list, and never return columns you do not use.
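For example (the column names are hypothetical):

    -- returns every column, including ones the client never uses
    select * from t;
    -- returns only what is needed
    select id, name, createdate from t;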
20. Use table variables instead of temporary tables whenever possible. If a table variable contains a lot of data, note that its indexing is very limited (only a primary key index).
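Since this tip is written in SQL Server terms, here is a T-SQL sketch of a table variable with its only possible index, the inline primary key:

    -- table variable: only the key declared inline can act as an index
    declare @small table (id int primary key, name varchar(50));
    insert into @small (id, name)
    select id, name from t where num = 10;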
21. Avoid frequently creating and dropping temporary tables, to reduce the consumption of system table resources.
22. Temporary tables are not forbidden. Using them properly can make certain routines more efficient, for example when you need to repeatedly reference a large table or a dataset in a commonly used table. For one-off operations, however, an export table is usually better.
23. When creating a temporary table, if a large amount of data is inserted at once, use select into instead of create table to avoid generating a lot of log and to speed things up; if the amount of data is small, create the table first and then insert, to ease the load on system table resources.
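A T-SQL sketch of both paths, assuming a source table t:

    -- large volume: let SELECT INTO create the temporary table in one step
    select id, name into #big from t where num > 0;

    -- small volume: create first, then insert, to go easier on the system tables
    create table #small (id int, name varchar(50));
    insert into #small (id, name)
    select id, name from t where num = 10;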
24. If temporary tables are used, be sure to explicitly delete them all at the end of the stored procedure: first truncate table, then drop table. This avoids long-term locking of the system tables.
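The cleanup at the end of the stored procedure then looks like this (T-SQL sketch):

    -- empty the temporary table first, then remove it
    truncate table #t;
    drop table #t;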
25. Try to avoid using cursors, because their efficiency is poor. If a cursor operates on more than 10,000 rows, consider rewriting it.
26. Before resorting to a cursor-based or temporary-table-based method, look first for a set-based solution to the problem; the set-based approach is usually more efficient.
27. Like temporary tables, cursors are not forbidden. Using a FAST_FORWARD cursor on a small dataset is often better than other row-by-row processing methods, especially when several tables must be referenced to obtain the required data. Routines that include "totals" in the result set are usually faster than using a cursor. If development time permits, try both the cursor-based and the set-based approach and see which works better.
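A minimal T-SQL sketch of a FAST_FORWARD cursor over a small result set (table and columns are hypothetical):

    declare @id int;
    declare c cursor fast_forward for
        select id from t where num = 10;
    open c;
    fetch next from c into @id;
    while @@fetch_status = 0
    begin
        -- process @id here
        fetch next from c into @id;
    end
    close c;
    deallocate c;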
28. Set SET NOCOUNT ON at the beginning of all stored procedures and triggers, and SET NOCOUNT OFF at the end. There is no need to send a DONE_IN_PROC message to the client after every statement in a stored procedure or trigger.
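A T-SQL sketch of the pattern (the procedure name and body are hypothetical):

    create procedure usp_example
    as
    begin
        set nocount on;   -- suppress DONE_IN_PROC messages for each statement
        select id from t where num = 10;
        set nocount off;
    end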
29. Try to avoid returning large amounts of data to the client; if the volume is too large, consider whether the requirement itself is reasonable.
30. Try to avoid large transaction operations, in order to improve system concurrency.