High-Concurrency Solutions, Part 2
2022-06-13 06:09:00 【Listen to the years】
For various reasons, we have run into scenarios that must withstand sudden bursts of high concurrency.
The business requirements for such scenarios:
1. High concurrency
2. Heavy read and write traffic that needs an immediate response
3. Stable performance
Step 1: Strengthen service capacity - the SLB load balancer
Client-side users operate on data through the back-end program. If user traffic spikes at some moment, a single server cannot carry that much load and goes down. There are many reasons a server goes down; let's analyze them concretely.
1. The traffic is so large that the server's bandwidth is saturated, so users download slowly and the experience is poor.
Traffic flows both ways: when a user uploads content, the server is receiving it, and when the server responds, the user is downloading. Once the user base is large enough, the bandwidth fills up and everyone's download speed is throttled, which leads to a chain of problems:
Connection timeouts: the data cannot be fully transmitted to the user within the time limit.
Request queuing: the bandwidth is saturated, so requests must wait for the server to finish sending earlier responses.
CPU exhaustion: users keep requesting and the server keeps responding, but the saturated link shrinks throughput, so many TCP requests sit in a waiting state. The server must keep maintaining each waiting connection (the state from the TCP three-way handshake), and maintaining that state consumes CPU.
404 responses: the site has gone down. The cause chain: too much traffic -> connection timeouts -> request queuing -> CPU exhaustion -> 404 responses.
Remedy: increase the server's bandwidth.
2. The machine's computing power is insufficient
The business logic is too complex, or a single interface consumes heavy resources (such as uploading images or audio). Bandwidth aside, a request that processes images or audio must be held for its whole lifetime: the server receives the client's upload in memory, saves it locally, and only releases the resources when the request ends.
Remedy: upgrade the server's configuration.
3. A single server is maxed out and still cannot support the user base
Then let multiple servers serve the client user base, for example via load balancing. For the concrete implementation principles, see my other chapters.
4. SLB
SLB monitors each machine's performance metrics in real time. When a machine crosses a performance threshold, it is considered close to exhausted; SLB then automatically creates a new server instance, attaches it to its own monitoring system, and puts it into service for users.
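The threshold-based scale-out decision above can be sketched as follows. This is a minimal illustration, not a real SLB API: the threshold values and the `spawn_instance` callback are assumptions for demonstration.

```python
# Minimal sketch of a threshold-based scale-out decision, similar in spirit
# to what an SLB auto-scaling service does. Thresholds and the
# spawn_instance callback are illustrative assumptions, not a real SLB API.

def should_scale_out(cpu_percent, bandwidth_percent,
                     cpu_threshold=80.0, bw_threshold=85.0):
    """Return True when either metric crosses its threshold."""
    return cpu_percent >= cpu_threshold or bandwidth_percent >= bw_threshold

def autoscale(metrics, spawn_instance):
    """metrics: list of (cpu%, bandwidth%) samples, one per server.
    Calls spawn_instance() once for each overloaded server."""
    new_instances = 0
    for cpu, bw in metrics:
        if should_scale_out(cpu, bw):
            spawn_instance()
            new_instances += 1
    return new_instances
```

A real SLB also drains traffic from unhealthy nodes and tears instances down when load subsides; the sketch only shows the scale-out half.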
Step 2: Increase database capacity - read/write splitting, data sharding, hot standby, connection pooling
The database is usually where a website collapses first, and its importance goes without saying. First, we can use third-party cloud products (RDS or PolarDB), which generally implement dual-machine hot standby by default, so no more needs to be said about that.
1. Connection pooling
Traditional database access establishes a connection, uses it, and releases it immediately (as PHP does), so every interaction with the database pays the cost of establishing a connection. That is fine for low-frequency access, but under a high-concurrency burst it is unfriendly; it is better to put a connection pool between the program and the database to cut the cost of establishing connections. (That is, a connection is not released after first use; it joins the pool, and the next database call reuses it from the pool instead of reconnecting.) Alibaba Cloud RDS and PolarDB implement such a pool for us as database middleware: if you are on PHP, you can access it exactly as before. In effect, we switch from accessing the database directly to accessing connection-pool middleware.
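A minimal sketch of the pooling idea is below. It assumes only a `connect()` factory; real pools (such as the proxy built into RDS/PolarDB) add health checks, timeouts, and thread safety beyond what is shown here.

```python
# A minimal connection-pool sketch. `connect` is any factory that returns
# a connection-like object; this is an illustration, not a driver API.
import queue

class ConnectionPool:
    def __init__(self, connect, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):          # pay the connection cost once, up front
            self._pool.put(connect())

    def acquire(self):
        return self._pool.get()        # reuse an existing connection

    def release(self, conn):
        self._pool.put(conn)           # return to the pool instead of closing
```

The key property is that after warm-up, `acquire`/`release` never creates a new connection, which is exactly the per-request overhead the pool removes.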
2. Read/write splitting
Most high-concurrency scenarios are read-heavy. If yours is, there are two options.
Option 1: a Redis cluster, or sharded Redis
If your business hammers a single piece of data, then after querying it, I suggest replicating that data across multiple Redis instances (ideally on different physical machines, to avoid contending for CPU), and picking one of them at random on each query. The goal is to relieve the pressure on any single Redis instance, though usually one Redis instance is enough.
If your business accesses many pieces of data with roughly similar frequency, then shard: put data A in redisA, data B in redisB, and at query time go to whichever Redis instance holds the data you want.
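The two Redis strategies above can be sketched together: random replica selection for one hot key, and hash-based sharding for many keys. The `replicas` and `shards` lists stand in for real Redis clients; all names are illustrative.

```python
# Sketch of the two strategies: spreading reads of one hot key across
# replicas, and routing many keys to fixed shards. Lists of strings stand
# in for real Redis connections.
import random
import zlib

def pick_replica(replicas):
    """Any replica holds the same hot data; choose one at random."""
    return random.choice(replicas)

def pick_shard(shards, key):
    """Hash the key so data A always lands on the same shard (redisA, etc.)."""
    index = zlib.crc32(key.encode()) % len(shards)
    return shards[index]
```

Note the difference: replication trades memory for read throughput on one key, while sharding spreads distinct keys so each instance holds only part of the data.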
Option 2: a MySQL cluster, or sharded MySQL
If your business hammers a single piece of data, first check whether MySQL's connection limit (its max_connections setting) supports that much traffic, and whether a connection pool can buffer the connection count. Next, consider how well your table indexes are optimized: when requests arrive, how much does a single query cost, and how does the database hold up under a load test?
1. Cluster: use read/write splitting. The read replicas pull the binlog (binary log) from the primary to keep the data in sync, and the bulk of client read requests go to the replicas. If we create several read replicas, multiple databases serve the users, and the pressure on any single database server drops.
2. Sharding: categorize the data and put different categories in different databases. Commonly the business itself is split across different servers, and each part's database is migrated to a different server along with it. Accessing data then means fetching different data from different databases.
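The cluster setup above implies a routing decision on every query: writes go to the primary, reads are spread over the replicas. A minimal sketch, with placeholder strings for connections and no handling of replication lag:

```python
# Sketch of a read/write router for one primary plus read replicas.
# Connection objects are placeholders; real code would hold driver
# connections and account for replication lag on the replicas.
import itertools

class ReadWriteRouter:
    def __init__(self, primary, replicas):
        self.primary = primary
        self._rr = itertools.cycle(replicas)   # round-robin over replicas

    def route(self, sql):
        """Writes must hit the primary; reads can go to any replica."""
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE", "REPLACE"):
            return self.primary
        return next(self._rr)
```

In production, this logic usually lives in middleware (a proxy or the driver) rather than in application code, but the routing rule is the same.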
What if your scenario requires the service to respond immediately?
A common example is a flash sale (seckill), which demands instant feedback. (Say 200K requests per second; leave the app servers aside and consider only database stability.)
A message queue (MQ) won't do: the queue is drained slowly by background workers, and under a heavy burst the wait can be long. Even with several extra worker processes, an immediate response cannot be guaranteed.
Redis alone can't hold it either: under an extreme burst, a single Redis instance tops out around 100K ops/s, and it does not support the transactional guarantees a flash sale needs (sales must never exceed inventory).
My personal take: use MySQL, but not just one MySQL. We can spread the inventory of the flash-sale item across 100 MySQL databases, with the databases on different physical machines. When a client user joins the flash sale, the workflow is:
1. Check which databases still have stock. (This step does not poll all the databases; it reads a configuration stored in the Redis cluster that maps database identifier => has stock, e.g. db0=>1, db1=>0, meaning db0 has stock and db1 is sold out.)
2. Randomly pick one database from the set that still has stock and assign it to the user.
3. With the database address in hand, run the business logic (the flash-sale logic) against that database. If the database returns an out-of-stock error, trigger the stock-closing mechanism (mark that database as sold out in the Redis cluster configuration).
4. Run a scheduled task in the background (every 30s) that checks which of the 100 databases still have stock and which do not, and keeps the Redis configuration in sync in near real time.
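The four steps above can be sketched end to end. This is a single-threaded illustration: plain dicts stand in for the Redis stock flags and the sharded MySQL databases, and all names (`stock_flags`, `databases`, `buy`, `reconcile`) are assumptions for demonstration; a real system would need transactions on each shard and concurrency control.

```python
# End-to-end sketch of the four-step flash-sale flow, with dicts standing
# in for the Redis stock flags and the sharded MySQL databases.
import random

stock_flags = {"db0": 1, "db1": 1}      # Redis config: db id => has stock
databases = {"db0": 2, "db1": 1}        # each shard's remaining inventory

class OutOfStock(Exception):
    pass

def buy(user):
    # Step 1: read the flags; do not poll the databases themselves.
    candidates = [db for db, flag in stock_flags.items() if flag]
    if not candidates:
        raise OutOfStock("sold out everywhere")
    # Step 2: pick one in-stock shard at random for this user.
    db = random.choice(candidates)
    # Step 3: run the flash-sale logic on that shard; close it on exhaustion.
    if databases[db] <= 0:
        stock_flags[db] = 0             # stock-closing mechanism
        raise OutOfStock(db)
    databases[db] -= 1
    if databases[db] == 0:
        stock_flags[db] = 0
    return db

def reconcile():
    # Step 4: the 30s background task re-syncs the flags with actual stock.
    for db, remaining in databases.items():
        stock_flags[db] = 1 if remaining > 0 else 0
```

With a total inventory of 3 units across the two shards, exactly 3 purchases succeed and every later attempt raises `OutOfStock`, which is the never-oversell property the scheme is built for.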
Step 3: CDN distribution
Users download a lot of static resources while browsing: images, CSS, JS, audio, video, and so on. Don't hesitate: hand all of this to a CDN. That is, users no longer fetch and download these resources from the business server, but from the CDN. A CDN is essentially network distribution; the better providers have nodes all over the country, and in theory there is no upper limit on capacity, so you need not worry about the CDN's performance.
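Offloading static assets usually comes down to rewriting asset URLs so browsers fetch them from the CDN domain instead of the business server. A small sketch, where the domain `cdn.example.com` and the extension list are illustrative assumptions:

```python
# Sketch of static-asset offloading: route static resources to a CDN
# domain and leave dynamic paths on the business server. The domain and
# extension list are illustrative assumptions.

STATIC_EXTENSIONS = (".png", ".jpg", ".css", ".js", ".mp3", ".mp4")

def cdn_url(path, cdn_domain="cdn.example.com"):
    """Return a CDN URL for static resources; pass dynamic paths through."""
    if path.lower().endswith(STATIC_EXTENSIONS):
        return f"https://{cdn_domain}{path}"
    return path
```

In practice this rewriting is done once in the template layer or build step, so the business server never sees static-asset traffic at all.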