Single-machine high-concurrency model design
2022-07-08 00:10:00 【Abbot's temple】
Background
In a microservices architecture, we are used to relying on multiple machines, distributed storage, and caches to support a high-concurrency request model, and we tend to ignore how the high-concurrency model works on a single machine. This article deconstructs the process of establishing a connection and transferring data between client and server, and explains how to design a single-machine high-concurrency model.
The classic C10K problem
How can one physical machine serve 10K (10,000) users at the same time? For a Java programmer this is not difficult: with netty you can build a server that supports more than 10,000 concurrent connections. So how does netty achieve this? Let's forget netty for a moment and analyze from scratch. With one connection per user, the server has to do two things:
Manage these 10,000 connections
Handle the data transmission over these 10,000 connections
TCP connection and data transmission
Connection establishment
Take an ordinary TCP connection as an example.
The handshake diagram is familiar. This article focuses on the server side, so we will ignore the client details for now. The server creates a socket, binds a port, and listens on it. Finally, accept establishes the connection with the client and returns a connectFd, the connected socket (on Linux it is a file descriptor), which uniquely identifies the connection. All subsequent data transmission is based on it.
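As a concrete reference, here is a minimal sketch of the create/bind/listen/accept sequence in plain Java NIO (not netty); the port number and class name are arbitrary choices made for illustration.

import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class AcceptSketch {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel serverFd = ServerSocketChannel.open();   // create the listening socket
        serverFd.bind(new InetSocketAddress(9090));                  // bind the port and start listening
        while (true) {
            SocketChannel connectFd = serverFd.accept();             // accept: one connected socket per client
            System.out.println("accepted " + connectFd.getRemoteAddress());
            connectFd.close();                                       // data transmission would happen here instead
        }
    }
}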
Data transmission
For data transmission, the server opens a thread to process the data. The specific process is as follows:
select
The application asks kernel space whether the data is ready (because of the window size limit there may be no data to read). If the data is not ready, the application blocks and waits for the answer.
read
Once the kernel judges that the data is ready, it copies the data from kernel space into the application and returns successfully on completion. The application then decodes the data, runs the business logic, and finally encodes the result and sends it back to the client.
Because one thread handles one connection's data, the corresponding threading model looks like this:
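To make the model concrete, here is a minimal thread-per-connection sketch with blocking IO; the echo handling and port are illustrative placeholders, not the article's actual business logic.

import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerConnection {
    public static void main(String[] args) throws Exception {
        ServerSocket serverFd = new ServerSocket(9090);
        while (true) {
            Socket connectFd = serverFd.accept();            // one connection per user
            new Thread(() -> handle(connectFd)).start();     // one thread per connection
        }
    }

    static void handle(Socket connectFd) {
        try (Socket fd = connectFd) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = fd.getInputStream().read(buf)) != -1) {   // read blocks until data is ready
                // decode -> business logic -> encode would happen here; echo back as a stand-in
                fd.getOutputStream().write(buf, 0, n);
            }
        } catch (Exception ignored) {
        }
    }
}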
Multiplexing
Blocking vs. non-blocking
Because one connection needs one thread for its transmission, too many threads are required, which occupies a lot of resources. And when a connection ends, its resources are destroyed and have to be re-created for the next connection. So a natural idea is to reuse threads, that is, to let multiple connections share the same thread. This raises a problem: the entry point of data transmission, select, is blocking. Suppose the thread is handling one connection whose data never arrives in time; because select blocks, the thread cannot read other connections even if they have data ready. So the call must not block, otherwise multiple connections cannot share one thread. It has to be non-blocking.
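A tiny sketch of what "non-blocking" means at the call site, using Java NIO; it assumes the channel passed in is already connected, and the helper name is made up for illustration.

import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class NonBlockingReadSketch {
    static int tryRead(SocketChannel connectFd, ByteBuffer buf) throws Exception {
        connectFd.configureBlocking(false); // switch the connected socket to non-blocking mode
        return connectFd.read(buf);         // a blocking read would wait here until data arrives;
                                            // a non-blocking read returns 0 immediately when nothing is ready
    }
}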
Polling vs. event notification
After switching to non-blocking, the application has to keep polling kernel space to determine whether each connection is ready:
for (connectFd fd : connectFds) {
    if (fd.ready) {
        process();
    }
}
Polling is inefficient and consumes a lot of CPU, so a common practice is for the callee to send an event notification to inform the caller, instead of having the caller poll. This is IO multiplexing: the "multiple channels" refer to standard input and the connected sockets. A batch of sockets is registered into a group in advance; when any socket in the group has an IO event, the party blocked on the group is notified that it is ready.
select/poll/epoll
Common implementations of IO multiplexing are select and poll. There is not much difference between select and poll; the main one is that poll has no limit on the maximum number of file descriptors.
From polling to event notification: after optimizing with multiplexed IO, the application no longer has to keep polling kernel space. But after receiving an event notification from kernel space, the application still does not know which connection the event corresponds to, so it has to traverse all registered connections:
onEvent() {
    // an event was detected, but we don't know on which connection, so scan them all
    for (connectFd fd : registerConnectFds) {
        if (fd.ready) {
            process();
        }
    }
}
Foreseeably, as the number of connections grows, the time spent grows in proportion. Whereas poll only reports the number of ready events, epoll returns the array of connectFds that actually have events, which saves the application from scanning every connection:
onEvent() {
    // only the ready connections are returned
    for (connectFd fd : readyConnectFds) {
        process();
    }
}
Of course, epoll's high performance does not stop there; it also supports edge-triggered mode, which this article will not elaborate on.
The non-blocking IO + multiplexing process is as follows:
select
The application asks kernel space whether the data is ready (because of the window size limit there may be no data to read) and returns immediately; it is a non-blocking call.
read
When the data is ready in kernel space, the kernel sends a ready-to-read notification to the application.
The application then reads the data, decodes it, runs the business logic, and finally encodes the result and sends it back to the client.
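Putting the two ideas together, here is a minimal single-threaded sketch of non-blocking IO plus multiplexing using Java NIO's Selector (backed by epoll on Linux); the port, buffer size, and echo handling are assumptions made for the example.

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MultiplexingSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel serverFd = ServerSocketChannel.open();
        serverFd.bind(new InetSocketAddress(9090));
        serverFd.configureBlocking(false);                       // non-blocking accept
        serverFd.register(selector, SelectionKey.OP_ACCEPT);     // register into the group

        while (true) {
            selector.select();                                   // block until any registered fd has an event
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {                               // only the ready fds are returned
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel connectFd = serverFd.accept();
                    connectFd.configureBlocking(false);          // non-blocking read/write
                    connectFd.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel connectFd = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = connectFd.read(buf);                 // returns immediately, data is already ready
                    if (n == -1) { connectFd.close(); continue; }
                    buf.flip();
                    connectFd.write(buf);                        // decode -> logic -> encode (echo as a stand-in)
                }
            }
        }
    }
}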
Thread pool division
Above, we mainly used non-blocking + multiplexed IO to solve the local select and read problems. Now let's re-examine the overall flow and see how the whole data-processing process can be grouped, with each stage handled by a different thread pool to increase efficiency. First of all, there are two kinds of events:
Connection events, handled by the accept action.
Transmission events, handled by the select, read, and send actions.
The connection event processing flow is relatively fixed and has no extra logic, so it needs no further splitting. Among the transmission events, read and send are relatively fixed and their processing logic is similar for every connection, so they can be handled in one thread pool. The concrete logic (decode, logic, encode) is different for each connection, and as a whole it can be handled in another thread pool.
The server is split into 3 parts:
1. The reactor part handles events uniformly and then dispatches them according to their type.
2. Connection events are dispatched to the acceptor; data transmission events are dispatched to the handler.
3. For data transmission, the handler reads the data and then hands it to the processor for processing.
Because parts 1 and 2 are fast to handle, they are processed in one thread pool, while the business logic is processed in another thread pool.
The above is the famous Reactor high-concurrency model.
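As a rough illustration of that split, here is a minimal single-Reactor sketch building on the multiplexing sketch above: the reactor thread waits for and dispatches events, connection and read events stay on that thread (playing the acceptor and handler roles), and decode/logic/encode are handed to a separate business thread pool. The pool size, port, and echo processing are assumptions made for the example, not the article's implementation.

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReactorSketch {
    private static final ExecutorService businessPool = Executors.newFixedThreadPool(8);

    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel serverFd = ServerSocketChannel.open();
        serverFd.bind(new InetSocketAddress(9090));
        serverFd.configureBlocking(false);
        serverFd.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {                                            // reactor: wait for events, dispatch by type
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                         // acceptor: connection events
                    SocketChannel connectFd = serverFd.accept();
                    connectFd.configureBlocking(false);
                    connectFd.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {                    // handler: read, then hand off
                    SocketChannel connectFd = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = connectFd.read(buf);
                    if (n == -1) { connectFd.close(); continue; }
                    buf.flip();
                    businessPool.submit(() -> process(connectFd, buf)); // processor: decode/logic/encode
                }
            }
        }
    }

    private static void process(SocketChannel connectFd, ByteBuffer request) {
        try {
            connectFd.write(request);                             // echo back as a stand-in for encode/send
        } catch (Exception ignored) {
        }
    }
}

In netty, these roles map roughly onto the boss event loop group (accepting connections), the worker event loop groups (read/send), and user-defined handlers for the business logic.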