The Kubernetes Cluster Scheduling System
I. Introduction to kube-scheduler
1. What kube-scheduler is
1. The Kubernetes scheduler is one of the core components of the Kubernetes control plane.
2. The scheduler runs on the control plane and distributes workloads across the Kubernetes cluster.
3. kube-scheduler picks the most suitable node to run each Pod, based on Kubernetes scheduling principles and the configuration options we provide.
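As a quick check that the scheduler is actually running, you can list its Pod on the control plane. This is a small illustrative command, assuming a kubeadm-built cluster (as in this post), where kube-scheduler runs as a static Pod in kube-system with the label component=kube-scheduler:
[root@k8s-master ~]# kubectl get pods -n kube-system -l component=kube-scheduler -o wide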
2. What the scheduling system aims for
1. Maximize resource utilization.
2. Satisfy the scheduling requirements specified by users.
3. Satisfy custom priority requirements.
4. Schedule efficiently, making quick decisions based on the available resources.
5. Adjust the scheduling strategy as the load changes.
6. Take fairness into account at every level.
3. Kubernetes component diagram
(component diagram omitted from the original post)
4. kube-scheduler workflow diagram
(workflow diagram omitted from the original post)
II. Checking cluster state
[root@k8s-master ~]# kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready control-plane,master 39h v1.23.1 192.168.3.201 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 containerd://1.6.6
k8s-node01 Ready worker 39h v1.23.1 192.168.3.202 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 containerd://1.6.6
k8s-node02 Ready <none> 39h v1.23.1 192.168.3.203 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 containerd://1.6.6
III. How kube-scheduler selects a node
1. Predicates (filtering)
The filtering phase excludes nodes that cannot satisfy the Pod's requirements.
# For example, a Pod whose container requests the resources below can only land on a node
# that still has at least this much allocatable CPU and memory:
resources:
  requests:
    cpu: 1
    memory: 1Gi
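For context, a minimal Pod manifest carrying such a request might look like the sketch below (the Pod name and limits are illustrative, not from the original example); the filtering phase rejects any node whose remaining allocatable CPU or memory cannot cover the requests:
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo            # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx:1.5.2
    resources:
      requests:                  # the scheduler filters nodes on these values
        cpu: 1
        memory: 1Gi
      limits:                    # limits are enforced by the kubelet, not used for filtering
        cpu: 2
        memory: 2Gi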
2. Priorities (scoring)
The scoring phase ranks the nodes that passed filtering.
Factors that feed into the score include:
1. The node's actual resource usage
2. The number of Pods already on the node
3. The node's CPU load
4. The node's memory usage
.......
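To get a rough idea of the data involved, you can look at how much of a node's capacity is already requested. This is only an illustrative check of the node's bookkeeping, not the scheduler's internal score, and the output will vary per cluster:
[root@k8s-master ~]# kubectl describe node k8s-node01 | grep -A 8 "Allocated resources"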
3. Selection
Finally, the candidate nodes are sorted by score and the Pod is bound to the node with the highest score.
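The scheduler records each placement decision as a Pod event with reason Scheduled, so you can review its choices afterwards; a quick way to list them:
[root@k8s-master ~]# kubectl get events --field-selector reason=Scheduled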
IV. Influencing scheduling: label selectors
1. How a label selector works
1. Add a label to the target node.
2. In the Pod spec, use nodeSelector to require a node carrying that label.
2. Writing a label selector in YAML
① View all node labels
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master Ready control-plane,master 40h v1.23.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node01 Ready worker 40h v1.23.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux,node-role.kubernetes.io/worker=
k8s-node02 Ready <none> 40h v1.23.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux,type=dell730
② Select a label in the YAML file
  volumes:
  - name: rootdir
    hostPath:
      path: /data/mysql
  nodeSelector:
    #disk: ssd
    kubernetes.io/hostname: k8s-node01   # built-in labels, such as the unique hostname label, can also be used
  containers:
  - name: mysql
    image: mysql:5.7
    volumeMounts:
    - name: rootdir
      mountPath: /var/lib/mysql
3. A complete example
cat ./label.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mysql
    spec:
      volumes:
      - name: datadir
        hostPath:
          path: /data/mysql
      nodeSelector:
        disk: ssd
        nodeType: cpu
      containers:
      - image: mysql:5.7
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "redhat"
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/mysql
4. Add the labels to the k8s-node02 node
[root@k8s-master ~]# kubectl label nodes k8s-node02 disk=ssd
node/k8s-node02 labeled
[root@k8s-master ~]# kubectl label nodes k8s-node02 nodeType=cpu
node/k8s-node02 labeled
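To confirm the labels are in place, you can filter nodes by them; this quick check (not in the original walkthrough) should return only k8s-node02:
[root@k8s-master ~]# kubectl get nodes -l disk=ssd,nodeType=cpu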
5. Create the Deployment
[root@k8s-master ~]# kubectl apply -f ./label.yaml
deployment.apps/mysql created
6. Check which node the Pod was scheduled to
[root@k8s-master ~]# kubectl get pod -owide |grep node02
elasticsearch-master-0 1/1 Running 2 (7m54s ago) 29h 10.244.58.224 k8s-node02 <none> <none>
fb-filebeat-lj5p7 1/1 Running 3 (7m54s ago) 28h 10.244.58.221 k8s-node02 <none> <none>
kb-kibana-5c46dbc5dd-htw7n 1/1 Running 1 (7m54s ago) 25h 10.244.58.222 k8s-node02 <none> <none>
metric-metricbeat-5h5g5 1/1 Running 2 (7m53s ago) 27h 192.168.3.203 k8s-node02 <none> <none>
metric-metricbeat-758c5c674-ldgg4 1/1 Running 2 (7m54s ago) 27h 10.244.58.225 k8s-node02 <none> <none>
mysql-59c6fc696d-qrjx9 1/1 Running 0 13m 10.244.58.223 k8s-node02 <none> <none>
V. Influencing scheduling: taints
1. What a taint is
A taint: once a node carries a taint, by default no Pod will be scheduled onto it. Even if the node carries a label that a Pod's nodeSelector requires, so that this node would otherwise be the only valid choice, the Pod will not run there and will sit in Pending.
2. Taint effects
* PreferNoSchedule:
The scheduler tries to avoid placing new Pods on the node, but may still do so.
* NoSchedule: new Pods are not scheduled onto the node.
If some Pods were already running on the node before the taint was applied, they are not evicted; only new Pods are kept away.
* NoExecute: new Pods are not scheduled onto the node.
In addition, Pods already running on the node are evicted immediately once the taint is applied.
3. Taint a worker node
[root@k8s-master ~]# kubectl taint node k8s-node02 key1=value:NoSchedule
node/k8s-node02 tainted
[root@k8s-master ~]# kubectl taint node k8s-node02 key2=value:NoExecute
node/k8s-node02 tainted
4. Remove a taint from a worker node
kubectl taint node k8s-node02 key1-
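Note that removing by key alone drops every taint with that key; to remove one specific taint you can also append the effect, for example:
[root@k8s-master ~]# kubectl taint node k8s-node02 key2=value:NoExecute-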
5. Check a node's taints
[root@k8s-master ~]# kubectl describe nodes k8s-node02 |grep -i tain -A2 -B2
nodeType=cpu
type=dell730
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 192.168.3.203/24
--
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 03 Jul 2022 01:14:11 +0800
Taints: key2=value:NoExecute
key1=value:NoSchedule
Unschedulable: false
--
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.6.6
Kubelet Version: v1.23.1
Kube-Proxy Version: v1.23.1
VI. Influencing scheduling: tolerations
1. What a toleration is
1. A toleration means a Pod can tolerate a taint on a node. It does not mean the Pod will necessarily be placed on that node; it only means that, for this Pod, the tainted node looks the same as any untainted node.
2. A Pod can tolerate multiple taints. When a node carries several taints, only a Pod that tolerates all of them sees the node as if it had no taints at all.
2. How tolerations and taints interact during Pod placement
(diagram omitted from the original post)
3. Using tolerations in YAML
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.5.2
  tolerations:
  - key: "check"
    operator: "Equal"
    value: "xtaint"
    effect: "NoExecute"
    tolerationSeconds: 3600
4. What the toleration fields mean
tolerations: -----------> the list of tolerations
- key: "check" -----------> the taint key being tolerated
  operator: "Equal" -----------> the operator, "equal to" (use "Exists" to tolerate any value of the key)
  value: "xtaint" -----------> the taint value that goes with the key
  effect: "NoExecute" -----------> the taint effect being tolerated
  tolerationSeconds: 3600 -----------> tolerate for 3600 seconds: instead of being evicted immediately like an ordinary Pod, this Pod is allowed to stay on the node for 3600 seconds before it is deleted.
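Putting labels, taints and tolerations together, here is a minimal sketch of a Pod that could still be scheduled onto k8s-node02 after the taints from section V were applied. The Pod name is made up for illustration; the tolerations match the key1=value:NoSchedule and key2=value:NoExecute taints created above:
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-nginx            # hypothetical name
spec:
  nodeSelector:
    disk: ssd                     # label added to k8s-node02 in section IV
  containers:
  - name: nginx
    image: nginx:1.5.2
  tolerations:
  - key: "key1"                   # tolerates key1=value:NoSchedule
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
  - key: "key2"                   # tolerates key2=value:NoExecute; no tolerationSeconds, so it stays indefinitely
    operator: "Equal"
    value: "value"
    effect: "NoExecute"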