
How does Kubernetes support stateful applications through StatefulSet? (07)

2022-07-06 20:17:00 wzlinux

We have already learned about stateless workloads in Kubernetes and put the Deployment object into practice; I believe you have gradually come to appreciate Kubernetes.

In this lesson, let's look at another Kubernetes workload: StatefulSet. As the name suggests, this workload is mainly used for publishing stateful services. For the difference between stateful and stateless services, you can refer to the previous chapter.

Let's get to know StatefulSet step by step through a concrete example. On the kubectl command line, StatefulSet is usually abbreviated as sts. Deploying a StatefulSet requires a pre-existing dependent object: a headless Service. We will see the role this object plays in a StatefulSet shortly; a detailed introduction to Services and their other features will come in a later lesson, so for now you only need a rough idea of what a Service is. Let's first look at the following headless Service:

$ cat nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
  namespace: demo
  labels:
    app: nginx
spec:
  clusterIP: None
  ports:
  - port: 80
    name: web
  selector:
    app: nginx

     

The YAML above means: in the demo namespace, create a Service named nginx-demo. This Service exposes port 80 and selects Pods that carry the label app=nginx.

Now let's use this YAML to create the Service in the cluster:

$ kubectl create ns demo
$ kubectl create -f nginx-svc.yaml
service/nginx-demo created
$ kubectl get svc -n demo
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
nginx-demo   ClusterIP   None             <none>        80/TCP    5s

     

With the prerequisite Service in place, we can now create the actual StatefulSet object. Refer to the following YAML file:

$ cat web-sts.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-demo
  namespace: demo
spec:
  serviceName: "nginx-demo"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.2-alpine
        ports:
        - containerPort: 80
          name: web
$ kubectl create -f web-sts.yaml
$ kubectl get sts -n demo
NAME       READY   AGE
web-demo   0/2     9s

     

You can see that the StatefulSet named web-demo has been created.

Now let's dig into StatefulSet a little, see what characteristics it has, and understand why it can support running stateful services.

Characteristics of StatefulSet

Using kubectl's watch feature (add the -w flag to the command), we can observe how the Pods' states change step by step.

$ kubectl get pod -n demo -w
NAME         READY   STATUS              RESTARTS   AGE
web-demo-0   0/1     ContainerCreating   0          18s
web-demo-0   1/1     Running             0          20s
web-demo-1   0/1     Pending             0          0s
web-demo-1   0/1     Pending             0          0s
web-demo-1   0/1     ContainerCreating   0          0s
web-demo-1   1/1     Running             0          2s

     

Pods created by a StatefulSet follow a fixed naming rule: $(statefulset name)-$(ordinal), for example web-demo-0 and web-demo-1 in this case.

There is another interesting point here: web-demo-0 was created before web-demo-1, and web-demo-1 was only created after web-demo-0 had reached the Running state. To verify this, let's watch the StatefulSet's Pods in one terminal window:

$ kubectl get pod -n demo -w -l app=nginx

     

Then open another terminal window and watch the events in this namespace:

$ kubectl get event -n demo -w

     

Now let's try changing the replica count of this StatefulSet to 5:

$ kubectl scale sts web-demo -n demo --replicas=5
statefulset.apps/web-demo scaled

     

Now observe the output in the other two terminal windows:

$ kubectl get pod -n demo -w
NAME         READY   STATUS    RESTARTS   AGE
web-demo-0   1/1     Running   0          20m
web-demo-1   1/1     Running   0          20m
web-demo-2   0/1     Pending   0          0s
web-demo-2   0/1     Pending   0          0s
web-demo-2   0/1     ContainerCreating   0          0s
web-demo-2   1/1     Running             0          2s
web-demo-3   0/1     Pending             0          0s
web-demo-3   0/1     Pending             0          0s
web-demo-3   0/1     ContainerCreating   0          0s
web-demo-3   1/1     Running             0          3s
web-demo-4   0/1     Pending             0          0s
web-demo-4   0/1     Pending             0          0s
web-demo-4   0/1     ContainerCreating   0          0s
web-demo-4   1/1     Running             0          3s

     

We can see again that the Pods managed by the StatefulSet are created in order, with ordinals 2, 3 and 4, and that their names follow a regular pattern, quite different from the randomly named Pods created by a Deployment in the previous lesson.
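For contrast, a Deployment derives Pod names from a ReplicaSet hash plus a random suffix. A hypothetical listing (the names and label below are illustrative, not taken from this demo) would look something like:

$ kubectl get pod -n demo -l app=nginx-deploy
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-66b6c48dd5-8wvzx   1/1     Running   0          1m
nginx-deploy-66b6c48dd5-qk4tp   1/1     Running   0          1m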

Observing the corresponding events confirms this once more.

$ kubectl get event -n demo -w
LAST SEEN   TYPE     REASON             OBJECT                 MESSAGE
20m         Normal   Scheduled          pod/web-demo-0         Successfully assigned demo/web-demo-0 to kraken
20m         Normal   Pulling            pod/web-demo-0         Pulling image "nginx:1.19.2-alpine"
20m         Normal   Pulled             pod/web-demo-0         Successfully pulled image "nginx:1.19.2-alpine"
20m         Normal   Created            pod/web-demo-0         Created container nginx
20m         Normal   Started            pod/web-demo-0         Started container nginx
20m         Normal   Scheduled          pod/web-demo-1         Successfully assigned demo/web-demo-1 to kraken
20m         Normal   Pulled             pod/web-demo-1         Container image "nginx:1.19.2-alpine" already present on machine
20m         Normal   Created            pod/web-demo-1         Created container nginx
20m         Normal   Started            pod/web-demo-1         Started container nginx
20m         Normal   SuccessfulCreate   statefulset/web-demo   create Pod web-demo-0 in StatefulSet web-demo successful
20m         Normal   SuccessfulCreate   statefulset/web-demo   create Pod web-demo-1 in StatefulSet web-demo successful
0s          Normal   SuccessfulCreate   statefulset/web-demo   create Pod web-demo-2 in StatefulSet web-demo successful
0s          Normal   Scheduled          pod/web-demo-2         Successfully assigned demo/web-demo-2 to kraken
0s          Normal   Pulled             pod/web-demo-2         Container image "nginx:1.19.2-alpine" already present on machine
0s          Normal   Created            pod/web-demo-2         Created container nginx
0s          Normal   Started            pod/web-demo-2         Started container nginx
0s          Normal   SuccessfulCreate   statefulset/web-demo   create Pod web-demo-3 in StatefulSet web-demo successful
0s          Normal   Scheduled          pod/web-demo-3         Successfully assigned demo/web-demo-3 to kraken
0s          Normal   Pulled             pod/web-demo-3         Container image "nginx:1.19.2-alpine" already present on machine
0s          Normal   Created            pod/web-demo-3         Created container nginx
0s          Normal   Started            pod/web-demo-3         Started container nginx
0s          Normal   SuccessfulCreate   statefulset/web-demo   create Pod web-demo-4 in StatefulSet web-demo successful
0s          Normal   Scheduled          pod/web-demo-4         Successfully assigned demo/web-demo-4 to kraken
0s          Normal   Pulled             pod/web-demo-4         Container image "nginx:1.19.2-alpine" already present on machine
0s          Normal   Created            pod/web-demo-4         Created container nginx
0s          Normal   Started            pod/web-demo-4         Started container nginx

     

Now let's try scaling down:

$ kubectl scale sts web-demo -n demo --replicas=2
statefulset.apps/web-demo scaled

     

Observing the other two terminal windows, the output is as follows:

web-demo-4   1/1     Terminating   0          11m
web-demo-4   0/1     Terminating   0          11m
web-demo-4   0/1     Terminating   0          11m
web-demo-4   0/1     Terminating   0          11m
web-demo-3   1/1     Terminating   0          12m
web-demo-3   0/1     Terminating   0          12m
web-demo-3   0/1     Terminating   0          12m
web-demo-3   0/1     Terminating   0          12m
web-demo-2   1/1     Terminating   0          12m
web-demo-2   0/1     Terminating   0          12m
web-demo-2   0/1     Terminating   0          12m
web-demo-2   0/1     Terminating   0          12m
0s          Normal   SuccessfulDelete   statefulset/web-demo   delete Pod web-demo-4 in StatefulSet web-demo successful
0s          Normal   Killing            pod/web-demo-4         Stopping container nginx
0s          Normal   Killing            pod/web-demo-3         Stopping container nginx
0s          Normal   SuccessfulDelete   statefulset/web-demo   delete Pod web-demo-3 in StatefulSet web-demo successful
0s          Normal   SuccessfulDelete   statefulset/web-demo   delete Pod web-demo-2 in StatefulSet web-demo successful
0s          Normal   Killing            pod/web-demo-2         Stopping container nginx

     

You can see that when scaling down, the Pods associated with the StatefulSet are deleted in the order 4, 3, 2.

So, for a StatefulSet with N replicas, Pods are created in ordinal order {0 … N-1} when deploying and deleted one by one in reverse order when scaling down or deleting. This is the first characteristic I want to highlight.
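Incidentally, this ordering behavior is governed by the spec.podManagementPolicy field, which defaults to OrderedReady. If strict ordering is not needed, it can be set to Parallel, in which case the controller creates and deletes all Pods at once. Below is a minimal sketch, identical to web-sts.yaml above except for the added podManagementPolicy line (this field cannot be changed on an existing StatefulSet, so it has to be set when the StatefulSet is first created):

$ cat web-parallel-sts.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-demo
  namespace: demo
spec:
  serviceName: "nginx-demo"
  podManagementPolicy: Parallel   # default: OrderedReady
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.2-alpine
        ports:
        - containerPort: 80
          name: web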

Next, let's see that the Pods created by a StatefulSet have fixed, predictable hostnames, for example:

$ for i in 0 1; do kubectl exec web-demo-$i -n demo -- sh -c 'hostname'; done
web-demo-0
web-demo-1

     

Now look back at the StatefulSet API object definition above. Have you noticed that it is very similar to the Deployment definition from the last lesson? The main difference is the spec.serviceName field. This field is very important: based on it, the StatefulSet creates a DNS domain name for each Pod, in the format $(pod name).$(headless service name). Let's look at this through an example.
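Concretely, within the cluster these per-Pod names expand to fully qualified domain names of the form $(pod name).$(service name).$(namespace).svc.$(cluster domain). Assuming the default cluster domain cluster.local, the two Pods in this example get:

web-demo-0.nginx-demo.demo.svc.cluster.local
web-demo-1.nginx-demo.demo.svc.cluster.local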

The current mapping between Pods and IPs is as follows:

$ kubectl get pod -n demo -l app=nginx -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
web-demo-0   1/1     Running   0          3h17m   10.244.0.39   kraken   <none>           <none>
web-demo-1   1/1     Running   0          3h17m   10.244.0.40   kraken   <none>           <none>

     

Pod web-demo-0 has the IP address 10.244.0.39, and web-demo-1 has the IP address 10.244.0.40. Next, we use kubectl run to create a Pod named dns-test in the same demo namespace and attach to its container, similar to docker run -it --rm.
Inside the container we run nslookup to query their internal DNS records, as shown below:

$ kubectl run -it --rm --image busybox:1.28 dns-test -n demo
If you don't see a command prompt, try pressing enter.
/ # nslookup web-demo-0.nginx-demo
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      web-demo-0.nginx-demo
Address 1: 10.244.0.39 web-demo-0.nginx-demo.demo.svc.cluster.local
/ # nslookup web-demo-1.nginx-demo
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      web-demo-1.nginx-demo
Address 1: 10.244.0.40 web-demo-1.nginx-demo.demo.svc.cluster.local

     

You can see that each Pod has a corresponding A record.
Now let's delete these Pods and see what changes:

$ kubectl delete pod -l app=nginx -n demo
pod "web-demo-0" deleted
pod "web-demo-1" deleted
$ kubectl get pod -l app=nginx -n demo -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
web-demo-0   1/1     Running   0          15s   10.244.0.50   kraken   <none>           <none>
web-demo-1   1/1     Running   0          13s   10.244.0.51   kraken   <none>           <none>

     

After the deletion, you can see that the StatefulSet has created new Pods, but the Pod names stay the same; only the IPs have changed.

Let's check the DNS records again:

$ kubectl run -it --rm --image busybox:1.28 dns-test -n demo
If you don't see a command prompt, try pressing enter.
/ # nslookup web-demo-0.nginx-demo
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      web-demo-0.nginx-demo
Address 1: 10.244.0.50 web-demo-0.nginx-demo.demo.svc.cluster.local
/ # nslookup web-demo-1.nginx-demo
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      web-demo-1.nginx-demo
Address 1: 10.244.0.51 web-demo-1.nginx-demo.demo.svc.cluster.local

     

As you can see, the Pod domain names in the DNS records have not changed; only the IP addresses have. So when a Pod is rescheduled to another node because its original node fails, or is deleted and rebuilt after a failure, its IP changes but its domain name does not. This means services can communicate through the stable Pod domain names instead of relying on Pod IPs.

With the spec.serviceName field, a StatefulSet guarantees that its Pods have a stable network identity: the Pod ordinal, the hostname, the DNS record name, and so on.
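As a rough illustration of why this matters, an application could be configured against these stable per-replica DNS names rather than Pod IPs. The property names below are purely hypothetical; only the DNS names come from this demo:

# hypothetical application configuration
primary.endpoint=web-demo-0.nginx-demo.demo.svc.cluster.local:80
replica.endpoint=web-demo-1.nginx-demo.demo.svc.cluster.local:80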

The last point I want to mention is that for stateful services, each replica uses persistent storage, and the data each replica holds is different.

A StatefulSet uses PersistentVolumeClaims (PVCs) to guarantee a one-to-one binding between each Pod and its storage volume. In addition, when a Pod belonging to the StatefulSet is deleted, its associated PVC is not deleted.

We will cover this in detail in a later chapter on storage, so we will not go into the specifics here.
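That said, as a quick preview, per-Pod storage is normally declared through spec.volumeClaimTemplates: the controller creates one PVC per Pod (named like www-web-demo-0, www-web-demo-1) and keeps the binding stable across Pod restarts. Here is a minimal sketch built on the example above; the StorageClass name standard is an assumption, replace it with one that exists in your cluster:

$ cat web-sts-pvc.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-demo
  namespace: demo
spec:
  serviceName: "nginx-demo"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.2-alpine
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"   # assumption: adjust to your cluster
      resources:
        requests:
          storage: 1Gi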

How to update a StatefulSet

So, what should you do if you want to upgrade a StatefulSet?

StatefulSet supports two update strategies: RollingUpdate and OnDelete.

RollingUpdate is the default update strategy. It performs a rolling upgrade of the Pods, similar in spirit to the RollingUpdate strategy we introduced for Deployments in the last lesson. For example, if we update the image, the controller deletes and recreates the Pods one at a time in reverse ordinal order, waiting for each new Pod to become Running and Ready before moving on to the previous one. You can watch this happen with kubectl get pod -n demo -w -l app=nginx.
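For example, one way to trigger such a rolling update on this demo is kubectl set image (the target tag 1.20.0-alpine is only an illustration; any other nginx tag behaves the same way):

$ kubectl set image statefulset/web-demo nginx=nginx:1.20.0-alpine -n demo
statefulset.apps/web-demo image updated
$ kubectl get pod -n demo -w -l app=nginx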

The RollingUpdate strategy also supports a partition parameter for updating only part of a StatefulSet: only Pods whose ordinal is greater than or equal to partition are updated. You can experiment with this by editing the StatefulSet configuration yourself.
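For instance, here is a sketch of setting partition with kubectl patch; with 5 replicas and partition set to 3, only web-demo-3 and web-demo-4 would be updated, while web-demo-0 through web-demo-2 keep the old template:

$ kubectl patch sts web-demo -n demo \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
statefulset.apps/web-demo patched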

When the update strategy is set to OnDelete, you must delete Pods manually to trigger the creation of updated Pods.
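Here is a minimal sketch of switching to OnDelete and then rolling a single replica by hand; the StatefulSet controller recreates web-demo-1 from the current Pod template:

$ kubectl patch sts web-demo -n demo -p '{"spec":{"updateStrategy":{"type":"OnDelete"}}}'
statefulset.apps/web-demo patched
$ kubectl delete pod web-demo-1 -n demo
pod "web-demo-1" deleted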

Summary

Finally, let's summarize the characteristics of StatefulSet:

  • A stable network identity, such as hostname and domain name;
  • Persistent storage, with a one-to-one binding between each instance and its volume;
  • Ordered deployment and scale-out;
  • Ordered termination and deletion;
  • Rolling upgrades that also follow a fixed order.

With these StatefulSet capabilities, we can deploy stateful services such as MySQL, ZooKeeper, and MongoDB. You can follow this course to build a ZooKeeper cluster on Kubernetes.



Copyright notice: this article was written by wzlinux. Please include a link to the original when reposting: https://yzsam.com/2022/02/202202131225482003.html