Kubernetes package management tool: Helm
What is Helm?
We all know that every Linux distribution has its own package management tool, such as YUM on CentOS and APT on Ubuntu.
Kubernetes also has its own cluster package management tool: Helm.
In essence, Helm makes K8S application objects (Deployment, Service, etc.) configurable and dynamically generated: it renders the K8S resource manifest files (deployment.yaml, service.yaml) from templates and then has them applied to the cluster automatically to complete the K8S deployment.
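For example, once Helm is installed (see the deployment steps below), you can preview the manifests a chart would generate without deploying anything. A minimal sketch for Helm v2; the chart name stable/redis is only an illustration:

helm install stable/redis --dry-run --debug    # renders the chart's deployment/service templates and prints them; nothing is created in the cluster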
Helm has two important concepts: chart and release.
- A chart is a collection of information for creating an application, including Kubernetes object configuration templates, parameter definitions, dependencies and documentation. A chart is the self-contained logical unit of an application deployment; you can think of a chart as a software package in apt or yum.
- A release is a running instance of a chart and represents a running application. Installing a chart into a Kubernetes cluster creates a release; the same chart can be installed into the same cluster multiple times, and every installation is a separate release (see the short illustration after this list).
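A short illustration of the chart/release relationship (the release names are generated by Helm at install time and will differ on your system):

helm install stable/redis    # first install  -> creates one release (e.g. "quiet-otter")
helm install stable/redis    # second install -> creates another release (e.g. "brave-lion")
helm list                    # lists both releases, each backed by the same chart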
Helm (v2) has two components: the Helm client and the Tiller server.
- The Helm client is responsible for creating and managing charts and releases, and for interacting with Tiller.
- Tiller runs inside the Kubernetes cluster; it handles requests from the Helm client and interacts with the Kubernetes API Server.
Deploying Helm
More and more companies are using Helm as their Kubernetes package management tool. Helm is also very simple to install: just download the helm command-line tool to the Master node. The following example installs Helm v2.16.10; packages can be downloaded from https://github.com/helm/helm/releases
[root@Centos8 heml]# wget https://get.helm.sh/helm-v2.16.10-linux-amd64.tar.gz
[root@Centos8 heml]# tar zxvf helm-v2.16.10-linux-amd64.tar.gz -C /usr/local/
[root@Centos8 heml]# cd /usr/local/linux-amd64/
[root@Centos8 linux-amd64]# ln -s `pwd`/helm /usr/local/bin/
That completes the installation of the helm command. Official documentation: https://helm.sh/docs/intro/install/#helm
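If you like, confirm that the client binary works before going further (only the client exists at this point; output abbreviated):

[root@Centos8 linux-amd64]# helm version --client    # should report Client v2.16.10; the server (Tiller) is not installed yet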
To install Tiller, this machine also needs the kubectl tool and a kubeconfig file configured, so that kubectl can reach the apiserver and work normally from this host.
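A quick way to confirm that kubectl can actually reach the apiserver (an extra sanity check, not part of the original steps):

kubectl cluster-info    # should print the apiserver address
kubectl get nodes       # should list the cluster nodes without errors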
Because RBAC access control is enabled on the Kubernetes APIServer, you need to create a service account named tiller for Tiller to use and bind an appropriate role to it. For simplicity, we grant it the cluster's built-in ClusterRole cluster-admin. Create the rbac-config.yaml file:
vim rbac-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
[root@Centos8 rbac]# kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
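Optionally, verify that the objects were created as expected (not required, just a sanity check):

kubectl get serviceaccount tiller -n kube-system
kubectl get clusterrolebinding tiller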
Initialize Helm in the K8S cluster:
[root@Centos8 rbac]# helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/
[root@Centos8 rbac]# kubectl get pod -n kube-system
NAME                             READY   STATUS              RESTARTS   AGE
tiller-deploy-8487d94bcf-nfc74   0/1     ContainerCreating   0          98s

[root@Centos8 ~]# kubectl describe pod tiller-deploy-8487d94bcf-nfc74 -n kube-system
Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.16.10"
You will find that the tiller Pod does not reach Running because pulling the image failed: gcr.io cannot be reached for network reasons. So I searched Docker Hub for this image, found that an identical image does exist there, pulled it and retagged it.
[root@Centos8 ~]# docker pull jessestuart/tiller:v2.16.10
Status: Downloaded newer image for jessestuart/tiller:v2.16.10
docker.io/jessestuart/tiller:v2.16.10
[root@Centos8 ~]# docker tag jessestuart/tiller:v2.16.10 gcr.io/kubernetes-helm/tiller:v2.16.10
Then distribute the image to every node:
[root@Centos8 ~]# docker save gcr.io/kubernetes-helm/tiller -o /usr/local/install-k8s/heml/tiller.tgz
[root@Centos8 ~]# scp /usr/local/install-k8s/heml/tiller.tgz 192.168.152.253:/usr/local/install-k8s/
Once the node has received it, just load it as an image:
[root@TestCentos7 install-k8s]# docker load < tiller.tgz
Loaded image: gcr.io/kubernetes-helm/tiller:v2.16.10
Check the tiller Pod's state again; it has changed to Running:
[root@Centos8 ~]# kubectl get pod -n kube-system
tiller-deploy-8487d94bcf-nfc74   1/1   Running   0   1h
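With Tiller up, the helm client should now be able to reach the server. A quick sanity check (output abbreviated):

[root@Centos8 ~]# helm version    # should report both Client and Server as v2.16.10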
Using Helm
Helm is used much like yum and apt. You can first look up the tool or application you want to install on Helm Hub (https://hub.helm.sh/); each chart's page gives the specific installation method and steps.
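Once a repo has been added (next step), you can also search from the command line instead of the web page; a small example, where the keyword is only an illustration:

helm search redis    # lists charts whose name or description matches "redis" in the configured repos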
Take installing redis as an example: https://hub.helm.sh/charts/choerodon/redis
1. Add the repo source that provides redis
helm repo add choerodon https://openchart.choerodon.com.cn/choerodon/c7n
"choerodon" has been added to your repositories
2. Update the helm repos
[root@Centos8 ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "choerodon" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
3. Install the chart
[root@Centos8 ~]# helm install choerodon/redis --version 0.2.5
NAME:   exhaling-yak
LAST DEPLOYED: Sun Sep  6 22:57:51 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME             DATA  AGE
exhaling-yak-cm  1     0s

==> v1/Deployment
NAME          READY  UP-TO-DATE  AVAILABLE  AGE
exhaling-yak  0/1    0           0          0s

==> v1/Pod(related)
4. You can see that a ConfigMap, a Deployment and a Pod were created in the default namespace
[root@Centos8 ~]# kubectl get pod
NAME                           READY   STATUS             RESTARTS   AGE
exhaling-yak-cdc8cf8f9-xqtk9   0/1     ImagePullBackOff   0          40s

[root@Centos8 ~]# kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
exhaling-yak   0/1     1            0           85s

[root@Centos8 ~]# kubectl get cm
NAME              DATA   AGE
exhaling-yak-cm   1      109s
The Pod is in ImagePullBackOff because the redis image could not be pulled; just pull it manually, the same way as the tiller image earlier. A sketch of the fix follows.
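The exact image name must be taken from the Pod's events; <redis-image> below is a placeholder, not a real value:

kubectl describe pod exhaling-yak-cdc8cf8f9-xqtk9 | grep -i image    # find which image failed to pull
docker pull <redis-image>                                            # pull it manually on the node (via a reachable mirror, then docker tag if needed)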
5. Other common Helm commands can be explored with helm --help; a few examples follow.
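A few commands that come up often (all standard Helm v2 subcommands; the release and chart names are the ones used above):

helm ls                          # list releases
helm status exhaling-yak         # show the current status of a release
helm inspect choerodon/redis     # show a chart's metadata and default values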
Custom Helm templates
Everything above used charts that others had already written and published; you can also write and publish your own. In this test we create a hello-world chart.
1. Create the directory that will hold all of the chart's files
mkdir charts
cd charts/
mkdir templates    # A directory named templates must be created
2. Edit Chart.yaml
vim Chart.yaml    # A file named Chart.yaml must be created, specifying the two keys name and version
name: hello-world
version: 1.0.0
3. Create the deployment and the service under the templates directory
vim templates/deployments.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: nginx:1.2.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
vim templates/services.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001
  selector:
    app: hello-world
At this point, the overall directory structure is:
[root@Centos8 charts]# tree
.
├── Chart.yaml
└── templates
├── deployments.yaml
└── services.yaml
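Before installing, you can optionally lint the chart to catch obvious mistakes in Chart.yaml or the templates (an extra step, not part of the original walkthrough):

[root@Centos8 charts]# helm lint .    # reports warnings and errors found in the chart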
4. Install this custom chart
[root@Centos8 charts]# helm install .
NAME:   wishing-badger
LAST DEPLOYED: Mon Sep  7 20:55:42 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                          READY  STATUS             RESTARTS  AGE
hello-world-767c98894d-7lrzt  0/1    ContainerCreating  0         1s

==> v1/Service
NAME         TYPE      CLUSTER-IP      EXTERNAL-IP  PORT(S)       AGE
hello-world  NodePort  10.100.108.217  <none>       80:30001/TCP  0s

==> v1beta1/Deployment
NAME         READY  UP-TO-DATE  AVAILABLE  AGE
hello-world  0/1    1           0          0s
Check the Pod, the Deployment and the Service:
[root@Centos8 charts]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
hello-world-767c98894d-7lrzt   1/1     Running   0          67s

[root@Centos8 charts]# kubectl get deployment
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
hello-world   1/1     1            1           78s

[root@Centos8 charts]# kubectl get svc
NAME          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
hello-world   NodePort   10.100.108.217   <none>        80:30001/TCP   81s
Common Helm commands and usage
1. Update the image
Method 1: update manually
Edit deployments.yaml, change the image line, then run helm upgrade.
vim templates/deployments.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hub.vfancloud.com/test/myapp
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
[root@Centos8 charts]# helm upgrade wishing-badger .
Release "wishing-badger" has been upgraded.
LAST DEPLOYED: Mon Sep  7 21:07:04 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                          READY  STATUS             RESTARTS  AGE
hello-world-7466c45989-cxnps  0/1    Terminating        0         69s
hello-world-864f865db8-zjt79  0/1    ContainerCreating  0         0s

==> v1/Service
NAME         TYPE      CLUSTER-IP      EXTERNAL-IP  PORT(S)       AGE
hello-world  NodePort  10.100.108.217  <none>       80:30001/TCP  11m

==> v1beta1/Deployment
NAME         READY  UP-TO-DATE  AVAILABLE  AGE
hello-world  0/1    1           0          11m
Check index.html; the version is v1:
[root@Centos8 charts]# curl http://10.100.108.217
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Method 2: update via variables
Create a values file, values.yaml, to hold the image repository and tag:
vim values.yaml
image:
  repository: hub.vfancloud.com/test/myapp
  tag: 'v2'
vim templates/deployments.yaml    # change the image field to reference the variables from the file above
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
Run the upgrade:
[root@Centos8 charts]# helm upgrade wishing-badger .
Release "wishing-badger" has been upgraded.
LAST DEPLOYED: Mon Sep  7 21:17:31 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                          READY  STATUS             RESTARTS  AGE
hello-world-5759c969fc-w9s88  0/1    ContainerCreating  0         0s
hello-world-864f865db8-zjt79  1/1    Terminating        0         10m

==> v1/Service
NAME         TYPE      CLUSTER-IP      EXTERNAL-IP  PORT(S)       AGE
hello-world  NodePort  10.100.108.217  <none>       80:30001/TCP  21m

==> v1beta1/Deployment
NAME         READY  UP-TO-DATE  AVAILABLE  AGE
hello-world  1/1    1           1          21m
Check index.html; the version is now v2:
[root@Centos8 charts]# curl http://10.100.108.217
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Alternatively, change the image tag directly on the command line with --set to update the image version:
[root@Centos8 charts]# helm upgrade wishing-badger --set image.tag='v3' .
Release "wishing-badger" has been upgraded.
LAST DEPLOYED: Mon Sep  7 21:27:04 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                          READY  STATUS             RESTARTS  AGE
hello-world-5759c969fc-w9s88  1/1    Terminating        0         9m33s
hello-world-6454b8dcc8-pjgk9  0/1    ContainerCreating  0         0s

==> v1/Service
NAME         TYPE      CLUSTER-IP      EXTERNAL-IP  PORT(S)       AGE
hello-world  NodePort  10.100.108.217  <none>       80:30001/TCP  31m

==> v1beta1/Deployment
NAME         READY  UP-TO-DATE  AVAILABLE  AGE
hello-world  0/1    1           0          31m
Check index.html; it has been updated to v3:
[root@Centos8 charts]# curl http://10.100.108.217
Hello MyApp | Version: v3 | <a href="hostname.html">Pod Name</a>
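To confirm which values the release is currently using (including the --set override), a quick check:

[root@Centos8 charts]# helm get values wishing-badger    # should show the image.tag override, e.g. tag: v3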
2. View a release's revision history
[root@Centos8 charts]# helm history wishing-badger
REVISION   UPDATED                    STATUS       CHART               APP VERSION   DESCRIPTION
1          Mon Sep  7 20:55:42 2020   SUPERSEDED   hello-world-1.0.0                 Install complete
2          Mon Sep  7 21:07:04 2020   DEPLOYED     hello-world-1.0.0                 Upgrade complete
3. Delete a release
[root@Centos8 charts]# helm delete wishing-badger
release "wishing-badger" deleted
The command above reports that the release was deleted, but it is not completely removed; it is moved to a kind of "recycle bin", so that you can still roll it back one day. To list the contents of the "recycle bin":
[root@Centos8 charts]# helm list --deleted
NAME             REVISION   UPDATED                    STATUS    CHART               APP VERSION   NAMESPACE
wishing-badger   5          Mon Sep  7 21:27:04 2020   DELETED   hello-world-1.0.0                 default
If you want to remove it completely, add --purge when running helm delete.
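For example (shown only as a sketch; in this walkthrough the release is kept so it can still be rolled back in the next step):

helm delete wishing-badger --purge    # removes the release and its history permanently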
4. Roll back a release
helm rollback [RELEASE] [REVISION]
[root@Centos8 charts]# helm rollback wishing-badger 2
Rollback was a success.
This rolls wishing-badger back to revision 2.
Check index.html; we are back at revision 2, i.e. version v1:
[root@Centos8 charts]# curl http://10.109.145.22
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>