This article is based on my blog post 《002. Use kubeadm install kubernetes 1.17.0》 (https://www.cnblogs.com/zyxnh...) and on 《Kubernetes Authoritative Guide》, recording and summarizing my notes after the deployment.
The series is split into three parts. Part 1 records the preparation of the infrastructure environment, the deployment of the Master, and the deployment of the Flannel network plugin. Part 2 (this one) records the node deployment process and the deployment of some necessary containers. Part 3 covers monitoring and some DevOps-related topics.
Three. Deploying the node
For a real cluster it is of course better to use ansible or another configuration-management tool to deploy the nodes uniformly. Here I only document a single node as an example.
1. Join the cluster: the kubeadm join command
[root@k8s-master opt]# kubeadm join 192.168.56.99:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:ef543a59aac99e4f86e6661a7eb04d05b8a03269c7784378418ff3ff2a2d2d3c
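If the bootstrap token from kubeadm init has already expired (by default a token is only valid for 24 hours), a fresh join command can be generated on the master:
# creates a new bootstrap token and prints the complete "kubeadm join ..." command, CA cert hash included
[root@k8s-master opt]# kubeadm token create --print-join-command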
2. Check the result:
If the join succeeds, the output on screen will say so; if it fails, track down the cause.
Mine failed. After checking from two terminals I found it was the same old problem: the image pull was stuck. I had assumed that pulling the images on the master was enough, but in fact every node needs the same images. Then it clicked: an image is represented by file objects on the local filesystem, and the fact that those objects exist on one machine says nothing about any other machine; besides, neither K8S nor Docker has any image-synchronization mechanism.
[root@k8s-node01 opt]# docker pull kubeimage/kube-proxy-amd64:v1.19.3
[root@k8s-node01 opt]# docker pull kubeimage/pause-amd64:3.2
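kubeadm and the kubelet look these images up by their upstream names, so images pulled from a mirror usually need to be retagged. A minimal sketch, assuming the default k8s.gcr.io names for v1.19.3:
# retag the mirror images to the names the kubelet actually expects
[root@k8s-node01 opt]# docker tag kubeimage/kube-proxy-amd64:v1.19.3 k8s.gcr.io/kube-proxy:v1.19.3
[root@k8s-node01 opt]# docker tag kubeimage/pause-amd64:3.2 k8s.gcr.io/pause:3.2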
This suggests that the container images mentioned above should be baked into the operating-system image. Private clouds such as OpenStack or VMware vSphere and public clouds such as Alibaba Cloud all support custom images, which speeds things up considerably. And it is not just about having the images locally: common configuration no longer has to be repeated on every machine either, which saves a lot of effort.
3. Problems encountered while deploying the node.
(1) kubelet would not start. Looking at the systemd service status showed it stuck in the activating state rather than running. Checking /var/log/messages revealed that kubelet.service checks for a swap partition at startup and refuses to start if one exists. Swap has to be turned off manually.
The log makes it perfectly clear:
Nov 6 09:04:24 k8s-node01 kubelet: F1106 09:04:24.680751 2635 server.go:265] failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename#011#011#011#011Type#011#011Size#011Used#011Priority /dev/dm-1 partition#011839676#0110#011-2]
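The fix is short (the sed expression assumes a conventional /etc/fstab layout, so check yours first):
# turn swap off immediately, then comment it out of fstab so it stays off after a reboot
[root@k8s-node01 ~]# swapoff -a
[root@k8s-node01 ~]# sed -ri 's/^([^#].*\sswap\s)/#\1/' /etc/fstab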
4. Node deployment complete
[root@k8s-master opt]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.56.99 Ready master 4d22h v1.19.3
k8s-node01 NotReady <none> 4d22h v1.19.3
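Note that k8s-node01 still shows NotReady above. That normally clears once the CNI plugin (Flannel here) and its images are up on the node; to see exactly what a node is waiting for:
# the Conditions and Events sections explain why a node is NotReady
[root@k8s-master opt]# kubectl describe node k8s-node01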
Four. Deploying the necessary containers
1. kubernetes-dashboard is the web management interface for the whole cluster; it is essential for managing the cluster later on.
[root@k8s-master opt]# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
Apply the stock configuration first; if there is any problem, revise it afterwards.
[root@k8s-master opt]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
If something goes wrong, the output will tell you which resource failed to be created, so you can fix it accordingly.
[root@k8s-master opt]# kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-76585494d8-5jtvw 1/1 Running 0 86s
kubernetes-dashboard-b7ffbc8cb-4xcdp 1/1 Running 0 86s
2. Exposing the port:
Now there is a problem: how do I expose the dashboard so that my own computer can reach it? The dashboard only has a virtual ClusterIP (not directly reachable, and it cannot even be pinged) plus a pod IP, which is also not directly reachable. The answer turns out to be adding a little configuration to recommended.yaml: switch the Service from ClusterIP to NodePort, so it is bound on the node's IP and can be reached via NodeIP:NodePort.
[root@k8s-master opt]# vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # add this line: expose the Service as a NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32443   # add this line: pin the NodePort to a fixed value
  selector:
    k8s-app: kubernetes-dashboard
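As an alternative to editing the manifest (and to the full teardown below), the live Service can be patched in place. A sketch, assuming the stock object name and namespace:
# strategic-merge-patch the Service to type NodePort with a fixed port of 32443
[root@k8s-master opt]# kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":443,"nodePort":32443}]}}'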
Tear down the whole original set of dashboard resources and rebuild them:
[root@k8s-master opt]# kubectl delete -f recommended.yaml
namespace "kubernetes-dashboard" deleted
serviceaccount "kubernetes-dashboard" deleted
service "kubernetes-dashboard" deleted
secret "kubernetes-dashboard-certs" deleted
secret "kubernetes-dashboard-csrf" deleted
secret "kubernetes-dashboard-key-holder" deleted
configmap "kubernetes-dashboard-settings" deleted
role.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
clusterrole.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
deployment.apps "kubernetes-dashboard" deleted
service "dashboard-metrics-scraper" deleted
deployment.apps "dashboard-metrics-scraper" deleted
[root@k8s-master opt]# kubectl apply -f recommended.yaml
Now you can access it at https://192.168.56.99:32443 (the browser will complain about the dashboard's self-signed certificate; that is expected).
3. The dashboard token and permissions
When you open the page you are asked whether to authenticate with a token or a kubeconfig. Choosing kubeconfig assumes you have a local copy of the kubeadm admin kubeconfig (e.g. /etc/kubernetes/admin.conf) and upload it through the browser for verification; that route is very simple.
The token can be obtained as follows; paste the printed result into the browser:
[root@k8s-master opt]# kubectl describe secret -n kubernetes-dashboard $(kubectl get secret -n kubernetes-dashboard |grep kubernetes-dashboard-token | awk '{print $1}') |grep token | awk '{print $2}'
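An equivalent way to extract only the token, assuming the ServiceAccount still lists its secret inline (which holds on a 1.19-era cluster):
# read the token straight out of the ServiceAccount's secret and base64-decode it
[root@k8s-master opt]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa kubernetes-dashboard -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d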
After logging in you will find you cannot see much; that is because the dashboard ServiceAccount's default permissions are quite limited.
[root@k8s-master opt]# vim recommended.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin    # replaced kubernetes-dashboard with cluster-admin for full access
  #name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
Tear it all down again and redeploy the dashboard; naturally, the login token will have changed. (Binding cluster-admin is convenient for a lab cluster, but far too broad for production.)
4. Problems encountered while deploying kubernetes-dashboard, and a few points worth mentioning.
(1) After the apply I found the kubernetes-dashboard container crashing over and over, stuck in the CrashLoopBackOff state.
I investigated from every angle: kubectl describe pod kubernetes-dashboard-b7ffbc8cb-4xcdp -n kubernetes-dashboard to view the pod's condition, docker logs $CONTAINERID on the node, and the kubelet messages in the system log. It finally turned out to be a network problem: the container could not reach the kubernetes Service's ClusterIP, 10.96.0.1 (the in-cluster API server address).
Nov 3 23:48:54 k8s-node01 journal: panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: connect: no route to host
Checking the routing table showed the route did exist, yet the error kept pointing at routing. It turned out to be firewalld.service, of all things. After stopping it, the container was created successfully.
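For reference, the workaround (on CentOS 7, firewalld's default reject rules often surface as exactly this "no route to host" error between pods and ClusterIPs):
# stop firewalld now and keep it from coming back at boot; repeat on every node
[root@k8s-node01 ~]# systemctl stop firewalld
[root@k8s-node01 ~]# systemctl disable firewalld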
(2) The dashboard authors now create a dedicated namespace, kubernetes-dashboard, instead of the old versions' habit of putting everything in kube-system.
(3) Images from quay.io are usable here, and pulls from it are very fast.