
Introduction to Kubernetes

2022-07-03 21:37:00 MssGuo

Preface

Environment: CentOS 7.9, docker-ce 20.10.9, Kubernetes v1.22.6

As beginners, we should first understand why Kubernetes is needed, and how software development and deployment have evolved from the past to the present. This article explains these topics in detail.

Monolithic applications

In the past, most applications were large monoliths running as a single process, or as a few processes spread across several servers. These applications had long release cycles and were iterated on infrequently. At the end of each release cycle, developers packaged the application and handed it over to the operations team, which took care of deployment and monitoring and, in case of hardware failure, manually migrated the application to healthy servers.

Characteristics of monolithic applications

1、 A monolith consists of many tightly coupled components that run in the same operating system process and must be developed, deployed, and managed as a single entity;
2、 Even a small change to a single component requires redeploying the entire application;
3、 The lack of strict boundaries between components leads to interdependencies that accumulate over time, increasing system complexity and degrading quality.

Scaling a monolithic application

Vertical scaling: adding CPUs, memory, or other server resources;
Horizontal scaling: adding more servers to run the whole monolithic application;

Drawback: the cost is very high.

Microservices

A large monolithic application is decomposed into small, independently runnable components called microservices. Each microservice runs as a separate process and communicates with the other microservices through simple, well-defined APIs. Because microservices are decoupled from one another, each can be developed, deployed, upgraded, and scaled independently.

Disadvantages of microservices

1、 As the number of components grows, it becomes hard to decide where to deploy each one, because both the number of possible deployment combinations and the number of inter-component dependency combinations keep increasing;
2、 Operations engineers must configure all the microservices correctly for them to work as a single system; as the number of microservices grows, this configuration becomes tedious and error-prone;
3、 Because microservices span multiple processes and machines, debugging the code and tracing failing calls becomes difficult;
4、 Because microservice components are developed and deployed independently, their dependencies diverge: the application may end up needing different versions of the same library, and this problem is unavoidable. Above all, differences between the environments the programs run in are the biggest problem development and operations teams have to solve.

Provide a consistent environment for applications

The best way to resolve these differences in runtime environments is to make the application run in exactly the same environment during development and in production: the same operating system, libraries, system configuration, networking environment, and every other resource.

DevOps

DevOps is a practice that involves a single team in the whole lifecycle of an application: development, deployment, and operations. This means the collaboration between developers, QA, and the operations team must run through the entire software process.

Container technology

Containers let you run multiple services on the same server; they not only provide a different environment for each service but also isolate the services from one another.
A process running in a container actually runs on the host's operating system, like every other process (unlike a virtual machine, where processes run on a separate guest operating system). Yet the process in the container is still isolated from the others; from its own point of view, it appears to be the only process running on the machine and its operating system.

Comparing virtual machines and containers

1、 Compared with virtual machines, containers are far more lightweight, which allows more components to run on the same hardware. This is mainly because each virtual machine must run its own set of system processes (since every VM has to have its own operating system installed), which consumes resources beyond those used by the component processes themselves. A container, in contrast, is just a single isolated process running on the host, consuming only the resources the application needs, with no extra process overhead.
2、 Because of the extra overhead of virtual machines, there are usually not enough resources to dedicate a VM to each application, so multiple applications end up crammed into each VM. With containers, you can (and should) give each application its own container; the end result is that you can run more applications on the same bare-metal machine.
3、 A virtual machine has its own operating system and provides a completely isolated environment; applications in a VM make system calls to the VM's own kernel. Containers, on the other hand, all make system calls to the one kernel of the host operating system; it is the only kernel executing x86 instructions on the host, and the CPU does not need to perform any virtualization.
4、 When low overhead matters, containers are the better choice. Remember that each VM runs its own set of system services while containers do not, since they all run on the same host operating system. This also means a container does not need to boot the way a VM does; its process can start almost instantly.

Isolation mechanism of container

You may wonder how containers can be isolated from each other if they all run on the same operating system. Two mechanisms make this possible. The first is Linux namespaces, which give each process its own view of the system (files, processes, network interfaces, hostname, and so on). The second is Linux control groups (cgroups), which limit the amount of resources a process can consume (CPU, memory, network bandwidth, and so on).

Isolating processes with Linux namespaces

By default, each Linux system initially has a single namespace, and all system resources (file systems, user IDs, network interfaces, and so on) belong to it. You can create additional namespaces and organize resources among them. When you run a process inside a given namespace, the process sees only the resources that belong to that namespace. Because multiple kinds of namespaces exist, a process does not belong to just one namespace, but to one namespace of each kind.

Type | Macro | Isolated resources
---- | ----- | ------------------
Mount | CLONE_NEWNS | file system mount points
Process ID | CLONE_NEWPID | process IDs
Network | CLONE_NEWNET | network devices, network stack, ports, etc.
IPC (Inter-Process Communication) | CLONE_NEWIPC | semaphores, message queues and shared memory, as well as POSIX message queues
UTS (UNIX Time-sharing System) | CLONE_NEWUTS | hostname and NIS domain name
User | CLONE_NEWUSER | users and user groups
Cgroup | CLONE_NEWCGROUP | cgroup root directory
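Each namespace type in the table corresponds to a flag of the `unshare` tool from util-linux, which makes for a quick, hands-on illustration. A minimal sketch, assuming a Linux machine whose kernel allows unprivileged user namespaces:

```shell
# Enter new user + UTS namespaces. --map-root-user maps our UID to root
# inside the new user namespace, which permits changing the (namespaced)
# hostname without real root privileges.
unshare --user --map-root-user --uts sh -c '
  hostname ns-demo   # changes the hostname only inside this UTS namespace
  hostname           # prints: ns-demo
'
hostname             # outside, the host still reports its original name
```

The other types work the same way: `unshare --mount`, `--pid`, `--net`, and so on each give the child process its own view of the corresponding resource.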

Limiting the resources available to a process

cgroups is a Linux kernel feature that limits the resource usage of a process or a group of processes. A process cannot use more than its allotted amount of resources (CPU, memory, network bandwidth, and so on).
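As a concrete sketch (assuming root access and a cgroup v2 hierarchy mounted at /sys/fs/cgroup, the default on modern distributions; the group name `demo` is arbitrary):

```shell
# Create a cgroup and place resource limits on it
mkdir /sys/fs/cgroup/demo
echo 100M  > /sys/fs/cgroup/demo/memory.max   # cap memory at 100 MiB
echo 20000 > /sys/fs/cgroup/demo/cpu.max      # 20ms of CPU per default 100ms period
echo $$    > /sys/fs/cgroup/demo/cgroup.procs # move this shell (and its children) in
```

Container runtimes do essentially this for every container, which is how a container's resource consumption is kept within the limits assigned to it.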

Docker's three core concepts

Image: a Docker image packages an application together with the environment it depends on;
Registry: a Docker registry stores Docker images and facilitates sharing those images between different people and computers;
Container: a Docker container is usually a Linux container created from a Docker image. A running container is a process on the Docker host, but it is isolated from the host and from all other processes running on it. The process is also resource-constrained: it can access and use only the resources allocated to it (CPU, memory, and so on).
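The three concepts map directly onto everyday docker commands. A sketch, assuming a working Docker installation with network access to Docker Hub:

```shell
docker pull busybox:latest                 # image: fetched from a registry (Docker Hub)
docker run --rm busybox:latest echo hello  # container: an isolated process created from the image
docker images                              # list the images stored locally
docker ps -a                               # list containers, running and exited
```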

Building, distributing, and running Docker images

A developer first builds an image and then pushes it to an image registry, where anyone with access can use it. The image can then be pulled onto any machine running Docker and run there. Docker creates an isolated container from the image and runs the executable binary specified in the image.
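The build, distribute, run cycle can be sketched as follows (the registry name `registry.example.com` is a placeholder; substitute your own registry):

```shell
# Build: package the app and its environment into an image
cat > Dockerfile <<'EOF'
FROM busybox:latest
CMD ["echo", "hello from the image"]
EOF
docker build -t registry.example.com/demo/hello:1.0 .

# Distribute: push the image to a registry so others can pull it
docker push registry.example.com/demo/hello:1.0

# Run: on any Docker host with access to the registry
docker run --rm registry.example.com/demo/hello:1.0
```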

Introducing Kubernetes

Kubernetes is a container orchestration system that lets you easily deploy and manage containerized applications on top of it. It relies on Linux containers to run heterogeneous applications without having to know their internals and without having to manually deploy them onto each machine. Because the applications run in containers, they do not affect other applications running on the same server.

Kubernetes lets you run software on thousands of computer nodes as if all of them were a single enormous node. It abstracts away the underlying infrastructure, which simplifies development and deployment for both the development and operations teams.

When you deploy an application through Kubernetes, it makes no difference how many nodes the cluster contains; additional cluster nodes simply represent additional resources available to deployed applications.
[Figure: a Kubernetes cluster with one master node and several worker nodes]
The figure above shows the simplest possible view of a Kubernetes system. The whole system consists of one master node and a number of worker nodes. Developers submit a list of applications to the master node, and Kubernetes deploys them to the cluster's worker nodes. Which node a component lands on does not matter to either the developer or the system administrator.
Developers can also specify that certain applications must run together, and Kubernetes will deploy them on the same worker node; the rest are distributed across the cluster. But regardless of where they are deployed, they can all communicate with each other in the same way.

Kubernetes Cluster architecture

At the hardware level, a Kubernetes cluster consists of many nodes, which are divided into the following two types:

Master node: hosts the Kubernetes control plane, which controls and manages the whole cluster system;
Worker nodes: as the name implies, worker nodes run the actually deployed applications.

The control plane (master node)

The control plane (running on the master node) controls the cluster and makes it function. It consists of multiple components, which can run on a single master node or be replicated across multiple master nodes to ensure high availability:

Kubernetes API server: the component that clients and the other control-plane components communicate with;
Scheduler: schedules applications by computing a suitable worker node for each deployable component;
Controller Manager: performs cluster-level functions, such as replicating components, keeping track of worker nodes, and handling node failures;
etcd: a distributed data store that persists the cluster configuration.

The worker nodes

A worker node is the machine that actually runs your containerized applications. The tasks of running, monitoring, and managing the application's services are handled by the following components:

Container runtime: Docker is the container platform most widely used with Kubernetes, but Kubernetes also supports other container types, such as rkt;
Kubelet: talks to the API server and manages the containers on its node;
Kubernetes Service Proxy (kube-proxy): load-balances network traffic between application components.
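With a running cluster and a configured `kubectl`, both kinds of nodes and the control-plane components can be inspected directly (on kubeadm-style installs, the control-plane components themselves run as pods in the kube-system namespace):

```shell
kubectl get nodes -o wide        # list master and worker nodes, with roles and versions
kubectl get pods -n kube-system  # api-server, scheduler, controller-manager, etcd, kube-proxy, ...
```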

Running an application in Kubernetes

The steps to run an application in Kubernetes:
1、 Package the application into one or more container images;
2、 Push those images to an image registry;
3、 Post a description of the application to the Kubernetes API server.
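In commands, the three steps look roughly like this (the image name and manifest file are placeholders):

```shell
docker build -t registry.example.com/demo/app:1.0 .  # 1. package the app into an image
docker push registry.example.com/demo/app:1.0        # 2. push the image to a registry
kubectl apply -f app.yaml                            # 3. post the description to the API server
```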

The description of the application includes, among other things:
1、 The container image or images that contain the application's components
2、 How those components relate to each other
3、 Which components need to run co-located on the same node
4、 Which components do not need to run together
5、 Which components provide a service to internal or external clients and should be exposed through a single IP address and made discoverable to the other components
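Such a description usually takes the form of a YAML manifest. A minimal sketch of one (all names and the image are placeholders):

```shell
cat > app.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                    # how many copies should run in parallel
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: registry.example.com/demo/app:1.0  # the application's container image
        ports:
        - containerPort: 8080
EOF
```

Submitting it with `kubectl apply -f app.yaml` posts the description to the API server. Co-location (points 3 and 4) is expressed by placing containers in the same pod template; exposure through a single IP (point 5) is handled by a separate Service object.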

How the description results in running containers

When the API server processes the application's description, the Scheduler assigns the specified groups of containers to the available worker nodes, based on the computational resources each group requires and the unallocated resources on each node at scheduling time. The Kubelet on those nodes then instructs the container runtime (Docker, for example) to pull the required images and run the containers.

Figure 1.10: A basic overview of the Kubernetes architecture and an application running on top of it

The figure above helps to explain how applications are deployed in Kubernetes. The application descriptor lists four containers, grouped into three sets (these sets are called pods). The first two pods each contain a single container, while the last one contains two; this means both of those containers need to run together and should not be isolated from each other. Next to each pod is a number representing how many replicas of that pod need to run in parallel. After the descriptor is submitted to Kubernetes, it schedules the specified number of replicas of each pod onto the available worker nodes. The Kubelets on those nodes then tell Docker to pull the container images from the registry and run the containers.

Keeping the containers running

Once the application is running, Kubernetes continuously makes sure that the deployed state of the application always matches the description you provided. It automatically restarts processes that crash or stop responding, and it can automatically migrate all containers that were running on a failed node to a new node.

Scaling the number of replicas

Kubernetes can add additional replicas or remove superfluous ones as directed, and it can also adjust the replica count automatically based on real-time metrics (such as CPU load, memory consumption, queries per second, or any other metric your application exposes).
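Both manual and automatic scaling are one-liners (the deployment name `demo-app` is a placeholder):

```shell
kubectl scale deployment demo-app --replicas=5   # set the replica count by hand
kubectl autoscale deployment demo-app --min=2 --max=10 --cpu-percent=80
# the second command lets Kubernetes adjust the count itself based on CPU load
```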

Hitting a moving target

You can tell Kubernetes which containers provide the same service, and Kubernetes will expose all of them at a single static IP address available to every application running in the cluster. This is done through environment variables, but clients can also look up the service IP through good old DNS. kube-proxy makes sure that connections to the service are load-balanced across all the containers that provide it. Because the service's IP address stays constant, clients can always connect to its containers, even as those containers are moved around the cluster.
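In Kubernetes, that stable address is provided by a Service object. A minimal sketch (all names are placeholders):

```shell
cat > service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo-app        # clients can also find the service under this DNS name
spec:
  selector:
    app: demo-app       # every pod carrying this label backs the service
  ports:
  - port: 80            # the service's stable port ...
    targetPort: 8080    # ... forwarded to this port on the containers
EOF
```

After `kubectl apply -f service.yaml`, kube-proxy load-balances connections to the service's IP across all pods matching the selector.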

The benefits of using Kubernetes

Simplified application deployment: developers can deploy applications themselves, without needing to know anything about the servers that make up the cluster;
Better hardware utilization: Kubernetes selects the most appropriate node to run each application, based on the application's resource requirements and the resources available on each node;
Health checking and self-healing: Kubernetes monitors application components and the nodes they run on, and automatically reschedules them to other nodes when a node fails;
Automatic scaling: Kubernetes can monitor the resources used by each application and continually adjust the number of running instances;
Simplified application development: a development environment that matches production helps bugs get discovered quickly, and developers do not need to implement features they would otherwise have to build themselves, such as service discovery, scaling, load balancing, self-healing, and even cluster leader election. Kubernetes can also automatically detect whether a new version of an application is faulty and, if so, immediately stop its rolling update.

Original site

Copyright notice
This article was written by [MssGuo]. When reposting, please include a link to the original. Thank you.
https://yzsam.com/2022/02/202202142307076202.html