
InfoQ Geek Media 15th Anniversary Call for Papers | Microservice Architecture Design Practice in the Cloud Native Era

2022-06-09 15:14:00 InfoQ

Preface

Microservice architecture has been popular for many years, with frameworks such as Dubbo, Spring Cloud, and later Spring Cloud Alibaba. These, however, are largely confined to the Java language. How to let services written in different languages communicate effectively and quickly is a problem many enterprises now face, because an enterprise is rarely built on a single language, and access between multilingual services is unavoidable. Meanwhile, the cloud native wave set off by container technology, with Kubernetes (K8s) at its core, is still sweeping the world. In the midst of this vigorous digital transformation, pioneers have begun to think about what the new technology stack can bring to industry and society, how to bring DevOps and other advanced development and management models into all walks of life, and how to let more enterprises enjoy cutting-edge innovations such as cloud native, AI, and IoT. The focus of this article is how to design a microservice architecture in a multilingual environment, and how to achieve high availability, automation, and more for microservices in the cloud native era.

Microservice architecture

History of microservices
Before microservices appeared, everything was a monolith. Monolithic applications expose many shortcomings, mainly:
  • High complexity
  • High cost of team collaboration
  • Poor extensibility
  • Inefficient deployment
  • Poor availability
High complexity: as the business keeps iterating, the amount of code in the project grows dramatically, the number of modules grows with it, and the whole project becomes very complex.
High collaboration cost: dozens of developers modify code, merge it into the same branch, and package and deploy it together. In the test phase, as soon as one small piece of functionality has a problem, everything has to be recompiled, repackaged, redeployed and retested, and every developer involved has to take part. This is inefficient and expensive.
Poor extensibility: when adding a new feature, at the code level you have to find a way to write it without affecting the existing business, which further increases the code's complexity.
Low deployment efficiency: as a monolith accumulates more code and depends on more resources, a single compile-package-deploy-test cycle takes longer and longer, so deployment efficiency keeps dropping.
Poor availability: because all business functions are ultimately deployed in the same artifact, a problem in the code or resources of any one function affects the whole deployed package. A striking example: in the 1980s and 1990s, many yellow-pages-style sites, and the websites that grew out of them, kept the display pages and the data-fetching back end in the same service module. The consequence was very bad: modifying even a small part of a page or an image meant packaging and deploying the entire service module, which wasted a great deal of time and increased cost. Worse still, it gave users a terrible experience; what users could not understand was why a tiny change to one display area made the whole website momentarily inaccessible. Of course, given how undeveloped the Internet was at the time, people may have regarded even that experience as a pleasure.
These drawbacks of the monolithic application led to the birth of a new term and a new concept: microservices.
In fact, starting from the monoliths of the early years, and especially from 2014 onwards, thanks to the maturity of containerization technology represented by Docker and the rise of DevOps culture, the idea of service-orientation evolved further into what we today call microservices. So, what is a microservice?
A microservice (English: microservice) is defined by Baidu Baike as a variant of the SOA architecture. Microservices (or the microservice architecture) are a way of building an application as a set of loosely coupled services.
Microservices have some distinctive features :
  • Single function
  • Service granularity is small
  • Strong independence between services
  • Weak dependence between services
  • Service independent maintenance
  • Service independent deployment
Microservices split what was originally a coupled, complex business into individual services, avoiding an endless accumulation of complexity. Each microservice focuses on a single function and expresses its service boundary through a well-defined interface.
Because each microservice runs in its own process, every microservice can be deployed independently. When the business iterates, only the related services need to be released, which reduces the testing workload and the risk of each release.
In a microservice architecture, when a component fails, the fault is isolated within a single service; combined with rate limiting, circuit breaking and similar measures, the damage caused by the error can be reduced and the core business kept running.
Microservices as they exist today are marked by: high cohesion and low coupling, a focus on the business, and autonomy with high availability.
Granularity of microservice segmentation
Services can be divided at the functional level or along vertical business lines, and the granularity can be set according to the current product requirements; the key is to achieve high cohesion and low coupling.
Take an e-commerce system as an example, as shown in the figure below:
E-commerce system architecture diagram
E-commerce probably involves the most business domains: goods, inventory, orders, promotions, payment, members, shopping cart, invoices, shops and so on; these are the modules divided by business. The granularity of microservice partitioning must be clear; you cannot add a new service module whenever a boundary is ambiguous, because that quickly makes the functional interfaces hard to reuse. A good architecture design is a highly reusable structural pattern. I like the saying that the boundary (granularity) of a microservice is a "decision", not a "standard answer". In other words, the way the microservices are divided is an architecture decision made after deep thought, weighing all the factors, the one that "fits best", not somebody's single "standard answer".

Containerization Technology

What is a container
What is a container? In nature, a container is a basic device, essentially a shell, used to hold material; it is a carrier for things. What, then, is the container (Container) that computing refers to? A container is a runtime instance of an image (Image). Just as a VM is started from a virtual machine template, a user can start one or more containers from a single image. The biggest difference between a virtual machine and a container is that the container is faster and lighter: instead of running on a complete operating system of its own, a container shares the operating system kernel of the host it runs on.
Why use containers? Suppose you are developing an application on your computer, and your development environment has a specific configuration. Other developers may have slightly different configurations. The application you are developing depends on more than your current configuration: it also needs specific libraries, dependencies and files. Meanwhile, your company has standardized development and production environments, each with its own configuration and its own set of supporting files. You want to simulate those environments locally as much as possible, without the overhead of recreating the server environments. Containers are what let you do that.
The most common way to run a container is Docker. Docker is an open-source application container engine, written in Go and released under the Apache 2.0 license. Docker lets developers package their applications and dependencies into a lightweight, portable container and publish it to any Linux machine, and it also supports virtualization.
Kubernetes
Google had been using containers as an important way to deliver applications for many years, running an orchestration tool known as Borg. To counter the container ecosystem centered on Docker Inc., Google, Red Hat and other companies jointly founded the CNCF (Cloud Native Computing Foundation). When Google started developing Kubernetes in March 2014, it wisely targeted the most popular container technology of the time, which was, of course, Docker. Kubernetes supported the Docker container runtime and therefore inherited a large pool of users. Kubernetes was first released on June 6, 2014; that is how the container orchestration tool Kubernetes was born. In addition, CNCF's goal was to build on the open-source K8s so that it could cover more container-orchestration scenarios and offer more capability. K8s still had to face the challenge of Swarm and Mesos: Swarm's strength was its natural, seamless integration with the Docker ecosystem, while Mesos's strength was the management and scheduling of very large clusters. K8s is the framework Google distilled from more than a decade of running the Borg project; its advantages are a complete, fresh design philosophy, Google's endorsement, and strong extensibility in its design. In the end K8s won and became the de facto standard of the container ecosystem.

Why K8s has become the infrastructure for microservices

Why K8s is the foundation of the next-generation microservice architecture
With the arrival of microservices came another important topic: high availability. So-called high availability (HA, High Availability) means that when a service, or the node the service runs on, fails, its externally visible functions can be taken over by another replica of the service, or by a replica on another node, so that business continuity is preserved and downtime reduced; two or more such replicas together constitute a highly available service. At the same time, high availability has to take the performance pressure on the service into account, that is, load balancing across the service.
There are many ways to achieve high availability of a service, or to spread its load, for example:
  • Master/standby mode. The master does the work while the standby monitors it and stays ready; when the master goes down, the standby takes over all of its work, and when the master recovers, the service is switched back to it manually or automatically. Data consistency is achieved through shared storage.
  • Cluster mode. Several machines run one or more services at the same time; when one node goes down, the service on that node can no longer serve requests, so requests are transferred, according to some policy, to other nodes running the same service and the business logic continues to execute. This removes the software single point of failure, and it is exactly where load-balancing strategy comes in.
For the high availability of microservices, one of the issues involved is load balancing across a service. In a microservice system, the precondition for load balancing is that multiple instances, or multiple replicas, of the same service can be discovered; only then can load balancing and high availability be realized.
At the same time, once the instances have been discovered, the real question is which one to call: multiple instances of the service have been found, but in the end only one of them will receive the request, and that is exactly what service load balancing decides.
Load balancing is a common topic in microservices, and there are more and more components for it: Netflix's open-source Zuul, Spring Cloud Gateway, and so on.
Microservices built this way gain a high degree of autonomy, but they also bring side effects: the technology stack becomes complicated and the whole system starts to look heavy.
How does K8s solve these problems? K8s provides a built-in mechanism for service registration and discovery: Kubernetes creates DNS records for Services and Pods, so you can reach a Service by a consistent DNS name instead of an IP address. Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, the DNS search list of a client Pod includes the Pod's own namespace and the cluster's default domain. DNS queries use the Pod's /etc/resolv.conf, which the kubelet writes for each Pod; for example, a query for test can be expanded to test.default.svc.cluster.local, and the values of the search option are used to expand such queries:
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
This specification creates a new Service object named "test", which targets TCP port 9376 on any Pod carrying the app=MyApp label.
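To illustrate how another workload reaches this Service, here is a minimal sketch of a throwaway client Pod that calls it by its cluster DNS name instead of a Pod IP; the Pod name, image tag and command are assumptions for illustration only:
apiVersion: v1
kind: Pod
metadata:
  name: dns-client                # hypothetical name, for illustration only
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: busybox:1.35           # any image with wget will do; this tag is an assumption
    # Resolve the Service by its DNS name; the cluster DNS maps it to the ClusterIP.
    command: ["wget", "-qO-", "http://test.default.svc.cluster.local:80"]
Because the client sits in the default namespace as well, the shorter name http://test would also resolve through the Pod's DNS search list.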
In addition, K8s provides the ConfigMap resource. You can write a Pod spec that references a ConfigMap and configure the containers in that Pod according to the data stored in the ConfigMap; the Pod and the ConfigMap must be in the same namespace:
kind: ConfigMap
apiVersion: v1
metadata:
  name: rest-service
  namespace: system-server
data:
  application.yaml: |-
    greeting:
      message: Say Hello to the World
    ---
    spring:
      profiles: dev
    greeting:
      message: Say Hello to the Developers
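As mentioned above, a Pod can reference this ConfigMap in its spec; here is a minimal sketch of such a Pod, with a hypothetical Pod name and image, which mounts application.yaml into the container as a file:
apiVersion: v1
kind: Pod
metadata:
  name: rest-service-demo          # hypothetical Pod name
  namespace: system-server         # must match the ConfigMap's namespace
spec:
  containers:
  - name: app
    image: rest-service:latest     # assumed image name
    volumeMounts:
    - name: config
      mountPath: /config           # application.yaml appears as /config/application.yaml
  volumes:
  - name: config
    configMap:
      name: rest-service           # the ConfigMap defined above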
In addition, for exposing services, K8s offers the Ingress resource together with an Ingress controller. The Ingress controller works much like Nginx: it proxies services out of the cluster and makes them available to the front end or to external third parties.
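As a rough sketch of what that looks like (the host name and Ingress name are assumptions, and an Ingress controller such as ingress-nginx must already be installed in the cluster), an Ingress that proxies the test Service created earlier could be written as:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress               # hypothetical name
spec:
  rules:
  - host: test.example.com         # assumed external host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test             # the Service created earlier
            port:
              number: 80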
In this way, as far as the complexity of the system itself is concerned, K8s takes over what used to require the assortment of components bundled with Spring Cloud:
Spring Cloud  Component diagram

K8s basics and hands-on practice

K8s  Basics
Earlier we discussed why K8s can replace the components of the Spring Cloud family to manage and access services in a unified way. Next, let's look at the basic and commonly used resources in K8s. Each of these resources has an API in K8s, but here we will call that API through command-line scripts to create the resources.
The first step is to write a configuration file, which can be in YAML or JSON format; for readability I will use YAML files in the rest of this article. The biggest difference between Kubernetes and many projects such as Docker is that Kubernetes does not recommend running containers directly from the command line (although it supports that too, for example with kubectl run). Instead it wants you to use YAML files: the container, its parameters and its configuration are all recorded in a YAML file, which you then run with a command like this:
kubectl create -f xxx.yaml
One of the most direct benefits is that you keep a file that records exactly what you asked K8s to run. Here's an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
spec:
  selector:
    matchLabels:
      app: tomcat
  replicas: 2
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:10.0.5
        ports:
        - containerPort: 80
A YAML file like this corresponds, in Kubernetes, to an API Object. When you fill in the fields of this object and submit it to Kubernetes, Kubernetes is responsible for creating the containers or other kinds of API resources you asked for. You can see that the Kind field of this YAML file specifies the type of the API object: it is a Deployment. A Deployment is an object that defines a multi-replica application (multiple replica Pods), and it is also responsible for rolling updates of the replicas when the Pod definition changes.
In this YAML file I defined the number of Pod replicas (spec.replicas) as 2. But what should each of these Pod replicas look like? For that we define a Pod template (spec.template), which describes the details of the Pods I want to create. In the example above the Pod contains a single container, the container's image (spec.containers.image) is tomcat:10.0.5, and the port the container listens on (containerPort) is 80.
Note that this way of using one API object (the Deployment) to manage another API object (the Pod) is called the "controller" pattern (controller pattern) in Kubernetes; in our demo, the Deployment plays the role of the Pod's controller. A Pod is the unit of an application in the Kubernetes world, and an application can consist of multiple containers (container). To get our tomcat service running in containers, we only need to execute:
$ kubectl create -f tomcat-deployment.yaml
deployment.apps/tomcat-deployment created
After executing the command above, you can check how the containers are running:
$ kubectl get pod -l app=tomcat
NAME                                 READY   STATUS              RESTARTS   AGE
tomcat-deployment-799f46f546-7nxrj   1/1     Running             0          77s
tomcat-deployment-799f46f546-hp874   0/1     Running             0          77s
The kubectl get command fetches (GETs) the specified API objects from Kubernetes. Notice that I added a -l parameter, which means "get all Pods that carry the app=tomcat label". Note that on the command line, all key-value parameters are written with "=" rather than ":". From the output of this command we can see the two Pods managed by our Deployment, which tells us the Deployment is driving them toward the expected state.
In addition, you can use the kubectl describe command to view the details of an API object, for example:
$ kubectl describe pod tomcat-deployment-799f46f546-7nxrj
Name:           tomcat-deployment-799f46f546-7nxrj
Namespace:      default
Priority:       0
Node:           ca005/10.10.2.5
Start Time:     Thu, 08 Apr 2021 10:41:08 +0800
Labels:         app=tomcat
                pod-template-hash=799f46f546
Annotations:    cni.projectcalico.org/podIP: 20.162.35.234/32
Status:         Running
IP:             20.162.35.234
Controlled By:  ReplicaSet/tomcat-deployment-799f46f546
Containers:
  tomcat:
    Container ID:   docker://5a734248525617e950b7ce03ad7a19acd4ffbd71c67aacd9e3ec829d051b46d3
    Image:          tomcat:10.0.5
    Image ID:       docker-pullable://tomcat@sha256:2637c2c75e488fb3480492ff9b3d1948415151ea9c503a496c243ceb1800cbe4
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 08 Apr 2021 10:41:58 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2ww52 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-2ww52:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-2ww52
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m17s  default-scheduler  Successfully assigned default/tomcat-deployment-799f46f546-7nxrj to ca005
  Normal  Pulling    4m16s  kubelet, ca005     Pulling image "tomcat:10.0.5"
  Normal  Pulled     3m27s  kubelet, ca005     Successfully pulled image "tomcat:10.0.5"
  Normal  Created    3m27s  kubelet, ca005     Created container tomcat
  Normal  Started    3m27s  kubelet, ca005     Started container tomcat
In the output of kubectl describe you can clearly see the details of this Pod, such as its IP address. One part deserves special attention: Events.
While Kubernetes is running, every important operation on an API object is recorded in that object's Events and shown in the output of kubectl describe. This information is very important; it is where you check whether the container is running and why it is, or is not, running normally.
If you want to upgrade the tomcat version, you can change the YAML file directly:
    spec:
      containers:
      - name: tomcat
        image: tomcat:latest
        ports:
        - containerPort: 80
After modifying the YAML file, execute:
kubectl apply -f tomcat-deployment.yaml
This way of operating is the recommended usage of the Kubernetes "declarative API". In other words, as a user you do not need to care whether the current operation is a create or an update; the command is always kubectl apply, and Kubernetes works out what to do from the changes in the YAML file.
At the same time, you can view the logs of the service running in the container:
$ kubectl logs -f tomcat-deployment-799f46f546-7nxrj
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
08-Apr-2021 02:41:59.037 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version name:   Apache Tomcat/10.0.5
08-Apr-2021 02:41:59.040 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Mar 30 2021 08:19:50 UTC
08-Apr-2021 02:41:59.040 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version number: 10.0.5.0
08-Apr-2021 02:41:59.040 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name:               Linux
08-Apr-2021 02:41:59.040 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version:            4.4.0-116-generic
08-Apr-2021 02:41:59.040 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture:          amd64
08-Apr-2021 02:41:59.040 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home:             /usr/local/openjdk-11
08-Apr-2021 02:41:59.040 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version:           11.0.10+9
08-Apr-2021 02:41:59.040 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor:            Oracle Corporation
08-Apr-2021 02:41:59.040 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE:         /usr/local/tomcat
08-Apr-2021 02:41:59.041 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME:         /usr/local/tomcat
08-Apr-2021 02:41:59.051 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.lang=ALL-UNNAMED
08-Apr-2021 02:41:59.051 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.io=ALL-UNNAMED
08-Apr-2021 02:41:59.051 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.util=ALL-UNNAMED
08-Apr-2021 02:41:59.051 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.util.concurrent=ALL-UNNAMED
08-Apr-2021 02:41:59.052 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
08-Apr-2021 02:41:59.052 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
08-Apr-2021 02:41:59.052 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
08-Apr-2021 02:41:59.052 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
08-Apr-2021 02:41:59.052 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
08-Apr-2021 02:41:59.052 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dorg.apache.catalina.security.SecurityListener.UMASK=0027
08-Apr-2021 02:41:59.052 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dignore.endorsed.dirs=
08-Apr-2021 02:41:59.052 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/local/tomcat
08-Apr-2021 02:41:59.052 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/local/tomcat
08-Apr-2021 02:41:59.052 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/usr/local/tomcat/temp
08-Apr-2021 02:41:59.056 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded Apache Tomcat Native library [1.2.27] using APR version [1.6.5].
08-Apr-2021 02:41:59.056 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true], UDS [true].
08-Apr-2021 02:41:59.059 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized [OpenSSL 1.1.1d  10 Sep 2019]
08-Apr-2021 02:41:59.312 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
08-Apr-2021 02:41:59.331 INFO [main] org.apache.catalina.startup.Catalina.load Server initialization in [441] milliseconds
08-Apr-2021 02:41:59.369 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
08-Apr-2021 02:41:59.370 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/10.0.5]
08-Apr-2021 02:41:59.377 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
08-Apr-2021 02:41:59.392 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [61] milliseconds
Why did I use a Deployment as the example here? Because among K8s resources, the Deployment provides declarative updates on top of a replica set: you describe the "desired state" in the Deployment, and the Deployment changes the actual state to the desired state at a controlled rate. You can define a Deployment to create a new replica set, or to remove an existing deployment and have a new one adopt all of its resources. The Deployment resource was introduced precisely so that rolling upgrades do not interrupt access to the service, which the older ReplicationController (rc) could not guarantee.
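To make the "controlled rate" concrete, a Deployment can also declare its rolling-update strategy explicitly. Here is a sketch of the relevant fields for the tomcat Deployment above; the numbers are only example values:
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most one extra Pod above the desired count during an update
      maxUnavailable: 0      # never take a serving Pod away before its replacement is Ready
With these values, a kubectl apply with a new image rolls the Pods over one by one, and the service keeps answering requests throughout the update.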
Next, let's look at another important K8s resource, the ConfigMap. It carries configuration for Pods and provides that configuration to services in mounted form:
kind: ConfigMap
apiVersion: v1
metadata:
  name: rest-service
  namespace: system-server
data:
  application.yaml: |-
    greeting:
      message: Say Hello to the World
    ---
    spring:
      profiles: dev
    greeting:
      message: Say Hello to the Developers

    ---
    spring:
      profiles: test
    greeting:
      message: Say Hello to the Test
    ---
    spring:
      profiles: prod
    greeting:
      message: Say Hello to the Prod
Of course, it supports several mounting forms: key-value strings, whole files, and so on. This kind of decoupling matters a lot in microservices. For example, in a production environment a deployed service may need one or more of its parameters changed; if those parameters were hard-coded you would have to rebuild, but once they are decoupled into a configuration resource you can refresh the service's configuration dynamically just by editing it:
kubectl edit cm rest-service -n system-server
After running this command and editing the service's configuration, we can see the following in the service's log:
2021-11-29 07:59:52.860:152 [OkHttp https://10.16.0.1/...] INFO  org.springframework.cloud.kubernetes.config.reload.EventBasedConfigurationChangeDetector -Detected change in config maps
2021-11-29 07:59:52.862:74 [OkHttp https://10.16.0.1/...] INFO  org.springframework.cloud.kubernetes.config.reload.EventBasedConfigurationChangeDetector -Reloading using strategy: REFRESH
2021-11-29 07:59:53.444:112 [OkHttp https://10.16.0.1/...] INFO  org.springframework.cloud.bootstrap.config.PropertySourceBootstrapConfiguration -Located property source: [BootstrapPropertySource {name='bootstrapProperties-configmap.rest-service.system-server'}]
2021-11-29 07:59:53.499:652 [OkHttp https://10.16.0.1/...] INFO  org.springframework.boot.SpringApplication -The following profiles are active: kubernetes,dev
2021-11-29 07:59:53.517:652 [OkHttp https://10.16.0.1/...] INFO  org.springframework.boot.SpringApplication -The following profiles are active: kubernetes,dev
2021-11-29 07:59:53.546:61 [OkHttp https://10.16.0.1/...] INFO  org.springframework.boot.SpringApplication -Started application in 0.677 seconds (JVM running for 968605.422)
2021-11-29 07:59:53.553:61 [OkHttp https://10.16.0.1/...] INFO  org.springframework.boot.SpringApplication -Started application in 0.685 seconds (JVM running for 968617.369)
The lines "Detected change in config maps" and "Reloading using strategy: REFRESH" in the log show that after the configuration was modified, the change was picked up and refreshed automatically.
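The refresh seen in the log comes from the config-reload feature of spring-cloud-kubernetes. As far as I know it is switched on with bootstrap properties along the following lines; treat this as a sketch rather than the exact configuration of the service above:
spring:
  cloud:
    kubernetes:
      config:
        name: rest-service         # which ConfigMap to read
        namespace: system-server
      reload:
        enabled: true              # watch the ConfigMap and react to changes
        mode: event                # event-based detection, matching the log above
        strategy: refresh          # refresh beans instead of restarting the context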
Next, let's look at service registration and discovery. Looking purely at native K8s, it already offers a domain-name form through which services call each other:
$(service name).$(namespace).svc.cluster.local
where cluster.local is the domain name of the cluster, here representing the local cluster.
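In practice this simply means one service can point at another by writing the fully qualified name into its configuration. A hypothetical excerpt from a caller's application.yaml (the property key is made up for illustration; the Service name and port match the rest-service-service example shown later):
rest-service:
  base-url: http://rest-service-service.system-server.svc.cluster.local:2001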
Meanwhile, a Service is what defines the set of Pods behind a service and the policy for accessing those Pods.
There are four types of Service:
  • ExternalName: creates a DNS alias that points to a service name, which insulates callers from changes to that name; it needs the DNS plug-in to work.
  • ClusterIP: the default type, used for access between Pods inside the cluster. It provides a fixed access address, assigned automatically by default, and the IP can also be fixed with the clusterIP field.
  • NodePort: built on top of ClusterIP, it provides a port through which clients outside the cluster can reach the Pods behind the Service.
  • LoadBalancer: built on top of NodePort.
From the Service types above we can see the common scenario: all the microservices sit in one LAN, that is, in one K8s cluster, so inside the cluster the Pods can reach each other through Services. That is exactly the default type, ClusterIP, whose address is assigned automatically by default.
Now the question is: since services inside the cluster can already be reached through the ClusterIP, how does a service get registered in the first place? In fact K8s does not introduce a separate registry; it uses the kube-dns component. K8s registers every Service as a domain name in kube-dns, so each Service has a DNS record there, and when a Service's IP changes, kube-dns is updated automatically and callers do not need to change anything. A service is then reached through the Service's name. The next question: if a service has more than one Pod, how is the load balanced? Ultimately that is done by kube-proxy. In other words, kube-dns resolves the service name to the ClusterIP, and kube-proxy takes care of getting from the ClusterIP to a Pod IP.
With that said, let's look at Service-level service discovery and load balancing. A Service has two load-distribution strategies:
  • RoundRobin: requests are forwarded to the back-end Pods in turn; this is the default mode.
  • SessionAffinity: session affinity based on the client IP address, similar to IP hash, so that requests from the same client keep hitting the same Pod (a minimal sketch follows).
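Session affinity is declared on the Service itself; here is a minimal sketch, with a hypothetical Service name and the Kubernetes default timeout:
apiVersion: v1
kind: Service
metadata:
  name: sticky-service             # hypothetical name
spec:
  selector:
    app: MyApp
  sessionAffinity: ClientIP        # keep requests from one client IP on the same Pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # affinity window, 3 hours by default
  ports:
  - port: 80
    targetPort: 9376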
However, this native form of service access has one small regret: the domain name has to include the Service's namespace. K8s presumably has its own reasons for this. Suppose I have a Service like this:
apiVersion: v1
kind: Service
metadata:
  name: rest-service-service
  namespace: system-server
spec:
  type: NodePort
  ports:
  - name: rest-svc
    port: 2001
    targetPort: 2001
  selector:
    app: rest-service
This Service targets a group of Pods that serve HTTP on port 2001. When the Service is accessed, its domain name resolves to the Pods' information, so the Pods are reached by their IP and port:
system-server   rest-service-deployment-cc7c5b559-6t4lp   1/1   Running   6   11d   10.244.0.188   leinao-deploy-server   <none>   <none>
system-server   rest-service-deployment-cc7c5b559-gpg4m   1/1   Running   6   11d   10.244.0.189   leinao-deploy-server   <none>   <none>
So when a container accesses the service through rest-service-service.system-server.svc.cluster.local:2001/api, we are using the default ClusterIP type for Pod-to-Pod access inside the cluster: the domain name first resolves to the two service instances above, and the load-balancing policy then picks one of them as the target of the request.
These are the most commonly used K8s resources. Of course there are more (DaemonSet, StatefulSet, ReplicaSet and so on); if you are interested, see the official documentation.

Hands-on: implementing a microservice architecture on K8s

In the chat "Spring Boot 2.x combined with K8s to implement a distributed microservice architecture", we briefly described how to build a distributed microservice architecture with K8s.
But a few problems were left open there:
  • High availability of OAuth2
  • How to access services across namespaces
  • How to implement grayscale and blue-green releases for distributed services
Let's tackle these problems one by one.
High availability of OAuth2
We know that OAuth2 natively provides two ways for a service to perform authentication:
  • fetching the user information for authentication
  • checking the token for authentication
security:
  path:
    ignores: /,/index,/static/**,/css/**, /image/**, /favicon.ico, /js/**,/plugin/**,/avue.min.js,/img/**,/fonts/**
  oauth2:
    client:
      client-id: rest-service
      client-secret: rest-service-123
      user-authorization-uri: ${cas-server-url}/oauth/authorize
      access-token-uri: ${cas-server-url}/oauth/token
    resource:
      loadBalanced: true
      id: rest-service
      prefer-token-info: true
      token-info-uri: ${cas-server-url}/oauth/check_token
      #user-info-uri: ${cas-server-url}/api/v1/user
    authorization:
      check-token-access: ${cas-server-url}/oauth/check_token
In this configuration, user-info-uri and token-info-uri are both used to authenticate a service's client, but they cannot be used at the same time. The native user-info-uri path, however, does not come with reasonable authorization logic, and that can cause problems: once a user has logged in, every interface turns out to be accessible, whether it requires a permission or not, which is clearly a problem.
So here we will no longer authenticate and authorize by fetching the user information. Let's look instead at how the check_token approach performs authentication and authorization.
Originally, when a user carries a token to request resources from the resource server, OAuth2AuthenticationProcessingFilter intercepts the token. The request eventually reaches loadAuthentication, which performs the token check. The verification logic itself is very simple: it calls the redisTokenStore to check whether the token is valid and returns some of the user's information. Finally, if everything is OK, control returns to RemoteTokenServices; the key step is userTokenConverter.extractAuthentication(map), which checks whether a userDetailsService implementation exists. If it does, the full user information is looked up again based on the returned data; if not, only the username is returned. On this basis the token is converted into user details, turning the stateless token back into user information.
In essence this is just services calling each other, and making such calls highly available boils down to running the service on multiple nodes and making Redis highly available. Of course, if you use the JWT mode instead of Redis, it is even simpler, because the token itself is stateless and needs no storage. We can make the unified authentication center a Deployment in K8s:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cas-server-deployment
  namespace: system-server
  labels:
    app: cas-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cas-server
  template:
    metadata:
      labels:
        app: cas-server
    spec:
      nodeSelector:
        cas-server: "true"
      containers:
      - name: cas-server
        image: {{ cluster_cfg['cluster']['docker-registry']['prefix'] }}cas-server
        imagePullPolicy: Always
        ports:
          - name: cas-server01
            containerPort: 2000
        volumeMounts:
        - mountPath: /home/cas-server
          name: cas-server-path
        - mountPath: /data/cas-server
          name: cas-server-log-path
        - mountPath: /etc/kubernetes
          name: kube-config-path
        - mountPath: /abnormal_data_dir
          name: abnormal-data-dir
        args: ["sh", "-c", "nohup java $JAVA_OPTS -jar -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=128m -Xms1024m -Xmx1024m -Xmn256m -Xss256k -XX:SurvivorRatio=8 -XX:+UseConcMarkSweepGC cas-server.jar --spring.profiles.active=dev", "&"]
      volumes:
      - name: cas-server-path
        hostPath:
          path: /var/pai/cas-server
      - name: cas-server-log-path
        hostPath:
          path: /data/cas-server
      - name: kube-config-path
        hostPath:
          path: /etc/kubernetes
      - name: abnormal-data-dir
        hostPath:
          path: /data/images/detect_result/defect
Here we defined a resource named cas-server-deployment and specified replicas: 3 when creating it, so three replicas are created to keep cas-server highly available. To make it easy to discover, we also use a Service resource to load-balance across the replicas:
apiVersion: v1
kind: Service
metadata:
  name: cas-server-service
  namespace: system-server
spec:
  ports:
  - name: cas-server01
    port: 2000
    targetPort: cas-server01
  selector:
    app: cas-server
What is defined here is a Service for the replica set of Pods that serve HTTP on port 2000. It is a ClusterIP Service by default and can be reached directly inside the cluster at cas-server-service.system-server.svc.cluster.local:2000/api. In this way we use the K8s Service for service registration and discovery, and combined with the Deployment spreading the service across several nodes, we get a highly available service.
How to access services across namespaces
As mentioned earlier, in native K8s a request can only reach a service in another namespace by including that namespace in the address. That is how native K8s service calls work, but since we are building on spring-cloud, we can change this. After introducing spring-cloud-kubernetes we abandon Ribbon-based load balancing and try the spring-cloud-loadbalancer strategy instead:
spring:
  application:
    name: cas-server
  cloud:
    loadbalancer:
      ribbon:
        enabled: false
    kubernetes:
      ribbon:
        mode: SERVICE
      discovery:
        all-namespaces: true
Note the configuration spring.cloud.kubernetes.ribbon.mode=SERVICE. What is it for? It effectively disables Ribbon's load-balancing ability, so Ribbon no longer takes effect and the traffic goes through Spring Cloud LoadBalancer instead. In addition, all the Services here are set to the NodePort type; whether the default ClusterIP type can also achieve load balancing in this setup still needs to be confirmed, because so far it has not worked for me, which may be a network problem rather than a limitation of the default Service type. At the same time we also need to configure spring.cloud.loadbalancer.ribbon.enabled = false, because its default value is true.
Of course, since Ribbon is abandoned here, the corresponding dependency has to be introduced:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-loadbalancer</artifactId>
</dependency>
In this way, when we access the resource server, the resource server calls the unified authentication center to check the token, and we can test it through http://cas-server-service/oauth/check_token. This gives us a highly available service. Moreover, even if the resource server and the unified authentication center are not in the same namespace, they can still reach each other this way. The principle is that the client obtains every discoverable Service in the K8s cluster, so Services in different namespaces can also call one another.
For example, when we call into namespace A through a Service and look at the log after the call, we can see that cross-namespace service calls can indeed be made through a Service.
How to implement grayscale and blue-green releases for distributed services
Cloud native best practices cover grayscale releases, elastic scaling, cluster migration, network communication, application containerization and other scenarios. Here we will use native K8s capabilities to implement grayscale releases and blue-green releases for distributed microservices.
Business workloads are usually deployed with Kubernetes objects such as the stateless Deployment or the stateful StatefulSet, each of which manages a set of Pods. The discussion below takes Deployment as the example.
We create a Service for such a workload; the Service selects the workload's Pods through its selector. Next, we perform a grayscale release of the workload.
Grayscale release
Usually a Service is created for each Deployment workload, but K8s does not require a one-to-one correspondence between a Service and a Deployment. A Service matches Pods only through its selector, so if the Pods of different Deployments are selected by the same selector, one Service can front several versions of a Deployment at once. By adjusting the replica counts of the different versions' Deployments, you adjust the weight of each version of the service, and that is a grayscale release.
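Concretely, the Service selects only on the app label and deliberately leaves out version, so it matches the Pods of both Deployments defined next. A sketch, assuming a Service named diff-ns-service (the Service name is my assumption; the namespace, label and port match the Deployments below):
apiVersion: v1
kind: Service
metadata:
  name: diff-ns-service            # hypothetical Service name
  namespace: ns-app
spec:
  selector:
    app: diff-ns-service           # no "version" key, so v1 and v2 Pods are both selected
  ports:
  - name: diff-ns
    port: 2001
    targetPort: 2001
Scaling diff-ns-service-v1-deployment down and diff-ns-service-v2-deployment up then shifts traffic gradually from v1 to v2; setting one of them to zero replicas gives the blue-green cut-over.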
Now that the principle of grayscale release is clear, let's put it into practice. Suppose there is a service provider diff-ns-service; we create Deployment workloads for its different versions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: diff-ns-service-v1-deployment
  namespace: ns-app
  labels:
    app: diff-ns-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: diff-ns-service
      version: v1
  template:
    metadata:
      labels:
        app: diff-ns-service
        version: v1
    spec:
      nodeSelector:
        diff-ns-service: "true"
      containers:
      - name: diff-ns-service
        image: diff-ns-service:v1
        imagePullPolicy: Always
        ports:
          - name: diff-ns
            containerPort: 2001
        volumeMounts:
        - mountPath: /home/diff-ns-service
          name: diff-ns-service-path
        - mountPath: /data/diff-ns-service
          name: diff-ns-service-log-path
        - mountPath: /etc/kubernetes
          name: kube-config-path
        - mountPath: /abnormal_data_dir
          name: abnormal-data-dir
        args: ["sh", "-c", "nohup java $JAVA_OPTS -jar -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=128m -Xms1024m -Xmx1024m -Xmn256m -Xss256k -XX:SurvivorRatio=8 -XX:+UseConcMarkSweepGC diff-ns-service.jar --spring.profiles.active=dev", "&"]
      volumes:
      - name: diff-ns-service-path
        hostPath:
          path: /var/pai/diff-ns-service
      - name: diff-ns-service-log-path
        hostPath:
          path: /data/diff-ns-service
      - name: kube-config-path
        hostPath:
          path: /etc/kubernetes
      - name: abnormal-data-dir
        hostPath:
          path: /data/images/detect_result/defect
The above is the v1 version of the application workload; the v2 version is almost identical:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: diff-ns-service-v2-deployment
  namespace: ns-app
  labels:
    app: diff-ns-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: diff-ns-service
      version: v2
  template:
    metadata:
      labels:
        app: diff-ns-service
        version: v2
    spec:
      nodeSelector:
        diff-ns-service: "true"
      containers:
      - name: diff-ns-service
        image: diff-ns-service:v2
        imagePullPolicy: Always
        ports:
          - name: diff-ns
            containerPort: 2001
        volumeMounts:
        - mountPath: /home/diff-ns-service
          name: diff-ns-service-path
        - mountPath: /data/diff-ns-service
          name: diff-ns-service-log-path
        - mountPath: /etc/kubernetes
          name: kube-config-path
        - mountPath: /abnormal_data_dir
          name: abnormal-data-dir
        args: ["sh", "-c", "nohup java $JAVA_OPTS -jar -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=128m -Xms1024m -Xmx1024m -Xmn256m -Xss256k -XX:SurvivorRatio=8 -XX:+UseConcMarkSweepGC diff-ns-service.jar --spring.profiles.active=dev", "&"]
      volumes:
      - name: diff-ns-service-path
        hostPath:
          path: /var/pai/diff-ns-service
      - name: diff-ns-service-log-path
        hostPath:
          path: /data/diff-ns-service
      - name: kube-config-path
        hostPath:
          path: /etc/kubernetes
      - name: abnormal-data-dir
        hostPath:
          path: /data/images/detect_result/defect
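Assuming the two manifests above are saved as diff-ns-service-v1.yaml and diff-ns-service-v2.yaml (the file names are an assumption, not from the original), a minimal sketch of creating the Deployments and inspecting the resulting Pods looks like this:
kubectl apply -f diff-ns-service-v1.yaml
kubectl apply -f diff-ns-service-v2.yaml
kubectl get pods -A | grep diff-ns-service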
Note that both Deployments set the replica count to 3, meaning three Pods per version. After creating them, the Pod list looks like this:
ns-app          diff-ns-service-v1-deployment-d88b9c4fd-22lgb     1/1     Running   0          12s
ns-app          diff-ns-service-v1-deployment-d88b9c4fd-cgsqw     1/1     Running   0          12s
ns-app          diff-ns-service-v1-deployment-d88b9c4fd-hmcbq     1/1     Running   0          12s

ns-app          diff-ns-service-v2-deployment-37bf53d4b-43w23     1/1     Running   0          12s
ns-app          diff-ns-service-v2-deployment-37bf53d4b-ce33g     1/1     Running   0          12s
ns-app          diff-ns-service-v2-deployment-37bf53d4b-scds6     1/1     Running   0          12s
At this point the workload diff-ns-service has Pods for both versions. Next, we create a Service with the following YAML:
apiVersion: v1
kind: Service
metadata:
  name: diff-ns-service-service
  namespace: ns-app
spec:
  ports:
  - name: diff-ns-svc
    port: 2001
    targetPort: 2001
  selector:
    app: diff-ns-service
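One way to apply this Service and confirm that it fronts the Pods of both versions is to check its endpoints. This is a minimal sketch; the file name diff-ns-service-service.yaml is an assumption:
kubectl apply -f diff-ns-service-service.yaml
kubectl get endpoints diff-ns-service-service -n ns-app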
Notice that the selector does not specify a version, so the Service selects the Pods of both Deployments. Now we access the Service repeatedly with a simple script:
for i in {1..10}; do curl http://diff-ns-service-service:2001/getservicedetail?servicename=aaa; done;
Let's take a look at the printed log:
diff-ns-service-v1
diff-ns-service-v2
diff-ns-service-v1
diff-ns-service-v2
diff-ns-service-v1
diff-ns-service-v2
diff-ns-service-v1
diff-ns-service-v2
diff-ns-service-v1
diff-ns-service-v2
You can see that half of the responses come from the v1 version and half from the v2 version.
Next, we use kubectl to modify the replica counts of the two Deployments:
kubectl scale deployment/diff-ns-service-v2-deployment -n ns-app --replicas=4

kubectl scale deployment/diff-ns-service-v1-deployment -n ns-app --replicas=1
Since we want to shift traffic toward the new version, we set v2 to 4 replicas and the old v1 to 1. Then we run the same curl test again:
diff-ns-service-v2
diff-ns-service-v2
diff-ns-service-v1
diff-ns-service-v2
diff-ns-service-v1
diff-ns-service-v2
diff-ns-service-v2
diff-ns-service-v2
diff-ns-service-v2
diff-ns-service-v2
From the results we can see that out of 10 requests, only 2 were served by the old v1 version; the ratio of v2 to v1 responses matches the replica ratio of 4:1. Gray release is thus achieved by controlling the replica counts of the different service versions.
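Once v2 has been validated with enough real traffic, one common way to finish the rollout (a sketch, not prescribed by the original) is to scale the old version down to zero and, optionally, delete its Deployment:
kubectl scale deployment/diff-ns-service-v1-deployment -n ns-app --replicas=0
kubectl delete deployment diff-ns-service-v1-deployment -n ns-app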
Blue-green release
Now let's look at blue-green release. Its principle differs slightly from gray release: two Deployments of different versions are deployed in the cluster at the same time, and their Pods share a common app label while the version label differs, distinguishing the versions. The Service's selector targets the Pods of only one version's Deployment; by modifying the version label in the Service's selector, we change which Pods back the Service, switching traffic from one version to the other in a single step.
So, for this approach, the Service we create must include the version in its selector, in addition to its own basic information:
apiVersion: v1
kind: Service
metadata:
  name: diff-ns-service-service
  namespace: ns-app
spec:
  ports:
  - name: diff-ns-svc
    port: 2001
    targetPort: 2001
  selector:
    app: diff-ns-service
    version: v1
Again, execute the following command to test access:
for i in {1..10}; do curl http://diff-ns-service-service:2001/getservicedetail?servicename=aaa; done;
The results are as follows; all responses come from the v1 version:
diff-ns-service-v1
diff-ns-service-v1
diff-ns-service-v1
diff-ns-service-v1
diff-ns-service-v1
diff-ns-service-v1
diff-ns-service-v1
diff-ns-service-v1
diff-ns-service-v1
diff-ns-service-v1
Now we use kubectl to patch the version label in the Service's selector:
kubectl patch service diff-ns-service-service -n ns-app -p '{"spec":{"selector":{"version":"v2"}}}'
Then run the same test again:
for i in {1..10}; do curl http://diff-ns-service-service:2001/getservicedetail?servicename=aaa; done;
The results are as follows:
diff-ns-service-v2
diff-ns-service-v2
diff-ns-service-v2
diff-ns-service-v2
diff-ns-service-v2
diff-ns-service-v2
diff-ns-service-v2
diff-ns-service-v2
diff-ns-service-v2
diff-ns-service-v2
All responses now come from the v2 version; the blue-green switch has been completed successfully.
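One advantage of this approach is that rollback is just as simple: if problems appear after the switch, patching the selector back to v1 restores the old version immediately (the same pattern as above, shown here as a sketch):
kubectl patch service diff-ns-service-service -n ns-app -p '{"spec":{"selector":{"version":"v1"}}}'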

Conclusion

Cloud native technology and microservice architecture are a seamless fit
Cloud native microservice architecture is the combination of cloud native technology and microservice architecture. Microservices, as an architectural style, address the architecture and design of complex software systems; cloud native technology is a means of implementation that addresses how software systems are run, maintained, and governed. Microservice architectures can be implemented in many ways, such as Dubbo, Spring Cloud, and Spring Cloud Alibaba in Java, Beego in Golang, and Flask in Python, but getting services written in different languages to call one another and run together can be difficult and complex. The combination of cloud native technology and microservice architecture lets the two complement each other: cloud native technology effectively offsets the implementation complexity that microservice architecture brings. A major reason microservice architecture is hard to adopt is precisely this complexity, which places very high demands on team organization and management, technical skill, and operations capability; for a long time only a few large enterprises with strong technical strength adopted it. As cloud native technology becomes widespread and compensates for this shortcoming, the complexity of implementing microservices drops sharply, giving small and medium-sized enterprises the ability to apply microservice architecture in practice. Cloud native technology drives the adoption of microservice architecture and is the best companion for putting it into practice.
The future of microservices in the cloud native era
The first trend in cloud native development is standardization and normalization. The technical foundation is containerization and container orchestration, with Kubernetes and Docker the most widely used technologies. As cloud native technology evolves, its standardization keeps advancing; the goal is to promote the technology while avoiding vendor lock-in, which is crucial for the entire cloud native ecosystem.
The second trend is platformization, represented by service mesh technology. The starting point of this trend is to strengthen the capabilities of the cloud platform itself and thereby reduce operational complexity. Traffic control, authentication and access control, performance metrics collection, distributed service tracing, and centralized log management can all be provided by the underlying platform, which greatly reduces the complexity for small and medium-sized enterprises of running and maintaining cloud native applications. Open source projects such as Istio and Linkerd are representative service meshes.
The third trend is progress in application management technology. Deploying and updating applications on the Kubernetes platform has always been complicated; the traditional practice of declaring resources in raw YAML files is gradually being replaced by Helm, and the Operator pattern goes a step further, managing application deployment in a more efficient, automated, and extensible way.
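As a toy illustration (not from the original article), the version tag and replica count that are hard-coded in the earlier Deployment manifests could be turned into Helm values, so that a gray release becomes a one-line upgrade; the chart path and value keys below are assumptions:
helm upgrade diff-ns-service ./diff-ns-service-chart -n ns-app --set image.tag=v2 --set replicaCount=4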
Copyright notice
This article was created by [InfoQ]; please include a link to the original when reprinting.
https://yzsam.com/2022/160/202206091449486153.html