Understanding several common traffic exposure schemes in a Kubernetes cluster
Proxying through kube-proxy

In the simplest test or personal development environments, you can use kubectl port-forward to start a kube-proxy process on the host where the command runs and have it proxy a service inside the cluster to that host. If the host has a public IP and the forward listens on 0.0.0.0, the service can then be reached from the public network. This method can proxy a single Pod, a Deployment, or a Service. The built-in help shows the usage:
```
$ kubectl port-forward -h
Forward one or more local ports to a pod. This command requires the node to have 'socat' installed.

Use resource type/name such as deployment/mydeployment to select a pod. Resource type defaults to 'pod' if omitted.

If there are multiple pods matching the criteria, a pod will be selected automatically. The forwarding session ends
when the selected pod terminates, and rerun of the command is needed to resume forwarding.

Examples:
  # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod
  kubectl port-forward pod/mypod 5000 6000

  # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment
  kubectl port-forward deployment/mydeployment 5000 6000

  # Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service
  kubectl port-forward service/myservice 8443:https

  # Listen on port 8888 locally, forwarding to 5000 in the pod
  kubectl port-forward pod/mypod 8888:5000

  # Listen on port 8888 on all addresses, forwarding to 5000 in the pod
  kubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000

  # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod
  kubectl port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000

  # Listen on a random port locally, forwarding to 5000 in the pod
  kubectl port-forward pod/mypod :5000
```

NodePort

The second approach, and one of the most commonly used, is NodePort: change the Service type in K8s to NodePort, and the Service is assigned a host port in the 30000-32767 range. If the host also has a public IP, the service is exposed externally. However, NodePort occupies a port on the host, each Service maps to its own NodePort, and it works only at Layer 4, so SSL certificate offloading is not possible. If traffic is sent to the NodePort of a single node, there is no high availability, so a load balancer with multiple NodePort backends is usually placed in front to make the NodePorts highly available.
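As a minimal sketch of such a Service (the names, labels, and ports below are illustrative assumptions, not from the article), a NodePort Service might look like this:

```yaml
# Hypothetical example: expose an application on a fixed NodePort.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport           # assumed name
spec:
  type: NodePort
  selector:
    app: web                   # assumed pod label
  ports:
    - port: 80                 # Service (ClusterIP) port
      targetPort: 8080         # assumed container port
      nodePort: 32111          # must fall in 30000-32767; omit to auto-allocate
```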
LoadBalancer (Layer 4)

With Layer 4 traffic forwarding, one LB port can map to only one Service, and that Service is of type NodePort. In the figure below, for example, port 88 on the LoadBalancer is forwarded to NodePort 32111 on the backend nodes, which corresponds to serviceA, while port 8080 on the LB is forwarded to NodePort 32001. This scheme achieves high availability by putting multiple NodePort backends behind the LB, but because it operates at Layer 4 it cannot perform SSL offloading, and each NodePort occupies its own port on the LB.
LoadBalancer (Layer 7)

At Layer 7 the LB can route by domain name, so one domain name and port can map to multiple Services. As shown in the figure, routing can be done by path: /cmp maps to NodePort 32111 and /gateway maps to NodePort 32000. This not only provides high availability, but Layer 7 also allows SSL offloading at the LB.
Currently, public cloud LB products generally offer both Layer 4 and Layer 7 capabilities; used together, they allow business traffic to be exposed flexibly.
Ingress

In K8s, the Ingress resource forwards a single domain name to different Services inside the cluster according to paths or other routing rules. However, user requests still have to reach the NodePort of the Service fronting the Ingress controller (for example, the NodePort of the ingress-nginx controller's Service). Since business domain names are generally accessed without an explicit port, a layer in front is usually needed to forward ports 80/443.
There are many Ingress controller implementations in the industry, the better-known ones being ingress-nginx, Traefik, and others.
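As a minimal sketch of an Ingress resource doing path-based routing (the host, paths, and Service names below are illustrative assumptions):

```yaml
# Hypothetical Ingress: route one host to two backend Services by path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress                 # assumed name
spec:
  ingressClassName: nginx            # assumes an nginx IngressClass exists
  rules:
    - host: app.example.com          # assumed domain
      http:
        paths:
          - path: /cmp
            pathType: Prefix
            backend:
              service:
                name: cmp-service    # assumed Service name
                port:
                  number: 80
          - path: /gateway
            pathType: Prefix
            backend:
              service:
                name: gateway-service  # assumed Service name
                port:
                  number: 80
```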
LoadBalancer + Ingress

As shown in the figure below, a Layer 4 LB in front forwards ports 80/443 to the NodePort of the ingress provider's Service, and multiple Services are configured inside the K8s cluster behind it.
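A rough sketch of how the ingress controller's own Service could be exposed on NodePorts for the front L4 LB to target (the namespace, labels, and nodePort values are assumptions, not from the article):

```yaml
# Hypothetical Service exposing the ingress controller on fixed NodePorts;
# the external L4 LB would forward :80 -> 30080 and :443 -> 30443 on the nodes.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller     # assumed controller Service name
  namespace: ingress-nginx           # assumed namespace
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller pod label
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
```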
Ingress-nginx in detail

Several of the schemes above use Ingress. nginx-ingress is NGINX's official implementation of the K8s Ingress resource, and the Kubernetes community also provides its own NGINX-based Ingress implementation (ingress-nginx).
NGINX Ingress consists of three parts: the Ingress resource object, the Ingress controller, and NGINX itself. The controller's job is to assemble a complete configuration file (nginx.conf) and reload NGINX whenever that configuration changes. However, it avoids reloading NGINX when only the Upstreams change (for example, when Endpoints change as an application is redeployed); this is implemented with lua-nginx-module.
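To see the configuration the controller renders, you can read nginx.conf inside the controller pod. A sketch assuming the community ingress-nginx controller is deployed as the ingress-nginx-controller Deployment in the ingress-nginx namespace (names vary by installation):

```bash
# Dump the nginx.conf generated by the controller (namespace and deployment name are assumptions).
kubectl exec -n ingress-nginx deploy/ingress-nginx-controller -- cat /etc/nginx/nginx.conf
```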
The figure below gives a better picture of how Ingress-nginx is typically used.

The figure shows the following:
- 1. A K8s cluster.
- 2. The cluster admin, user A, and user B, who use the cluster through the Kubernetes API.
- 3. Clients A and B, which connect to the applications deployed by users A and B respectively.
- 4. IC (the Ingress Controller), deployed by the Admin as a pod in the nginx-ingress namespace and configured through the ConfigMap nginx-ingress. The Admin typically deploys at least two pods for redundancy. The IC uses the Kubernetes API to watch the latest Ingress resources created in the cluster and configures NGINX according to them.
- 5. Application A, deployed by user A as two pods in namespace A. To expose it to its clients (client A) via the host A.example.com, user A creates Ingress A (a sketch follows this list).
- 6. Application B, deployed by user B as one pod in namespace B. To expose it to its clients (client B) via the host B.example.com, user B creates VirtualServer B.
- 7. The public endpoint, which sits in front of the IC pods. This is usually a TCP load balancer (cloud, software, or hardware), or a combination of such a load balancer and a NodePort Service. Clients A and B connect to their applications through the public endpoint.

For simplicity, many necessary Kubernetes resources, such as Deployments and Services, are not shown in the figure; the Admin and the users would also need to create them.
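For item 5, a minimal sketch of what Ingress A might look like (only the host comes from the figure; the namespace, Service name, and port are assumptions):

```yaml
# Hypothetical Ingress A: expose application A at a.example.com.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-a              # assumed name
  namespace: a                 # assumed: namespace A
spec:
  ingressClassName: nginx
  rules:
    - host: a.example.com      # host from the figure (A.example.com), lowercased
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-a    # assumed Service fronting application A's pods
                port:
                  number: 80
```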
Other options

In K8s, cloud vendors generally provide adapted CNI and cloud integrations, so that when you create a LoadBalancer-type Service in a managed cluster a cloud LB is created automatically, for example Alibaba Cloud's ACK, Tencent Cloud's TKE, and Huawei Cloud's CCE. For self-built clusters or personal test scenarios, the open-source MetalLB[1] is a good choice: it provides LoadBalancer-type Service support in a Kubernetes-native way and works out of the box. There is also OpenELB[2], the open-source load balancer plug-in from QingCloud's KubeSphere team, designed for bare-metal, edge, and private environments. It can serve as the LB plug-in for Kubernetes, K3s, and KubeSphere to expose "LoadBalancer"-type services outside the cluster, and it entered the CNCF Sandbox in November 2021. It addresses the pain point that Kubernetes itself provides no LoadBalancer implementation when clusters run on bare metal, in private environments, or on physical-machine or edge clusters, offering a user experience consistent with cloud load balancers.
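As a hedged sketch, assuming MetalLB v0.13+ installed in Layer 2 mode (the address range, names, and labels below are illustrative assumptions), exposing a workload through a LoadBalancer-type Service could look like this:

```yaml
# Hypothetical MetalLB address pool (CRD-based configuration, MetalLB >= 0.13).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: demo-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.10.240-192.168.10.250   # assumed free IP range on the node network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: demo-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - demo-pool
---
# A LoadBalancer Service; MetalLB assigns it an external IP from the pool.
apiVersion: v1
kind: Service
metadata:
  name: web-lb                        # assumed name
spec:
  type: LoadBalancer
  selector:
    app: web                          # assumed pod label
  ports:
    - port: 80
      targetPort: 8080
```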