
Self-built DNS for automatic intranet resolution of the TKE cluster apiserver domain name

2022-06-24 03:19:00 Nieweixing

A Tencent Cloud TKE cluster's apiserver is accessed through a domain name, and both intranet and public-network access are supported. For public access, a public CLB is created and the domain name is automatically resolved to the CLB VIP. For intranet access, an intranet CLB-type service named kube-user is created in the default namespace, but the domain name is not automatically resolved on the intranet, so clients usually have to add a hosts entry before they can reach the cluster. When there are many client machines, configuring hosts on each of them is tedious. So can the TKE cluster apiserver domain name be resolved automatically on the intranet?

Tencent Cloud provides the Private DNS service for automatic intranet resolution: add an A record in Private DNS that maps the cluster domain name to the intranet CLB VIP, and the domain name will resolve inside the VPC. For the detailed configuration, refer to the documentation: https://cloud.tencent.com/document/product/457/55348

Of course, you can also build your own DNS to resolve the TKE cluster apiserver domain name automatically on the intranet. This article walks through doing exactly that with a self-built DNS for the TKE cluster: deploy dnsmasq into the cluster (image project: https://github.com/jpillora/docker-dnsmasq), expose it through an intranet CLB service as the DNS entry point, and finally configure the intranet CLB VIP as a nameserver on the nodes or at the VPC level, so that the domain name resolves automatically inside the VPC.

1. Create a namespace for dnsmasq

# kubectl create ns dnsmasq

2. Configure the dnsmasq configuration file

The dnsmasq configuration file is mounted as a ConfigMap. The ConfigMap is shown below; for a description of the configuration options, refer to http://oss.segetech.com/intra/srv/dnsmasq.conf

apiVersion: v1
data:
  dnsmasq.conf: |-
    # log DNS queries
    log-queries
    # domain name to IP mappings
    address=/cls-b3mg1p92.ccs.tencent-cloud.com/10.0.0.60
    address=/cls-jmdg96ew.ccs.tencent-cloud.com/10.0.0.71
kind: ConfigMap
metadata:
  name: dnsmasq-conf
  namespace: dnsmasq
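
The IPs on the right side of the address entries are the intranet CLB VIPs of each cluster's apiserver. As mentioned above, enabling intranet access creates a CLB-type service named kube-user in the default namespace, so a quick way to look up the VIP for a cluster (a sketch; run it against each cluster with the appropriate kubeconfig) is:

# kubectl get svc kube-user -n default

The EXTERNAL-IP column of that service should show the intranet CLB VIP to put into dnsmasq.conf.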

3. Create the dnsmasq workload

Next, deploy a dnsmasq Deployment. The HTTP_USER and HTTP_PASS environment variables configured here are used for the authenticated login to the front-end configuration page.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: dnsmasq
    qcloud-app: dnsmasq
  name: dnsmasq
  namespace: dnsmasq
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: dnsmasq
      qcloud-app: dnsmasq
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: dnsmasq
        qcloud-app: dnsmasq
    spec:
      containers:
      - env:
        - name: HTTP_USER
          value: admin
        - name: HTTP_PASS
          value: "123456"
        image: jpillora/dnsmasq
        imagePullPolicy: Always
        name: dnsmasq
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 256Mi
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/dnsmasq.conf
          name: vol
          subPath: dnsmasq.conf
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: qcloudregistrykey
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: dnsmasq-conf
        name: vol
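
Assuming the ConfigMap and Deployment above are saved locally (the file names below are only examples), apply them and confirm the pod is running:

# kubectl apply -f dnsmasq-conf.yaml -f dnsmasq-deploy.yaml
# kubectl get pods -n dnsmasq -l k8s-app=dnsmasq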

4. Create the Services that provide access

Because a Tencent Cloud CLB-type Service does not support rules with different protocols, and dnsmasq's port 53 must use UDP while the front-end management page on port 8080 uses TCP, we create two Services to expose them: an intranet CLB-type Service for UDP port 53, and a ClusterIP Service for port 8080 of the front-end configuration page, which is then exposed through an Ingress.

The Service for the dnsmasq DNS port is shown below. The intranet CLB VIP here is 10.0.21.13; later we will configure this IP as the nameserver.

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx
  name: dnsmasq
  namespace: dnsmasq
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: 53-53-udp
    nodePort: 31198
    port: 53
    protocol: UDP
    targetPort: 53
  selector:
    k8s-app: dnsmasq
    qcloud-app: dnsmasq
  sessionAffinity: None
  type: LoadBalancer
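
Once the Service is created, TKE provisions the intranet CLB, and the assigned VIP (10.0.21.13 in this example) can be read back from the Service itself:

# kubectl get svc dnsmasq -n dnsmasq

The EXTERNAL-IP column is the intranet CLB VIP that will later be configured as the nameserver.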

The Service for the front-end web UI is as follows:

apiVersion: v1
kind: Service
metadata:
  name: dashboard
  namespace: dnsmasq
spec:
  ports:
  - name: 8080-8080-tcp
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    k8s-app: dnsmasq
    qcloud-app: dnsmasq
  sessionAffinity: None
  type: ClusterIP

5. Configure an Ingress for the dnsmasq front-end page

We expose the dnsmasq web UI under a domain name through an Ingress:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: ingress
  name: dnsmasq-ingress
  namespace: dnsmasq
spec:
  rules:
  - host: dnsmasq.tke.niewx.cn
    http:
      paths:
      - backend:
          serviceName: dashboard
          servicePort: 8080
        path: /

Open the domain name in a browser to access the web UI; the login username and password are the environment variables configured in the workload above.
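
You can also check the login from the command line first; a minimal sketch, assuming the image exposes the web UI with HTTP basic auth driven by HTTP_USER and HTTP_PASS, and that the Ingress domain already resolves to your Ingress entry point:

# curl -u admin:123456 http://dnsmasq.tke.niewx.cn/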

After logging in, we can see the dnsmasq configuration and query logs. If the dnsmasq configuration needs to be changed, it can also be edited directly in the web UI; just save and restart the service afterwards.

6. Configure the nameserver on the nodes or in the VPC

To have our self-built DNS resolve the domain name automatically, the nameserver also needs to be configured in /etc/resolv.conf on the nodes. If you want every node in the VPC to use it, you can configure it at the VPC level by adding 10.0.21.13 to the VPC's DNS settings. For existing nodes this takes effect only after a reboot; newly created nodes pick it up by default.

If some existing nodes cannot be rebooted, manually add 10.0.21.13 as a nameserver in /etc/resolv.conf on those nodes.
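
A minimal sketch for such a node (10.0.21.13 is the CLB VIP from step 4; prepend it so it is queried first, and note that some systems regenerate /etc/resolv.conf, so persist the change according to your distribution):

# sed -i '1i nameserver 10.0.21.13' /etc/resolv.conf
# cat /etc/resolv.conf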

7. Test resolution and access by domain name

Finally, let's test the domain name resolution:

[[email protected] kubernetes]# nslookup cls-b3mg1p92.ccs.tencent-cloud.com 10.0.21.13
Server:         10.0.21.13
Address:        10.0.21.13#53

Name:   cls-b3mg1p92.ccs.tencent-cloud.com
Address: 10.0.0.60

At the same time, we disable public-network access for the cluster and then use kubectl to access it to see whether it still succeeds.
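
For example (a sketch; this assumes the kubeconfig server address is the cluster's intranet domain name rather than a hosts entry or the public endpoint):

# grep server ~/.kube/config
# kubectl get nodes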

If access works normally, the automatic intranet resolution has been configured successfully.
