
Deploying ELK on Kubernetes and Collecting Container Logs with Filebeat

2022-06-11 17:29:00 wx61eaae213a986

The environment for this article is CentOS 7.3 with a Kubernetes 1.11.2 cluster. For the installation procedure, see "Installing a Kubernetes v1.11.1 cluster with kubeadm".

To make it easier to study Kubernetes systematically, I have put together a series of Kubernetes learning articles covering the basics, the installation steps, and the wider Kubernetes ecosystem. I believe that after reading the series you will have a deeper understanding of Kubernetes.

1. Environment preparation

Elasticsearch requires the kernel parameter vm.max_map_count to be at least 262144 at runtime, so make sure the parameter has been adjusted before starting:

       
$ sysctl -w vm.max_map_count=262144

Alternatively, you can add an initContainer to the ES manifest to change the kernel parameter, but that requires the kubelet to be started with the --allow-privileged flag. Since this flag is generally not enabled in production, it is better to require the parameter to be set when the host is provisioned.
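For reference, here is a minimal sketch of what such an initContainer could look like in the Elasticsearch Pod spec (the container name and image are illustrative and not taken from this article's manifests):

      initContainers:
      - name: set-max-map-count            # illustrative name
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true                 # only works if the kubelet allows privileged containers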

ES configuration methods

  • Use the Cluster Update Settings API to change settings dynamically.
  • Use configuration files; by default they live in the config directory, and the exact location depends on how ES was installed.
  • elasticsearch.yml configures Elasticsearch itself.
  • jvm.options configures the ES JVM parameters.
  • log4j.properties configures ES logging.
  • Have settings prompted for and entered at startup.

The most commonly used method is the configuration file. ES configuration files are in YAML format, with the same formatting rules as Kubernetes manifests. Environment variables can be referenced in the configuration file, for example node.name: ${HOSTNAME}.
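As a small illustration (the values below are examples, not taken from this article's manifests), an elasticsearch.yml that relies on environment variables might look like this:

cluster.name: ${CLUSTER_NAME}        # resolved from an environment variable at startup
node.name: ${HOSTNAME}               # the example mentioned above
network.host: 0.0.0.0
path.data: /usr/share/elasticsearch/data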

ES node roles

An ES node can take on several roles:

  • Master-eligible node: eligible to be elected as the master node, often just called a master node. Set node.master: true.
  • Data node: stores data. Set node.data: true.
  • Ingest node: pre-processes documents. Set node.ingest: true.
  • Tribe node: used to federate multiple clusters.

A single node is master-eligible and a data node by default; for a multi-node cluster, plan each node's role carefully, for example with the settings sketched below.
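As a rough illustration of dedicated roles (an assumption about how you might split them, not a copy of this article's manifests), the relevant elasticsearch.yml settings for ES 6.x would be:

# dedicated master-eligible node
node.master: true
node.data: false
node.ingest: false

# dedicated data node
node.master: false
node.data: true
node.ingest: false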

2. Single-instance ELK deployment

Deploying a single-instance ELK stack is very simple; see the elk-single.yaml file on my GitHub. In short, it creates an ES Deployment, a Kibana Deployment, a headless Service for ES, and a NodePort Service for Kibana, so that Kibana can be accessed locally through the node's NodePort.
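To illustrate the headless Service part, here is a minimal sketch reconstructed from the kubectl output below rather than copied from elk-single.yaml (the selector label is an assumption and must match the label on the ES Deployment):

apiVersion: v1
kind: Service
metadata:
  name: es-single
spec:
  clusterIP: None                    # headless: DNS resolves directly to the Pod IPs
  selector:
    app: es-single                   # assumed label
  ports:
  - name: http
    port: 9200
  - name: transport
    port: 9300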

       
$ curl -L -O https://raw.githubusercontent.com/cocowool/k8s-go/master/elk/elk-single.yaml
$ kubectl apply -f elk-single.yaml
deployment.apps/kb-single created
service/kb-single-svc unchanged
deployment.apps/es-single created
service/es-single-nodeport unchanged
service/es-single unchanged
$ kubectl get all
NAME                             READY     STATUS    RESTARTS   AGE
pod/es-single-5b8b696ff8-9mqrz   1/1       Running   0          26s
pod/kb-single-69d6d9c744-sxzw9   1/1       Running   0          26s

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
service/es-single            ClusterIP   None             <none>        9200/TCP,9300/TCP               19m
service/es-single-nodeport   NodePort    172.17.197.237   <none>        9200:31200/TCP,9300:31300/TCP   13h
service/kb-single-svc        NodePort    172.17.27.11     <none>        5601:32601/TCP                  19m
service/kubernetes           ClusterIP   172.17.0.1       <none>        443/TCP                         14d

NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/es-single   1         1         1            1           26s
deployment.apps/kb-single   1         1         1            1           26s

NAME                                   DESIRED   CURRENT   READY     AGE
replicaset.apps/es-single-5b8b696ff8   1         1         1         26s
replicaset.apps/kb-single-69d6d9c744   1         1         1         26s

You can see the effect as follows:

(screenshot omitted)

3. ELK cluster deployment

3.1 Cluster without distinguishing node roles

       
$ curl -L -O https://raw.githubusercontent.com/cocowool/k8s-go/master/elk/elk-cluster.yaml
$ kubectl apply -f elk-cluster.yaml
deployment.apps/kb-single created
service/kb-single-svc created
statefulset.apps/es-cluster created
service/es-cluster-nodeport created
service/es-cluster created

The effect is as follows

(screenshot omitted)

3.2 Cluster with distinct node roles

If you need to distinguish node roles, you have to create two StatefulSets: one for the master cluster and one for the data cluster. For simplicity, the data cluster here uses emptyDir for storage; you could use localStorage or hostPath instead (for an introduction to storage, see "Introduction to the Kubernetes storage system"). That avoids rebuilding the index after data loss when a data node restarts, but if the Pod is rescheduled to another node and you want to keep the data, shared storage is the only option. The full manifest is in elk-cluster-with-role; a sketch of the data-node part follows.
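A rough sketch of what the data-node StatefulSet in such a manifest could look like (the image tag, labels, and environment variables are assumptions for illustration; see elk-cluster-with-role.yaml for the actual definitions):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster-data
spec:
  serviceName: es-cluster                # headless Service of the cluster
  replicas: 2
  selector:
    matchLabels:
      app: es-cluster-data               # assumed label
  template:
    metadata:
      labels:
        app: es-cluster-data
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0   # assumed version
        env:
        - name: node.master              # data-only role; the ES image maps such env vars to settings
          value: "false"
        - name: node.data
          value: "true"
        volumeMounts:
        - name: es-data
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: es-data
        emptyDir: {}                     # simple but non-persistent; use hostPath or a PVC to survive rescheduling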

       
$ curl -L -O https://raw.githubusercontent.com/cocowool/k8s-go/master/elk/elk-cluster-with-role.yaml
$ kubectl apply -f elk-cluster-with-role.yaml
deployment.apps/kb-single created
service/kb-single-svc created
statefulset.apps/es-cluster created
statefulset.apps/es-cluster-data created
service/es-cluster-nodeport created
service/es-cluster created
$ kubectl get all
NAME                             READY     STATUS              RESTARTS   AGE
pod/es-cluster-0                 1/1       Running             0          13s
pod/es-cluster-1                 0/1       ContainerCreating   0          2s
pod/es-cluster-data-0            1/1       Running             0          13s
pod/es-cluster-data-1            0/1       ContainerCreating   0          2s
pod/kb-single-5848f5f967-w8hwq   1/1       Running             0          14s

NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
service/es-cluster            ClusterIP   None             <none>        9200/TCP,9300/TCP               13s
service/es-cluster-nodeport   NodePort    172.17.207.135   <none>        9200:31200/TCP,9300:31300/TCP   13s
service/kb-single-svc         NodePort    172.17.8.137     <none>        5601:32601/TCP                  14s
service/kubernetes            ClusterIP   172.17.0.1       <none>        443/TCP                         16d

NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kb-single   1         1         1            1           14s

NAME                                   DESIRED   CURRENT   READY     AGE
replicaset.apps/kb-single-5848f5f967   1         1         1         14s

NAME                               DESIRED   CURRENT   AGE
statefulset.apps/es-cluster        3         2         14s
statefulset.apps/es-cluster-data   2         2         13s

The effect is as follows

(screenshot omitted)

4. Using Filebeat to monitor and collect container logs

Logstash can monitor log files that follow certain naming conventions, but container log file names are often irregular. That case is better suited to Filebeat, which monitors a log directory and, when it finds updated logs, ships them to Logstash for processing or sends them directly into ES.

On every node, symlinks to the container application logs are created under /var/log/containers by default. I ran into two small problems here. First, when mounting the hostPath volumes I did not mount the directories that the symlinks point to, so the symlinks were visible inside the container but the target files could not be found. Second, these log files are owned by root on the host, while the Filebeat Pod starts the application as the filebeat user by default, so this has to be configured separately (see the securityContext in the manifest below).

The effect is as follows

(screenshots omitted)

For the complete manifests, see my GitHub home page, which provides both a Deployment-based and a DaemonSet-based variant.

As for the specific log format, I have not done any further analysis due to time constraints; if any readers have worked it out, please share.

The main parts of the manifest are excerpted below.

       
kind: List
apiVersion: v1
items:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: filebeat-config
    labels:
      k8s-app: filebeat
      kubernetes.io/cluster-service: "true"
      app: filebeat-config
  data:
    filebeat.yml: |
      processors:
        - add_cloud_metadata:
      filebeat.modules:
      - module: system
      filebeat.inputs:
      - type: log
        paths:
          - /var/log/containers/*.log
        symlinks: true              # follow the symlinks under /var/log/containers
        # json.message_key: log
        # json.keys_under_root: true
      output.elasticsearch:
        hosts: ['es-single:9200']
      logging.level: info
- apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: filebeat
    labels:
      k8s-app: filebeat
      kubernetes.io/cluster-service: "true"
  spec:
    template:
      metadata:
        name: filebeat
        labels:
          app: filebeat
          k8s-app: filebeat
          kubernetes.io/cluster-service: "true"
      spec:
        containers:
        - image: docker.elastic.co/beats/filebeat:6.4.0
          name: filebeat
          args: [
            "-c", "/home/filebeat-config/filebeat.yml",
            "-e",
          ]
          securityContext:
            runAsUser: 0            # run as root so Filebeat can read the root-owned host logs
          volumeMounts:
          - name: filebeat-storage
            mountPath: /var/log/containers
          - name: varlogpods        # also mount the symlink targets, otherwise the links are dangling
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
          - name: "filebeat-volume"
            mountPath: "/home/filebeat-config"
        nodeSelector:
          role: front
        volumes:
        - name: filebeat-storage
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: filebeat-volume
          configMap:
            name: filebeat-config

References:

  1. Elasticsearch cluster on top of Kubernetes made easy
  2. Install Elasticsearch with Docker
  3. Docker Elasticsearch
  4. Running Kibana on Docker
  5. Configuring Elasticsearch
  6. Elasticsearch Node
  7. Logging Using Elasticsearch and Kibana
  8. Configuring Logstash for Docker
  9. Running Filebeat on Docker
  10. Filebeat Chinese guide
  11. Add experimental symlink support


Copyright notice: this article was created by [wx61eaae213a986]. Please include a link to the original when reposting: https://yzsam.com/2022/03/202203011903110425.html