
Deploying a production Loki cluster in microservices mode

2022-06-25 01:06:00 K8s technology circle

Earlier we covered Loki's monolithic (single binary) mode and the simple scalable mode with separated read and write paths. When your daily log volume exceeds the terabyte scale, it may be time to deploy Loki in microservices mode.

The microservices deployment mode runs each Loki component as a separate process, and each process is invoked with its target component specified. Every component starts a gRPC server for internal requests and an HTTP server for external API requests. The components are:

  • ingester
  • distributor
  • query-frontend
  • query-scheduler
  • querier
  • index-gateway
  • ruler
  • compactor
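
In practice every component is the same Loki binary started with a different target; a minimal sketch outside Kubernetes (the config path here is illustrative, and the Helm Chart below handles all of this for you):

$ loki -config.file=/etc/loki/config.yaml -target=distributor
$ loki -config.file=/etc/loki/config.yaml -target=ingester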

Running components as individual microservices lets you scale by increasing the number of instances of each one, and a cluster tailored this way gives better observability into each component. Microservices mode is the most efficient Loki installation; however, it is also the most complex to set up and maintain.

Microservices mode is recommended for very large Loki clusters, or for clusters that need more control over scaling and cluster operations.

Microservices mode is best suited to deployment in a Kubernetes cluster, and two installation methods are provided: Jsonnet and Helm Chart.

Helm Chart

Here again we use a Helm Chart to install Loki in microservices mode. Before installing, remember to delete the Loki-related services installed in the previous chapter.

First, fetch the Chart package for microservices mode:

$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm pull grafana/loki-distributed --untar --version 0.48.4
$ cd loki-distributed
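
Before customizing anything, you can dump the chart's default values for reference (standard Helm usage; the output is long):

$ helm show values grafana/loki-distributed --version 0.48.4 > default-values.yaml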

This Chart supports the components shown in the table below. The ingester, distributor, querier, and query-frontend components are always installed; the remaining components are optional.

Component                  Optional   Enabled by default
gateway                    ✓          ✓
ingester                   n/a        n/a
distributor                n/a        n/a
querier                    n/a        n/a
query-frontend             n/a        n/a
table-manager              ✓          ✗
compactor                  ✓          ✗
ruler                      ✓          ✗
index-gateway              ✓          ✗
memcached-chunks           ✓          ✗
memcached-frontend         ✓          ✗
memcached-index-queries    ✓          ✗
memcached-index-writes     ✓          ✗

This Chart configures Loki in microservices mode. It has been tested with, and can be used with, boltdb-shipper and memberlist. Other storage and discovery options are also available, but the Chart does not support setting up Consul or Etcd for ring discovery; those would need to be configured separately. Instead, you can use memberlist, which requires no separate key/value store. By default the Chart creates a headless Service for the memberlist, and the ingester, distributor, querier, and ruler are part of it.

Install minio

As an example, we will use memberlist, boltdb-shipper, and minio for storage. Since this Chart does not include minio, we need to install minio separately:

$ helm repo add minio https://helm.min.io/
$ helm pull minio/minio --untar --version 8.0.10
$ cd minio

Create a values file:

# ci/loki-values.yaml
accessKey: "myaccessKey"
secretKey: "mysecretKey"

persistence:
  enabled: true
  storageClass: "local-path"
  accessMode: ReadWriteOnce
  size: 5Gi

service:
  type: NodePort
  port: 9000
  nodePort: 32000

resources:
  requests:
    memory: 1Gi

Install minio with the values file above:

$ helm upgrade --install minio -n logging -f ci/loki-values.yaml .
Release "minio" does not exist. Installing it now.
NAME: minio
LAST DEPLOYED: Sun Jun 19 16:56:28 2022
NAMESPACE: logging
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Minio can be accessed via port 9000 on the following DNS name from within your cluster:
minio.logging.svc.cluster.local

To access Minio from localhost, run the below commands:

  1. export POD_NAME=$(kubectl get pods --namespace logging -l "release=minio" -o jsonpath="{.items[0].metadata.name}")

  2. kubectl port-forward $POD_NAME 9000 --namespace logging

Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

You can now access Minio server on http://localhost:9000. Follow the below steps to connect to Minio server with mc client:

  1. Download the Minio mc client - https://docs.minio.io/docs/minio-client-quickstart-guide

  2. Get the ACCESS_KEY=$(kubectl get secret minio -o jsonpath="{.data.accesskey}" | base64 --decode) and the SECRET_KEY=$(kubectl get secret minio -o jsonpath="{.data.secretkey}" | base64 --decode)

  3. mc alias set minio-local http://localhost:9000 "$ACCESS_KEY" "$SECRET_KEY" --api s3v4

  4. mc ls minio-local

Alternately, you can use your browser or the Minio SDK to access the server - https://docs.minio.io/categories/17

After installation, check the status of the corresponding Pod:

$ kubectl get pods -n logging
NAME                     READY   STATUS    RESTARTS   AGE
minio-548656f786-gctk9   1/1     Running   0          2m45s
$ kubectl get svc -n logging
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
minio   NodePort   10.111.58.196   <none>        9000:32000/TCP   3h16m

You can access minio through NodePort 32000 as specified above:

[Figure: minio console]

Then remember to create a bucket named loki-data.
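
If you prefer the command line over the web console, the bucket can also be created with the mc client from the NOTES above; a minimal sketch assuming a port-forward to minio on localhost:9000:

$ mc alias set minio-local http://localhost:9000 "myaccessKey" "mysecretKey" --api s3v4
$ mc mb minio-local/loki-data
$ mc ls minio-local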

Install Loki

Now that our object storage is ready, let's install Loki in microservices mode. First, create a values file as shown below:

# ci/minio-values.yaml
loki:
  structuredConfig:
    ingester:
      max_transfer_retries: 0
      chunk_idle_period: 1h
      chunk_target_size: 1536000
      max_chunk_age: 1h
    storage_config:
      aws:
        endpoint: minio.logging.svc.cluster.local:9000
        insecure: true
        bucketnames: loki-data
        access_key_id: myaccessKey
        secret_access_key: mysecretKey
        s3forcepathstyle: true
      boltdb_shipper:
        shared_store: s3
    schema_config:
      configs:
        - from: 2022-06-21
          store: boltdb-shipper
          object_store: s3
          schema: v12
          index:
            prefix: loki_index_
            period: 24h

distributor:
  replicas: 2

ingester:
  replicas: 2
  persistence:
    enabled: true
    size: 1Gi
    storageClass: local-path

querier:
  replicas: 2
  persistence:
    enabled: true
    size: 1Gi
    storageClass: local-path

queryFrontend:
  replicas: 2

gateway:
  nginxConfig:
    httpSnippet: |-
      client_max_body_size 100M;
    serverSnippet: |-
      client_max_body_size 100M;

The configuration above selectively overrides the default values in the loki.config template file; most configuration parameters can be set externally through loki.structuredConfig. loki.config, loki.schemaConfig, and loki.storageConfig can be used in combination with loki.structuredConfig, and values in loki.structuredConfig take higher priority.

Here we use loki.structuredConfig.storage_config.aws to point data storage at minio. For high availability, several core components are configured with 2 replicas, and the ingester and querier are configured with persistent storage.
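
Before installing, you can render the Chart locally to check how these values land in the final Loki configuration (standard helm template; it prints all manifests to stdout):

$ helm template loki . -n logging -f ci/minio-values.yaml | less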

Now install everything with the values file above:

$ helm upgrade --install loki -n logging -f ci/minio-values.yaml .
Release "loki" does not exist. Installing it now.
NAME: loki
LAST DEPLOYED: Tue Jun 21 16:20:10 2022
NAMESPACE: logging
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
***********************************************************************
 Welcome to Grafana Loki
 Chart version: 0.48.4
 Loki version: 2.5.0
***********************************************************************

Installed components:
* gateway
* ingester
* distributor
* querier
* query-frontend

Several components are installed: gateway, ingester, distributor, querier, and query-frontend. The status of the corresponding Pods is as follows:

$ kubectl get pods -n logging
NAME                                                    READY   STATUS    RESTARTS       AGE
loki-loki-distributed-distributor-5dfdd5bd78-nxdq8      1/1     Running   0              2m40s
loki-loki-distributed-distributor-5dfdd5bd78-rh4gz      1/1     Running   0              116s
loki-loki-distributed-gateway-6f4cfd898c-hpszv          1/1     Running   0              21m
loki-loki-distributed-ingester-0                        1/1     Running   0              96s
loki-loki-distributed-ingester-1                        1/1     Running   0              2m38s
loki-loki-distributed-querier-0                         1/1     Running   0              2m2s
loki-loki-distributed-querier-1                         1/1     Running   0              2m33s
loki-loki-distributed-query-frontend-6d9845cb5b-p4vns   1/1     Running   0              4s
loki-loki-distributed-query-frontend-6d9845cb5b-sq5hr   1/1     Running   0              2m40s
minio-548656f786-gctk9                                  1/1     Running   1 (123m ago)   47h
$ kubectl get svc -n logging
NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
loki-loki-distributed-distributor         ClusterIP   10.102.156.127   <none>        3100/TCP,9095/TCP            22m
loki-loki-distributed-gateway             ClusterIP   10.111.73.138    <none>        80/TCP                       22m
loki-loki-distributed-ingester            ClusterIP   10.98.238.236    <none>        3100/TCP,9095/TCP            22m
loki-loki-distributed-ingester-headless   ClusterIP   None             <none>        3100/TCP,9095/TCP            22m
loki-loki-distributed-memberlist          ClusterIP   None             <none>        7946/TCP                     22m
loki-loki-distributed-querier             ClusterIP   10.101.117.137   <none>        3100/TCP,9095/TCP            22m
loki-loki-distributed-querier-headless    ClusterIP   None             <none>        3100/TCP,9095/TCP            22m
loki-loki-distributed-query-frontend      ClusterIP   None             <none>        3100/TCP,9095/TCP,9096/TCP   22m
minio                                     NodePort    10.111.58.196    <none>        9000:32000/TCP               47h
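
As a quick smoke test: the gateway's nginx (shown below) returns 200 "OK" on /, and every Loki component exposes Loki's standard /ready endpoint, so you can probe both (run the port-forwards in separate shells):

$ kubectl -n logging port-forward svc/loki-loki-distributed-gateway 8080:80
$ curl http://localhost:8080/        # the gateway answers OK
$ kubectl -n logging port-forward svc/loki-loki-distributed-distributor 3100
$ curl http://localhost:3100/ready   # Loki's readiness endpoint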

The corresponding Loki configuration file looks like this:

$ kubectl get cm -n logging loki-loki-distributed -o yaml
apiVersion: v1
data:
  config.yaml: |
    auth_enabled: false
    chunk_store_config:
      max_look_back_period: 0s
    compactor:
      shared_store: filesystem
    distributor:
      ring:
        kvstore:
          store: memberlist
    frontend:
      compress_responses: true
      log_queries_longer_than: 5s
      tail_proxy_url: http://loki-loki-distributed-querier:3100
    frontend_worker:
      frontend_address: loki-loki-distributed-query-frontend:9095
    ingester:
      chunk_block_size: 262144
      chunk_encoding: snappy
      chunk_idle_period: 1h
      chunk_retain_period: 1m
      chunk_target_size: 1536000
      lifecycler:
        ring:
          kvstore:
            store: memberlist
          replication_factor: 1
      max_chunk_age: 1h
      max_transfer_retries: 0
      wal:
        dir: /var/loki/wal
    limits_config:
      enforce_metric_name: false
      max_cache_freshness_per_query: 10m
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      split_queries_by_interval: 15m
    memberlist:
      join_members:
      - loki-loki-distributed-memberlist
    query_range:
      align_queries_with_step: true
      cache_results: true
      max_retries: 5
      results_cache:
        cache:
          enable_fifocache: true
          fifocache:
            max_size_items: 1024
            validity: 24h
    ruler:
      alertmanager_url: https://alertmanager.xx
      external_url: https://alertmanager.xx
      ring:
        kvstore:
          store: memberlist
      rule_path: /tmp/loki/scratch
      storage:
        local:
          directory: /etc/loki/rules
        type: local
    schema_config:
      configs:
      - from: "2022-06-21"
        index:
          period: 24h
          prefix: loki_index_
        object_store: s3
        schema: v12
        store: boltdb-shipper
    server:
      http_listen_port: 3100
    storage_config:
      aws:
        access_key_id: myaccessKey
        bucketnames: loki-data
        endpoint: minio.logging.svc.cluster.local:9000
        insecure: true
        s3forcepathstyle: true
        secret_access_key: mysecretKey
      boltdb_shipper:
        active_index_directory: /var/loki/index
        cache_location: /var/loki/cache
        cache_ttl: 168h
        shared_store: s3
      filesystem:
        directory: /var/loki/chunks
    table_manager:
      retention_deletes_enabled: false
      retention_period: 0s
kind: ConfigMap
# ......

Once again there is a gateway component that routes requests to the right backend components. It is an nginx service with the following configuration:

$ kubectl -n logging exec -it loki-loki-distributed-gateway-6f4cfd898c-hpszv -- cat /etc/nginx/nginx.conf
worker_processes  5;  ## Default: 1
error_log  /dev/stderr;
pid        /tmp/nginx.pid;
worker_rlimit_nofile 8192;

events {
  worker_connections  4096;  ## Default: 1024
}

http {
  client_body_temp_path /tmp/client_temp;
  proxy_temp_path       /tmp/proxy_temp_path;
  fastcgi_temp_path     /tmp/fastcgi_temp;
  uwsgi_temp_path       /tmp/uwsgi_temp;
  scgi_temp_path        /tmp/scgi_temp;

  default_type application/octet-stream;
  log_format   main '$remote_addr - $remote_user [$time_local]  $status '
        '"$request" $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for"';
  access_log   /dev/stderr  main;

  sendfile     on;
  tcp_nopush   on;
  resolver kube-dns.kube-system.svc.cluster.local;

  client_max_body_size 100M;

  server {
    listen             8080;

    location = / {
      return 200 'OK';
      auth_basic off;
    }

    location = /api/prom/push {
      proxy_pass       http://loki-loki-distributed-distributor.logging.svc.cluster.local:3100$request_uri;
    }

    location = /api/prom/tail {
      proxy_pass       http://loki-loki-distributed-querier.logging.svc.cluster.local:3100$request_uri;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    }

    # Ruler
    location ~ /prometheus/api/v1/alerts.* {
      proxy_pass       http://loki-loki-distributed-ruler.logging.svc.cluster.local:3100$request_uri;
    }
    location ~ /prometheus/api/v1/rules.* {
      proxy_pass       http://loki-loki-distributed-ruler.logging.svc.cluster.local:3100$request_uri;
    }
    location ~ /api/prom/rules.* {
      proxy_pass       http://loki-loki-distributed-ruler.logging.svc.cluster.local:3100$request_uri;
    }
    location ~ /api/prom/alerts.* {
      proxy_pass       http://loki-loki-distributed-ruler.logging.svc.cluster.local:3100$request_uri;
    }

    location ~ /api/prom/.* {
      proxy_pass       http://loki-loki-distributed-query-frontend.logging.svc.cluster.local:3100$request_uri;
    }

    location = /loki/api/v1/push {
      proxy_pass       http://loki-loki-distributed-distributor.logging.svc.cluster.local:3100$request_uri;
    }

    location = /loki/api/v1/tail {
      proxy_pass       http://loki-loki-distributed-querier.logging.svc.cluster.local:3100$request_uri;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    }

    location ~ /loki/api/.* {
      proxy_pass       http://loki-loki-distributed-query-frontend.logging.svc.cluster.local:3100$request_uri;
    }

    client_max_body_size 100M;
  }
}

From the configuration above you can see that the push endpoints /api/prom/push and /loki/api/v1/push are forwarded to http://loki-loki-distributed-distributor.logging.svc.cluster.local:3100$request_uri, that is, to the distributor service:

$ kubectl get pods -n logging -l app.kubernetes.io/component=distributor,app.kubernetes.io/instance=loki,app.kubernetes.io/name=loki-distributed
NAME                                                 READY   STATUS    RESTARTS   AGE
loki-loki-distributed-distributor-5dfdd5bd78-nxdq8   1/1     Running   0          8m20s
loki-loki-distributed-distributor-5dfdd5bd78-rh4gz   1/1     Running   0          7m36s

So when we want to write log data, we naturally write to the gateway's push endpoint. To verify that everything works, we next install Promtail and Grafana to exercise the write and read paths.
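
Before that, you can also push a test entry by hand to confirm the write path end to end. A minimal sketch against Loki's push API, run from a Pod inside the cluster (the job label is arbitrary; the timestamp must be in nanoseconds):

$ curl -s -X POST "http://loki-loki-distributed-gateway.logging.svc.cluster.local/loki/api/v1/push" \
    -H "Content-Type: application/json" \
    --data-raw "{\"streams\":[{\"stream\":{\"job\":\"manual-test\"},\"values\":[[\"$(date +%s)000000000\",\"hello loki\"]]}]}"

An empty 204 response means the distributor accepted the entry.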

Install Promtail

Fetch and unpack the promtail Chart package:

$ helm pull grafana/promtail --untar
$ cd promtail

Create a values file:

# ci/minio-values.yaml
rbac:
  pspEnabled: false
config:
  clients:
    - url: http://loki-loki-distributed-gateway/loki/api/v1/push

Note that the Loki address configured in Promtail is http://loki-loki-distributed-gateway/loki/api/v1/push: Promtail sends log data to the gateway first, and the gateway forwards it to the write path according to the endpoint. Install Promtail with the values file above:

$ helm upgrade --install promtail -n logging -f ci/minio-values.yaml .
Release "promtail" does not exist. Installing it now.
NAME: promtail
LAST DEPLOYED: Tue Jun 21 16:31:34 2022
NAMESPACE: logging
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
***********************************************************************
 Welcome to Grafana Promtail
 Chart version: 5.1.0
 Promtail version: 2.5.0
***********************************************************************

Verify the application is working by running these commands:

* kubectl --namespace logging port-forward daemonset/promtail 3101
* curl http://127.0.0.1:3101/metrics

After a successful installation, one promtail Pod runs on each node:

$ kubectl get pods -n logging -l app.kubernetes.io/name=promtail
NAME             READY   STATUS    RESTARTS   AGE
promtail-gbjzs   1/1     Running   0          38s
promtail-gjn5p   1/1     Running   0          38s
promtail-z6vhd   1/1     Running   0          38s
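
You can also confirm that the client URL made it into the rendered configuration; a sketch assuming the chart stores its config in a Secret named promtail (check kubectl get secrets -n logging if the name differs):

$ kubectl -n logging get secret promtail -o jsonpath='{.data.promtail\.yaml}' | base64 -d | grep -A 2 clients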

promtail is now collecting all container logs on each node and pushing them to the gateway, which forwards them on to the write path. We can check the gateway logs:

$ kubectl logs -f loki-loki-distributed-gateway-6f4cfd898c-hpszv -n logging
10.244.2.26 - - [21/Jun/2022:08:41:24 +0000]  204 "POST /loki/api/v1/push HTTP/1.1" 0 "-" "promtail/2.5.0" "-"
10.244.2.1 - - [21/Jun/2022:08:41:24 +0000]  200 "GET / HTTP/1.1" 2 "-" "kube-probe/1.22" "-"
10.244.2.26 - - [21/Jun/2022:08:41:25 +0000]  204 "POST /loki/api/v1/push HTTP/1.1" 0 "-" "promtail/2.5.0" "-"
10.244.1.28 - - [21/Jun/2022:08:41:26 +0000]  204 "POST /loki/api/v1/push HTTP/1.1" 0 "-" "promtail/2.5.0" "-"
......

You can see the gateway directly receiving /loki/api/v1/push requests, i.e. the ones sent by promtail. The log data is then distributed to the ingesters, which store it in minio. You can check that minio already contains log data; the minio Service installed earlier was given NodePort 32000:

[Figure: the loki-data bucket in minio now contains log data]

From here you can see that data is being written successfully.
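
The read path can be checked from the command line as well. A minimal sketch querying through the gateway, which routes /loki/api/.* to the query-frontend (the label selector is only an example):

$ curl -sG "http://loki-loki-distributed-gateway.logging.svc.cluster.local/loki/api/v1/query_range" \
    --data-urlencode 'query={namespace="logging"}' \
    --data-urlencode 'limit=5'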

Install Grafana

Next, let's verify the read path by installing Grafana and connecting it to Loki:

$ helm pull grafana/grafana --untar
$ cd grafana

Create the following values file:

# ci/minio-values.yaml
service:
  type: NodePort
  nodePort: 32001
rbac:
  pspEnabled: false
persistence:
  enabled: true
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  size: 1Gi

Install Grafana with the values file above:

$ helm upgrade --install grafana -n logging -f ci/minio-values.yaml .
Release "grafana" does not exist. Installing it now.
NAME: grafana
LAST DEPLOYED: Tue Jun 21 16:47:54 2022
NAMESPACE: logging
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:

   kubectl get secret --namespace logging grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:

   grafana.logging.svc.cluster.local

   Get the Grafana URL to visit by running these commands in the same shell:
     export NODE_PORT=$(kubectl get --namespace logging -o jsonpath="{.spec.ports[0].nodePort}" services grafana)
     export NODE_IP=$(kubectl get nodes --namespace logging -o jsonpath="{.items[0].status.addresses[0].address}")
     echo http://$NODE_IP:$NODE_PORT


3. Login with the password from step 1 and the username: admin

You can get the login password with the command from the NOTES above:

$ kubectl get secret --namespace logging grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Then log in to Grafana with the username admin and the password above:

[Figure: Grafana login page]

After logging in, add a data source in Grafana; fill in the gateway address http://loki-loki-distributed-gateway:

[Figure: adding the Loki data source in Grafana]
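
Alternatively, the data source can be provisioned declaratively rather than through the UI; the Grafana Chart accepts data sources in its values file. A minimal sketch to merge into ci/minio-values.yaml before installing:

datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        url: http://loki-loki-distributed-gateway
        access: proxy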

After saving the data source, you can filter logs on the Explore page. For example, let's tail the logs of the gateway application in real time, as shown below:

[Figure: live gateway logs in Grafana Explore]

If you can see the latest log data, we have successfully deployed Loki in microservices mode. This mode is very flexible: each component can be scaled out or in independently as needed, but operations and maintenance costs also rise considerably.

In addition, we can add caching for queries and writes. The Helm Chart used here has built-in support for memcached, which can also be replaced with redis.
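
For example, the memcached components from the component table at the top of this chapter can be enabled from the values file; a minimal sketch, assuming the Chart exposes them under the keys below (verify against the Chart's default values.yaml):

memcachedChunks:
  enabled: true
memcachedFrontend:
  enabled: true
memcachedIndexQueries:
  enabled: true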

Original article: https://yzsam.com/2022/176/202206242014563485.html
Copyright notice: this article was created by [K8s technology circle]; please include a link to the original when reposting.