
Backup tidb cluster to persistent volume

2022-07-07 21:25:00 Tianxiang shop

This document describes how to back up the data of a TiDB cluster on Kubernetes to a persistent volume. The persistent volumes described in this document can be of any Kubernetes-supported persistent volume type. As an example, this document backs up data to Network File System (NFS) storage.

The backup method described in this document is implemented using the CustomResourceDefinition (CRD) of TiDB Operator. It uses the BR tool underneath to obtain the cluster data and then stores the backup data on a persistent volume. BR stands for Backup & Restore, the command-line tool for distributed backup and restore of TiDB cluster data.

Usage scenarios

If you have the following backup requirements, consider using BR to make an ad-hoc backup or scheduled full backup of TiDB cluster data to persistent volumes:

  • A large amount of data needs to be backed up, and fast backup speed is required
  • A direct backup of data as SST files (key-value pairs) is needed

For other backup requirements, refer to Backup and Restore Overview to choose an appropriate backup method.

Note

  • BR only supports TiDB v3.1 and later versions.
  • Data backed up with BR can only be restored to TiDB databases, not to other databases.

Ad-hoc Backup

Ad-hoc backup supports both full backup and incremental backup. An ad-hoc backup is described by creating a custom Backup custom resource (CR) object. TiDB Operator performs the backup according to this Backup object. If an error occurs during the backup, the program does not retry automatically, and you need to handle it manually.

This document assumes a backup of the TiDB cluster demo1 deployed in the test1 namespace of Kubernetes. The following are the detailed steps.

Step 1: Prepare the ad-hoc backup environment

  1. Download the file backup-rbac.yaml to the server that performs the backup.

  2. Execute the following command to create the RBAC resources required for the backup in the test1 namespace:

    kubectl apply -f backup-rbac.yaml -n test1

  3. Confirm that the NFS server used to store the backup data is accessible from your Kubernetes cluster, and that you have configured TiKV to mount the same NFS shared directory to the same local directory as the backup. To mount NFS for TiKV, refer to the following configuration:

    spec:
      tikv:
        additionalVolumes:
        # specify volume types that are supported by Kubernetes, Ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes
        - name: nfs
          nfs:
            server: 192.168.0.2
            path: /nfs
        additionalVolumeMounts:
        # this must match `name` in `additionalVolumes`
        - name: nfs
          mountPath: /nfs

  4. If you use a TiDB version earlier than v4.0.8, you also need to complete the following steps. If you use TiDB v4.0.8 or a later version, skip these steps.

    1. Make sure that you have the SELECT and UPDATE privileges on the mysql.tidb table of the backup database, so that the GC time can be adjusted before and after the backup.

    2. Create the backup-demo1-tidb-secret secret to store the password of the user account that accesses the TiDB cluster:

      kubectl create secret generic backup-demo1-tidb-secret --from-literal=password=${password} --namespace=test1
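
    The privileges mentioned in the previous step can be granted with a statement like the following. This is a minimal sketch; `backup_user` is a hypothetical account name, and you should replace it with the user whose password is stored in the secret:

    ```sql
    -- Hypothetical backup account; replace with the user referenced by the secret.
    -- Grants the privileges BR needs to adjust GC time before and after backup.
    GRANT SELECT, UPDATE ON mysql.tidb TO 'backup_user'@'%';
    ```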

Step 2: Back up data to the persistent volume

  1. Create the Backup CR and back up the data to NFS:

    kubectl apply -f backup-nfs.yaml

    The content of backup-nfs.yaml is as follows. This example performs a full export of the TiDB cluster data and backs it up to NFS:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Backup
    metadata:
      name: demo1-backup-nfs
      namespace: test1
    spec:
      # # backupType: full
      # # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      # from:
      #   host: ${tidb-host}
      #   port: ${tidb-port}
      #   user: ${tidb-user}
      #   secretName: backup-demo1-tidb-secret
      br:
        cluster: demo1
        clusterNamespace: test1
        # logLevel: info
        # statusAddr: ${status-addr}
        # concurrency: 4
        # rateLimit: 0
        # checksum: true
        # options:
        # - --lastbackupts=420134118382108673
      local:
        prefix: backup-nfs
        volume:
          name: nfs
          nfs:
            server: ${nfs_server_ip}
            path: /nfs
        volumeMount:
          name: nfs
          mountPath: /nfs

    When configuring backup-nfs.yaml, note the following:

    • For incremental backup, you only need to specify the timestamp of the last backup with --lastbackupts in spec.br.options. For the limitations of incremental backup, refer to Use BR to Back Up and Restore.

    • .spec.local refers to the configuration related to the persistent volume. For details, refer to the Local storage field introduction.

    • Some parameters in spec.br are optional, such as logLevel, statusAddr, concurrency, rateLimit, checksum, and timeAgo. For more details of .spec.br fields, refer to the BR field introduction.

    • If you use TiDB v4.0.8 or a later version, BR adjusts the tikv_gc_life_time parameter automatically, and you do not need to configure the spec.tikvGCLifeTime and spec.from fields.

    • For more details of Backup CR fields, refer to the Backup CR field introduction.
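
    As a sketch of the incremental-backup note above, an incremental backup only adds the --lastbackupts option under spec.br, reusing the commented-out placeholder value from the backup-nfs.yaml example:

    ```yaml
    spec:
      br:
        cluster: demo1
        clusterNamespace: test1
        # Timestamp of the previous backup; the value below is the
        # placeholder from the example file, not a real timestamp.
        options:
        - --lastbackupts=420134118382108673
    ```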

  2. After the Backup CR is created, TiDB Operator automatically starts the backup according to the Backup CR. You can check the backup status using the following command:

    kubectl get bk -n test1 -owide

Backup examples

  • Back up all cluster data
  • Back up the data of a single database
  • Back up the data of a single table
  • Use table filters to back up the data of multiple tables
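
The scoped examples listed above are all variants of the same Backup CR. As a hedged sketch, the scope is typically narrowed with a table filter field (shown here as spec.tableFilter; check that your TiDB Operator version supports it, and the database and table names are placeholders):

```yaml
spec:
  br:
    cluster: demo1
    clusterNamespace: test1
  # Assumption: tableFilter is supported by your TiDB Operator version.
  # Placeholder names; replace db1/table1 with your own.
  tableFilter:
  - "db1.*"        # back up a single database
  - "db1.table1"   # back up a single table
```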

Scheduled full backup

You can set a backup policy to perform scheduled backups of the TiDB cluster, and set a backup retention policy to avoid keeping too many backups. A scheduled full backup is described by a customized BackupSchedule CR object. A full backup is triggered at each scheduled backup time point; the underlying implementation of a scheduled full backup is the ad-hoc full backup. The following are the detailed steps to create a scheduled full backup:

Step 1: Prepare the scheduled full backup environment

Same as Prepare the ad-hoc backup environment.

Step 2: Back up data to the persistent volume regularly

  1. Create the BackupSchedule CR to enable the scheduled full backup of the TiDB cluster, and back up the data to NFS:

    kubectl apply -f backup-schedule-nfs.yaml

    The content of backup-schedule-nfs.yaml is as follows:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: BackupSchedule
    metadata:
      name: demo1-backup-schedule-nfs
      namespace: test1
    spec:
      #maxBackups: 5
      #pause: true
      maxReservedTime: "3h"
      schedule: "*/2 * * * *"
      backupTemplate:
        # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
        # from:
        #   host: ${tidb_host}
        #   port: ${tidb_port}
        #   user: ${tidb_user}
        #   secretName: backup-demo1-tidb-secret
        br:
          cluster: demo1
          clusterNamespace: test1
          # logLevel: info
          # statusAddr: ${status-addr}
          # concurrency: 4
          # rateLimit: 0
          # checksum: true
        local:
          prefix: backup-nfs
          volume:
            name: nfs
            nfs:
              server: ${nfs_server_ip}
              path: /nfs
          volumeMount:
            name: nfs
            mountPath: /nfs

    As shown in the backup-schedule-nfs.yaml configuration example above, the backupSchedule configuration consists of two parts: the configuration unique to backupSchedule, and backupTemplate.

  2. After the scheduled full backup is created, check its status using the following command:

    kubectl get bks -n test1 -owide

    Check all the backup items under the scheduled full backup:

    kubectl get bk -l tidb.pingcap.com/backup-schedule=demo1-backup-schedule-nfs -n test1
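
    The commented-out pause: true field in backup-schedule-nfs.yaml suggests that a schedule can be paused in place. A hedged sketch of doing this with kubectl patch, assuming the bks short name resolves to BackupSchedule as in the status command above:

    ```shell
    # Pause the scheduled backup by setting spec.pause on the CR
    # (sketch; assumes the BackupSchedule CR created above).
    kubectl patch bks demo1-backup-schedule-nfs -n test1 \
      --type merge -p '{"spec":{"pause":true}}'
    ```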


Copyright notice
This article was created by [Tianxiang shop]. Please include a link to the original article when reposting. Thanks.
https://yzsam.com/2022/188/202207071811197135.html