
Restore backup data on persistent volumes

2022-07-07 21:24:00 Tianxiang shop

This document describes how to restore backup data stored on persistent volumes to a TiDB cluster in Kubernetes. The persistent volumes described in this document can be of any Kubernetes-supported persistent volume type. As an example, this document restores data from network file system (NFS) storage to TiDB.

The restore method described in this document is implemented based on the CustomResourceDefinition (CRD) of TiDB Operator, using the BR tool underneath to restore the data. BR stands for Backup & Restore, the command-line tool for distributed backup and restore of TiDB clusters, used to back up and restore TiDB cluster data.

Usage scenarios

After you use BR to back up TiDB cluster data to a persistent volume, if you need to restore the backed-up SST (key-value pair) files from the persistent volume to a TiDB cluster, refer to this document to restore the data using BR.

Note

  • BR is supported only for TiDB v3.1 and later versions.
  • Data restored by BR cannot be replicated to a downstream cluster, because BR imports SST files directly, and the downstream cluster currently has no way to obtain the upstream SST files.

Step 1: Prepare the restore environment

Before using BR to restore the backup data on a PV to TiDB, take the following steps to prepare the restore environment.

  1. Download the file `backup-rbac.yaml` to the server that runs the restore task.

  2. Execute the following command to create the RBAC-related resources required for the restore in the `test2` namespace:

    kubectl apply -f backup-rbac.yaml -n test2

  3. Make sure that you can access the NFS server used to store the backup data from your Kubernetes cluster.
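    As a quick sketch of this check, assuming the `showmount` client (from the `nfs-common` or `nfs-utils` package) is available on a cluster node and using the same `${nfs_server_if}` placeholder as the `restore.yaml` example below, you can verify that the export is visible and that the backup files exist. The mount point `/tmp/nfs-check` is only an illustration:

    ```shell
    # List the exports offered by the NFS server; /nfs should appear.
    showmount -e ${nfs_server_if}

    # Optionally mount the export temporarily and confirm the backup
    # directory (backup-nfs, matching .spec.local.prefix) is present.
    mkdir -p /tmp/nfs-check
    mount -t nfs ${nfs_server_if}:/nfs /tmp/nfs-check
    ls /tmp/nfs-check/backup-nfs
    umount /tmp/nfs-check
    ```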

  4. If you are using TiDB earlier than v4.0.8, you also need to complete the following steps. If you are using TiDB v4.0.8 or a later version, skip these steps.

    1. Make sure that you have the `SELECT` and `UPDATE` privileges on the `mysql.tidb` table of the target database, so that the GC time can be adjusted before and after the restore.

    2. Create the `restore-demo2-tidb-secret` secret:

      kubectl create secret generic restore-demo2-tidb-secret --from-literal=user=root --from-literal=password=<password> --namespace=test2
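    For TiDB earlier than v4.0.8, the GC time is adjusted through the `mysql.tidb` system table, which is why the account needs `SELECT` and `UPDATE` privileges on it. As a sketch of what this adjustment amounts to, the statements below lengthen `tikv_gc_life_time` before the restore and revert it afterwards; the `72h` and `10m` values are only illustrative:

    ```sql
    -- Check the current GC life time (requires SELECT on mysql.tidb)
    SELECT VARIABLE_VALUE FROM mysql.tidb WHERE VARIABLE_NAME = 'tikv_gc_life_time';

    -- Lengthen it before the restore so data is not garbage-collected
    -- while the task is running (requires UPDATE on mysql.tidb)
    UPDATE mysql.tidb SET VARIABLE_VALUE = '72h' WHERE VARIABLE_NAME = 'tikv_gc_life_time';

    -- After the restore completes, change it back to the original value
    UPDATE mysql.tidb SET VARIABLE_VALUE = '10m' WHERE VARIABLE_NAME = 'tikv_gc_life_time';
    ```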

Step 2: Restore data from the persistent volume

  1. Create the Restore custom resource (CR) to restore the specified backup data to the TiDB cluster:

    kubectl apply -f restore.yaml

    The content of `restore.yaml` is as follows:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo2-restore-nfs
      namespace: test2
    spec:
      # backupType: full
      br:
        cluster: demo2
        clusterNamespace: test2
        # logLevel: info
        # statusAddr: ${status-addr}
        # concurrency: 4
        # rateLimit: 0
        # checksum: true
      # # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      # to:
      #   host: ${tidb_host}
      #   port: ${tidb_port}
      #   user: ${tidb_user}
      #   secretName: restore-demo2-tidb-secret
      local:
        prefix: backup-nfs
        volume:
          name: nfs
          nfs:
            server: ${nfs_server_if}
            path: /nfs
        volumeMount:
          name: nfs
          mountPath: /nfs

    When configuring `restore.yaml`, note the following:

    • In the example above, the backup data stored in the `local://${.spec.local.volume.nfs.path}/${.spec.local.prefix}/` directory on NFS is restored to the TiDB cluster `demo2` in the `test2` namespace. For more configurations of persistent volume storage, refer to Local storage fields.

    • Some parameters in `.spec.br` are optional and can be omitted, such as `logLevel`, `statusAddr`, `concurrency`, `rateLimit`, `checksum`, `timeAgo`, and `sendCredToTikv`. For more information about `.spec.br` fields, refer to BR fields.

    • If you are using TiDB v4.0.8 or a later version, BR can automatically adjust the `tikv_gc_life_time` parameter, and you do not need to configure the `spec.to` field in the Restore CR.

    • For more information about the Restore CR fields, refer to Restore CR fields.

  2. After creating the Restore CR, view the restore status by running the following command:

    kubectl get rt -n test2 -owide
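    (`rt` is the short name of the `restore` resource.) If the restore does not complete, you can inspect it in more detail. The following is a sketch; the Job name `restore-demo2-restore-nfs` is an assumption based on common TiDB Operator naming conventions and may differ in your deployment:

    ```shell
    # The same query using the full resource name
    kubectl get restore demo2-restore-nfs -n test2 -o wide

    # Show events and status conditions recorded on the Restore CR
    kubectl describe restore demo2-restore-nfs -n test2

    # The restore runs as a Kubernetes Job; check its pod logs on failure
    kubectl logs -n test2 job/restore-demo2-restore-nfs
    ```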
