
Restore backup data on S3 compatible storage with br

2022-07-06 08:03:00 Tianxiang shop

This document describes how to restore backup SST files from Amazon S3-compatible storage to a TiDB cluster running in Kubernetes.

The restore method described in this document is implemented through the Custom Resource Definitions (CRDs) provided by TiDB Operator, with BR performing the data restore underneath. BR, short for Backup & Restore, is a command-line tool for distributed backup and restore of TiDB cluster data.

Usage scenarios

After you back up TiDB cluster data to Amazon S3 using BR, if you need to restore the backup SST (key-value pair) files from Amazon S3 to a TiDB cluster, follow the instructions in this document to perform the restore with BR.

Note

  • BR only supports TiDB v3.1 and later versions.
  • Data restored by BR cannot be replicated to a downstream cluster, because BR imports SST files directly, and the downstream cluster currently has no way to access the upstream SST files.

This document assumes that the backup data stored under the folder specified by  spec.s3.prefix  in the bucket specified by  spec.s3.bucket  on Amazon S3 is restored to the TiDB cluster  demo2  in the namespace  test2. The detailed steps are as follows.

Step 1: Prepare the restore environment

Before restoring backup data on S3-compatible storage to TiDB using BR, take the following steps to prepare the restore environment.

  1. Download the file  backup-rbac.yaml, and execute the following command in the  test2  namespace to create the RBAC-related resources required for the restore:

    kubectl apply -f backup-rbac.yaml -n test2

  2. Grant permissions to access the remote storage.

  3. If the TiDB version you use is earlier than v4.0.8, you also need to complete the following steps. If you use TiDB v4.0.8 or a later version, skip this step.

    1. Make sure that you have a database account with the  SELECT  and  UPDATE  privileges on the  mysql.tidb  table of the target database, so that the GC time can be adjusted before and after the restore.

    2. Create the  restore-demo2-tidb-secret  secret to store the root account and password used to access the TiDB cluster:

      kubectl create secret generic restore-demo2-tidb-secret --from-literal=password=${password} --namespace=test2
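The  s3-secret  referenced by Method 1 in the next step must also exist in the  test2  namespace. A minimal sketch of such a credential secret follows; the  access_key/secret_key  key names follow TiDB Operator's convention for S3 credential secrets, so verify them against your Operator version:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-secret
  namespace: test2
type: Opaque
stringData:
  # Assumption: TiDB Operator reads these key names for S3 credentials
  access_key: <your-aws-access-key-id>
  secret_key: <your-aws-secret-access-key>
```

Equivalently, the secret can be created directly with `kubectl create secret generic s3-secret --from-literal=access_key=... --from-literal=secret_key=... -n test2`.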

Step 2: Restore the specified backup data to the TiDB cluster

Depending on which method you chose to grant remote storage access in the previous step, use the corresponding method below to restore the backup data to TiDB:

  • Method 1: If you granted permissions with accessKey and secretKey, create the  Restore CR  to restore the cluster data:

    kubectl apply -f restore-aws-s3.yaml

    The content of the  restore-aws-s3.yaml  file is as follows:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo2-restore-s3
      namespace: test2
    spec:
      br:
        cluster: demo2
        clusterNamespace: test2
        # logLevel: info
        # statusAddr: ${status_addr}
        # concurrency: 4
        # rateLimit: 0
        # timeAgo: ${time}
        # checksum: true
        # sendCredToTikv: true
      # # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      # to:
      #   host: ${tidb_host}
      #   port: ${tidb_port}
      #   user: ${tidb_user}
      #   secretName: restore-demo2-tidb-secret
      s3:
        provider: aws
        secretName: s3-secret
        region: us-west-1
        bucket: my-bucket
        prefix: my-folder

  • Method 2: If you granted permissions by binding an IAM role to the Pod, create the  Restore CR  to restore the cluster data:

    kubectl apply -f restore-aws-s3.yaml

    The content of the  restore-aws-s3.yaml  file is as follows:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo2-restore-s3
      namespace: test2
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::123456789012:role/user
    spec:
      br:
        cluster: demo2
        sendCredToTikv: false
        clusterNamespace: test2
        # logLevel: info
        # statusAddr: ${status_addr}
        # concurrency: 4
        # rateLimit: 0
        # timeAgo: ${time}
        # checksum: true
      # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      to:
        host: ${tidb_host}
        port: ${tidb_port}
        user: ${tidb_user}
        secretName: restore-demo2-tidb-secret
      s3:
        provider: aws
        region: us-west-1
        bucket: my-bucket
        prefix: my-folder

  • Method 3: If you granted permissions by binding an IAM role to the ServiceAccount, create the  Restore CR  to restore the cluster data:

    kubectl apply -f restore-aws-s3.yaml

    The content of the  restore-aws-s3.yaml  file is as follows:

    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo2-restore-s3
      namespace: test2
    spec:
      serviceAccount: tidb-backup-manager
      br:
        cluster: demo2
        sendCredToTikv: false
        clusterNamespace: test2
        # logLevel: info
        # statusAddr: ${status_addr}
        # concurrency: 4
        # rateLimit: 0
        # timeAgo: ${time}
        # checksum: true
      # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      to:
        host: ${tidb_host}
        port: ${tidb_port}
        user: ${tidb_user}
        secretName: restore-demo2-tidb-secret
      s3:
        provider: aws
        region: us-west-1
        bucket: my-bucket
        prefix: my-folder
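For Method 3, the ServiceAccount named in  spec.serviceAccount  must exist in the  test2  namespace and be associated with an IAM role that can read the bucket. On EKS this is commonly done with IAM Roles for Service Accounts (IRSA); a minimal sketch, assuming IRSA and an illustrative role ARN:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tidb-backup-manager
  namespace: test2
  annotations:
    # Assumption: the cluster uses EKS IRSA; replace the ARN with your own role
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/user
```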

When configuring the  restore-aws-s3.yaml  file, note the following:

  • For the configuration of S3-compatible storage, refer to  S3 storage fields.
  • Some parameters in  .spec.br  are optional, such as  logLevel, statusAddr, concurrency, rateLimit, checksum, timeAgo, and  sendCredToTikv. For more information about  .spec.br  fields, refer to  BR fields.
  • If you use TiDB v4.0.8 or a later version, BR can automatically adjust the  tikv_gc_life_time  parameter, and you do not need to configure the  spec.to  field in the Restore CR.
  • For more information about the Restore CR fields, refer to  Restore CR fields.

After creating the  Restore CR, you can check the restore status with the following command:

kubectl get rt -n test2 -o wide
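The `-o wide` output shows the restore status directly. If you want to script the check instead, you can read the status conditions from `kubectl get restore demo2-restore-s3 -n test2 -o json`. A minimal sketch in Python; the `Complete` and `Failed` condition types follow TiDB Operator's Restore status conventions, so verify them against your Operator version:

```python
import json

def restore_finished(restore_json: str) -> str:
    """Return 'Complete', 'Failed', or 'Running' for a Restore CR,
    given the JSON printed by `kubectl get restore ... -o json`."""
    status = json.loads(restore_json).get("status", {})
    for cond in status.get("conditions", []):
        # A terminal condition has status "True" and type Complete/Failed
        if cond.get("status") == "True" and cond.get("type") in ("Complete", "Failed"):
            return cond["type"]
    return "Running"

# Example status payload shaped like a finished restore:
sample = json.dumps({"status": {"conditions": [{"type": "Complete", "status": "True"}]}})
print(restore_finished(sample))  # Complete
```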

Original site

Copyright notice
This article was created by [Tianxiang shop]. When reposting, please include a link to the original article.
https://yzsam.com/2022/187/202207060758156791.html