Backup tidb cluster to persistent volume
2022-07-07 21:25:00 【Tianxiang shop】
This document describes how to back up the data of a TiDB cluster on Kubernetes to a persistent volume. The persistent volumes described in this document can be any Kubernetes-supported persistent volume type. This document uses NFS (Network File System) storage as an example.

The backup method described in this document is implemented using the CustomResourceDefinitions (CRDs) of TiDB Operator. At the bottom layer, it uses the BR tool to get the cluster data and then stores the data on the persistent volume. BR stands for Backup & Restore, a command-line tool for distributed backup and restore of TiDB cluster data.
Use scenarios

If you have the following backup requirements, you can use BR to make an Ad-hoc backup or a scheduled full backup of TiDB cluster data to persistent volumes:

- A large volume of data needs to be backed up, and a fast backup is required
- The backup data needs to be saved directly as SST files (key-value pairs)

If you have other backup requirements, refer to the backup and restore overview to choose an appropriate backup method.

Note

- BR only supports TiDB v3.1 and later versions.
- Data backed up using BR can only be restored to a TiDB database, not to other databases.
Ad-hoc backup

Ad-hoc backup supports both full backup and incremental backup. An ad-hoc backup is described by creating a custom `Backup` custom resource (CR) object. TiDB Operator performs the specific backup operation based on this `Backup` object. If an error occurs during the backup process, TiDB Operator does not retry automatically, and the error must be handled manually.

This document assumes that you back up the data of the TiDB cluster `demo1`, deployed in the `test1` namespace of Kubernetes. The following are the detailed steps.
Step 1: Prepare the Ad-hoc backup environment

1. Download the file backup-rbac.yaml to the server that performs the backup.

2. Run the following command to create the RBAC-related resources required for the backup in the `test1` namespace:

   ```
   kubectl apply -f backup-rbac.yaml -n test1
   ```
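The contents of backup-rbac.yaml are not reproduced in this article. As a rough illustrative sketch only (the names and rules below are assumptions, not the actual file contents), the RBAC resources for a backup job typically include a Role, a ServiceAccount, and a RoleBinding along these lines:

```yaml
# Illustrative sketch only -- download the real backup-rbac.yaml;
# these names and rules are assumptions, not the actual file contents.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tidb-backup-manager
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["*"]
- apiGroups: ["pingcap.com"]
  resources: ["backups", "restores"]
  verbs: ["get", "watch", "list", "update"]
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: tidb-backup-manager
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tidb-backup-manager
subjects:
- kind: ServiceAccount
  name: tidb-backup-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tidb-backup-manager
```

The backup job runs under this ServiceAccount, so it needs permission to read and update the `Backup` CR objects it reports progress on.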
3. Make sure that the NFS server used to store the backup data is accessible from your Kubernetes cluster, and that you have configured TiKV to mount the same NFS shared directory to the same local path as in the backup task. To mount NFS in TiKV, refer to the following configuration:

   ```yaml
   spec:
     tikv:
       additionalVolumes:
       # Specify volume types that are supported by Kubernetes, Ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes
       - name: nfs
         nfs:
           server: 192.168.0.2
           path: /nfs
       additionalVolumeMounts:
       # This must match `name` in `additionalVolumes`
       - name: nfs
         mountPath: /nfs
   ```
4. If you are using a TiDB version earlier than v4.0.8, you also need to complete the following steps. If you are using TiDB v4.0.8 or a later version, skip them.

   1. Make sure that you have the SELECT and UPDATE privileges on the `mysql.tidb` table of the backup database, so that the GC time can be adjusted before and after the backup.

   2. Create the `backup-demo1-tidb-secret` secret to store the password of the user account used to access the TiDB cluster:

      ```
      kubectl create secret generic backup-demo1-tidb-secret --from-literal=password=${password} --namespace=test1
      ```
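Kubernetes stores Secret values base64-encoded. As a minimal local sketch of what the `kubectl create secret` command above ends up storing (the password here is a placeholder, not a real credential), the encode/decode round trip looks like:

```shell
# Kubernetes stores Secret values base64-encoded; this local round trip
# illustrates what the `kubectl create secret` command above stores.
# The password below is a placeholder, not a real credential.
password='placeholder-password'
encoded=$(printf '%s' "$password" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # prints placeholder-password
```

When inspecting the secret later with `kubectl get secret ... -o yaml`, remember that the `data` field shows the encoded form, not the plain password.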
Step 2: Back up data to the persistent volume

1. Create the `Backup` CR to back up the data to NFS:

   ```
   kubectl apply -f backup-nfs.yaml
   ```

   The content of `backup-nfs.yaml` is as follows. This example fully exports the data of the TiDB cluster and backs it up to NFS:

   ```yaml
   ---
   apiVersion: pingcap.com/v1alpha1
   kind: Backup
   metadata:
     name: demo1-backup-nfs
     namespace: test1
   spec:
     # backupType: full
     # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
     # from:
     #   host: ${tidb-host}
     #   port: ${tidb-port}
     #   user: ${tidb-user}
     #   secretName: backup-demo1-tidb-secret
     br:
       cluster: demo1
       clusterNamespace: test1
       # logLevel: info
       # statusAddr: ${status-addr}
       # concurrency: 4
       # rateLimit: 0
       # checksum: true
       # options:
       # - --lastbackupts=420134118382108673
     local:
       prefix: backup-nfs
       volume:
         name: nfs
         nfs:
           server: ${nfs_server_ip}
           path: /nfs
       volumeMount:
         name: nfs
         mountPath: /nfs
   ```
   When configuring `backup-nfs.yaml`, note the following:

   - To make an incremental backup, you only need to specify the last backup timestamp `--lastbackupts` in `spec.br.options`. For the limitations of incremental backup, refer to Use BR to back up and restore data.
   - `spec.local` refers to the configuration related to the persistent volume. For details, refer to Local storage fields.
   - Some parameters in `spec.br` are optional, such as `logLevel`, `statusAddr`, `concurrency`, `rateLimit`, `checksum`, and `timeAgo`. For more information about `spec.br` fields, refer to BR fields.
   - If you are using TiDB v4.0.8 or a later version, BR automatically adjusts the `tikv_gc_life_time` parameter. You do not need to configure the `spec.tikvGCLifeTime` and `spec.from` fields.
   - For more information about the `Backup` CR fields, refer to Backup CR fields.
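As a hedged sketch of the incremental case (the timestamp value below is the placeholder from the full example above, not a real TSO), the relevant fragment of a `Backup` CR would look like:

```yaml
# Illustrative fragment only: an incremental backup reuses the Backup CR,
# with --lastbackupts set to the commit timestamp of the previous backup.
spec:
  br:
    cluster: demo1
    clusterNamespace: test1
    options:
    - --lastbackupts=420134118382108673
```

The commit timestamp of a completed backup is recorded in the `Backup` CR status and can typically be read with something like `kubectl get bk demo1-backup-nfs -n test1 -o jsonpath='{.status.commitTs}'` (the exact status field may vary by TiDB Operator version).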
2. After you create the `Backup` CR, TiDB Operator starts the backup automatically according to the `Backup` CR. You can check the backup status with the following command:

   ```
   kubectl get bk -n test1 -owide
   ```
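The command above prints one row per `Backup` CR. As a minimal sketch of checking the status column in a script, the following parses a captured sample of the output (the sample rows are fabricated for illustration, and the real column layout may differ across TiDB Operator versions):

```shell
# Hypothetical sample of `kubectl get bk -n test1 -owide` output, captured
# as a string so the parsing can be shown without a live cluster.
sample='NAME               TYPE   STATUS     BACKUPPATH
demo1-backup-nfs   full   Complete   local://backup-nfs'

# Extract the STATUS column for the backup named demo1-backup-nfs.
status=$(printf '%s\n' "$sample" | awk '$1 == "demo1-backup-nfs" { print $3 }')
echo "$status"   # prints Complete
```

Against a live cluster you would pipe the real `kubectl get bk -n test1 -owide` output into the same `awk` expression.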
Backup examples

- Back up the data of all clusters
- Back up the data of a single database
- Back up the data of a single table
- Back up the data of multiple tables using the table filter
Scheduled full backup

You can set a backup policy to perform scheduled backups of the TiDB cluster, and set a backup retention policy to avoid keeping too many backup items. A scheduled full backup is described by a custom `BackupSchedule` CR object. A full backup is triggered at each backup time point; the underlying implementation of the scheduled full backup is the ad-hoc full backup. The following are the detailed steps to create a scheduled full backup:
Step 1: Prepare the scheduled full backup environment

The steps are the same as those in Prepare the Ad-hoc backup environment.
Step 2: Back up data regularly to the persistent volume

1. Create the `BackupSchedule` CR to enable the scheduled full backup of the TiDB cluster, and back up the data to NFS:

   ```
   kubectl apply -f backup-schedule-nfs.yaml
   ```

   The content of `backup-schedule-nfs.yaml` is as follows:

   ```yaml
   ---
   apiVersion: pingcap.com/v1alpha1
   kind: BackupSchedule
   metadata:
     name: demo1-backup-schedule-nfs
     namespace: test1
   spec:
     #maxBackups: 5
     #pause: true
     maxReservedTime: "3h"
     schedule: "*/2 * * * *"
     backupTemplate:
       # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
       # from:
       #   host: ${tidb_host}
       #   port: ${tidb_port}
       #   user: ${tidb_user}
       #   secretName: backup-demo1-tidb-secret
       br:
         cluster: demo1
         clusterNamespace: test1
         # logLevel: info
         # statusAddr: ${status-addr}
         # concurrency: 4
         # rateLimit: 0
         # checksum: true
       local:
         prefix: backup-nfs
         volume:
           name: nfs
           nfs:
             server: ${nfs_server_ip}
             path: /nfs
         volumeMount:
           name: nfs
           mountPath: /nfs
   ```
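To see how `schedule` and `maxReservedTime` interact in the example above, a quick back-of-the-envelope check (a sketch of the arithmetic, not TiDB Operator's actual GC code) shows how many snapshots can coexist on the volume at once:

```shell
# The schedule "*/2 * * * *" in the example fires every 2 minutes.
# With maxReservedTime: "3h", backups older than 3 hours are garbage
# collected, so at most about 3h / 2min snapshots coexist on the volume.
interval_min=2
reserved_min=$((3 * 60))
max_coexisting=$((reserved_min / interval_min))
echo "$max_coexisting"   # prints 90
```

The 2-minute schedule is only convenient for demonstration; for production a daily or hourly schedule with a correspondingly longer `maxReservedTime` (or `maxBackups`) is more typical.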
   As shown in the `backup-schedule-nfs.yaml` example above, the `backupSchedule` configuration consists of two parts. One part is the unique configuration of `backupSchedule`; the other part is `backupTemplate`. For the unique configuration items of `backupSchedule`, refer to BackupSchedule CR fields. `backupTemplate` specifies the configuration related to the cluster and remote storage, which is the same as the `spec` configuration of the `Backup` CR; refer to Backup CR fields.
2. After the scheduled full backup is created, check the backup status with the following command:

   ```
   kubectl get bks -n test1 -owide
   ```

   Check all the backup items below the scheduled full backup:

   ```
   kubectl get bk -l tidb.pingcap.com/backup-schedule=demo1-backup-schedule-nfs -n test1
   ```