
Upgrade TiDB Using TiUP

2022-07-06 08:02:00 Tianxiang shop

This document applies to the following upgrade paths:

  • Upgrade from TiDB 4.0 versions to TiDB 6.1 and its patch versions using TiUP.
  • Upgrade from TiDB 5.0-5.4 versions to TiDB 6.1 and its patch versions using TiUP.
  • Upgrade from TiDB 6.0 versions to TiDB 6.1 and its patch versions using TiUP.

Warning

  • Online upgrade of the TiFlash component from versions earlier than 5.3 to 5.3 or later is not supported; only shutdown upgrade is supported. If other components in the cluster (such as tidb and tikv) need to be upgraded without downtime, refer to the notes in Upgrade without downtime.
  • Do not execute DDL statements while upgrading the TiDB cluster; otherwise, undefined behavior may occur.
  • Do not upgrade while DDL statements are being executed in the cluster (usually time-consuming DDL statements such as ADD INDEX and column type changes). Before upgrading, it is recommended to use the ADMIN SHOW DDL command to check whether the cluster has any ongoing DDL jobs. If you need to upgrade, wait for the DDL execution to finish, or use the ADMIN CANCEL DDL command to cancel the DDL jobs before upgrading (see the sketch after this list).
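
The following is a minimal sketch of how this pre-upgrade check might be run from the control machine; the MySQL client, host, port, and user below are assumptions for the example and must be adapted to your cluster:

# Check for in-progress DDL jobs before upgrading (illustrative connection parameters).
mysql -h 127.0.0.1 -P 4000 -u root -p -e "ADMIN SHOW DDL;"

# If a long-running job must be stopped, cancel it by its job ID
# (replace <job-id> with the ID reported above):
mysql -h 127.0.0.1 -P 4000 -u root -p -e "ADMIN CANCEL DDL JOBS <job-id>;"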

Note

If the original cluster is of version 3.0, 3.1, or an earlier version, directly upgrading to 6.1.0 or its patch versions is not supported. You need to upgrade from the earlier version to 4.0 first, and then upgrade from 4.0 to 6.1.0 and its patch versions.

1. Upgrade compatibility instructions

  • TiDB currently does not support downgrading the version or rolling back after an upgrade.
  • For a 4.0 cluster managed by TiDB Ansible, you need to first import the cluster into TiUP (tiup cluster) management following the 4.0 version documentation, and then upgrade to version 6.1.0 and its patch versions according to this document.
  • If you want to upgrade a cluster earlier than 3.0 to version 6.1.0:
    1. First upgrade to version 3.0 using TiDB Ansible.
    2. Then, following the 4.0 version documentation, use TiUP (tiup cluster) to import the TiDB Ansible configuration.
    3. Upgrade the cluster to version 4.0.
    4. Follow this document to upgrade the cluster to version 6.1.0.
  • Upgrading the versions of components such as TiDB Binlog, TiCDC, and TiFlash is supported.
  • For the specific compatibility notes of different versions, check the Release Notes of each version, and adjust the cluster configuration according to the compatibility changes described in the Release Notes.
  • When upgrading a cluster earlier than v5.3 to v5.3 or later, the Prometheus deployed by default is upgraded from v2.8.1 to v2.27.1. Prometheus v2.27.1 provides more features and fixes security issues. Compared with v2.8.1, v2.27.1 changes the Alert time format; for details, see the Prometheus commit.

2. Prepare for upgrade

This section describes the preparations needed before starting the upgrade, such as upgrading TiUP and the TiUP Cluster component.

2.1 Upgrade TiUP or update the TiUP offline mirror

Upgrade TiUP and TiUP Cluster

Note

If the control machine of the original cluster cannot access https://tiup-mirrors.pingcap.com, you can skip this step and instead update the TiUP offline mirror.

  1. First upgrade the TiUP version (it is recommended that the tiup version be no earlier than 1.10.0):

    tiup update --self
    tiup --version

  2. Then upgrade the TiUP Cluster version (it is recommended that the tiup cluster version be no earlier than 1.10.0):

    tiup update cluster
    tiup cluster --version

Update the TiUP offline mirror

Note

If the original cluster was not deployed using the offline method, you can skip this step.

Refer to Deploy a TiDB Cluster Using TiUP to download and deploy the new version of the TiUP offline mirror, and upload it to the control machine. After local_install.sh is executed, TiUP completes the overwrite upgrade.

tar xzvf tidb-community-server-${version}-linux-amd64.tar.gz
sh tidb-community-server-${version}-linux-amd64/local_install.sh
source /home/tidb/.bash_profile

After the overwrite upgrade is completed, you need to merge the server and toolkit offline mirrors. Execute the following commands to merge the offline components into the server directory:

tar xf tidb-community-toolkit-${version}-linux-amd64.tar.gz
ls -ld tidb-community-server-${version}-linux-amd64 tidb-community-toolkit-${version}-linux-amd64
cd tidb-community-server-${version}-linux-amd64/
cp -rp keys ~/.tiup/
tiup mirror merge ../tidb-community-toolkit-${version}-linux-amd64

After the offline mirrors are merged, execute the following command to upgrade the Cluster component:

tiup update cluster

Now the offline mirror has been updated successfully. If an error is reported when running TiUP after the overwrite, it might be because the manifest is not updated; try rm -rf ~/.tiup/manifests/* and run the command again.

2.2 Edit the TiUP Cluster topology configuration file

Note

This step can be skipped in the following cases:

  • The configuration parameters of the original cluster have not been modified, or they have been modified through tiup cluster but need no further adjustment.
  • After the upgrade, you want to use the 6.1.0 default values for the configuration items that have not been modified.

  1. Enter the vi edit mode of the topology file:

    tiup cluster edit-config <cluster-name>

  2. Refer to the format of the topology configuration template and fill the parameters you want to modify into the server_configs section of the topology file.

After the modification, enter :wq to save and exit the edit mode, and enter Y to confirm the change.
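
For illustration only, a server_configs section edited through tiup cluster edit-config might look like the snippet below; the parameter names and values are assumptions chosen for the example, not recommendations for your cluster:

server_configs:
  tikv:
    readpool.storage.use-unified-pool: true
  pd:
    replication.max-replicas: 3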

Note

Before upgrading to version 6.1.0, confirm that the parameters modified in 4.0 are compatible in version 6.1.0. For details, see the TiKV configuration file description.

The following TiKV parameters are deprecated in TiDB v5.0. If any of them are configured in the original cluster, delete them in edit-config mode (see the sketch after this list):

  • pessimistic-txn.enabled
  • server.request-batch-enable-cross-command
  • server.request-batch-wait-duration
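
A minimal sketch of the removal, assuming these keys had been set under the tikv section of server_configs (the values shown are illustrative): in edit-config mode, delete the corresponding lines so that entries such as the following no longer appear.

server_configs:
  tikv:
    pessimistic-txn.enabled: true                      # delete this line
    server.request-batch-enable-cross-command: true    # delete this line
    server.request-batch-wait-duration: "1ms"          # delete this line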

2.3 Check the health of the current cluster

To avoid undefined behavior or other failures during the upgrade, it is recommended to check the health status of the current cluster's regions before upgrading. You can do this with the check subcommand:

tiup cluster check <cluster-name> --cluster

After the command is executed, the region status check result is output at the end.

  • If the result is "All regions are healthy", all regions in the current cluster are healthy and you can continue with the upgrade.
  • If the result is "Regions are not fully healthy: m miss-peer, n pending-peer" together with the prompt "Please fix unhealthy regions before other operations.", some regions are in an abnormal state. Fix the abnormal state first, and continue the upgrade only after another check returns "All regions are healthy".
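
For illustration only, the tail of a passing check might look like the lines below. The first line is an assumption about the surrounding output; the final region status line is the one described above that you need to confirm before upgrading:

Checking region status of the cluster <cluster-name>...
All regions are healthy.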

3. Upgrade the TiDB cluster

This section describes how to perform a rolling upgrade of the TiDB cluster and how to verify it after the upgrade.

3.1 Upgrade the cluster to the specified version

There are two upgrade methods: upgrade without downtime and shutdown upgrade. By default, TiUP Cluster upgrades the TiDB cluster without downtime, that is, the cluster can still serve traffic during the upgrade. During the upgrade, the leaders are migrated off each node one by one before the node is upgraded and restarted, so it takes a long time to complete the whole upgrade for a large-scale cluster. If the business has a maintenance window in which the database can be stopped for maintenance, you can use the shutdown upgrade method to upgrade quickly.

Upgrade without downtime

tiup cluster upgrade <cluster-name> <version>

Take upgrading to version 6.1.0 as an example:

tiup cluster upgrade <cluster-name> v6.1.0

Note

  • The rolling upgrade upgrades all components one by one. During the TiKV upgrade, all leaders on a TiKV instance are evicted before the instance is stopped. The default timeout is 5 minutes (300 seconds); the instance is stopped directly after the timeout.
  • With the --force parameter, you can quickly upgrade the cluster to the new version without evicting leaders. However, this method ignores all errors during the upgrade and gives no effective prompt when the upgrade fails, so use it with caution.
  • To keep performance stable, make sure that all leaders on a TiKV instance have been evicted before the instance is stopped. You can set --transfer-timeout to a larger value, such as --transfer-timeout 3600, in seconds (see the sketch after this list).
  • When upgrading from a version earlier than 5.3 to 5.3 or later, TiFlash cannot be upgraded online. You can only stop the TiFlash instances first, then upgrade the cluster, and finally reload the entire cluster, so that components other than TiFlash are upgraded without downtime.
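
As referenced in the note above, a minimal sketch of an online upgrade with a longer leader-eviction timeout (3600 seconds is just the example value from the note):

tiup cluster upgrade <cluster-name> v6.1.0 --transfer-timeout 3600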

Shutdown upgrade

Before a shutdown upgrade, you need to stop the entire cluster first:

tiup cluster stop <cluster-name>

Then add the --offline parameter to the upgrade command to perform the shutdown upgrade:

tiup cluster upgrade <cluster-name> <version> --offline

The cluster is not started automatically after the upgrade. Use the start command to start the cluster:

tiup cluster start <cluster-name>

3.2 Post-upgrade verification

Execute the display command to view the latest cluster version (TiDB Version):

tiup cluster display <cluster-name>

Cluster type:       tidb
Cluster name:       <cluster-name>
Cluster version:    v6.1.0

Note

By default, TiUP and TiDB collect usage information and share it with PingCAP to improve the product. To learn what information is collected and how to disable this behavior, see the telemetry documentation.

4. Upgrade FAQ

This section describes common problems encountered when upgrading a TiDB cluster with TiUP.

4.1 The upgrade is interrupted with an error. How do I resume the upgrade after handling the error?

Re-execute the tiup cluster upgrade command to upgrade again; this restarts the nodes that have already been upgraded. If you do not want to restart the already-upgraded nodes, use the replay subcommand to retry the operation as follows (a combined sketch follows the steps):

  1. Use the tiup cluster audit command to view the operation records:

    tiup cluster audit

    Find the record of the failed upgrade operation and note its ID; in the next step, <audit-id> refers to this ID value.

  2. Use the tiup cluster replay <audit-id> command to retry the corresponding operation:

    tiup cluster replay <audit-id>
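
Put together, a minimal sketch of this recovery flow looks like the following; the ID shown is a placeholder for whatever tiup cluster audit actually reports on your control machine:

tiup cluster audit                 # locate the failed upgrade record and copy its ID
tiup cluster replay 4BLhr0         # placeholder ID; use the real <audit-id> from the audit output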

4.2 The evict leader step takes too long during the upgrade. How do I skip it to upgrade quickly?

You can specify --force. The upgrade then skips the PD transfer leader and TiKV evict leader processes and directly restarts and upgrades the version, which has a significant impact on the performance of a cluster serving traffic. The command is as follows:

tiup cluster upgrade <cluster-name> <version> --force

4.3 After the upgrade is complete, how do I update the versions of peripheral tools such as pd-ctl?

You can install the corresponding version of the ctl component through TiUP to update the related tool versions:

tiup install ctl:v6.1.0
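
After the ctl component is installed, peripheral tools can be invoked through it. For example, a minimal sketch of calling pd-ctl to list stores; the PD address is an assumption and should be replaced with a PD node of your cluster:

tiup ctl:v6.1.0 pd -u http://127.0.0.1:2379 store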

5. TiDB 6.1.0 Compatibility changes

  • For compatibility changes, see the 6.1.0 Release Notes.
  • Avoid creating clustered index tables during the rolling upgrade of a cluster that uses TiDB Binlog.

Copyright notice
This article was created by [Tianxiang shop]. Please include a link to the original article when reproducing it.
https://yzsam.com/2022/187/202207060758157238.html