
TiDB Database Quick Start Guide


This guide describes how to get started quickly with the TiDB database. For non-production environments, you can deploy TiDB in one of the following ways:

Note

  • TiDB, TiUP, and TiDB Dashboard collect usage information by default and share it with PingCAP to help improve the product. To learn what information is collected and how to disable this behavior, see Telemetry.

  • The TiDB deployment methods described in this guide are intended only for quick-start trials and are not suitable for production environments.

To quickly learn the basic features of TiUP, how to build a TiDB cluster with TiUP, and how to connect to a TiDB cluster and execute SQL, it is recommended to watch the training video first (about 15 minutes). Note that the video is for reference only; for the specific usage of TiUP and the detailed TiDB quick-start steps, refer to the contents of this document.

Deploy a local test cluster

  • Applicable scenario: quickly deploy a TiDB test cluster on local macOS or a single Linux machine to experience the basic architecture of a TiDB cluster and basic components such as TiDB, TiKV, PD, and the monitoring components.

TiDB is a distributed system. A basic TiDB test cluster usually consists of 2 TiDB instances, 3 TiKV instances, 3 PD instances, and an optional TiFlash instance. With TiUP Playground, you can quickly build such a basic test cluster as follows:

  1. Download and install TiUP.

    curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

    After installation, the following message is displayed:

    Successfully set mirror to https://tiup-mirrors.pingcap.com
    Detected shell: zsh
    Shell profile: /Users/user/.zshrc
    /Users/user/.zshrc has been modified to add tiup to PATH
    open a new terminal or source /Users/user/.zshrc to use it
    Installed path: /Users/user/.tiup/bin/tiup
    ===============================================
    Have a try: tiup playground
    ===============================================

  2. Declare global environment variables.

    Note

    After TiUP is installed, it prompts the absolute path of the corresponding Shell profile file. Before running the following source command, replace ${your_shell_profile} with the actual location of your Shell profile file.

    source ${your_shell_profile}
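
    For example, on the zsh setup shown in the installation output above, this would be:

    source ~/.zshrc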

  3. In the current session, run the following command to start the cluster.

    • Running the  tiup playground  command directly starts a cluster of the latest TiDB version, with 1 instance each of TiDB, TiKV, PD, and TiFlash:

      tiup playground

    • You can also specify the TiDB version and the number of instances of each component, for example:

      tiup playground v6.1.0 --db 2 --pd 3 --kv 3

      The command above downloads and starts a cluster of the specified version (v6.1.0 in this example) locally. To view the latest version, run  tiup list tidb . The output shows how to access the cluster:

      CLUSTER START SUCCESSFULLY, Enjoy it ^-^
      To connect TiDB: mysql --comments --host 127.0.0.1 --port 4001 -u root -p (no password)
      To connect TiDB: mysql --comments --host 127.0.0.1 --port 4000 -u root -p (no password)
      To view the dashboard: http://127.0.0.1:2379/dashboard
      PD client endpoints: [127.0.0.1:2379 127.0.0.1:2382 127.0.0.1:2384]
      To view the Prometheus: http://127.0.0.1:9090
      To view the Grafana: http://127.0.0.1:3000

      Note

      • TiDB v5.2.0 and later versions support running  tiup playground  on machines with the Apple M1 chip.
      • For a playground started this way, TiUP cleans up the original cluster data after the deployment test ends; re-running the command gives you a brand-new cluster.
      • If you want to persist data, run TiUP with the  --tag  parameter: tiup --tag <your-tag> playground ... (see the example below). For details, refer to the TiUP reference manual.
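
      For instance, a persisted playground under the tag my-data (the tag name is arbitrary; this sketch reuses the version and instance counts from above) could be started like this, and re-running the same command later reuses that data:

      tiup --tag my-data playground v6.1.0 --db 2 --pd 3 --kv 3
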
  4. Open a new session to access the TiDB database.

    • Use the TiUP client to connect to TiDB:

      tiup client

    • You can also connect to TiDB with the MySQL client:

      mysql --host 127.0.0.1 --port 4000 -u root
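
      As a quick smoke test (a sketch; tidb_version() is a built-in TiDB function), you can confirm the connection and the server version non-interactively:

      mysql --host 127.0.0.1 --port 4000 -u root -e "SELECT tidb_version();"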

  5. Visit  http://127.0.0.1:9090  to access the Prometheus management interface of TiDB.
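
    To check from the command line instead of a browser, you can hit the readiness endpoint (a standard Prometheus endpoint, not TiDB-specific):

      curl http://127.0.0.1:9090/-/ready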

  6. Visit  http://127.0.0.1:2379/dashboard  to access the  TiDB Dashboard  page. The default username is  root, with an empty password.

  7. Visit  http://127.0.0.1:3000  to access the Grafana interface of TiDB. The default username and password are both  admin.

  8. (Optional) Load data into TiFlash for analysis; a minimal sketch follows.
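
    As a minimal sketch (the table test.t is hypothetical; the playground starts one TiFlash instance by default), you can create a table and request a TiFlash replica of it:

      mysql --host 127.0.0.1 --port 4000 -u root -e "
        CREATE TABLE test.t (id INT PRIMARY KEY, v INT);  -- example table
        ALTER TABLE test.t SET TIFLASH REPLICA 1;         -- replicate it to TiFlash
      "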

  9. After testing, you can clean up the cluster as follows:

    1. Press  Control+C  to stop the TiDB service started above.

    2. After the service has exited completely, run the following command:

      tiup clean --all

Note

TiUP Playground listens on  127.0.0.1  by default, so its services are accessible only locally. To make the services externally accessible, use the  --host  parameter to bind the listening address to an externally accessible network interface IP.
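
For example (a sketch; 0.0.0.0 binds all interfaces, or specify one particular IP to expose only that network):

    tiup playground --host 0.0.0.0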

Simulate the deployment of a production environment cluster on a single machine

  • Applicable scenario: use a single Linux server to experience the smallest TiDB cluster with a complete topology, and simulate the deployment steps of a production environment.

This section describes how to deploy a TiDB cluster with TiUP using a minimal topology YAML file.

Prepare the environment

Prepare a deployment host and make sure its software meets the following requirements:

  • CentOS 7.3 or a later version is recommended
  • The environment can access the Internet, which is required for downloading TiDB and the related software packages

The smallest TiDB cluster topology is as follows:

Note

The instance IPs in the following table are example IPs. In your actual deployment, replace them with your real IPs.

Instance   Count   IP                              Configuration
TiKV       3       10.0.1.1, 10.0.1.1, 10.0.1.1    Avoid port and directory conflicts
TiDB       1       10.0.1.1                        Default port, global directory configuration
PD         1       10.0.1.1                        Default port, global directory configuration
TiFlash    1       10.0.1.1                        Default port, global directory configuration
Monitor    1       10.0.1.1                        Default port, global directory configuration

Software and environment requirements for the deployment host:

  • Deployment requires the root user and password of the deployment host
  • Turn off the firewall on the deployment host, or open the ports required between the nodes of the TiDB cluster (a firewalld sketch follows this list)
  • Currently, TiUP supports deploying TiDB clusters on the x86_64 (AMD64 and ARM) architectures:
    • On the AMD64 architecture, CentOS 7.3 or a later Linux version is recommended
    • On the ARM architecture, CentOS 7.6 (1810) is recommended
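
If you open ports instead of disabling the firewall, a firewalld sketch for the single-machine topology in this section might look like the following (adjust the port list to your own topology; TiFlash and the exporters use additional ports):

    firewall-cmd --permanent --add-port=4000/tcp         # TiDB SQL port
    firewall-cmd --permanent --add-port=2379-2380/tcp    # PD client/peer ports
    firewall-cmd --permanent --add-port=20160-20162/tcp  # TiKV service ports
    firewall-cmd --permanent --add-port=20180-20182/tcp  # TiKV status ports
    firewall-cmd --permanent --add-port=9090/tcp         # Prometheus
    firewall-cmd --permanent --add-port=3000/tcp         # Grafana
    firewall-cmd --reload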

Deploy the cluster

Note

You can log in to the host as any regular Linux user or as root; the following steps use the root user as an example.

  1. Download and install TiUP:

    curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

  2. Declare global environment variables:

    Note

    After TiUP is installed, it prompts the absolute path of the corresponding Shell profile file. Before running the following  source  command, replace  ${your_shell_profile}  with the actual location of your Shell profile file.

    source ${your_shell_profile}

  3. Install the cluster component of TiUP:

    tiup cluster

  4. If TiUP cluster is already installed on the machine, update the software version:

    tiup update --self && tiup update cluster

  5. Because this simulates a multi-machine deployment on one host, you need to increase the connection limit of the sshd service as the root user (a non-interactive sketch follows these two sub-steps):

    1. Modify  /etc/ssh/sshd_config  and set  MaxSessions  to 20.

    2. Restart the sshd service:

      service sshd restart
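
    Both sub-steps can also be done non-interactively, for example (a sketch; it assumes the default sshd_config path and GNU sed):

      sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' /etc/ssh/sshd_config
      service sshd restart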

  6. Create and start the cluster

    Following the configuration template below, edit the configuration file and name it  topo.yaml, where:

    • user: "tidb": the cluster's internal management is done through the  tidb  system user (created automatically during deployment); by default, port 22 is used to log in to the target machines via SSH
    • replication.enable-placement-rules: this PD parameter is set to ensure that TiFlash runs normally
    • host: set to the IP of the deployment host

    The configuration template is as follows :

    # # Global variables are applied to all deployments and used as the default value of
    # # the deployments if a specific deployment value is missing.
    global:
      user: "tidb"
      ssh_port: 22
      deploy_dir: "/tidb-deploy"
      data_dir: "/tidb-data"

    # # Monitored variables are applied to all the machines.
    monitored:
      node_exporter_port: 9100
      blackbox_exporter_port: 9115

    server_configs:
      tidb:
        log.slow-threshold: 300
      tikv:
        readpool.storage.use-unified-pool: false
        readpool.coprocessor.use-unified-pool: true
      pd:
        replication.enable-placement-rules: true
        replication.location-labels: ["host"]
      tiflash:
        logger.level: "info"

    pd_servers:
      - host: 10.0.1.1

    tidb_servers:
      - host: 10.0.1.1

    tikv_servers:
      - host: 10.0.1.1
        port: 20160
        status_port: 20180
        config:
          server.labels: { host: "logic-host-1" }
      - host: 10.0.1.1
        port: 20161
        status_port: 20181
        config:
          server.labels: { host: "logic-host-2" }
      - host: 10.0.1.1
        port: 20162
        status_port: 20182
        config:
          server.labels: { host: "logic-host-3" }

    tiflash_servers:
      - host: 10.0.1.1

    monitoring_servers:
      - host: 10.0.1.1

    grafana_servers:
      - host: 10.0.1.1
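
    Optionally, before deploying, you can run TiUP's pre-deployment environment check against this topology file (a sketch; the SSH flags mirror those of the deploy command in the next step):

    tiup cluster check ./topo.yaml --user root -p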

  7. Run the cluster deployment command:

    tiup cluster deploy <cluster-name> <tidb-version> ./topo.yaml --user root -p

    • The  <cluster-name>  parameter sets the cluster name

    • The  <tidb-version>  parameter sets the cluster version; run the  tiup list tidb  command to view the TiDB versions currently available for deployment

    • The  -p  parameter means logging in to the target machine with a password

      Note

      If the target machine uses key-based SSH authentication, use the  -i  parameter to specify the path to the key file; do not use  -i  and  -p  at the same time.
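
      For example, a key-based deployment could look like this (the cluster name minitidb and the key path are hypothetical):

      tiup cluster deploy minitidb v6.1.0 ./topo.yaml --user root -i ~/.ssh/id_rsa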

    Follow the prompts, enter "y" and the root password to complete the deployment:

    Do you want to continue? [y/N]: y
    Input SSH password:

  8. Start the cluster:

    tiup cluster start <cluster-name>

  9. Access the cluster:

    • Install the MySQL client. If it is already installed, skip this step.

      yum -y install mysql

    • Access the TiDB database; the password is empty:

      mysql -h 10.0.1.1 -P 4000 -u root

    • Access the Grafana monitoring of TiDB:

      Visit  http://{grafana-ip}:3000  to access the cluster's Grafana monitoring page. The default username and password are both  admin.

    • Access the TiDB Dashboard:

      Visit  http://{pd-ip}:2379/dashboard  to access the cluster's  TiDB Dashboard  monitoring page. The default username is  root, with an empty password.

    • Run the following command to confirm the list of currently deployed clusters:

      tiup cluster list

    • Run the following command to view the topology and status of the cluster:

      tiup cluster display <cluster-name>

Explore more
