TiDB: simulating a production environment cluster deployment on a single machine (pitfalls resolved, personally tested)
2022-07-01 06:11:00 【Incoming coders】
Preface
In the last article, I gave a detailed introduction to TiDB, an open-source distributed relational database, focusing on its many excellent features: high MySQL compatibility, distributed transaction support, cloud-native SQL, horizontal scalability, high availability, and more. You must be eager to lift TiDB's veil of mystery, so in this article I will walk you through simulating the deployment of a production environment cluster.
1. Recommended hardware and software configuration
After reading the hardware and software environment recommendations on the official website, I was genuinely amazed: TiDB's performance requirements are that high. First come the development and test environment recommendations. TiDB supports deployment and operation on 64-bit generic hardware server platforms with the Intel x86-64 architecture or on ARM-architecture hardware server platforms. For development, test, and production environments, the server hardware configuration (excluding what the operating system itself occupies) has specific requirements and recommendations, which you can find in the official documentation.
The production environment requirements are even more daunting.
Reading through the environment requirements makes one sigh: if you want good things, there is a price to pay!
I have only a personal server with 2 cores, 4 GB of RAM, and 6 Mbps of bandwidth, plus an 8-core, 16 GB server in the lab (which already hosts many projects and applications). When simulating a production cluster deployment on a single machine, both servers failed the task without exception, and both froze for a while; the lab server was stuck for hours and took half a day to recover.
So I decided to simulate the deployment on a local virtual machine (a CentOS 7.6 VM given 24 GB of memory).
2. Deploying a local test cluster
Because the official website recommends TiUP for cluster deployment and management, this article does not use the Docker-based deployment method from before v4.0, but rather the officially recommended TiUP approach.
TiDB is a distributed system. The most basic TiDB test cluster is usually composed of 2 TiDB instances, 3 TiKV instances, 3 PD instances, and optional TiFlash instances. With TiUP Playground, you can quickly build such a basic test cluster.
- Download and install TiUP
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
After the command completes, the following message appears:
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile: /home/user/.bashrc
/home/user/.bashrc has been modified to add tiup to PATH
open a new terminal or source /home/user/.bashrc to use it
Installed path: /home/user/.tiup/bin/tiup
===============================================
Have a try: tiup playground
===============================================
- Make the tiup command available in the current shell (I installed as root, so the profile to reload is root's):
source /root/.bash_profile
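To confirm the installation took effect, a quick sanity check (not part of the original steps) is to ask the shell where tiup lives and what version it reports:
which tiup
tiup --version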
- There are two ways to start a cluster:
(1) tiup playground: executing this command directly runs the latest version of a TiDB cluster, with one instance each of TiDB, TiKV, PD, and TiFlash.
(2) You can also specify the TiDB version and the number of instances of each component yourself, for example:
tiup playground v5.4.1 --db 2 --pd 3 --kv 3
When output like the following appears at the end of the command's execution, the deployment succeeded:
CLUSTER START SUCCESSFULLY, Enjoy it ^-^
To connect TiDB: mysql --host 127.0.0.1 --port 4000 -u root -p (no password) --comments
To view the dashboard: http://127.0.0.1:2379/dashboard
PD client endpoints: [127.0.0.1:2379]
To view the Prometheus: http://127.0.0.1:9090
To view the Grafana: http://127.0.0.1:3000
Then:
a. You can connect to TiDB with a MySQL client:
mysql --host 127.0.0.1 --port 4000 -u root
b. Visit TiDB's Prometheus management interface at http://127.0.0.1:9090.
c. Visit the TiDB Dashboard at http://127.0.0.1:2379/dashboard; the default username is root with an empty password.
d. Visit TiDB's Grafana interface at http://127.0.0.1:3000; the default username and password are both admin.
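As a quick smoke test once connected, you can query the server's version string; tidb_version() is a built-in TiDB function, so this should work on any release:
mysql --host 127.0.0.1 --port 4000 -u root -e "SELECT tidb_version();"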
Once the test is complete:
(1) Stop the cluster with Ctrl+C.
(2) Destroy the TiDB cluster with the command tiup clean --all.
In my own testing, I found that a TiDB cluster started this way can only be used for testing; it cannot be connected to with tools such as Navicat, which means it has no real development or production capability.
The next section describes simulating the deployment of a production environment cluster, which can be connected to and accessed through Navicat and does have practical development capability.
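A likely explanation is that playground binds all components to 127.0.0.1 by default, so nothing outside the machine can reach port 4000. If you only need remote access for experiments, tiup playground accepts a --host flag to change the listening address (hedged: confirm the flag against your tiup version before relying on it):
tiup playground v5.4.1 --db 2 --pd 3 --kv 3 --host 0.0.0.0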
3. Simulating the deployment of a production environment cluster
Hard requirements:
- CentOS 7.3 or later is installed.
- The machine can access the Internet, so that TiDB and related software packages can be downloaded.
- The firewall on the deployment host is turned off, or the ports required by the TiDB cluster's nodes are opened (see the sketch after this list).
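On CentOS 7, the quickest way to satisfy the firewall requirement in a throwaway VM is to stop and disable firewalld (a sketch for test environments only; on a real deployment you would open the individual TiDB ports instead):
systemctl stop firewalld
systemctl disable firewalld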
The minimal TiDB cluster topology is described in the official deployment documentation; the topo.yaml below realizes it on a single machine.
3.1 Installing TiUP
This was already explained in detail when deploying the test cluster, so I will not repeat it here.
3.2 Preparing for deployment
- Install the cluster component:
tiup cluster
- If TiUP cluster is already installed, update it to the latest version:
tiup update --self && tiup update cluster
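You can then confirm which tiup-cluster version you ended up with (hedged: flag support may differ slightly across tiup releases):
tiup cluster --version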
- Because this simulates a multi-machine deployment on a single host, the sshd service's session limit must be raised as the root user:
vim /etc/ssh/sshd_config
Set MaxSessions in that file to 20, then save and restart the sshd service:
service sshd restart
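As an alternative to editing the file by hand, the whole step can be done non-interactively (a sketch; check the resulting line before the restart):
sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' /etc/ssh/sshd_config
grep MaxSessions /etc/ssh/sshd_config
service sshd restart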
3.3 Creating and starting the cluster
(1) First you need a configuration file that describes the composition of the cluster. Name it topo.yaml, where:
- user: "tidb": the cluster is administered internally by the tidb user (created automatically after deployment), which logs in to the target machine over SSH on port 22 by default.
- replication.enable-placement-rules: this PD parameter is set to ensure that TiFlash runs correctly.
- host: set to the IP address of the deployment machine.
The topo.yaml configuration file is as follows (192.168.117.6 is my virtual machine's IP address; replace it with your own, nothing else needs to change):
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
   replication.location-labels: ["host"]
 tiflash:
   logger.level: "info"

pd_servers:
 - host: 192.168.117.6

tidb_servers:
 - host: 192.168.117.6

tikv_servers:
 - host: 192.168.117.6
   port: 20160
   status_port: 20180
   config:
     server.labels: { host: "logic-host-1" }
 - host: 192.168.117.6
   port: 20161
   status_port: 20181
   config:
     server.labels: { host: "logic-host-2" }
 - host: 192.168.117.6
   port: 20162
   status_port: 20182
   config:
     server.labels: { host: "logic-host-3" }

tiflash_servers:
 - host: 192.168.117.6

monitoring_servers:
 - host: 192.168.117.6

grafana_servers:
 - host: 192.168.117.6
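Optionally, before deploying, you can have TiUP audit the target machine against this topology. The check subcommand exists in current tiup-cluster releases, though the exact set of checks varies by version (a hedged sketch):
tiup cluster check ./topo.yaml --user root -p
tiup cluster check ./topo.yaml --apply --user root -p
The second form asks TiUP to try to fix the failed check items automatically.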
(2) Execute the cluster deployment command:
tiup cluster deploy tidb-test v5.4.1 ./topo.yaml --user root -p
Here tidb-test is the cluster name and v5.4.1 is the cluster version, which is the one I chose. The -p flag makes TiUP prompt for the password used to connect to the target machine; since I am just testing, I left it blank so I can log in without a password.
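If the target machine uses key-based SSH instead of a password, tiup cluster deploy also accepts an identity file via -i instead of -p; a hedged alternative (the key path below is illustrative):
tiup cluster deploy tidb-test v5.4.1 ./topo.yaml --user root -i /root/.ssh/id_rsa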
(3) The following prompt appears:
Do you want to continue? [y/N]: y
Input SSH password:
Enter y, then type the SSH login password.
As long as the server or virtual machine has sufficient performance, this step completes successfully.
(4) Start the cluster:
tiup cluster start tidb-test
I hit a pit here that tortured me for a long time. After all the preceding startup steps had executed, the log ended with:
+ [ Serial ] - UpdateTopology: cluster=liking
Started cluster `liking` successfully
Failed to set root password of TiDB database to 'G^174F*P!3t2sz&Wd5'
Error: dial tcp 192.168.117.6:4000: connect: connection refused
Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2022-05-18-15-01-41.log.
Searching the Internet turned up many proposed solutions, none of which worked; I wasted hours before finally finding the answer in the TiDB community forum ("TiDB Community"). The root cause is insufficient permissions: the TiDB cluster is deployed as the tidb user, but /tmp/tidb-4000.sock and /tmp/tidb-4001.sock were owned by root, so the tidb user had no permission on those two files and the service could not work. Delete the two files, grant permissions with chmod -R 777 /tmp, and start the TiDB cluster again; this time it succeeded.
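Condensing that fix into commands (these come straight from the steps just described; the 4001 socket will only exist if a second TiDB instance ever ran on that port):
rm -f /tmp/tidb-4000.sock /tmp/tidb-4001.sock
chmod -R 777 /tmp
tiup cluster start tidb-test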
Execute the command:
tiup cluster display tidb-test
to view the tidb-test cluster's information:
[root@localhost ~]# tiup cluster display tidb-test
tiup is checking updates for component cluster ...
A new version of cluster is available:
The latest version: v1.9.6
Local installed version: v1.9.5
Update current component: tiup update cluster
Update all components: tiup update --all
Starting component `cluster`: /root/.tiup/components/cluster/v1.9.5/tiup-cluster /root/.tiup/components/cluster/v1.9.5/tiup-cluster display tidb-test
Cluster type: tidb
Cluster name: tidb-test
Cluster version: v5.4.1
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.117.6:2379/dashboard
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.117.6:3000 grafana 192.168.117.6 3000 linux/x86_64 Up - /tidb-deploy/grafana-3000
192.168.117.6:2379 pd 192.168.117.6 2379/2380 linux/x86_64 Up|L|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.117.6:9090 prometheus 192.168.117.6 9090/12020 linux/x86_64 Up /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
192.168.117.6:4000 tidb 192.168.117.6 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.117.6:9000 tiflash 192.168.117.6 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
192.168.117.6:20160 tikv 192.168.117.6 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.117.6:20161 tikv 192.168.117.6 20161/20181 linux/x86_64 Up /tidb-data/tikv-20161 /tidb-deploy/tikv-20161
192.168.117.6:20162 tikv 192.168.117.6 20162/20182 linux/x86_64 Up /tidb-data/tikv-20162 /tidb-deploy/tikv-20162
Total nodes: 8
(5) Connect with Navicat on the host machine.
Unlike the test environment cluster simulated with the tiup playground command, this deployment accepts remote connections, i.e., it has real development capability.
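Any MySQL-protocol client works the same way; from the host you can verify connectivity on the command line before configuring Navicat (assuming the default root account with an empty password, as deployed above; the demo database name is just an example):
mysql --host 192.168.117.6 --port 4000 -u root -e "CREATE DATABASE IF NOT EXISTS demo; SHOW DATABASES;"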
(6) Enter 192.168.117.6:2379/dashboard in the host's browser to view the cluster monitoring page; the default username is root with an empty password.
(7) Enter 192.168.117.6:3000 in the host's browser to access the cluster's Grafana monitoring page; the default username and password are both admin.
Summary
This article introduced two single-machine TiDB simulated deployment modes: deploying a local test cluster and simulating a production environment cluster. Both place real demands on server performance, especially the second one, which froze the lab's 8-core, 16 GB server outright. I completed both deployments successfully on a virtual machine, and encountered and solved a permission problem along the way. I hope this article helps you.