
Glusterfs version 4.1 selection and deployment

2022-06-24 01:37:00 jackxiao

1 Preface

1.1 GlusterFS advantages

1. Metadata-free design. GlusterFS has no centralized or distributed metadata; it uses elastic hashing instead. Any server or client in the cluster can locate a file from the hash algorithm, the path, and the file name alone, and then read or write it directly.

Conclusions:

  • The metadata-free design greatly improves scalability, and also improves system performance and reliability.
  • Listing files or directories, however, is much slower, because the listing must query every node and aggregate the results.
  • Given an exact file name, the file can be located very quickly.
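The idea can be sketched with a toy script: given only a file name, any node computes the owning brick independently, with no metadata server to consult. (This is purely an illustration; the real DHT translator hashes names into 32-bit ranges stored in directory extended attributes, not a simple modulo.)

```shell
#!/bin/sh
# Toy elastic-hash lookup: no metadata server is consulted
BRICKS=3
for f in test1 test2 test3 test7; do
  h=$(printf '%s' "$f" | cksum | cut -d' ' -f1)   # checksum of the file name
  echo "$f -> brick$(( h % BRICKS + 1 ))"          # every node computes the same answer
done
```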

2. Peer-to-peer deployment. GlusterFS cluster servers are peers: every node holds the cluster's configuration information, so all of it can be queried locally. When a node's information changes, the update is announced to the other nodes to keep the configuration consistent. Once the cluster grows large, however, synchronization becomes less efficient and the probability of inconsistency increases.

3. Client access. Programs read and write data through a mount point. To users and programs the cluster file system is transparent: they cannot tell whether the file system lives on a local disk or on a remote server.

Read and write operations are handed to the VFS (Virtual File System), which passes the request to the FUSE kernel module. FUSE then hands the data to the GlusterFS client through the device /dev/fuse. Finally, after the GlusterFS client performs its computation, the request or data is sent on to the GlusterFS servers.
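Each hop in this chain can be observed on a client host; the inspection commands below assume a Linux client with a GlusterFS volume already mounted:

```shell
lsmod | grep fuse            # the FUSE kernel module is loaded
ls -l /dev/fuse              # the character device linking kernel and userspace
ps -ef | grep glusterfs      # the GlusterFS client runs as a userspace FUSE daemon
mount | grep fuse.glusterfs  # the mount point shows up as type fuse.glusterfs
```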

For more detail on how GlusterFS works, refer to the following articles: "GlusterFS architecture and principles", "Understanding GlusterFS from a different perspective", and "GlusterFS defect analysis". For Chinese-language material, Dr. Liu Aigui's original GlusterFS resource series is recommended.

1.2 Version selection

Most articles online deploy the 3.x series, but 3.x has already disappeared from the CentOS 7 EPEL source on Alibaba Cloud; the oldest version still available is 4.0:

[root@gf-node01 ~]# yum search centos-release-gluster
......
centos-release-gluster-legacy.noarch : Disable unmaintained Gluster repositories from the CentOS Storage SIG
centos-release-gluster40.x86_64 : Gluster 4.0 (Short Term Stable) packages from the CentOS Storage SIG repository
centos-release-gluster41.noarch : Gluster 4.1 (Long Term Stable) packages from the CentOS Storage SIG repository
centos-release-gluster5.noarch : Gluster 5 packages from the CentOS Storage SIG repository
centos-release-gluster6.noarch : Gluster 6 packages from the CentOS Storage SIG repository
centos-release-gluster7.noarch : Gluster 7 packages from the CentOS Storage SIG repository

The output also clearly marks 4.0 as a short-term stable release, so we chose the somewhat newer 4.1 (long-term stable) release to deploy.

1.3 Volume types

For details of the storage types, see: Setting Up Volumes - Gluster Docs

Old versions offer 7 volume types and new versions offer 5. The volume types they have in common are:

  • Distributed (distributed volume: files are placed on bricks according to the hash result; no redundancy; files can be read directly from the bricks)
  • Replicated (replicated volume: similar to RAID 1; files can be read directly)
  • Distributed Replicated (distributed replicated volume: analogous to RAID 10; files can be read directly)

The volume types that differ are:

  • Old versions have stripe (striped volumes), which store files as blocks and cannot be read directly from the bricks,
  • plus the striped combinations: distributed striped, replicated striped, and distributed replicated striped volumes.
  • New versions drop stripe and instead offer Dispersed volumes based on EC (erasure coding),
  • plus the combination Distributed Dispersed (distributed dispersed volumes).

We do not need to consider all of these, because in practice the distributed replicated volume is the usual choice. Its advantages:

  • Distributed storage, so it is efficient
  • Built on replicated volumes, so data is redundant
  • Files can be read directly from the bricks
  • Supported by all versions

Of course, Dispersed volumes (erasure-coded, similar to RAID 5) were refined continuously from 3.6 all the way to the 7.x releases and cost the Gluster developers a great deal of effort; read the linked article if you want the details.
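If you do want to try one, a dispersed volume is created with disperse and redundancy counts instead of replica; with disperse 3 redundancy 1 the volume survives the loss of any one brick, much like RAID 5. A sketch of the commands (the volume name and brick paths are illustrative, not part of this deployment):

```shell
# Erasure-coded volume: 3 bricks, any 1 may fail without data loss
gluster volume create gv-disp disperse 3 redundancy 1 \
  gf-node01:/data/brick1/gv-disp \
  gf-node02:/data/brick1/gv-disp \
  gf-node03:/data/brick1/gv-disp
gluster volume start gv-disp
```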

2 Service deployment

Reference resources official : Quick deployment Guide

2.1 Service planning

Operating system    IP            Host name    Additional hard disk
centos 7.4          10.0.0.101    gf-node01    sdb: 5G
centos 7.4          10.0.0.102    gf-node02    sdb: 5G
centos 7.4          10.0.0.103    gf-node03    sdb: 5G

2.2 Environmental preparation

All 3 servers perform the same steps

#  Disable the firewall, SELinux, etc. (not covered here)
#  Set up hosts resolution
cat >>/etc/hosts <<EOF
10.0.0.101  gf-node01
10.0.0.102  gf-node02
10.0.0.103  gf-node03
EOF

#  Install the 4.1 yum repo and packages
yum install -y centos-release-gluster41
yum install -y glusterfs glusterfs-libs glusterfs-server

#  Start the service and enable it at boot
systemctl start  glusterd.service
systemctl enable glusterd.service
systemctl status glusterd.service
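Before moving on, it is worth confirming the management daemon is actually listening; glusterd's management port is TCP 24007 by default (a quick sanity check added here, not part of the original guide):

```shell
# glusterd listens on TCP 24007 for management traffic
ss -lntp | grep 24007
# equivalent check with the older tooling:
netstat -lntp | grep glusterd
```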

2.3 Format and mount the disk

Create 3 directories in total: brick1 is used to mount sdb, and the other two serve as plain local folders.

Format disk

#  Check out the disk list 
[root@gf-node01 ~]# fdisk -l
Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors

#  Format the disk directly, without partitioning
mkfs.xfs  -i size=512 /dev/sdb

Mount the disk

#  Create directory and mount 
mkdir -p /data/brick{1..3}
echo '/dev/sdb /data/brick1 xfs defaults 0 0' >> /etc/fstab
mount -a && mount

#  View results 
[root@gf-node01 ~]# df -h|grep sd
/dev/sda2        48G  1.7G   47G   4% /
/dev/sdb        5.0G   33M  5.0G   1% /data/brick1

2.4 Establish the host trust pool

The trust pool can be established by running the following commands on any one host. No account or password is required, because the deployment environment is assumed to be secure and trusted.

#  Create a trusted pool 
gluster peer probe gf-node02
gluster peer probe gf-node03

#  Check the status 
[root@gf-node01 ~]# gluster peer status
......
[root@gf-node01 ~]# gluster pool list
UUID					Hostname 	State
4068e219-5141-43a7-81ba-8294536fb054	gf-node02	Connected 
e3faffeb-4b16-45e2-9ff3-1922791e05eb	gf-node03	Connected 
3e6a4567-eda7-4001-a5d5-afaa7e08ed93	localhost	Connected

Note: once the trust pool is established, new servers can only be added to it from a node that is already in the pool.
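As a concrete example, expanding the pool later must be done from an existing member; gf-node04 below is a hypothetical new server:

```shell
# Run on any node already in the pool, e.g. gf-node01
gluster peer probe gf-node04     # add the new server to the trust pool
gluster peer status              # verify it shows up as Connected
gluster peer detach gf-node04    # remove it again if needed
```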

3 Use distributed replication volumes

We only experiment with GlusterFS distributed replicated volumes here; for other volume types, search for and test references yourself if needed.

3.1 Notes on creating a distributed replicated volume

  1. Command: gluster volume create gv1 replica 3 DIR1 DIR2 DIR3 ...
  2. The replica count must be at least 3 (replica 3); with only 2 copies, split-brain is likely, and creation prompts: Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this
  3. If the number of bricks equals the replica count (3), a replicated volume is created; if it is a multiple of the replica count, a distributed replicated volume is created
  4. Each group of 3 bricks, in order, is created as one replicated set; the replicated sets are then combined into a distributed volume
  5. The replica ordering of a distributed replicated volume follows the create command; it is not random
  6. If not all bricks are independent disks, the force parameter must be appended, otherwise creation fails with: volume create: gv1: failed: The brick gf-node01:/data/brick2/gv1 is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
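Point 2 mentions the arbiter alternative: instead of three full data copies, the third brick of each set can be an arbiter that stores only file metadata, saving space while still preventing split-brain. A sketch of the command (volume name and brick paths are illustrative, not part of this deployment):

```shell
# replica 3 arbiter 1: per set, 2 full data bricks + 1 metadata-only arbiter
gluster volume create gv-arb replica 3 arbiter 1 \
  gf-node01:/data/brick1/gv-arb \
  gf-node02:/data/brick1/gv-arb \
  gf-node03:/data/brick1/gv-arb
gluster volume start gv-arb
```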

3.2 Distributed replication volume creation

#  Create a distributed replication volume 
gluster volume create gv1 replica 3 \
  gf-node01:/data/brick1/gv1 \
  gf-node01:/data/brick2/gv1 \
  gf-node02:/data/brick1/gv1 \
  gf-node02:/data/brick2/gv1 \
  gf-node03:/data/brick1/gv1 \
  gf-node03:/data/brick2/gv1 \
  force

#  Start the volume
gluster volume start gv1
  
#  View the volume info
[root@gf-node01 ~]# gluster volume info
Volume Name: gv1
Type: Distributed-Replicate
Volume ID: e1e004fa-5588-4629-b7ff-048c4e17de91
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: gf-node01:/data/brick1/gv1
Brick2: gf-node01:/data/brick2/gv1
Brick3: gf-node02:/data/brick1/gv1
Brick4: gf-node02:/data/brick2/gv1
Brick5: gf-node03:/data/brick1/gv1
Brick6: gf-node03:/data/brick2/gv1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

3.3 Use of distributed replication volumes

#  Mount the volume 
[root@gf-node01 ~]# mount -t glusterfs gf-node01:/gv1 /mnt

#  Write test data
[root@gf-node01 ~]# touch /mnt/test{1..9}
[root@gf-node01 ~]# ls /mnt/test{1..9}
/mnt/test1  /mnt/test2  /mnt/test3  /mnt/test4  /mnt/test5  /mnt/test6  /mnt/test7  /mnt/test8  /mnt/test9

#  Verify test data 
[root@gf-node01 ~]# ls /data/brick*/*
/data/brick1/gv1:
test1  test2  test4  test5  test8  test9
/data/brick2/gv1:
test1  test2  test4  test5  test8  test9

[root@gf-node02 ~]# ls /data/brick*/*
/data/brick1/gv1:
test1  test2  test4  test5  test8  test9
/data/brick2/gv1:
test3  test6  test7

[root@gf-node03 ~]# ls /data/brick*/*
/data/brick1/gv1:
test3  test6  test7
/data/brick2/gv1:
test3  test6  test7

Conclusion: you can see that the first three bricks form one replica set and the last three form another, so the brick order given when creating the volume is critical.
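To make the client mount persistent across reboots, an fstab entry works like any other filesystem; the backup-volfile-servers mount option lets the client fetch the volume layout from another node when gf-node01 is down:

```shell
# _netdev delays the mount until networking is up
echo 'gf-node01:/gv1 /mnt glusterfs defaults,_netdev,backup-volfile-servers=gf-node02:gf-node03 0 0' >> /etc/fstab
mount -a
```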


Copyright notice
This article was written by [jackxiao]; please include a link to the original when reposting. Thanks.
https://yzsam.com/2021/11/20211116174018876w.html
