
OpenStack Learning Series 12: Installing Ceph and Integrating with OpenStack

2022-06-09 04:36:00 Have a cup of tea

    Ceph is a distributed storage system designed for excellent performance, reliability, and scalability. It provides file, block, and object storage interfaces, and the cluster can be expanded dynamically. In the cloud environments of many Chinese companies, Ceph is commonly used as the backend storage for OpenStack to improve data-handling efficiency.

1. Install Ceph (version: Nautilus)

#  Install ceph on node1, node2, and node3 to form a cluster
yum -y install ceph
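
As an optional sanity check, confirm on each node that a Nautilus build was installed:

#  verify the installed release; the version string should contain "nautilus"
ceph --version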

Set up the monitor (run on node1)

    Create the configuration file ceph.conf on node1
[root@node1 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = node1
mon host = 192.168.31.101
public_network = 192.168.31.0/24
cluster_network = 172.16.100.0/24
      Create a keyring for the cluster and generate a monitor secret key

ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
    Generate an administrator keyring, create the client.admin user, and add the user to the keyring
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
    --gen-key -n client.admin \
    --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'

      Generate the bootstrap-osd keyring, create the client.bootstrap-osd user, and add the user to the keyring

sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
    --gen-key -n client.bootstrap-osd \
    --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
    Import the generated keys into ceph.mon.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
sudo chown ceph:ceph /tmp/ceph.mon.keyring
    Generate the monitor map from the hostname, host IP address, and FSID, and save it as /tmp/monmap
monmaptool --create --add node1 192.168.31.101 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
    Create a default data directory on the monitor host
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node1

     Populate the monitor daemon with the monitor map and keyring

sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

     Start the monitor

sudo systemctl start ceph-mon@node1
sudo systemctl enable ceph-mon@node1
    Verify that the monitor is running
ceph -s
    Enable the msgr v2 protocol for the monitor
ceph mon enable-msgr2
    Disable the insecure mode behind the warning "mon is allowing insecure global_id reclaim"
ceph config set mon auth_allow_insecure_global_id_reclaim false
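
To double-check that the setting took effect, it can be read back from the monitor configuration:

#  should print "false"
ceph config get mon auth_allow_insecure_global_id_reclaim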
Warning: "Module 'restful' has failed dependency: No module named 'pecan'". Install the missing Python packages as follows, then restart the mgr or reboot the system.

pip3 install pecan werkzeug
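
Once the mgr has been configured (next section), it can be restarted through its systemd unit; a minimal sketch, assuming the same unit naming used for the monitor above:

#  restart the manager so the restful module picks up the newly installed Python packages
systemctl restart ceph-mgr@node1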

Configure the manager daemon mgr (run on node1)

mkdir /var/lib/ceph/mgr/ceph-node1
ceph auth get-or-create mgr.node1 mon 'allow profile mgr' osd 'allow *' mds 'allow *' > \
    /var/lib/ceph/mgr/ceph-node1/keyring
-------------------------------------
systemctl start ceph-mgr@node1
systemctl enable ceph-mgr@node1
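
As a quick optional check, confirm that the mgr has registered as active:

#  should report node1 as the active manager
ceph mgr stat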

Add OSDs (run on node1)

The bluestore backend is used here; compared with filestore it has lower overhead and saves space.
#  On node1, add three 100G disks to ceph
ceph-volume lvm create --data /dev/sdb
ceph-volume lvm create --data /dev/sdc
ceph-volume lvm create --data /dev/sdd
Add OSDs on the other nodes

#  Synchronize configuration to other nodes 
scp /etc/ceph/* node2:/etc/ceph/
scp /etc/ceph/* node3:/etc/ceph/
scp /var/lib/ceph/bootstrap-osd/* node2:/var/lib/ceph/bootstrap-osd/
scp /var/lib/ceph/bootstrap-osd/* node3:/var/lib/ceph/bootstrap-osd/
#  Add OSDs on node2 and node3
ceph-volume lvm create --data /dev/sdb
ceph-volume lvm create --data /dev/sdc
ceph-volume lvm create --data /dev/sdd
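
Before moving on, it can help to confirm that all nine OSDs were created and are up:

#  list the OSD hierarchy; the 9 OSDs across node1-node3 should all show "up"
ceph osd tree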

Check the status

[root@node1 ~]# ceph -s
  cluster:
    id:     a7f64266-0894-4f1e-a635-d0aeaca0e993
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum node1 (age 8m)
    mgr: node1(active, since 7m)
    osd: 9 osds: 9 up (since 5s), 9 in (since 7s)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   9.0 GiB used, 891 GiB / 900 GiB avail
    pgs: 

Expand the monitors (extend node2 and node3 into monitors as well)

Modify mon initial members, mon host, and public_network in ceph.conf, and synchronize it to the other nodes.
[root@node1 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = node1,node2,node3
mon host = 192.168.31.101,192.168.31.102,192.168.31.103
public_network = 192.168.31.0/24
cluster_network = 172.16.100.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
------------------------------------#  Copy files to other nodes 
scp /etc/ceph/ceph.conf  node2:/etc/ceph/
scp /etc/ceph/ceph.conf  node3:/etc/ceph/
Add monitors on node2 and node3

-------------------------#  The commands below are for node2; node3 is similar, just change the names accordingly
#  Get the existing mon.keyring
ceph auth get mon. -o mon.keyring
#  Get the existing mon.map
ceph mon getmap -o mon.map
#  Create the monitor data directory; /var/lib/ceph/mon/ceph-node2 is created automatically
ceph-mon -i node2 --mkfs --monmap mon.map --keyring mon.keyring
chown ceph:ceph /var/lib/ceph/mon/ceph-node2 -R
#  Start the mon
systemctl start ceph-mon@node2
systemctl enable ceph-mon@node2

-------------------------------------------
#  Check the status 
[root@node1 ~]# ceph -s
  cluster:
    id:     a7f64266-0894-4f1e-a635-d0aeaca0e993
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 2m)
    mgr: node1(active, since 20m)
    osd: 9 osds: 9 up (since 29s), 9 in (since 13m)
 
  task status:
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   9.1 GiB used, 891 GiB / 900 GiB avail
    pgs:

Install the Dashboard

 

#  Install the Dashboard on node1
yum install ceph-mgr-dashboard -y
#  Enable the dashboard mgr module
ceph mgr module enable dashboard
#  Generate and install a self-signed certificate
ceph dashboard create-self-signed-cert
#  Create a dashboard login user and password (ceph / 123456)
echo 123456 > ceph-dashboard-password.txt
ceph dashboard ac-user-create ceph -i ceph-dashboard-password.txt administrator
#  Check how the service is accessed
[root@node1 ~]# ceph mgr services
{
    "dashboard": "https://node1:8443/"
}
Access it at https://node1:8443/ and sign in with the account and password created above (ceph / 123456).

2. Using Ceph as the OpenStack Backend Storage

 

---------------------------------------------# Preparation before integrating OpenStack with Ceph
#  node1, node2, and node3 form the Ceph cluster and already have the packages installed, so only node4 and node5 need ceph-common
yum -y install ceph-common

---  Copy the configuration files from node1 to all nova nodes
for i in $(seq 2 5); do scp /etc/ceph/* node$i:/etc/ceph;done

--- Create storage pools (64 placement groups each)
ceph osd pool create images 64
ceph osd pool create vms 64
ceph osd pool create volumes 64
ceph osd pool application enable images rbd
ceph osd pool application enable vms rbd
ceph osd pool application enable volumes rbd
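
Optionally, verify that the three pools exist and report capacity:

#  images, vms and volumes should all be listed
ceph osd lspools
#  show per-pool usage and available capacity
ceph df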

--- Configure Authentication 
ceph auth get-or-create client.cinder mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images' \
    -o /etc/ceph/ceph.client.cinder.keyring
# ceph auth caps client.cinder mon 'allow *' osd 'allow *'    #  grant full permissions; these two commented commands are a breakdown of the single command above
# ceph auth get client.cinder -o /etc/ceph/ceph.client.cinder.keyring   #  export the keyring for the service
ceph auth get-or-create client.glance mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' \
    -o /etc/ceph/ceph.client.glance.keyring
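
If you want to confirm the capabilities were recorded as intended, the two users can be read back from the cluster:

#  print the key and caps of the OpenStack client users
ceph auth get client.glance
ceph auth get client.cinder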
    
--- Copy the generated keyrings to all nova nodes
for i in $(seq 2 5); do scp /etc/ceph/*.keyring node$i:/etc/ceph;done
---  Fix the ownership of the keyring files on all nova nodes
for i in $(seq 2 5); do ssh node$i chown glance:glance /etc/ceph/ceph.client.glance.keyring ;done      #  these two files are used by glance and cinder respectively
for i in $(seq 2 5); do ssh node$i chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring;done

--- Add the secret key to libvirt; run on all nova nodes (node2, node3, node4, node5)
ceph auth get-key client.cinder | tee client.cinder.key
uuidgen #  Generate a UUID; this guide reuses the fixed UUID shown in secret.xml below
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>ae3d9d0a-df88-4168-b292-c07cdc2d8f02</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

virsh secret-define --file secret.xml

virsh secret-set-value --secret ae3d9d0a-df88-4168-b292-c07cdc2d8f02 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
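
On each nova node you can then verify that libvirt stored the secret:

#  the UUID from secret.xml should appear in the list
virsh secret-list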

---------------------------------------------# Integrate OpenStack with Ceph: configure the Glance node (node1); after switching the backend, images need to be uploaded again
# crudini --set /etc/glance/glance-api.conf DEFAULT "show_image_direct_url" "True"
# crudini --set /etc/glance/glance-api.conf glance_store "default_store" "rbd"
# crudini --set /etc/glance/glance-api.conf glance_store "rbd_store_user" "glance"
# crudini --set /etc/glance/glance-api.conf glance_store "rbd_store_pool" "images"
# crudini --set /etc/glance/glance-api.conf glance_store "stores" "glance.store.filesystem.Store, glance.store.http.Store, glance.store.rbd.Store"
# crudini --set /etc/glance/glance-api.conf paste_deploy "flavor" "keystone"
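
After restarting the Glance API (see the restart section below), images have to be uploaded again so they land in the rbd store. A minimal sketch, assuming a locally downloaded CirrOS file; the file name and image name are placeholders:

#  upload a test image into the rbd-backed store
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.5.2-x86_64-disk.img --public cirros-ceph
#  the new image should now appear as an object in the images pool
rbd ls images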

---------------------------------------------# Integrate OpenStack with Ceph: configure the Cinder nodes (node4, node5)
crudini --set /etc/cinder/cinder.conf DEFAULT "enabled_backends" "lvm,nfs,ceph"   #  enable lvm, nfs, and ceph backends at the same time
crudini --set /etc/cinder/cinder.conf ceph "volume_driver" "cinder.volume.drivers.rbd.RBDDriver"
crudini --set /etc/cinder/cinder.conf ceph "volume_backend_name" "ceph"
crudini --set /etc/cinder/cinder.conf ceph "rbd_pool" "volumes"
crudini --set /etc/cinder/cinder.conf ceph "rbd_ceph_conf" "/etc/ceph/ceph.conf"
crudini --set /etc/cinder/cinder.conf ceph "rbd_flatten_volume_from_snapshot" "false"
crudini --set /etc/cinder/cinder.conf ceph "rbd_max_clone_depth" "5"
crudini --set /etc/cinder/cinder.conf ceph "rados_connect_timeout" "-1"
crudini --set /etc/cinder/cinder.conf ceph "glance_api_version" "2"
crudini --set /etc/cinder/cinder.conf ceph "rbd_user" "cinder"
crudini --set /etc/cinder/cinder.conf ceph "rbd_secret_uuid" "ae3d9d0a-df88-4168-b292-c07cdc2d8f02"

---------------------------------------------# Integrate OpenStack with Ceph: configure the Nova nodes (node2, node3, node4, node5)
crudini --set /etc/nova/nova.conf libvirt "images_type" "rbd"
crudini --set /etc/nova/nova.conf libvirt "images_rbd_pool" "vms"
crudini --set /etc/nova/nova.conf libvirt "images_rbd_ceph_conf" "/etc/ceph/ceph.conf"
crudini --set /etc/nova/nova.conf libvirt "rbd_user" "cinder"
crudini --set /etc/nova/nova.conf libvirt "rbd_secret_uuid" "ae3d9d0a-df88-4168-b292-c07cdc2d8f02"
crudini --set /etc/nova/nova.conf libvirt "inject_password" "false"
crudini --set /etc/nova/nova.conf libvirt "inject_key" "false"
crudini --set /etc/nova/nova.conf libvirt "inject_partition" "-2"
crudini --set /etc/nova/nova.conf libvirt "live_migration_flag" "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"

---------------------------------------------# Restart the OpenStack services
---  Control node
systemctl restart openstack-glance-api openstack-nova-api openstack-cinder-api openstack-cinder-scheduler

---  Compute nodes
for i in $(seq 2 5); do ssh node$i systemctl restart openstack-nova-compute;done

---  Storage nodes 
for i in 4 5; do ssh node$i systemctl restart openstack-cinder-volume;done

---------------------------------------------# OpenStack verification; run on node1
[root@node1 ~]# cinder type-create ceph
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| 228269c3-6008-4e62-9408-a2fb04d74c1a | ceph | -           | True      |
+--------------------------------------+------+-------------+-----------+

[root@node1 ~]# cinder type-key ceph set volume_backend_name=ceph
---  After exercising glance, cinder, and nova, confirm with rbd ls that objects appear in the images / vms / volumes pools (example below)
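
A minimal example of those checks, run from a node that has the admin keyring:

rbd ls images      #  populated after re-uploading images through glance
rbd ls volumes     #  populated after creating a volume through cinder
rbd ls vms         #  populated after booting an instance through nova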

 Confirm after creating a volume 
[root@node1 ~]# openstack volume create --size 1 --type ceph volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2022-03-01T10:22:13.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 3a0ee405-ad4b-4453-8b66-029aa67f7af0 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | ceph                                 |
| updated_at          | None                                 |
| user_id             | 5a44718261844cbd8a65621b9e3cea8d     |
+---------------------+--------------------------------------+
[root@node1 ~]# rbd -p volumes ls -l     #  view the created block device in ceph
NAME                                        SIZE  PARENT FMT PROT LOCK 
volume-3a0ee405-ad4b-4453-8b66-029aa67f7af0 1 GiB          2 
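
To also exercise the Nova path, you could boot a test instance from the Ceph-backed image and check the vms pool; the image, flavor, and network names below are placeholders for whatever exists in your environment:

#  boot a test instance whose root disk is created as an RBD image in the vms pool
openstack server create --image cirros-ceph --flavor m1.tiny --network private vm-ceph-test
#  once the instance is ACTIVE its disk should be listed here
rbd -p vms ls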

 
