CEPH principle
2022-07-02 17:24:00 · by "30 hurdles in life"
Ceph is a unified distributed storage system that provides good performance, reliability, and scalability.
Brief introduction:
High performance
Abandons the traditional centralized addressing scheme in favor of the CRUSH algorithm: data is distributed evenly, with a high degree of parallelism
Takes the isolation of disaster-tolerance domains into account, and can implement replica placement rules for all kinds of workloads, such as cross-machine-room replication and rack awareness
Can support thousands of storage nodes, and data volumes from TB to PB scale
High availability
The number of replicas can be flexibly controlled
Supports failure-domain separation and strong data consistency
Automatic repair and self-healing in a variety of failure scenarios
No single point of failure; automatic management
High scalability
Decentralized
Flexible expansion
Performance grows linearly as the number of nodes increases
Rich features
Three storage interfaces are supported: block storage, file storage, and object storage
Supports custom interfaces, with drivers in multiple languages
Architecture
Three interfaces are supported:
Object: a native API, plus Swift- and S3-compatible APIs
Block: supports thin provisioning, snapshots, and clones
File: a POSIX interface with snapshot support
Components
Monitor: a Ceph cluster needs several Monitors, which form a small cluster of their own; they synchronize data via Paxos and store metadata such as the OSD map.
OSD: full name Object Storage Device, the process that answers client requests and returns the actual data; a Ceph cluster generally has many OSDs. Its main function is storing user data: when a raw disk is used directly as the storage target, that disk is called an OSD, and when a directory is used as the storage target, that directory is likewise called an OSD.
MDS: full name Ceph Metadata Server, the metadata service that CephFS depends on; object storage and block device storage do not need this service.
Object: the lowest-level storage unit in Ceph is the object; every piece of data or configuration is an object, and each object contains an ID, metadata, and the raw data.
Pool: a pool is a logical partition for storing objects. It specifies the data-redundancy type and the corresponding number of copies (3 replicas by default). Different storage types need separate pools, such as RBD.
PG: full name Placement Group, a logical concept. One OSD holds multiple PGs; the PG layer was introduced to distribute and locate data better. Every pool contains many PGs; a PG is a collection of objects, and the PG is the smallest unit of data rebalancing and recovery on the server side.
A pool is the logical partition where Ceph stores data, and it plays the role of a namespace
Every pool contains a certain number of PGs
The objects in a PG are mapped onto different OSDs
Pools are distributed across the whole cluster
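The object-to-PG step described above can be sketched in a few lines of Python. This is a toy illustration, not Ceph's real implementation (Ceph uses the rjenkins hash and a "stable mod"; the object name and pg count here are made up):

```python
import hashlib

def object_to_pg(object_name: str, pg_num: int) -> int:
    """Map an object name to a placement group, as Ceph conceptually does:
    hash the name, then reduce it modulo the pool's PG count.
    (md5 stands in for Ceph's rjenkins hash for illustration.)"""
    digest = hashlib.md5(object_name.encode()).hexdigest()
    return int(digest, 16) % pg_num

# Every object deterministically lands in exactly one PG of its pool.
pg = object_to_pg("rbd_data.0001", 128)
print(pg)  # a stable value in the range [0, 128)
```

Because the mapping depends only on the name and pg_num, any client can compute an object's PG without asking a central server, which is the point of the "no centralized addressing" claim above.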
FileStore vs. BlueStore: FileStore is the backend storage engine used by default in older versions; if you use FileStore, the XFS file system is recommended. BlueStore is the new backend storage engine: it can manage raw disks directly, abandoning local file systems such as ext4 and XFS. Because it operates on the physical disk directly, it is also much more efficient.
RADOS: full name Reliable Autonomic Distributed Object Store, the essence of a Ceph cluster, implementing data distribution, failover, and other cluster operations.
Librados: librados is the library layered on RADOS. Because the RADOS protocol is hard to access directly, the upper layers RBD, RGW, and CephFS all access RADOS through librados; PHP, Ruby, Java, Python, C, and C++ are currently supported.
CRUSH: CRUSH is the data-distribution algorithm used by Ceph; similar to consistent hashing, it sends data to where it is supposed to go.
RBD: full name RADOS Block Device, the block device service Ceph provides to the outside world, e.g. for virtual machine disks; it supports snapshots.
RGW: full name RADOS Gateway, the object storage service Ceph provides externally; its interface is compatible with S3 and Swift.
CephFS: full name Ceph File System, the file system service Ceph provides externally.
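To make the "similar to consistent hashing" claim about CRUSH concrete, here is a toy "straw"-style placement in Python. This is a sketch under strong simplifying assumptions, not the real CRUSH algorithm (real CRUSH walks a hierarchy of weighted buckets and respects failure domains):

```python
import hashlib

def place_pg(pg_id: int, osds: list, replicas: int = 3) -> list:
    """Pick `replicas` OSDs for a PG. Each OSD draws a pseudo-random
    'straw' from hash(pg_id, osd); the longest straws win. Because a
    straw depends only on (pg_id, osd), adding or removing one OSD only
    remaps the PGs where that OSD wins or used to win -- the property
    that makes CRUSH behave like consistent hashing."""
    def straw(osd: str) -> int:
        return int(hashlib.md5(f"{pg_id}:{osd}".encode()).hexdigest(), 16)
    return sorted(osds, key=straw, reverse=True)[:replicas]

osds = [f"osd.{i}" for i in range(6)]
print(place_pg(42, osds))  # a stable, pseudo-random set of 3 OSDs
```

Any client with the OSD list can recompute the same placement, so no lookup table has to be stored or consulted, and cluster changes move only a fraction of the data.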
Block storage:
Typical devices: disk arrays and hard disks; block storage mainly maps raw disk space to hosts
Advantages: data protection through RAID and LVM
Many cheap disks can be combined to provide capacity
Multiple disks can be combined into logical volumes to improve read/write efficiency
Disadvantages:
When building a SAN, the fibre-channel switches are expensive
Data cannot be shared between hosts
Use scenarios:
Disk storage allocation for docker containers and virtual machines
Log storage
File storage:
Typical devices: FTP and NFS servers. To overcome the problem that block storage cannot share files, file storage was introduced: set up FTP or NFS servers on a host and you have file storage.
Advantages: low cost, any machine will do; files can be conveniently shared.
Disadvantages: low read/write efficiency; slow transfer rate.
Use scenarios:
Log storage
File storage with a directory structure
Object storage
Typical devices:
Distributed servers with large-capacity hard disks (e.g. Swift, S3). Multiple servers with built-in high-capacity disks run object-storage management software and provide read/write access to the outside.
Advantages: the high read/write speed of block storage, plus the sharing and other features of file storage
Use scenarios: picture storage, video storage
Deployment
Because we use Ceph in a Kubernetes cluster, and for convenience of management, we deploy the Ceph cluster with Rook. Rook is an open-source cloud-native storage orchestration tool: it provides a platform framework and support for various storage solutions, integrating them natively into cloud-native environments.
Rook turns storage software into self-managing, self-scaling, and self-healing storage services through automated deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook relies on the underlying cloud-native platform's container management, scheduling, and orchestration to provide these capabilities; it is, in fact, what we usually call an operator. Rook uses extension points to integrate deeply into the cloud-native environment, and provides a seamless experience for scheduling, lifecycle management, resource management, security, and monitoring.
Rook contains multiple components:
Rook Operator: Rook's core component. The operator is a simple container that automatically bootstraps the storage cluster and monitors the storage daemons to keep the storage cluster healthy.
Rook Agent: runs on every storage node and configures a FlexVolume or CSI plugin to integrate with Kubernetes' storage-volume control framework. The agent handles all storage operations, such as attaching network storage devices, and mounting storage volumes and formatting file systems on the host.
Rook Discover: detects the storage devices attached to the storage nodes.
Rook also deploys Ceph's MON, OSD, and MGR daemons in the form of Kubernetes pods. The Rook operator lets users create and manage storage clusters through CRDs; each resource type defines its own CRD:
Cluster: provides the configuration of the storage cluster, used to provide block storage, object storage, and shared file systems; each cluster can contain multiple pools
Pool: supports block storage; pools also provide internal support for file and object storage
Object Store: exposes a storage service with an S3-compatible interface
File System: provides shared storage for multiple Kubernetes pods
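As an illustration of the Cluster CRD mentioned above, a minimal CephCluster manifest for Rook might look like the following. This is a sketch: the namespace, Ceph image tag, and device settings are assumptions you would adapt to your own environment:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph           # namespace where the Rook operator runs (assumed)
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v17 # pick a Ceph release you have tested (assumed tag)
  dataDirHostPath: /var/lib/rook # host path for mon/OSD configuration data
  mon:
    count: 3                     # odd number of monitors for a Paxos quorum
  mgr:
    count: 1
  storage:
    useAllNodes: true            # let Rook create OSDs on every node...
    useAllDevices: true          # ...from every empty raw device it finds
```

Applying such a manifest with kubectl is what triggers the operator to start the MON, MGR, and OSD pods described above.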