Knowledge Review
2022-07-06 23:23:00 【Daily Xiaoxin】
1. Briefly describe HBase's data model and storage architecture.
Data model: the HBase data model consists of namespaces, row keys, column families, columns, timestamps, and cells.
- Namespace: analogous to a database in a relational system; a logical container for tables
- Row key: the rowkey, which uniquely identifies a row
- Column family: a group of columns stored together; the set of column families is fixed when the table is created
- Column: a column family can contain many columns, and columns can be added at any time
- Timestamp: every update writes a new version tagged with a timestamp, so reading by timestamp returns the latest data; this works around the fact that files on HDFS cannot be modified in place
- Cell: the unit that holds the actual data, addressed by {rowkey, column family:column, timestamp}; HBase stores all cell values as uninterpreted byte arrays
Storage architecture: Client, Master (master node), HRegionServer (hosts and maintains HRegions), HLog (write-ahead log), HRegion (holds several Stores), Store (one per column family; the actual data storage), StoreFile (data persisted to HDFS), MemStore (in-memory write buffer), and Zookeeper (monitors the cluster and stores the root/meta mapping information).
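To make the data model concrete, here is a minimal sketch using the HBase Java client API from Scala. The table name demo:user (namespace demo), the column family info, and the column name are made-up examples, and the sketch assumes an hbase-site.xml with the Zookeeper quorum on the classpath and that the table already exists.

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Put}
import org.apache.hadoop.hbase.util.Bytes

object HBaseModelDemo {
  def main(args: Array[String]): Unit = {
    // Connection settings are read from hbase-site.xml on the classpath
    val conn = ConnectionFactory.createConnection(HBaseConfiguration.create())
    // "demo" is the namespace, "user" the table (both assumed to exist)
    val table = conn.getTable(TableName.valueOf("demo:user"))
    try {
      // A cell is addressed by {rowkey, column family:column, timestamp}
      val put = new Put(Bytes.toBytes("row-0001"))    // row key
      put.addColumn(Bytes.toBytes("info"),            // column family
                    Bytes.toBytes("name"),            // column (qualifier)
                    Bytes.toBytes("alice"))           // cell value: raw bytes
      table.put(put)

      // Reading back: the newest version (highest timestamp) is returned by default
      val result = table.get(new Get(Bytes.toBytes("row-0001")))
      val name = Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name")))
      println(s"row-0001 info:name = $name")
    } finally {
      table.close()
      conn.close()
    }
  }
}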
2. Briefly describe HBase's read path (how data is queried).
The client first contacts Zookeeper to find which HRegionServer holds the meta table, reads meta (caching it in memory), and from it locates the HRegion responsible for the target rowkey. Because the data may be spread across multiple HRegions and Stores, the server builds RegionScanner/StoreScanner scanners, which first check whether the data is in the MemStore and then scan the StoreFiles, and finally the merged result is returned.
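As a client-side sketch of the read flow, here is a range scan against the hypothetical demo:user table from above, using the HBase 2.x client API (withStartRow/withStopRow; older clients use setStartRow/setStopRow). Server-side, each region touched by the scan checks its MemStore and StoreFiles as described.

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Scan}
import org.apache.hadoop.hbase.util.Bytes

object HBaseReadDemo {
  def main(args: Array[String]): Unit = {
    val conn = ConnectionFactory.createConnection(HBaseConfiguration.create())
    val table = conn.getTable(TableName.valueOf("demo:user"))
    try {
      // Rowkeys are sorted, so [startRow, stopRow) is a contiguous slice of the table
      val scan = new Scan()
        .withStartRow(Bytes.toBytes("row-0000"))
        .withStopRow(Bytes.toBytes("row-0100"))
      val scanner = table.getScanner(scan)
      try {
        var result = scanner.next()   // null when the scan is exhausted
        while (result != null) {
          val name = Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name")))
          println(Bytes.toString(result.getRow) + " -> " + name)
          result = scanner.next()
        }
      } finally scanner.close()
    } finally {
      table.close()
      conn.close()
    }
  }
}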
3. Briefly describe HBase's write path.
The client first connects, obtains the meta table location from Zookeeper, and reads meta to find the HRegion that owns the target rowkey. On the region server, the write is first appended to the HLog (write-ahead log) and then written into the MemStore. When the MemStore reaches its threshold (128 MB by default), it is flushed to disk as a StoreFile; as StoreFiles accumulate, they are merged (compacted) in the background.
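A client-side write sketch (same hypothetical demo:user table and info:name column as above): each mutation goes to the WAL and then the MemStore on the server, while on the client a BufferedMutator batches Puts to cut down RPC round trips.

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

object HBaseWriteDemo {
  def main(args: Array[String]): Unit = {
    val conn = ConnectionFactory.createConnection(HBaseConfiguration.create())
    // BufferedMutator buffers Puts client-side and sends them in batches
    val mutator = conn.getBufferedMutator(TableName.valueOf("demo:user"))
    try {
      for (i <- 1 to 1000) {
        val put = new Put(Bytes.toBytes(f"row-$i%04d"))
        put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes(s"user-$i"))
        mutator.mutate(put)   // buffered; sent when the buffer fills
      }
      mutator.flush()          // force out the remaining buffered mutations
    } finally {
      mutator.close()
      conn.close()
    }
  }
}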
4. Explain the differences and connections between Spark's caching operators cache and persist and the checkpoint mechanism.
- cache: one of the control operators. cache() = persist() = persist(StorageLevel.MEMORY_ONLY), i.e. cache is just a special case of persist. It is lazy: the first action computes the RDD and populates the cache, and only later actions read from it (code below).
- persist: one of the control operators. It supports multiple storage levels; the common ones are MEMORY_ONLY and MEMORY_AND_DISK (a persist sketch follows the two code blocks below).
- checkpoint: mainly used to persist an RDD to files in a checkpoint directory; it is also lazy (code below).
- All three are control operators that persist data in different forms: cache is memory-based, checkpoint is disk-based, and persist is the most general, covering multiple storage levels.
/* Control operator cache(): lazy loading */
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object CtrlCache {
  def main(args: Array[String]): Unit = {
    // Create the Spark context
    val context = new SparkContext(new SparkConf().setMaster("local").setAppName("cache" + System.currentTimeMillis()))
    // Load the data
    val value: RDD[String] = context.textFile("src/main/resources/user.log")
    // Mark the RDD as cached (lazy: nothing is cached until the first action)
    value.cache()
    // Time the first count: it reads the file and populates the cache
    val start: Long = System.currentTimeMillis()
    val count: Long = value.count()
    val end: Long = System.currentTimeMillis()
    println("The data has " + count + " rows, took: " + (end - start) + "ms")
    // Time the second count: it is served from the cache and is much faster
    val start1: Long = System.currentTimeMillis()
    val count1: Long = value.count()
    val end1: Long = System.currentTimeMillis()
    println("The data has " + count1 + " rows, took: " + (end1 - start1) + "ms")
    context.stop()
  }
}

/* Control operator checkpoint */
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object CheckPoint {
  def main(args: Array[String]): Unit = {
    // Create the Spark context
    val context = new SparkContext(new SparkConf().setMaster("local").setAppName("checkpoint" + System.currentTimeMillis()))
    // Set the checkpoint directory (in production this is usually an HDFS path)
    context.setCheckpointDir("./point")
    // Load the data
    val value: RDD[String] = context.textFile("src/main/resources/user.log")
    // Split the lines into words
    val words: RDD[String] = value.flatMap(_.split(" "))
    println("Number of partitions: " + words.getNumPartitions)
    // Mark the RDD for checkpointing (lazy: nothing is written yet)
    value.checkpoint()
    // The first action triggers the job and writes the checkpoint files
    value.count()
    context.stop()
  }
}
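
/* Control operator persist */
For completeness, a minimal persist sketch assuming the same user.log input as the examples above: MEMORY_AND_DISK keeps partitions in memory and spills what does not fit to local disk instead of recomputing it.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD
import org.apache.spark.storage.StorageLevel

object CtrlPersist {
  def main(args: Array[String]): Unit = {
    val context = new SparkContext(new SparkConf().setMaster("local").setAppName("persist" + System.currentTimeMillis()))
    val value: RDD[String] = context.textFile("src/main/resources/user.log")
    // MEMORY_AND_DISK: keep partitions in memory, spill to disk when memory is short
    value.persist(StorageLevel.MEMORY_AND_DISK)
    // The first action computes and persists; later actions reuse the persisted data
    println("rows: " + value.count())
    println("rows again: " + value.count())
    context.stop()
  }
}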

5. What are the five properties of an RDD? List commonly used RDD operators (transformations and actions).
- Five properties (see the sketch after this list):
① An RDD is composed of a set of partitions
② An RDD records its dependencies on parent RDDs
③ An RDD can report the best (preferred) location for computing each partition
④ An optional Partitioner applies to key-value RDDs
⑤ A compute function is applied to each partition
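These five properties correspond one-to-one with members of Spark's abstract RDD class. Below is a simplified sketch following the names in org.apache.spark.rdd.RDD (the real class has more members and a ClassTag bound; this is illustrative, not the full definition):

import org.apache.spark.{Dependency, Partition, Partitioner, TaskContext}

// Simplified from the Spark source: the five properties as API members
abstract class SketchRDD[T] {
  // ① a list of partitions
  protected def getPartitions: Array[Partition]
  // ② dependencies on parent RDDs
  protected def getDependencies: Seq[Dependency[_]]
  // ③ optionally, preferred (best) locations for computing each partition
  protected def getPreferredLocations(split: Partition): Seq[String] = Nil
  // ④ optionally, a Partitioner for key-value RDDs
  val partitioner: Option[Partitioner] = None
  // ⑤ a compute function applied to each partition
  def compute(split: Partition, context: TaskContext): Iterator[T]
}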
- Common RDD operators (a combined sketch follows the list):
– Transformation operators:
map: one element in, one element out; used for per-element processing such as parsing or splitting fields
flatMap: like map followed by flatten; one input element can produce zero or more output elements (e.g. splitting lines into words)
sortByKey: used on key-value RDDs; sorts by key
reduceByKey: aggregates the values that share the same key
– Action operator :
count: returns the number of elements in the dataset
foreach: applies a function to each element in the dataset
collect: brings the computed results back to the Driver
– Control operators: cache, persist, checkpoint (covered above)
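A minimal word-count-style sketch exercising the operators listed above (the input path is a placeholder):

import org.apache.spark.{SparkConf, SparkContext}

object OperatorDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("operators"))
    val lines = sc.textFile("src/main/resources/user.log")   // placeholder input
    val counts = lines
      .flatMap(_.split(" "))      // flatMap: one line in, many words out
      .map(word => (word, 1))     // map: one in, one out
      .reduceByKey(_ + _)         // reduceByKey: combine values with the same key
      .sortByKey()                // sortByKey: sort the key-value RDD by key
    println("distinct words: " + counts.count())   // count: action
    counts.collect().foreach(println)              // collect: bring results to the Driver
    sc.stop()
  }
}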
6. What are Spark's wide and narrow dependencies, and what role do they play?
- Wide dependency: the relationship between a parent RDD's partitions and a child RDD's partitions is one-to-many, which causes a shuffle
- Narrow dependency: the relationship between a parent RDD's partitions and a child RDD's partitions is one-to-one or many-to-one, so no shuffle is produced
Role: Spark uses wide dependencies (i.e. shuffle boundaries) to divide a job into stages (see the sketch below).
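A short sketch of how this plays out: map is a narrow dependency and stays in one stage, while reduceByKey introduces a wide dependency, so Spark splits the job into two stages at the shuffle. The stage boundary is visible as a ShuffledRDD in toDebugString (and in the web UI).

import org.apache.spark.{SparkConf, SparkContext}

object StageDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("stages"))
    val pairs = sc.parallelize(Seq("a", "b", "a", "c"))
      .map(word => (word, 1))            // narrow dependency: same stage
    val counts = pairs.reduceByKey(_ + _) // wide dependency: shuffle, new stage
    // The lineage printout shows a ShuffledRDD, i.e. the stage boundary
    println(counts.toDebugString)
    counts.collect().foreach(println)
    sc.stop()
  }
}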