IDEA view bytecode configuration
2022-07-02 09:25:00 【niceyz】
File - Settings - Tools - External Tools, add a new tool:

Name: show byte code
Program: $JDKPath$\bin\javap.exe
Arguments: -c $FileClass$
Working directory: $OutputPath$
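
For reference, the external tool simply runs javap on the compiled class of the file that is currently open. A roughly equivalent manual invocation, assuming a hypothetical class com.example.Demo whose .class file sits under the module's output directory, would be:
cd out/production/demo        # IDEA's compile output directory (hypothetical module name)
javap -c com.example.Demo     # prints the JVM bytecode of every method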

/********************** kafka **********************/
Kafka cluster
Point-to-point mode: the consumer needs a thread to keep polling the queue for messages.
Publish/subscribe mode: the broker pushes messages, so the push rate may not match the speed at which clients can consume.
Messages are stored categorized by topic.
Sender: producer
Receiver: consumer
A cluster has multiple instances; each instance (server) is called a broker.
Kafka relies on a ZooKeeper cluster to store metadata and to ensure system availability. Client requests are handled only by the partition Leader.
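A quick way to see that metadata (a side note, assuming the ZooKeeper installation referenced later in these notes) is the ZooKeeper CLI, which shows the znodes Kafka registers:
/opt/module/zookeeper-3.4.10/bin/zkCli.sh -server hadoop102:2181
ls /brokers/ids      # registered broker ids, e.g. [0, 1, 2]
ls /brokers/topics   # topics known to the cluster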
partition: a topic is split into partitions
Example Kafka cluster layout:
Broker1:
  TopicA (partition0) Leader
  TopicA (partition1) Follower
Broker2:
  TopicA (partition0) Follower
  TopicA (partition1) Leader
Broker3:
  Partition0 (message0, message1): a topic divided into a single partition
A Follower does not serve client requests; it only replicates data from the Leader.
Consumers within the same consumer group cannot consume the same partition at the same time.
Consumer group:
  ConsumerA
  ConsumerB
One consumer can consume more than one topic (a sketch of the consumer-group rule follows below).
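A minimal sketch of the consumer-group rule, assuming the topic first created later in these notes and a Kafka version whose console consumer accepts --bootstrap-server and --group (flag availability is an assumption): start two consumers in the same group, and each partition is delivered to only one of them.
# terminal 1
bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic first --group g1
# terminal 2, same group g1: the two consumers split the partitions between them
bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic first --group g1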
mkdir logs
cd config
To set up the cluster, modify server.properties:
broker.id=0
delete.topic.enable=true
log.dirs=/opt/module
zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181
cd ..
xsync kafka/
On each of the other two machines, edit server.properties and give it a unique id:
vi server.properties
broker.id=1   (second machine, e.g. hadoop103)
broker.id=2   (third machine, e.g. hadoop104)
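A small sketch of automating that last step, assuming Kafka is installed at /opt/module/kafka on hadoop103 and hadoop104 and that passwordless ssh is configured (both assumptions):
id=1
for host in hadoop103 hadoop104; do
    # rewrite the broker.id line in the remote copy of server.properties
    ssh "$host" "sed -i 's/^broker.id=.*/broker.id=$id/' /opt/module/kafka/config/server.properties"
    id=$((id + 1))
done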
Before starting Kafka, start ZooKeeper first:
zkstart.sh
Check whether ZooKeeper started successfully:
/opt/module/zookeeper-3.4.10/bin/zkServer.sh status
Mode: follower in the output indicates a successful start.
Start Kafka:
machine 1: bin/kafka-server-start.sh config/server.properties
machine 2: bin/kafka-server-start.sh config/server.properties
machine 3: bin/kafka-server-start.sh config/server.properties
Each machine needs to be started separately.
util.sh: a helper script to check the started processes on the three machines (a sketch of such a helper follows below).
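A minimal sketch of such a helper, assuming Kafka lives at /opt/module/kafka on hadoop102/103/104 and using the -daemon flag so kafka-server-start.sh detaches (paths and hostnames are assumptions):
#!/bin/bash
# start Kafka on every broker, then list the Kafka process on each host
for host in hadoop102 hadoop103 hadoop104; do
    ssh "$host" "/opt/module/kafka/bin/kafka-server-start.sh -daemon /opt/module/kafka/config/server.properties"
done
for host in hadoop102 hadoop103 hadoop104; do
    echo "==== $host ===="
    ssh "$host" "jps | grep Kafka"   # the broker appears as a Kafka process in jps
done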
Create a topic named first with 2 partitions and a replication factor of 2:
bin/kafka-topics.sh --create --zookeeper hadoop102:2181 --partitions 2 --replication-factor 2 --topic first
List the topics:
bin/kafka-topics.sh --list --zookeeper hadoop102:2181
Check the log
cd logs
With three machines, setting the number of replicas to 5 fails; the error says the replication factor can be at most 3.
Attempt to create a topic with 2 partitions and 5 replicas:
bin/kafka-topics.sh --create --zookeeper hadoop102:2181 --partitions 2 --replication-factor 5 --topic second
Start a producer, specifying which topic to send to:
bin/kafka-console-producer.sh --broker-list hadoop102:9092 --topic first
>hello
>yz
Start a consumer (cluster mode), here on machine hadoop103.
Specify which topic to consume; by default only the latest data is received, while --from-beginning consumes from the start:
bin/kafka-console-consumer.sh --zookeeper hadoop102:2181 --topic first --from-beginning
Use --bootstrap-server (pointing at a broker, port 9092) instead of --zookeeper to eliminate the deprecation warning:
bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic first --from-beginning
Data is stored per topic; on machine hadoop104, list the topics:
bin/kafka-topics.sh --list --zookeeper hadoop102:2181
__consumer_offsets: created automatically by the system once a consumer connects via --bootstrap-server; consumer offsets are saved into this topic locally.
first
View the details of a specific topic:
bin/kafka-topics.sh --zookeeper hadoop102:2181 --describe --topic first
Output columns: Partition, Leader (broker id), Replicas, Isr (in-sync replicas, used for leader election).
Topic:first Partition: 0 Leader: 0 Replicas: 0,2 Isr: 0,2
Topic:first Partition: 1 Leader: 1 Replicas: 1,0 Isr: 1,0
Isr: 0,2 (these replicas are the ones most closely in sync with the leader, ranked first; if the leader goes down, the next replica in the ISR takes over as Leader).
Whether a machine is the Leader: the Leader alone handles writes from producers, while Followers actively pull data from the Leader.
broker 0: Leader
broker 1: Follower
broker 2: Follower
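To watch the ISR-based failover described above (a sketch, not from the original notes): stop the broker that is currently the Leader for partition 0, then describe the topic again from another machine; the leader should move to the next replica in the ISR.
# on the machine running the current leader (broker 0 in the output above)
bin/kafka-server-stop.sh
# from any other machine
bin/kafka-topics.sh --zookeeper hadoop102:2181 --describe --topic first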
Delete the topic (if delete.topic.enable is not set to true, you will be prompted to set it):
bin/kafka-topics.sh --delete --zookeeper hadoop102:2181 --topic first
Create a new topic again:
1 partition, 3 replicas, with the specified topic name:
bin/kafka-topics.sh --create --zookeeper hadoop102:2181 --partitions 1 --replication-factor 3 --topic first