【GCN】《Adaptive Propagation Graph Convolutional Network》(TNNLS 2020)
2022-07-25 13:08:00 【chad_ lee】
《Adaptive Propagation Graph Convolutional Network》(TNNLS 2020)
A stopping unit is attached to each node; it outputs a value that controls whether propagation should continue to the next hop. During aggregation, the stopping unit's outputs serve as the weights of each hop. Intuitively, each node learns its own receptive field.

First, the node features pass through an MLP to become embeddings; this is the starting point of propagation. Recursive propagation then begins:
$$\begin{aligned} \mathbf{z}_{i}^{0} &= \mathbf{z}_{i}\\ \mathbf{z}_{i}^{1} &= \operatorname{propagate}\left(\left\{\mathbf{z}_{j}^{0} \mid j \in \mathcal{N}_{i}\right\}\right)\\ \mathbf{z}_{i}^{2} &= \operatorname{propagate}\left(\left\{\mathbf{z}_{j}^{1} \mid j \in \mathcal{N}_{i}\right\}\right) \\ &\;\;\vdots \end{aligned}$$
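A single propagation hop can be sketched as a multiplication by a normalized adjacency matrix, as in standard GCNs. This is a minimal illustration, assuming a precomputed normalized adjacency `A_hat` (the normalization scheme is an assumption, not spelled out here):

```python
import numpy as np

def propagate(A_hat, Z):
    """One propagation hop: each node mixes its neighbors' embeddings
    through the normalized adjacency A_hat (shape (N, N));
    Z holds the current node embeddings (shape (N, d))."""
    return A_hat @ Z

# Toy 2-node graph where each node only sees the other node.
A_hat = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
Z0 = np.array([[1.0, 0.0],
               [0.0, 1.0]])
Z1 = propagate(A_hat, Z0)  # after one hop, the two embeddings swap
```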
The number of propagation steps should be decided by each node itself, so a linear binary classifier is attached to each node as the "stopping unit" that governs the propagation process. After the $k$-th propagation iteration it outputs:
$$h_{i}^{k}=\sigma\left(\mathbf{Q} \mathbf{z}_{i}^{k}+q\right)$$
where $\mathbf{Q}$ and $q$ are trainable parameters, and $h_{i}^{k}$ is the probability (between 0 and 1) that this node should stop at the current iteration. Two techniques keep the number of propagation steps reasonable: a maximum step budget $T$, and halting values that define the boundary of propagation:
$$K_{i}=\min \left\{k^{\prime}: \sum_{k=1}^{k^{\prime}} h_{i}^{k} \ge 1-\epsilon\right\}$$
where $\epsilon$ is usually set to a small value such as 0.01, which guarantees that propagation can terminate even after a single step. For node $i$, propagation stops at iteration $k=K_{i}$.
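The rule above can be sketched per node: accumulate the halting values until they reach $1-\epsilon$, capping at the budget $T$. This is a sketch under the assumption that the halting values for one node are already available as a list (in practice they come from the sigmoid unit above):

```python
def halting_steps(h, T, eps=0.01):
    """Return K_i for a single node: the first step k' at which the
    cumulative halting values h[0..k'-1] reach 1 - eps, capped at T.
    h: list of halting values for iterations 1..T, each in (0, 1)."""
    total = 0.0
    for k, hk in enumerate(h, start=1):
        total += hk
        if total >= 1.0 - eps:
            return k
    return T  # budget exhausted before the threshold was reached

h = [0.3, 0.5, 0.4, 0.2]   # example halting values for one node
K = halting_steps(h, T=4)  # 0.3 + 0.5 + 0.4 >= 0.99, so K = 3
```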
The stopping probability of node $i$ at each iteration is then:
$$p_{i}^{k}= \begin{cases}R_{i}=1-\sum_{k^{\prime}=1}^{K_{i}-1} h_{i}^{k^{\prime}}, & \text{if } k=K_{i} \text{ or } k=T \\ h_{i}^{k}, & \text{otherwise.}\end{cases}$$
These probabilities naturally serve as the weights with which a node aggregates each layer's embedding:
$$\widehat{\mathbf{z}}_{i}=\frac{1}{K_{i}} \sum_{k=1}^{K_{i}} \left[ p_{i}^{k} \mathbf{z}_{i}^{k}+\left(1-p_{i}^{k}\right) \mathbf{z}_{i}^{k-1} \right]$$
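The weighting and aggregation for one node can be sketched as follows, assuming the ACT-style stopping probabilities ($p_i^k = h_i^k$ before step $K_i$, and the remainder $R_i$ at step $K_i$) and the per-hop embeddings $\mathbf{z}_i^0,\dots,\mathbf{z}_i^{K_i}$ already computed; names and shapes are illustrative:

```python
import numpy as np

def halting_probs(h, K):
    """Stopping probabilities for one node: p^k = h^k for k < K,
    and the remainder R = 1 - sum_{k<K} h^k at k = K."""
    R = 1.0 - sum(h[:K - 1])
    return [h[k - 1] if k < K else R for k in range(1, K + 1)]

def aggregate(Zs, p, K):
    """z_hat = (1/K) * sum_k [ p^k z^k + (1 - p^k) z^{k-1} ],
    where Zs[0] is the pre-propagation embedding z^0."""
    out = np.zeros_like(Zs[0])
    for k in range(1, K + 1):
        out += p[k - 1] * Zs[k] + (1.0 - p[k - 1]) * Zs[k - 1]
    return out / K

# 1-dimensional toy embeddings for hops 0..3 of a single node.
Zs = [np.array([0.0]), np.array([1.0]), np.array([2.0]), np.array([3.0])]
p = halting_probs([0.3, 0.5, 0.4], K=3)  # -> [0.3, 0.5, 0.2]
z_hat = aggregate(Zs, p, K=3)
```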
A propagation cost is also defined for node $i$:
$$\mathcal{S}_{i}=K_{i}+R_{i}$$
The final loss combines the supervised signal with a penalty regularization term:
$$\widehat{\mathcal{L}}=\mathcal{L}+\alpha \sum_{i \in \mathcal{V}} \mathcal{S}_{i}$$
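The regularized loss is a one-liner; this sketch assumes $K_i$ and $R_i$ have been collected per node, and the $\alpha$ value is illustrative, not the paper's setting:

```python
def regularized_loss(task_loss, K_list, R_list, alpha=0.01):
    """Total loss: supervised loss plus alpha * sum_i (K_i + R_i),
    penalizing nodes that keep propagating for many steps."""
    penalty = sum(K + R for K, R in zip(K_list, R_list))
    return task_loss + alpha * penalty

# Two nodes: S = (3 + 0.2) + (2 + 0.1) = 5.3; loss = 0.7 + 0.01 * 5.3
loss = regularized_loss(0.7, K_list=[3, 2], R_list=[0.2, 0.1], alpha=0.01)
```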
This penalty term controls how "hard" it is for information to spread over the graph. The penalty term is optimized once every 5 steps.
The distribution of stopping steps learned by AP-GCN matches intuition: receptive fields are generally larger on sparse graphs, while on dense graphs nodes typically aggregate only 1–2 hops of neighbors.