【GCN】《Adaptive Propagation Graph Convolutional Network》(TNNLS 2020)
2022-07-25 13:08:00 【chad_lee】
Each node is assigned a halting unit that outputs a value deciding whether propagation should continue for another hop. During aggregation, the halting unit's outputs serve as the weights of each hop. Intuitively, every node learns its own receptive field.
First, each node's features are passed through an MLP to obtain an embedding, which serves as the starting point of propagation; recursive propagation then begins:
$$
\begin{aligned}
&\mathbf{z}_{i}^{0}=\mathbf{z}_{i}\\
&\mathbf{z}_{i}^{1}=\operatorname{propagate}\left(\left\{\mathbf{z}_{j}^{0} \mid j \in \mathcal{N}_{i}\right\}\right)\\
&\mathbf{z}_{i}^{2}=\operatorname{propagate}\left(\left\{\mathbf{z}_{j}^{1} \mid j \in \mathcal{N}_{i}\right\}\right)\\
&\;\;\vdots
\end{aligned}
$$
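As a minimal illustration of the recursion above, the sketch below implements `propagate` as a plain mean over neighbour embeddings on a toy triangle graph (the paper uses a normalised-adjacency GCN step; the mean is a simplification for readability):

```python
def propagate(z_prev, neighbors):
    """One propagation hop: each node averages its neighbours' embeddings."""
    z_next = []
    for i in range(len(z_prev)):
        agg = [0.0] * len(z_prev[i])
        for j in neighbors[i]:
            for d in range(len(agg)):
                agg[d] += z_prev[j][d]
        z_next.append([v / len(neighbors[i]) for v in agg])
    return z_next

# Toy graph: 3 nodes in a triangle, 2-dimensional embeddings.
neighbors = [[1, 2], [0, 2], [0, 1]]
z0 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # z^0: output of the MLP
z1 = propagate(z0, neighbors)              # z^1
z2 = propagate(z1, neighbors)              # z^2
```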
The number of propagation steps should be decided by each node itself, so a linear binary classifier is attached to each node as the "halting unit" that governs the propagation process. After the $k$-th propagation iteration it outputs:
$$h_{i}^{k}=\sigma\left(\mathbf{Q} \mathbf{z}_{i}^{k}+q\right)$$
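A hedged sketch of this halting unit: a linear classifier followed by a sigmoid. `Q` and `q` stand in for the trainable parameters; the values below are arbitrary toy numbers, not learned ones.

```python
import math

def halting_value(z, Q, q):
    """h = sigmoid(Q·z + q): probability that the node should stop here."""
    return 1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(Q, z)) + q)))

# With a zero pre-activation the unit is maximally uncertain: h = 0.5.
h = halting_value([0.0, 0.0], Q=[1.0, 1.0], q=0.0)
```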
where $\mathbf{Q}$ and $q$ are trainable parameters and $h_{i}^{k} \in (0, 1)$ is the probability that this node should stop at the current iteration. Two techniques ensure a reasonable number of propagation steps: a maximum step count $T$ is specified, and the halting values define the boundary of propagation:
$$K_{i}=\min \left\{k^{\prime}: \sum_{k=1}^{k^{\prime}} h_{i}^{k} \geq 1-\epsilon\right\}$$
where $\epsilon$ is usually set to a small value such as 0.01, which makes termination possible even after a single propagation step. For node $i$, propagation stops at the iteration where $k = K_{i}$.
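The stopping rule above can be sketched directly: accumulate the halting values until the budget $1-\epsilon$ is reached, falling back to the hard cap $T$. The defaults below mirror the text ($\epsilon = 0.01$); the cap value is an illustrative choice.

```python
def stopping_step(h, eps=0.01, T=10):
    """Smallest k' with sum_{k<=k'} h^k >= 1 - eps, capped at T steps."""
    total = 0.0
    for k, hk in enumerate(h[:T], start=1):
        total += hk
        if total >= 1.0 - eps:
            return k
    return T  # budget never reached: stop at the maximum number of steps

# Halting values 0.3, 0.4, 0.4 cross the 0.99 budget at step 3.
K = stopping_step([0.3, 0.4, 0.4, 0.2])
```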
The stopping probability of node $i$ at each iteration is:
$$p_{i}^{k}= \begin{cases} R_{i}=1-\sum_{k^{\prime}=1}^{K_{i}-1} h_{i}^{k^{\prime}}, & \text{if } k=K_{i} \text{ or } k=T \\ h_{i}^{k}, & \text{otherwise.} \end{cases}$$
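A minimal sketch of these stopping probabilities, assuming the distribution is $h_i^k$ for $k < K_i$ with the remainder $R_i$ assigned to the final step so that the weights sum to one:

```python
def stopping_probs(h, K):
    """p^k = h^k for k < K; p^K = R = 1 - sum_{k<K} h^k."""
    R = 1.0 - sum(h[:K - 1])
    return [h[k] for k in range(K - 1)] + [R]

# Halting values 0.3, 0.4 leave a remainder R = 0.3 at the final step.
p = stopping_probs([0.3, 0.4, 0.4], K=3)
```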
These probabilities naturally serve as the weights with which a node aggregates the embeddings of each layer:
$$\widehat{\mathbf{z}}_{i}=\frac{1}{K_{i}} \sum_{k=1}^{K_{i}} p_{i}^{k} \mathbf{z}_{i}^{k}+\left(1-p_{i}^{k}\right) \mathbf{z}_{i}^{k-1}$$
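A hedged sketch of this aggregation for a single node: hop $k$ contributes $p^k$ of the new embedding $\mathbf{z}^k$ and $1-p^k$ of the previous one $\mathbf{z}^{k-1}$, averaged over the $K$ hops. The toy embeddings are arbitrary one-dimensional values.

```python
def aggregate(zs, p, K):
    """z_hat = (1/K) * sum_{k=1}^{K} [p^k * z^k + (1 - p^k) * z^{k-1}]."""
    dim = len(zs[0])
    out = [0.0] * dim
    for k in range(1, K + 1):
        for d in range(dim):
            out[d] += p[k - 1] * zs[k][d] + (1.0 - p[k - 1]) * zs[k - 1][d]
    return [v / K for v in out]

# One-dimensional toy: embeddings 0, 1, 2 across hops 0..2, weights 0.4, 0.6.
z_hat = aggregate([[0.0], [1.0], [2.0]], p=[0.4, 0.6], K=2)
```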
A propagation cost is also defined for node $i$:
$$\mathcal{S}_{i}=K_{i}+R_{i}$$
The final loss combines the supervised signal with a penalty regularization term:
$$\widehat{\mathcal{L}}=\mathcal{L}+\alpha \sum_{i \in \mathcal{V}} \mathcal{S}_{i}$$
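Putting the cost and penalty together, a minimal sketch of the regularized loss; the task loss, the $K_i$/$R_i$ values, and $\alpha$ below are all arbitrary illustrative numbers, not the paper's settings:

```python
def total_loss(task_loss, Ks, Rs, alpha=0.005):
    """L_hat = L + alpha * sum_i S_i, with S_i = K_i + R_i."""
    return task_loss + alpha * sum(K + R for K, R in zip(Ks, Rs))

# Two nodes: S = (3 + 0.3) + (2 + 0.5) = 5.8, so L_hat = 0.7 + 0.1 * 5.8.
L_hat = total_loss(0.7, Ks=[3, 2], Rs=[0.3, 0.5], alpha=0.1)
```

A larger $\alpha$ makes extra propagation steps more expensive, pushing nodes toward smaller receptive fields.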
This penalty term controls how "hard" it is for information to spread over the graph. The penalty term is optimized once every 5 steps.
The distribution of stopping steps learned by AP-GCN is intuitive: on sparse graphs the receptive fields are generally larger, while on dense graphs nodes typically aggregate only 1–2 hops of neighbors.