Self-supervised Heterogeneous Graph Neural Network with Co-contrastive Learning
2022-07-06 18:30:00 【kc7w91】
Paper reading notes. The model proposed in the paper is known as HeCo.
I. Key Points
Heterogeneous graph: a graph that contains multiple types of nodes and edges (see any quick primer on heterogeneous graphs for the basics).
Dual views: a network schema view and a meta-path view; the node representations obtained from the two views are fused through contrastive learning.
Self-supervision: no label information is required; the data itself serves as the supervision signal. Self-supervised learning can be divided into contrastive learning and generative learning. This paper uses contrastive learning, whose core idea is to contrast positive samples against negative samples in feature space and thereby learn feature representations of the samples.
II. The Two Views
Network schema, shown in Figure 1(b): describes the relations between nodes of different types.
Meta-path, shown in Figure 1(c): multiple meta-paths can be defined, e.g., paper-author-paper and paper-subject-paper.
III. Model Definition
1. Preprocessing
Node features x of different types are projected into the same latent space so that all feature dimensions align; the projected feature is denoted h.
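This step was shown as an image in the original post. Up to notation, the projection in the paper is a per-type linear map followed by an activation σ, where W_{φ_i} and b_{φ_i} are the parameters for node type φ_i:

$$h_i = \sigma\left(W_{\varphi_i} \cdot x_i + b_{\varphi_i}\right)$$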
2. Network schema
For a paper node, under the network schema view we learn embeddings from its neighbors of two types, authors and subjects. Different types contribute differently to the current node, and this weight is learned autonomously by the model through an attention mechanism similar to GAT. Each type contains multiple nodes whose importance also differs, so the importance of each node within a type is likewise learned in a GAT-style way.
Intra-type node importance α (node-level attention):
where φ_m denotes the node type (there are m types in total) and a is a learnable attention vector.
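Up to notation, this node-level attention from the paper takes the following form, where N_i^{Φ_m} is the sampled set of type-Φ_m neighbors of node i and ∥ denotes concatenation:

$$\alpha_{i,j}^{\Phi_m} = \frac{\exp\left(\mathrm{LeakyReLU}\left(a_{\Phi_m}^{\top}[h_i \,\|\, h_j]\right)\right)}{\sum_{l \in N_i^{\Phi_m}} \exp\left(\mathrm{LeakyReLU}\left(a_{\Phi_m}^{\top}[h_i \,\|\, h_l]\right)\right)}$$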
Neighbor information is aggregated via node-level attention:
Note: not all neighbors are used; the neighbor set is sampled.
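Up to notation, the aggregation is an attention-weighted sum over the sampled neighbors, passed through an activation:

$$h_i^{\Phi_m} = \sigma\left(\sum_{j \in N_i^{\Phi_m}} \alpha_{i,j}^{\Phi_m} \cdot h_j\right)$$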
The embeddings obtained for the different types are then fused.
Type-level importance β (type-level attention):
where W and b are learnable parameters.
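Up to notation, the paper scores each type by averaging an attention value over all target nodes and normalizing with a softmax (a_{sc} is a learnable type-level attention vector, S the number of types, V the set of target nodes):

$$w^{\Phi_m} = \frac{1}{|V|}\sum_{i \in V} a_{sc}^{\top}\,\tanh\!\left(W h_i^{\Phi_m} + b\right), \qquad \beta^{\Phi_m} = \frac{\exp\left(w^{\Phi_m}\right)}{\sum_{s=1}^{S}\exp\left(w^{\Phi_s}\right)}$$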
The per-type embeddings are aggregated via type-level attention:
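The fused network-schema-view embedding is then, up to notation, the β-weighted sum of the per-type embeddings:

$$z_i^{sc} = \sum_{m=1}^{S} \beta^{\Phi_m} \cdot h_i^{\Phi_m}$$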
3. Meta-path
Each meta-path induces a homogeneous graph. On each induced graph, a GCN produces a preliminary representation h of every node:
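Up to notation, the paper's meta-path GCN is a degree-normalized propagation over the graph induced by meta-path P_n (d_i is the degree of node i in that graph):

$$h_i^{\mathcal{P}_n} = \frac{1}{d_i + 1}\,h_i + \sum_{j \in N_i^{\mathcal{P}_n}} \frac{1}{\sqrt{(d_i+1)(d_j+1)}}\,h_j$$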
The node representations obtained under the different meta-paths are then fused (similar to GAT, via semantic-level attention):
where W and b are learnable parameters, and β_{P_n} is the importance of the graph induced by meta-path P_n.
Weighting by these importances yields the node embedding under the meta-path view.
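Up to notation, the semantic-level attention mirrors the type-level attention above (a_{mp} is a learnable semantic attention vector, M the number of meta-paths), and the meta-path-view embedding is the weighted sum:

$$w^{\mathcal{P}_n} = \frac{1}{|V|}\sum_{i \in V} a_{mp}^{\top}\tanh\!\left(W h_i^{\mathcal{P}_n} + b\right), \quad \beta^{\mathcal{P}_n} = \frac{\exp\left(w^{\mathcal{P}_n}\right)}{\sum_{s=1}^{M}\exp\left(w^{\mathcal{P}_s}\right)}, \quad z_i^{mp} = \sum_{n=1}^{M} \beta^{\mathcal{P}_n} \cdot h_i^{\mathcal{P}_n}$$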
4. Masking
In the network schema view, a node does not aggregate its own information (it is masked); in the meta-path view, the information of the intermediate nodes along each path is not aggregated. This separates, by type, the nodes connected to the current paper and avoids double counting.
5. Contrastive learning
To compare the node embeddings drawn from the two views, they are first mapped into a common space by an MLP:
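Up to notation, this projection head is a one-hidden-layer MLP applied to the embedding from either view:

$$z_{i\_proj} = W^{(2)}\,\sigma\!\left(W^{(1)} z_i + b^{(1)}\right) + b^{(2)}$$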
Next, the contrastive objective. In contrastive learning, small variants of the same thing (positive samples) should be highly similar, while essentially different things (negative samples) should have low similarity. The positive and negative samples in this paper are defined as follows:
Positive: node pairs connected by many meta-paths (emphasizing the importance of edges).
Negative: all other node pairs.
Take the loss of node i under the network schema view as an example. Candidate nodes are sorted in descending order by the number of meta-paths connecting them to i, and a threshold splits them into positive and negative samples. After the split, the contrastive loss under the network schema view is as follows (the meta-path view is analogous):
Here sim is the cosine similarity between two vectors. Since this is the loss from the network schema view, the target embedding z_i^{sc} is the one from the network schema view, while the positive- and negative-sample embeddings come from the meta-path view. The loss shrinks as the similarity terms for positive samples grow and those for negative samples shrink.
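Up to notation, with P_i and N_i the positive and negative sample sets of node i and τ a temperature, the network-schema-view loss is:

$$\mathcal{L}_i^{sc} = -\log \frac{\sum_{j \in \mathbb{P}_i} \exp\!\left(\mathrm{sim}\!\left(z_{i\_proj}^{sc},\, z_{j\_proj}^{mp}\right)/\tau\right)}{\sum_{k \in \mathbb{P}_i \cup \mathbb{N}_i} \exp\!\left(\mathrm{sim}\!\left(z_{i\_proj}^{sc},\, z_{k\_proj}^{mp}\right)/\tau\right)}$$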
The losses from the two views are balanced:
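Up to notation, the overall objective is a λ-weighted combination of the two per-view losses, averaged over the target nodes:

$$\mathcal{J} = \frac{1}{|V|}\sum_{i \in V}\left[\lambda\,\mathcal{L}_i^{sc} + (1-\lambda)\,\mathcal{L}_i^{mp}\right]$$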
6. Model extensions
Hard-to-distinguish negative samples are very helpful for improving a contrastive model, so new negative-sample generation strategies are introduced: a GAN-based generator and mixup. A rough sketch of the mixup idea follows.
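As an illustration only, not the paper's exact procedure: harder negatives can be synthesized by convexly combining the embeddings of the hardest existing negatives. The function name mixup_hard_negatives and the parameters top_k and alpha below are hypothetical choices.

```python
import torch
import torch.nn.functional as F

def mixup_hard_negatives(anchor, negatives, top_k=8, alpha=0.4):
    """Synthesize harder negatives by mixing the top-k hardest ones.

    anchor:    (d,) projected embedding of the target node.
    negatives: (n, d) projected embeddings of its negative samples.
    Returns:   (top_k, d) synthetic negative embeddings.
    """
    # Hardness = cosine similarity to the anchor (higher = harder).
    sims = F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=-1)
    hard = negatives[sims.topk(top_k).indices]  # (top_k, d)

    # Mix each hard negative with a randomly permuted partner,
    # using a Beta-distributed mixing coefficient as in mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample((top_k, 1))
    perm = torch.randperm(top_k)
    return lam * hard + (1.0 - lam) * hard[perm]
```

The synthetic embeddings can simply be appended to the negative set N_i before computing the contrastive loss.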