[Reading notes] Graph contrastive learning: GNN + CL
2022-07-05 09:16:00 【Virgo programmer's friend】
Source: https://mp.weixin.qq.com/s/X7gxlcY-PaQ97MiEJmKfbg
Given a large amount of unlabeled graph data, a graph contrastive learning algorithm aims to train a graph encoder, which at present generally means a graph neural network (GNN). The graph representation vectors produced by this GNN encoder should preserve the characteristics of the graph data well.
Graph Contrastive Learning with Augmentations. NeurIPS 2020.
Algorithm steps :
1. Randomly sample a batch of graphs.
2. Apply random data augmentation to each graph twice (e.g., adding/deleting edges, dropping nodes) to obtain two new views.
3. Encode each view with the GNN being trained, obtaining node representation vectors and graph representation vectors.
4. Compute the InfoNCE loss over these representation vectors, so that views augmented from the same graph are pulled close together while views augmented from different graphs are pushed apart. 【features can be augmented too】
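The four steps above can be sketched end to end with a toy edge-dropping augmentation and a NumPy InfoNCE loss. Everything here is illustrative: the `drop_edges` heuristic and the random vectors standing in for GNN outputs are assumptions for the sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(edges, p=0.2):
    """Step 2 (one heuristic augmentation): randomly delete a fraction p of edges."""
    mask = rng.random(len(edges)) > p
    return [e for e, keep in zip(edges, mask) if keep]

def info_nce(z1, z2, tau=0.5):
    """Step 4: InfoNCE over a batch, where z1[i] and z2[i] are two views of graph i."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # pairwise cosine similarities, temperature-scaled
    # positives sit on the diagonal; every other column acts as a negative
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
view_edges = drop_edges(edges)  # a perturbed view of the toy graph

# toy "graph representations" standing in for the GNN outputs of step 3
z_view1 = rng.normal(size=(8, 16))
z_view2 = z_view1 + 0.01 * rng.normal(size=(8, 16))  # a closely aligned second view
loss_aligned = info_nce(z_view1, z_view2)
loss_random = info_nce(z_view1, rng.normal(size=(8, 16)))
print(loss_aligned < loss_random)  # aligned views yield a lower contrastive loss
```

Views from the same graph produce a dominant diagonal in the similarity matrix, so their loss is lower than for unrelated embeddings, which is exactly the pull/push behavior step 4 describes.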
【Heuristic graph data augmentation】 Passing graph data through a GNN yields representation vectors at two levels: node representations and graph representations. Contrastive Multi-View Representation Learning on Graphs (ICML 2020) designs experiments to compare contrast at the different levels, and finds that contrasting node representations against graph representations achieves the best results.
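The node-vs-graph contrast this paper favors can be sketched with a dot-product discriminator between node embeddings and a mean-pooled graph embedding. The toy embeddings and the mean-pooling readout are assumptions for illustration, not the actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

def local_global_scores(node_reprs, graph_repr):
    """Dot-product discriminator: score each node embedding against a pooled
    graph embedding. Training pushes scores up for the graph's own nodes
    (positives) and down for nodes from other graphs (negatives)."""
    return node_reprs @ graph_repr

# toy node embeddings for two different graphs
nodes_a = rng.normal(loc=1.0, size=(5, 8))
nodes_b = rng.normal(loc=-1.0, size=(5, 8))
graph_a = nodes_a.mean(axis=0)  # mean-pooled graph-level representation

pos = local_global_scores(nodes_a, graph_a)  # own nodes: should score high
neg = local_global_scores(nodes_b, graph_a)  # foreign nodes: should score low
print(pos.mean() > neg.mean())
```

Contrasting across levels like this forces the graph representation to summarize its own nodes while staying discriminative against nodes of other graphs.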
【Learned graph data augmentation】 JOAO: through adversarial training, iteratively learns the probability matrix for selecting each data augmentation method 【semi-automatic】, and correspondingly replaces the projection head in GraphCL. Experiments show that the adversarially learned probability matrix follows a trend similar to GraphCL's earlier results on manual augmentation selection, and achieves competitive results without much manual intervention.
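The flavor of JOAO's adversarial outer loop can be sketched as a multiplicative-weights update that shifts sampling probability toward whichever augmentation the current encoder finds hardest. The augmentation names, the fixed loss values, and this exact update rule are simplified stand-ins for the paper's alternating min-max optimization:

```python
import numpy as np

# hypothetical per-augmentation contrastive losses under the current encoder
aug_names = ["drop_node", "drop_edge", "subgraph", "attr_mask"]
aug_losses = np.array([0.9, 1.4, 1.1, 0.7])

def joao_style_update(prior, losses, gamma=0.1):
    """One mirror-descent-style step: multiply each augmentation's probability
    by exp(gamma * loss), then renormalize, so mass drifts toward the
    augmentations the encoder currently struggles with most."""
    logits = np.log(prior) + gamma * losses
    p = np.exp(logits - logits.max())
    return p / p.sum()

prob = np.full(4, 0.25)  # start from uniform sampling over augmentations
for _ in range(50):
    prob = joao_style_update(prob, aug_losses)
print(aug_names[prob.argmax()])  # → drop_edge (the highest-loss augmentation)
```

In the real method the encoder is retrained between updates, so the loss vector changes each round instead of staying fixed as it does in this sketch.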
【Fully automatic】 Automatically learn the distribution of perturbations applied to the graph during data augmentation. In Adversarial Graph Augmentation to Improve Graph Contrastive Learning, the authors start from the question of how augmentation should preserve a graph's information, and challenge the assumption that more mutual information between the two augmented views is always better, since that mutual information may contain a lot of noise. Introducing the Information Bottleneck principle, they argue that good views should jointly preserve the essential characteristics of the graph while keeping the mutual information between each other minimal; that is, training should learn augmentations that retain the necessary information in the graph while reducing noise. Based on this principle, the authors design a new min-max training scheme and train a neural network to decide whether each edge should be deleted during augmentation. 【a pruning strategy?】
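A toy version of the bottleneck trade-off can be written per edge: keep an edge if the information it preserves about the graph outweighs the redundant mutual information it adds between the two views. The per-edge scores, the linear objective, and the update rule below are all invented for illustration; the paper learns this decision with a neural network inside a min-max game.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical 4-edge graph with two hand-made scores per edge:
#   mi_cost  - redundant view-vs-view mutual information the edge carries
#   task_val - information about the graph itself that the edge preserves
mi_cost = np.array([1.0, 1.0, 0.1, 0.1])
task_val = np.array([2.0, 2.0, 0.0, 0.0])  # the first two edges are essential

theta = np.zeros(4)  # logits of the per-edge keep probability
for _ in range(200):
    # simplified bottleneck update: raise the keep-logit of an edge when the
    # information it preserves outweighs the redundancy it adds, lower it otherwise
    theta += 0.5 * (task_val - mi_cost)

p_keep = sigmoid(theta)
print(np.round(p_keep, 2))  # → [1. 1. 0. 0.]: essential edges kept, noisy ones cut
```

The resulting keep-probabilities mirror the paper's intent: minimize shared information between views while refusing to discard what characterizes the graph, which is why the note-taker's "pruning strategy?" reading is apt.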
————————————————
Copyright notice: this is an original article by CSDN blogger 「Amber_7422」, released under the CC 4.0 BY-SA license. For reprints, please attach the original source link and this statement.
Original link: https://blog.csdn.net/Amber_7422/article/details/123773606