
[Reading Notes] Graph Contrastive Learning (GNN + CL)


Source: https://mp.weixin.qq.com/s/X7gxlcY-PaQ97MiEJmKfbg

Given a large amount of unlabeled graph data, a graph contrastive learning algorithm aims to train a graph encoder, which nowadays usually means a graph neural network (GNN), so that the graph representation vectors produced by this GNN preserve the characteristics of the graph data well.

 

Graph Contrastive Learning with Augmentations (GraphCL). NeurIPS 2020.

Algorithm steps (a loss sketch follows the list):

1. Randomly sample a batch of graphs.

2. Apply random data augmentation twice to each graph (e.g., adding/deleting edges, dropping nodes) to obtain two new views.

3. Encode the views with the GNN being trained, obtaining node representation vectors (node representations) and graph representation vectors (graph representations).

4. Compute the InfoNCE loss on the representation vectors above, so that views augmented from the same graph are pulled close to each other while views augmented from different graphs are pushed far apart; see the sketch below. 【feature augmentation】
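
To make step 4 concrete, here is a minimal PyTorch sketch of the InfoNCE (NT-Xent) loss over a batch of graph representation vectors, assuming the two views of each graph have already been encoded as z1 and z2; the function name and setup are illustrative, not GraphCL's actual API.

```python
# Minimal sketch of the InfoNCE / NT-Xent loss from step 4, assuming the two
# augmented views of each graph in the batch are already encoded as z1, z2
# (shape [batch_size, dim]). Names are illustrative, not GraphCL's actual API.
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    batch_size = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2B, dim], unit norm
    sim = z @ z.t() / temperature                        # cosine similarity matrix [2B, 2B]
    # A view must never be treated as its own positive or negative.
    mask = torch.eye(2 * batch_size, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))
    # Row i's positive is the other view of the same graph (index i + B or i - B).
    targets = torch.cat([torch.arange(batch_size, 2 * batch_size),
                         torch.arange(batch_size)]).to(z.device)
    # Cross-entropy pulls each positive pair together, pushes all other views apart.
    return F.cross_entropy(sim, targets)
```

With a graph encoder gnn, one training step would compute loss = info_nce_loss(gnn(view1), gnn(view2)) and backpropagate as usual.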

【Heuristic graph data augmentation】 Passing graph data through a GNN produces representation vectors at two levels: node representations and graph representations. Contrastive Multi-View Representation Learning on Graphs (ICML 2020) designs experiments to compare contrasting at the different levels, and finds that contrasting node representations against graph representations achieves the best results; a sketch of this local-global contrast follows. wuhu~
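
A minimal sketch of this node-vs-graph (local-global) contrast: node embeddings from one view are scored against the pooled graph embedding of the other view with a bilinear discriminator, a common choice borrowed from Deep Graph Infomax; the module name and loss form are assumptions for illustration, not the paper's exact implementation.

```python
# Sketch of node-vs-graph (local-global) contrast: node embeddings from one
# view are scored against the pooled graph embedding of the other view with a
# bilinear discriminator (a Deep-Graph-Infomax-style choice, assumed here).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalGlobalScorer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)  # score(node, graph) -> logit

    def forward(self, node_emb: torch.Tensor, graph_emb: torch.Tensor) -> torch.Tensor:
        # node_emb: [num_nodes, dim]; graph_emb: [dim] summary of the other view
        g = graph_emb.repeat(node_emb.size(0), 1)
        return self.bilinear(node_emb, g).squeeze(-1)  # [num_nodes]

# Positives: nodes of view 1 vs. the graph summary of view 2 of the SAME graph;
# negatives: the same nodes vs. the summary of a different (corrupted) graph.
scorer = LocalGlobalScorer(dim=64)
pos = scorer(torch.randn(10, 64), torch.randn(64))
neg = scorer(torch.randn(10, 64), torch.randn(64))
loss = F.binary_cross_entropy_with_logits(
    torch.cat([pos, neg]), torch.cat([torch.ones(10), torch.zeros(10)]))
```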

【Learned graph data augmentation】 JOAO: through adversarial training, iteratively learns the probability matrix for selecting each data augmentation method 【semi-automatic】, and correspondingly replaces the projection head in GraphCL. Experiments show that the probability matrix obtained by adversarial training follows a trend similar to GraphCL's earlier experimental findings on augmentation selection, and achieves competitive results without much manual intervention; a toy sketch of the adversarial update follows.
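
A toy sketch of the adversarial selection idea, under simplifying assumptions: keep a probability vector over candidate augmentations and shift it toward those that currently yield a higher contrastive loss (the max step), while the encoder separately minimizes that loss (the min step). The update rule below is an illustration, not JOAO's exact optimization.

```python
# Toy sketch of JOAO-style adversarial augmentation selection: raise the
# sampling probability of augmentations that currently give a larger
# contrastive loss (max step), while the encoder is trained elsewhere to
# reduce that loss (min step). The update rule is a simplified illustration.
import torch

augmentations = ["drop_node", "perturb_edge", "mask_attr", "subgraph"]
probs = torch.full((len(augmentations),), 1.0 / len(augmentations))

def adversarial_update(probs: torch.Tensor, losses: torch.Tensor, lr: float = 0.1) -> torch.Tensor:
    # Gradient-ascent-flavoured step in log space, then softmax back onto the simplex.
    return torch.softmax(probs.log() + lr * losses, dim=0)

# e.g. per-augmentation contrastive losses measured with the current encoder:
losses = torch.tensor([0.9, 1.4, 1.1, 0.7])
probs = adversarial_update(probs, losses)      # "perturb_edge" is now sampled more often
idx = torch.multinomial(probs, num_samples=1)  # pick the next augmentation to apply
```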

【Fully automatic】 Automatically learn the distribution of the perturbations applied to the graph during data augmentation. Adversarial Graph Augmentation to Improve Graph Contrastive Learning starts from the question of which information of a graph the augmentation should preserve, and challenges the assumption that the more mutual information between the two augmented views the better, since that mutual information may contain a lot of noise. The authors introduce the Information Bottleneck principle: good views should jointly preserve the essential characteristics of the graph itself while having as little mutual information with each other as possible. That is, training should learn augmentations that retain the necessary information of the graph while reducing noise. Based on this principle, the authors design a new min-max-game training scheme and train a neural network to decide, during augmentation, whether each edge is deleted. 【pruning strategy?】 A sketch of such a learnable edge-dropper follows.
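
A minimal sketch of a learnable edge-dropper in this spirit: a small MLP maps each edge's endpoint embeddings to a keep-probability, sampled with a relaxed-Bernoulli (Gumbel) trick so the deletion decision stays differentiable; in the min-max game the augmenter's objective takes the opposite sign of the encoder's. All names are illustrative assumptions, not the paper's code.

```python
# Sketch of a learnable edge-dropper in the AD-GCL spirit: an MLP turns each
# edge's endpoint embeddings into a keep-probability, sampled with a relaxed
# Bernoulli (Gumbel) trick so the edge-deletion decision stays differentiable.
# Module and variable names are illustrative assumptions.
import torch
import torch.nn as nn

class EdgeDropper(nn.Module):
    def __init__(self, dim: int, temperature: float = 1.0):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.temperature = temperature

    def forward(self, node_emb: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # edge_index: [2, num_edges]; one feature vector per edge from its endpoints
        src, dst = edge_index
        logits = self.mlp(torch.cat([node_emb[src], node_emb[dst]], dim=1)).squeeze(-1)
        # Relaxed Bernoulli sample: soft keep-mask in (0, 1), differentiable
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log(1 - u)
        return torch.sigmoid((logits + noise) / self.temperature)  # [num_edges]

# Min-max training alternates (illustrative):
#   encoder step:   minimize the contrastive loss on the two views;
#   augmenter step: maximize it (i.e., minimize view mutual information),
#                   optionally regularized toward keeping most edges.
```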
Original article by CSDN blogger 「Amber_7422」, licensed under CC 4.0 BY-SA; for reprints please include the original source link and this statement.
Original link: https://blog.csdn.net/Amber_7422/article/details/123773606
