SuperGAT

Official implementation of Self-supervised Graph Attention Networks (SuperGAT). This model is presented in How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision at the International Conference on Learning Representations (ICLR), 2021.

Notice

The documented SuperGATConv layer with an example has been merged into PyTorch Geometric's main branch.

This repository is based on torch==1.4.0+cu100 and torch-geometric==1.4.3, which are somewhat outdated at this point (Feb 2021). If you are using a recent PyTorch/CUDA/PyG stack, we recommend using PyG's SuperGATConv instead. If you want to run the code in this repository, please follow the Installation section below.
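
If you go that route, the sketch below shows a minimal two-layer SuperGAT on Cora using the upstream SuperGATConv. It is not part of this repository; the argument names (attention_type, edge_sample_ratio, is_undirected), the get_attention_loss() helper, and the 4.0 attention-loss weight follow the upstream PyG documentation and example as we recall them, so verify them against your installed PyG version.

# Minimal sketch (not from this repository): two-layer SuperGAT on Cora
# with PyG's upstream SuperGATConv. Verify argument names against your PyG version.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import SuperGATConv

dataset = Planetoid(root="data", name="Cora")
data = dataset[0]

class SuperGATNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = SuperGATConv(dataset.num_features, 8, heads=8,
                                  dropout=0.6, attention_type="MX",
                                  edge_sample_ratio=0.8, is_undirected=True)
        self.conv2 = SuperGATConv(8 * 8, dataset.num_classes, heads=8,
                                  concat=False, dropout=0.6, attention_type="MX",
                                  edge_sample_ratio=0.8, is_undirected=True)

    def forward(self, x, edge_index):
        x = F.dropout(x, p=0.6, training=self.training)
        x = F.elu(self.conv1(x, edge_index))
        att_loss = self.conv1.get_attention_loss()  # self-supervised attention loss
        x = F.dropout(x, p=0.6, training=self.training)
        x = self.conv2(x, edge_index)
        att_loss = att_loss + self.conv2.get_attention_loss()
        return F.log_softmax(x, dim=-1), att_loss

model = SuperGATNet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=5e-4)

model.train()
for epoch in range(200):
    optimizer.zero_grad()
    out, att_loss = model(data.x, data.edge_index)
    # Node classification loss plus weighted self-supervised attention loss;
    # the 4.0 weight is taken from the upstream PyG example and is an assumption here.
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask]) + 4.0 * att_loss
    loss.backward()
    optimizer.step()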

Installation

# In SuperGAT/
bash install.sh ${CUDA, default is cu100}
  • If you have any trouble installing PyTorch Geometric, please install PyG's dependencies manually.
  • The code is tested with Python 3.7.6 on the nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04 image.
  • PyG's FAQ might be helpful.

Basics

  • The main train/test code is in SuperGAT/main.py.
  • If you want to see the SuperGAT layer implemented in PyTorch Geometric's MessagePassing style, refer to SuperGAT/layer.py.
  • If you want to see hyperparameter settings, refer to SuperGAT/args.yaml and SuperGAT/arguments.py.

Run

python3 SuperGAT/main.py \
    --dataset-class Planetoid \
    --dataset-name Cora \
    --custom-key EV13NSO8-ES
 
...

## RESULTS SUMMARY ##
best_test_perf: 0.853 +- 0.003
best_test_perf_at_best_val: 0.851 +- 0.004
best_val_perf: 0.825 +- 0.003
test_perf_at_best_val: 0.849 +- 0.004
## RESULTS DETAILS ##
best_test_perf: [0.851, 0.853, 0.857, 0.852, 0.858, 0.852, 0.847]
best_test_perf_at_best_val: [0.851, 0.849, 0.855, 0.852, 0.858, 0.848, 0.844]
best_val_perf: [0.82, 0.824, 0.83, 0.826, 0.828, 0.824, 0.822]
test_perf_at_best_val: [0.851, 0.844, 0.853, 0.849, 0.857, 0.848, 0.844]
Time for runs (s): 173.85422565042973

The default setting is 7 runs with different random seeds. To change this number, modify num_total_runs in the main block of SuperGAT/main.py.

For ogbn-arxiv, use SuperGAT/main_ogb.py.
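
For example, assuming main_ogb.py accepts the same --dataset-class, --dataset-name, and --custom-key arguments as main.py (an assumption; check SuperGAT/arguments.py), a SuperGATMX run on ogbn-arxiv would look like:

python3 SuperGAT/main_ogb.py \
    --dataset-class PygNodePropPredDataset \
    --dataset-name ogbn-arxiv \
    --custom-key EV13NS-ES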

GPU Setting

There are three arguments for GPU settings (--num-gpus-total, --num-gpus-to-use, --gpu-deny-list). The default values are from the authors' machine, so we recommend modifying them in SuperGAT/args.yaml or on the command line.

  • --num-gpus-total (default 4): The total number of GPUs in your machine.
  • --num-gpus-to-use (default 1): The number of GPUs you want to use.
  • --gpu-deny-list (default: [1, 2, 3]): The IDs of GPUs you do not want to use.

If you have four GPUs and want to use the first one (cuda:0), run:

python3 SuperGAT/main.py \
    --dataset-class Planetoid \
    --dataset-name Cora \
    --custom-key EV13NSO8-ES \
    --num-gpus-total 4 \
    --gpu-deny-list 1 2 3

Model (--model-name)

Type          Model name
GCN           GCN
GraphSAGE     SAGE
GAT           GAT
SuperGATGO    GAT
SuperGATDP    GAT
SuperGATSD    GAT
SuperGATMX    GAT

Dataset (--dataset-class, --dataset-name)

Dataset class            Dataset name
Planetoid                Cora
Planetoid                CiteSeer
Planetoid                PubMed
PPI                      PPI
WikiCS                   WikiCS
WebKB4Univ               WebKB4Univ
MyAmazon                 Photo
MyAmazon                 Computers
PygNodePropPredDataset   ogbn-arxiv
MyCoauthor               CS
MyCoauthor               Physics
MyCitationFull           Cora_ML
MyCitationFull           CoraFull
MyCitationFull           DBLP
Crocodile                Crocodile
Chameleon                Chameleon
Flickr                   Flickr

Custom Key (--custom-key)

Type          Custom key (General)   Custom key (for PubMed)   Custom key (for ogbn-arxiv)
SuperGATGO    EV1O8-ES               EV1-500-ES                -
SuperGATDP    EV2O8-ES               EV2-500-ES                -
SuperGATSD    EV3O8-ES               EV3-500-ES                EV3-ES
SuperGATMX    EV13NSO8-ES            EV13NSO8-500-ES           EV13NS-ES
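
As an illustration (our own combination of the options above, not a documented configuration), running SuperGATSD on PubMed pairs the Planetoid/PubMed dataset options with the PubMed custom key, following the command format from the Run section:

python3 SuperGAT/main.py \
    --dataset-class Planetoid \
    --dataset-name PubMed \
    --custom-key EV3-500-ES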

Other Hyperparameters

See SuperGAT/args.yaml or run $ python3 SuperGAT/main.py --help.
