Official implementation of "Rethinking Graph Neural Architecture Search from Message-passing" (CVPR 2021).


Intro

GNAS can automatically learn better architectures with the optimal depth of message passing on the graph. Specifically, we design a Graph Neural Architecture Paradigm (GAP) with a tree-topology computation procedure and two types of fine-grained atomic operations (feature filtering and neighbor aggregation) derived from the message-passing mechanism, which together construct a powerful graph network search space. Feature filtering performs adaptive feature selection, while neighbor aggregation captures structural information and computes statistics over a node's neighbors. Experiments show that GNAS can search for better GNNs with multiple message-passing mechanisms and the optimal message-passing depth.
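To make the two atomic operation types concrete, here is a minimal, illustrative PyTorch sketch. It is not the repository's implementation: the gating design, the dense adjacency layout, and the names FeatureFilter and neighbor_max are assumptions for illustration only.

import torch
import torch.nn as nn

class FeatureFilter(nn.Module):
    """Feature filtering: adaptive feature selection via a learned gate.
    (Hypothetical design; the paper's exact operator may differ.)"""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)

    def forward(self, h):                       # h: [num_nodes, dim]
        return torch.sigmoid(self.gate(h)) * h  # element-wise gating

def neighbor_max(h, adj):
    """Neighbor aggregation in the spirit of an 'a_max' operation:
    each node takes the feature-wise max over its neighbors.
    adj: [num_nodes, num_nodes] binary adjacency matrix."""
    neg_inf = torch.full_like(h, float('-inf'))
    # broadcast to [num_nodes, num_nodes, dim]: keep h[j] where adj[i, j] == 1
    masked = torch.where(adj.unsqueeze(-1).bool(), h.unsqueeze(0), neg_inf)
    # isolated nodes (no neighbors) would yield -inf rows here
    return masked.max(dim=1).values             # [num_nodes, dim]

In this reading, a search cell composes such filtering and aggregation operations along a tree-topology computation graph, and the search procedure chooses which operation sits on each edge.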

Getting Started

0. Prerequisites

  • Linux
  • NVIDIA GPU + CUDA CuDNN

1. Set up the Python environment for GPU

# clone the GitHub repo
conda install git
git clone https://github.com/phython96/GNAS-MP.git
cd GNAS-MP

# install the Python environment
conda env create -f environment_gpu.yml
conda activate gnas
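To verify that the environment can see the GPU (assuming the environment file installs PyTorch, which the CUDA prerequisite above suggests), a quick check:

# should print True on a correctly configured GPU machine
python -c "import torch; print(torch.cuda.is_available())"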

2. Download datasets

The datasets are provided by the benchmarking-gnns project; follow that project's instructions to download all the required datasets.

3. Searching

We provide scripts for searching graph neural networks on five datasets.

# search on the ZINC dataset (graph regression task)
sh scripts/search_molecules_zinc.sh [gpu_id]

# search on the SBMs_PATTERN dataset (node classification task)
sh scripts/search_sbms_pattern.sh [gpu_id]

# search on the SBMs_CLUSTER dataset (node classification task)
sh scripts/search_sbms_cluster.sh [gpu_id]

# search on the MNIST dataset (graph classification task)
sh scripts/search_superpixels_mnist.sh [gpu_id]

# search on the CIFAR10 dataset (graph classification task)
sh scripts/search_superpixels_cifar10.sh [gpu_id]
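For example, to search on the MNIST dataset with the first GPU (assuming [gpu_id] is the CUDA device index):

sh scripts/search_superpixels_mnist.sh 0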

When the search procedure finishes, copy the searched genotypes from "./save/[data_name]_search.txt" into "./configs/genotypes.py".

For example, after searching on the MNIST dataset, we obtain the genotype results file "./save/MNIST_search.txt":

Epoch : 19
[Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_dense', 2, 1), ('f_sparse', 3, 2), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_sparse', 7, 6), ('f_identity', 8, 7), ('f_dense', 9, 7)], concat_node=None), Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_dense', 2, 0), ('f_dense', 3, 0), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_identity', 7, 5), ('f_identity', 8, 6), ('f_sparse', 9, 8)], concat_node=None), Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_sparse', 2, 0), ('f_sparse', 3, 0), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_sparse', 7, 4), ('f_sparse', 8, 6), ('f_identity', 9, 4)], concat_node=None), Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_sparse', 2, 1), ('f_sparse', 3, 1), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_sparse', 7, 4), ('f_identity', 8, 7), ('f_sparse', 9, 4)], concat_node=None)]
Epoch : 20
[Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_dense', 2, 1), ('f_sparse', 3, 2), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_sparse', 7, 6), ('f_identity', 8, 7), ('f_sparse', 9, 8)], concat_node=None), Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_sparse', 2, 1), ('f_dense', 3, 0), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_identity', 7, 5), ('f_identity', 8, 6), ('f_sparse', 9, 8)], concat_node=None), Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_sparse', 2, 0), ('f_sparse', 3, 0), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_sparse', 7, 4), ('f_dense', 8, 4), ('f_sparse', 9, 6)], concat_node=None), Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_sparse', 2, 1), ('f_sparse', 3, 2), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_sparse', 7, 4), ('f_sparse', 8, 6), ('f_sparse', 9, 8)], concat_node=None)]
Epoch : 21
[Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_dense', 2, 1), ('f_sparse', 3, 2), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_sparse', 7, 6), ('f_identity', 8, 7), ('f_sparse', 9, 8)], concat_node=None), Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_dense', 2, 0), ('f_dense', 3, 0), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_identity', 7, 5), ('f_identity', 8, 6), ('f_identity', 9, 6)], concat_node=None), Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_sparse', 2, 0), ('f_sparse', 3, 0), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_sparse', 7, 6), ('f_identity', 8, 4), ('f_identity', 9, 7)], concat_node=None), Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_sparse', 2, 1), ('f_sparse', 3, 2), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_sparse', 7, 4), ('f_sparse', 8, 6), ('f_identity', 9, 4)], concat_node=None)]

Copy the fourth line of the file above (the genotypes from Epoch 20) and paste it into "./configs/genotypes.py" with the prefix "MNIST_Net = ":

MNIST_Net = [Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_dense', 2, 1), ('f_sparse', 3, 2), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_sparse', 7, 6), ('f_identity', 8, 7), ('f_sparse', 9, 8)], concat_node=None), Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_sparse', 2, 1), ('f_dense', 3, 0), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_identity', 7, 5), ('f_identity', 8, 6), ('f_sparse', 9, 8)], concat_node=None), Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_sparse', 2, 0), ('f_sparse', 3, 0), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_sparse', 7, 4), ('f_dense', 8, 4), ('f_sparse', 9, 6)], concat_node=None), Genotype(alpha_cell=[('f_sparse', 1, 0), ('f_sparse', 2, 1), ('f_sparse', 3, 2), ('a_max', 4, 1), ('a_max', 5, 2), ('a_max', 6, 3), ('f_sparse', 7, 4), ('f_sparse', 8, 6), ('f_sparse', 9, 8)], concat_node=None)]
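The printed structure suggests each alpha_cell entry is an (operation, node, input_node) triple. As a hedged illustration only (the authoritative Genotype definition lives in "./configs/genotypes.py"; the field interpretation and the describe helper here are assumptions), the result can be handled as a namedtuple:

from collections import namedtuple

# assumed definition matching the printed results above; the repository's
# own definition in ./configs/genotypes.py is authoritative
Genotype = namedtuple('Genotype', ['alpha_cell', 'concat_node'])

def describe(genotype):
    """Print each edge of a searched cell, reading every alpha_cell entry
    as (operation, node, input_node) -- an assumed interpretation."""
    for op, node, src in genotype.alpha_cell:
        print(f"node {src} --[{op}]--> node {node}")

cell = Genotype(
    alpha_cell=[('f_sparse', 1, 0), ('a_max', 4, 1), ('f_identity', 8, 7)],
    concat_node=None,
)
describe(cell)

Running this prints one line per edge, e.g. "node 0 --[f_sparse]--> node 1".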

4. Training

Before training, confirm that "./configs/genotypes.py" contains the genotype of the searched graph neural network.

We provide scripts for training the graph neural networks searched by GNAS.

# train on the ZINC dataset (graph regression task)
sh scripts/train_molecules_zinc.sh [gpu_id]

# train on the SBMs_PATTERN dataset (node classification task)
sh scripts/train_sbms_pattern.sh [gpu_id]

# train on the SBMs_CLUSTER dataset (node classification task)
sh scripts/train_sbms_cluster.sh [gpu_id]

# train on the MNIST dataset (graph classification task)
sh scripts/train_superpixels_mnist.sh [gpu_id]

# train on the CIFAR10 dataset (graph classification task)
sh scripts/train_superpixels_cifar10.sh [gpu_id]
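For example, to train the searched MNIST network on the first GPU (again assuming [gpu_id] is the CUDA device index):

sh scripts/train_superpixels_mnist.sh 0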

Results

Visualization

Here we show the 4-layer graph neural networks searched by GNAS on five datasets across three graph tasks.

Reference

To be updated.
