EGNN - Implementation of E(n)-Equivariant Graph Neural Networks, in Pytorch

Overview

EGNN - Pytorch

Implementation of E(n)-Equivariant Graph Neural Networks, in Pytorch. It may eventually be used for Alphafold2 replication. This technique went for simple invariant features, and ended up beating all previous methods (including the SE3 Transformer and Lie Conv) in both accuracy and performance. It is SOTA in dynamical system models, molecular activity prediction tasks, and more.

Install

$ pip install egnn-pytorch

Usage

import torch
from egnn_pytorch import EGNN

layer1 = EGNN(dim = 512)
layer2 = EGNN(dim = 512)

feats = torch.randn(1, 16, 512)
coors = torch.randn(1, 16, 3)

feats, coors = layer1(feats, coors)
feats, coors = layer2(feats, coors) # (1, 16, 512), (1, 16, 3)

With edges

import torch
from egnn_pytorch import EGNN

layer1 = EGNN(dim = 512, edge_dim = 4)
layer2 = EGNN(dim = 512, edge_dim = 4)

feats = torch.randn(1, 16, 512)
coors = torch.randn(1, 16, 3)
edges = torch.randn(1, 16, 16, 4)

feats, coors = layer1(feats, coors, edges)
feats, coors = layer2(feats, coors, edges) # (1, 16, 512), (1, 16, 3)

Citations

@misc{satorras2021en,
    title         = {E(n) Equivariant Graph Neural Networks},
    author        = {Victor Garcia Satorras and Emiel Hoogeboom and Max Welling},
    year          = {2021},
    eprint        = {2102.09844},
    archivePrefix = {arXiv},
    primaryClass  = {cs.LG}
}
Comments
  • training batch size

    Dear authors,

    Thanks for your great work! I saw your example, which is easy to understand. But I notice that during training, each iteration seems to support a batch size greater than 1 only when all the graphs share the same adj_mat. Do you have a better solution for that? Thanks
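
    One possible workaround, sketched below. It assumes a version of the library where EGNN_Network accepts an adj_mat keyword and broadcasts a leading batch dimension, so each graph in the batch can carry its own adjacency; verify the exact signature against your installed version.

        import torch
        from egnn_pytorch import EGNN_Network

        net = EGNN_Network(
            num_tokens = 21,
            dim = 32,
            depth = 3,
            only_sparse_neighbors = True   # attend only over the supplied adjacency
        )

        feats = torch.randint(0, 21, (2, 16))   # two graphs in one batch
        coors = torch.randn(2, 16, 3)
        mask  = torch.ones(2, 16).bool()

        # assumed: one boolean adjacency per graph, shape (batch, n, n)
        adj_mat = torch.rand(2, 16, 16) < 0.2

        feats_out, coors_out = net(feats, coors, mask = mask, adj_mat = adj_mat)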

    opened by futianfan 6
  • Import Error when torch_geometric is not available

    https://github.com/lucidrains/egnn-pytorch/blob/e35510e1be94ee9f540bf2ffea49cd63578fe473/egnn_pytorch/egnn_pytorch.py#L413

    A small problem: this Tensor is not defined.
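
    The usual fix is a guarded import, sketched below (the exact names mirror what the module appears to import; treat them as assumptions):

        # fall back to stubs when torch_geometric is unavailable, so every
        # name used in type annotations (including Tensor) is still defined
        try:
            from torch_geometric.nn import MessagePassing
            from torch_geometric.typing import Adj, Size, OptTensor, Tensor
        except ImportError:
            Tensor = OptTensor = Adj = MessagePassing = Size = object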

    Thanks for your work.

    opened by zrt 4
  • About aggregations in EGNN_sparse

    Hi, thanks for your great work!

    I have a question about how the aggregations are computed for the node embedding and the coordinate embedding. In the paper, the aggregation for the node embedding is computed over a node's neighbors, while the aggregation for the coordinate embedding is computed over all other nodes. However, in EGNN_sparse, I didn't notice such a difference in the aggregations.

    I guess this is because computing all-pair messages for the coordinate embedding would make 'sparse' meaningless, but I would like to double-check that I am getting this correctly. So, did you do this intentionally? Or did I miss something?
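
    For reference, these are the two updates from the paper (Satorras et al., 2021) that the question contrasts, in LaTeX notation:

        x_i^{l+1} = x_i^l + C \sum_{j \neq i} (x_i^l - x_j^l) \, \phi_x(m_{ij})   % coordinates: all other nodes
        h_i^{l+1} = \phi_h\Big(h_i^l, \sum_{j \in \mathcal{N}(i)} m_{ij}\Big)     % features: neighbors only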

    My appreciation.

    opened by simon1727 4
  • Few queries on the implementation

    Few queries on the implementation

    Hi - fast work coding these things up, as usual! Looking at the paper and your code, you're not using squared distance for the edge weighting. Is that intentional? Also, it looks like you are adding the old feature vectors to the new ones rather than taking the new vectors directly from the fully connected net - is that also an intentional change from the paper?
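
    For context, the edge message in the paper is built from the squared distance, in LaTeX notation:

        m_{ij} = \phi_e\Big(h_i^l, h_j^l, \big\| x_i^l - x_j^l \big\|^2, a_{ij}\Big)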

    opened by denjots 3
  • Fix PyG problems, add example for point cloud denoising

    • Fixed some tiny errors in the data flows of the PyG layers (mainly dimensions and slices)
    • Fixed EGNN_Sparse_Network so it now works
    • Provides an example of point cloud denoising (from Gaussian-masked coordinates) and showcases potential issues:
      • unstable (could be due to the nature of the data, not sure, but GVP does well on it)
      • not able to beat the baseline (in contrast, GVP gets to 0.8 RMSD, while this reaches the 1 RMSD baseline but not below it)
    opened by hypnopump 2
  • EGNN_sparse incorrect positional encoding output

    Hi, many thanks for the implementation!

    I was quickly checking the code for the pytorch geometric implementation of the EGNN_sparse layer, and I noticed that it expects the first 3 columns in the features to be the coordinates. However, in the update method, features and coordinates are passed in the wrong order.

    https://github.com/lucidrains/egnn-pytorch/blob/375d686c749a685886874baba8c9e0752db5f5be/egnn_pytorch/egnn_pytorch.py#L192

    This may cause problems during learning (think of concatenating several of these layers), as they expect coordinate and feature order to be consistent.

    One can reproduce this behaviour in the following snippet:

    import torch
    from egnn_pytorch import EGNN_sparse

    # rot(...) builds a 3x3 rotation matrix from three random angles
    # (helper available in the repo's test utilities)
    layer = EGNN_sparse(feats_dim=1, pos_dim=3, m_dim=16, fourier_features=0)

    R = rot(*torch.rand(3))
    T = torch.randn(1, 1, 3)

    feats = torch.randn(16, 1)
    coors = torch.randn(16, 3)
    x1 = torch.cat([coors, feats], dim=-1)
    x2 = torch.cat([(coors @ R + T).squeeze(), feats], dim=-1)
    edge_idxs = (torch.rand(2, 20) * 16).long()

    out1 = layer(x=x1, edge_index=edge_idxs)
    out2 = layer(x=x2, edge_index=edge_idxs)


    After fixing the order of these arguments in the update method, the layer behaves as expected (output features are invariant, and coordinates are equivariant under an SE(3) transformation).
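
    A sketch of the corresponding check (assuming pos_dim = 3, so the first three output columns are coordinates; the tolerance is illustrative):

    # features should be invariant; coordinates should transform with R and T
    assert torch.allclose(out1[:, 3:], out2[:, 3:], atol = 1e-6)
    assert torch.allclose(out1[:, :3] @ R + T.squeeze(), out2[:, :3], atol = 1e-6)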

    opened by josejimenezluna 2
  • NaN values after stacking multiple layers

    Hi Lucid!!

    I find that when stacking multiple layers, the output from the model rapidly goes to NaN. I suspect it may be related to the weight initialization.

    Here is a minimal working example:

    Make some data:

        import numpy as np
        import torch
        from egnn_pytorch import EGNN

        torch.set_default_dtype(torch.double)

        zline = np.arange(0, 2, 0.05)
        xline = np.sin(zline * 2 * np.pi)
        yline = np.cos(zline * 2 * np.pi)
        points = np.array([xline, yline, zline])
        geom = torch.tensor(points.transpose())[None, :]              # (1, n_points, 3)
        feat = torch.randint(0, 20, (1, geom.shape[1], 1)).double()   # cast so the linear layers accept it
    

    Make a model:

        class ResEGNN(torch.nn.Module):
            def __init__(self, depth = 2, dims_in = 1):
                super().__init__()
                self.layers = torch.nn.ModuleList([EGNN(dim = dims_in) for i in range(depth)])
            
            def forward(self, geom, feat):
                for layer in self.layers:
                    feat, geom = layer(feat, geom)
                return geom
    

    Run model for varying depths:

        for i in range(10):
            model = ResEGNN(depth = i)
            pred = model(geom, feat)
            mean_absolute_value = torch.abs(pred).mean()
            print("Order of predictions {:.2f}".format(np.log(mean_absolute_value.detach().numpy())))
    

    Output:

        Order of predictions -0.29
        Order of predictions 0.05
        Order of predictions 6.65
        Order of predictions 21.38
        Order of predictions 78.25
        Order of predictions 302.71
        Order of predictions 277.38
        Order of predictions nan
        Order of predictions nan
        Order of predictions nan

    opened by brennanaba 2
  • Edge features thrown out

    Hi, thanks for this implementation!

    I was wondering if the pytorch-geometric implementation of this architecture is throwing the edge features out by mistake, as seen here

    https://github.com/lucidrains/egnn-pytorch/blob/1b8320ade1a89748e4042ae448626652f1c659a1/egnn_pytorch/egnn_pytorch.py#L148-L151

    Or maybe my understanding is wrong? Cheers,

    opened by josejimenezluna 2
  • solve ij -> i bottleneck in sparse version

    I don't recommend normalizing the weights or the coords.

    • The weights are the coefficients that multiply the deltas in the i->j direction.
    • The coords are the deltas in the i->j direction.

    I can't see the advantage of normalizing them beyond a naive stabilization, which might hurt the convergence properties by requiring more layers, given the limited transformation each layer would be able to do.

    It works fine for denoising without normalization (the instability might come from huge outliers, but then tuning the learning rate or clipping the gradients might help).

    opened by hypnopump 0
  • Questions about the EGNN code

    Recently, I've been trying to read the EGNN paper and study your EGNN code. Honestly, I had a hard time understanding both, because my major is not computer science. While studying the code, I realized that the shape of hidden_out and the shape of kwargs["x"] must be the same to perform the add operation (because of the residual connection) in the forward method of the class EGNN_sparse. How can I increase or decrease the hidden dimension size of x?

    I would like to get some advice.

    Thanks for your consideration in this regard.
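
    One illustrative workaround (the ProjectedEGNN wrapper below is hypothetical, not part of the library): keep feats_dim fixed inside each layer and change the feature width with a Linear projection between layers.

        import torch
        from torch import nn
        from egnn_pytorch import EGNN_Sparse

        class ProjectedEGNN(nn.Module):
            """Hypothetical wrapper: the residual add inside EGNN_Sparse fixes
            feats_dim, so the width is changed by a separate projection."""
            def __init__(self, pos_dim = 3, feats_in = 16, feats_out = 32):
                super().__init__()
                self.pos_dim = pos_dim
                self.egnn = EGNN_Sparse(feats_dim = feats_in, pos_dim = pos_dim)
                self.proj = nn.Linear(feats_in, feats_out)

            def forward(self, x, edge_index):
                x = self.egnn(x, edge_index)   # (n, pos_dim + feats_in)
                coors, feats = x[:, :self.pos_dim], x[:, self.pos_dim:]
                return torch.cat([coors, self.proj(feats)], dim = -1)   # (n, pos_dim + feats_out)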

    opened by Byun-jinyoung 0
  • Wrong edge_index size hint in class EGNN_Sparse of pyg version

    Hi, I found there may be a little mistake. In the input hint of class EGNN_Sparse (pyg version), the size of edge_index is given as (n_edges, 2); however, it should be (2, n_edges), otherwise the distance calculation will not be correct.

        """
        Inputs:
        * x: (n_points, d) where d is pos_dims + feat_dims
        * edge_index: (n_edges, 2)
        * edge_attr: tensor (n_edges, n_feats) excluding basic distance feats.
        * batch: (n_points,) long tensor. specifies xloud belonging for each point
        * angle_data: list of tensors (levels, n_edges_i, n_length_path) long tensor.
        * size: None
        """
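
    Indeed, the PyG convention is that edge_index has shape (2, n_edges). Edges stored one per row can be transposed before being passed in, e.g.:

        import torch

        # edges stored one per row, shape (n_edges, 2)
        edges_rowwise = torch.tensor([[0, 1], [1, 2], [2, 0]])

        # PyG expects (2, n_edges): transpose and make contiguous
        edge_index = edges_rowwise.t().contiguous()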

    opened by Layne-Huang 2
  • Exploding Gradients With 4 Layers

    I'm using EGNN with 4 layers (where I also do global attention after each layer), and I'm seeing exploding gradients after 90 epochs or so. I'm using techniques discussed earlier (sparse attention matrix, coor_weights_clamp_value, norm_coors), but I'm not sure if there's anything else I should be doing. I'm also not updating the coordinates, so the fix in the pull request doesn't apply.

    opened by cutecows 0
  • Added optional tanh to coors_mlp

    This removes the NaN bug completely (norm_coors must also be used, otherwise performance dies).

    The NaN bug comes from the output of the coors_mlp exploding, so forcing its values between -1 and 1 prevents this. If the coordinates are normalized, performance should not be adversely affected.
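
    For reference, a sketch combining the stabilization options mentioned across these threads (norm_coors and coor_weights_clamp_value appear in recent versions of the library; verify the names against your installed version):

        import torch
        from egnn_pytorch import EGNN

        layer = EGNN(
            dim = 512,
            norm_coors = True,              # normalize the relative coordinate deltas
            coor_weights_clamp_value = 2.   # clamp coordinate weights into [-2, 2]
        )

        feats = torch.randn(1, 16, 512)
        coors = torch.randn(1, 16, 3)

        feats, coors = layer(feats, coors)  # should stay finite when layers are stacked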

    opened by jscant 1
Releases (0.2.6)
Owner
Phil Wang
Working with Attention. It's all we need.