GitHub repository for the ICLR Computational Geometry & Topology Challenge 2022

Overview

ICLR 2022 Computational Geometry & Topology Challenge

Welcome to the ICLR 2022 Computational Geometry & Topology Challenge --- organized by the ICLR 2022 Workshop on Geometrical and Topological Representation Learning.

Lead organizers: Adele Myers, Saiteja Utpala, and Nina Miolane (UC Santa Barbara).

Description of the challenge

The purpose of this challenge is to foster reproducible research in geometric (deep) learning by crowdsourcing the open-source implementation of learning algorithms on manifolds. Participants are asked to contribute code for a published or unpublished algorithm, following Scikit-Learn/Geomstats' or PyTorch's APIs and computational primitives, to benchmark it, and to demonstrate its use in real-world scenarios.

Each submission takes the form of a Jupyter Notebook leveraging the coding infrastructure and building blocks from the package Geomstats. The participants submit their Jupyter Notebook via Pull Requests (PR) to this GitHub repository, see Guidelines below.

In addition to the challenge's prizes, participants will have the opportunity to co-author a white paper summarizing the findings of the competition.

This is the second edition of this challenge! Feel free to look at last year's guidelines, submissions, winners and paper for additional information.

Note: We invite participants to review this README regularly, as details are added to the guidelines when questions are submitted to the organizers.

Deadline

Final Pull Request submissions must be made before:

  • April 4th, 2022 at 16:59 PST (Pacific Standard Time).

The participants can freely commit to their Pull Request and modify their submission until this time.

Winners announcement and prizes

The 3 winners will be announced at the ICLR 2022 virtual workshop Geometrical and Topological Representation Learning and advertised on the web. The winners will also be contacted directly via email.

The prizes are:

  • $2000 for the 1st place,
  • $1000 for the 2nd place,
  • $500 for the 3rd place.

Subscription

Anyone can participate and participation is free. To be automatically considered amongst the participants, it is enough to:

  • send a Pull Request,
  • follow the challenge guidelines.

An acceptable PR automatically subscribes a participant to the challenge.

Guidelines

We encourage the participants to start submitting their Pull Request early on. Starting early allows participants to debug the tests and address potential issues with the code.

Teams are accepted and there is no restriction on the number of team members.

The principal developers of Geomstats (i.e. the co-authors of Geomstats published papers) are not allowed to participate.

A submission should respect the following Jupyter Notebook structure:

  1. Introduction and Motivation
  • Explain and motivate the choice of learning algorithm
  2. Related Work and Implementations
  • Contrast the chosen learning algorithm with other algorithms
  • Describe existing implementations, if any
  3. Implementation of the Learning Algorithm --- with guidelines:
  • Follow Scikit-Learn/Geomstats APIs, see RiemannianKMeans example, or PyTorch base classes such as torch.nn.Module.
  • IMPORTANT: Use Geomstats computational primitives (e.g. exponential, geodesics, parallel transport, etc.). Note that the functions in geomstats.backend are not considered computational primitives, as they are only wrappers around autograd, numpy, torch and tensorflow functions.
  4. Test on Synthetic Datasets and Benchmark
  5. Application to Real-World Datasets
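To make the API requirement concrete, here is a minimal, hypothetical sketch of a Scikit-Learn-style estimator. It computes a Fréchet mean on the unit sphere; for self-containedness the exponential and logarithm maps are written out explicitly in numpy, whereas an actual submission would call the corresponding Geomstats primitives (and could inherit from sklearn.base.BaseEstimator):

```python
import numpy as np

def sphere_exp(p, v):
    """Exponential map on the unit sphere at base point p (tangent vector v)."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return p
    return np.cos(norm) * p + np.sin(norm) * v / norm

def sphere_log(p, q):
    """Logarithm map on the unit sphere: tangent vector at p pointing to q."""
    cos_t = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros_like(p)
    w = q - cos_t * p
    return theta * w / np.linalg.norm(w)

class FrechetMeanSphere:
    """Scikit-Learn-style estimator: Frechet mean of points on the unit sphere."""

    def __init__(self, max_iter=50, tol=1e-8):
        self.max_iter = max_iter
        self.tol = tol

    def fit(self, X, y=None):
        """Iterate mean <- exp_mean(average of log_mean(x_i)) until convergence."""
        mean = X[0]
        for _ in range(self.max_iter):
            tangent = np.mean([sphere_log(mean, x) for x in X], axis=0)
            if np.linalg.norm(tangent) < self.tol:
                break
            mean = sphere_exp(mean, tangent)
        self.estimate_ = mean
        return self
```

A submission would replace sphere_exp/sphere_log with the manifold's exp and log primitives from Geomstats, so the same estimator works on any manifold.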

Examples of possible submissions

  • Embeddings of trees in the hyperbolic plane and variants, e.g. from Sarkar 2011.
  • Hypothesis testing on manifolds, e.g. from Osborne et al 2013.
  • (Extended/Unscented) Kalman Filters on Lie groups and variants, e.g. from Bourmaud et al 2013.
  • Gaussian Processes on Riemannian Manifolds and variants, e.g. from Calandra et al 2014.
  • Barycenter Subspace Analysis on Manifolds and variants, e.g. from Pennec 2016.
  • Curve fitting on manifolds and variants, e.g. from Gousenbourger et al 2018.
  • Smoothing splines on manifolds, e.g. from Kim et al 2020.
  • Recurrent models on manifolds and variants, e.g. from Chakraborty et al 2018.
  • Geodesic CNNs on manifolds and variants, e.g. from Masci et al 2018.
  • Variational autoencoders on Riemannian manifolds and variants, e.g. from Miolane et al 2019.
  • Probabilistic Principal Geodesic Analysis and variants, e.g. from Zhang et al 2019.
  • Gauge-equivariant neural networks and variants, e.g. from Cohen et al 2019.
  • and many more, as long as you implement them using Geomstats computational primitives (e.g. exponential, geodesics, parallel transport, etc).

Before starting your implementation, make sure that the algorithm that you want to contribute is not already in the learning module of Geomstats.

The notebook provided in the submission-example-* folders is an example of a submission that can help the participants design their proposal and understand how to use/inherit from Scikit-Learn, Geomstats, and PyTorch. Note that this example is "naive" on purpose: it is only meant to give an illustrative template rather than a meaningful data analysis. More examples of how to use the packages can be found in the GitHub repository of Geomstats.

The code should be compatible with Python 3.8 and make an effort to respect the Python style guide PEP8. The portion of the code using Geomstats only needs to run with the numpy or pytorch backends. However, the reviewers/voters will appreciate code that runs on all backends (numpy, autograd, tensorflow and pytorch) via the backend-agnostic geomstats interface gs., when applicable.
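The backend-agnostic style can be illustrated with a small sketch. In Geomstats one writes `import geomstats.backend as gs` and uses only `gs.*` calls, so the same function runs unchanged under numpy, autograd, tensorflow or pytorch. To keep the sketch self-contained here, numpy stands in for the backend module:

```python
import numpy as np

# In a real submission: `import geomstats.backend as gs`.
# numpy stands in for the backend module so this sketch is runnable as-is.
gs = np

def angular_distance(x, y):
    """Great-circle distance between unit vectors, written only with gs.* calls."""
    inner = gs.clip(gs.sum(x * y, axis=-1), -1.0, 1.0)
    return gs.arccos(inner)
```

Because the function never references numpy or torch directly, switching backends only requires changing the single gs import.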

The Jupyter notebooks are automatically tested when a Pull Request is submitted. The tests have to pass. Their running time should not exceed 3 hours, although exceptions can be made by contacting the challenge organizers.

If a dataset is used, the dataset has to be public and referenced. There is no constraint on the data type to be used.

A participant can raise GitHub issues and/or request help or guidance at any time through Geomstats slack. The help/guidance will be provided modulo availability of the maintainers.

Submission procedure

  1. Fork this repository to your GitHub.

  2. Create a new folder with your team leader's GitHub username in the root folder of the forked repository, in the main branch.

  3. Place your submission inside the folder created at step 2, with:

  • a single Jupyter notebook (the file name shall end with .ipynb),
  • datasets (if needed),
  • auxiliary Python files (if needed).

Datasets larger than 10MB shall be directly imported from external URLs or from data sharing platforms such as OpenML.
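For datasets above the 10MB limit, a common pattern is to download the file at the top of the notebook and cache it locally. A minimal sketch, where the URL and filename are placeholders rather than part of the challenge infrastructure:

```python
import os
import urllib.request

def fetch_dataset(url, filename):
    """Download the dataset once; reuse the cached local copy on later runs."""
    if not os.path.exists(filename):
        urllib.request.urlretrieve(url, filename)
    return filename

# Hypothetical usage inside the notebook:
# path = fetch_dataset("https://example.org/my_dataset.csv", "my_dataset.csv")
```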

If your project requires external pip installable libraries that are not amongst Geomstats’ requirements.txt, you can include them at the beginning of your Jupyter notebook, e.g. with:

import sys
!{sys.executable} -m pip install numpy scipy torch

Evaluation and ranking

The Condorcet method will be used to rank the submissions and decide on the winners. The evaluation criteria will be:

  1. How "interesting"/"important"/"useful" is the learning algorithm? Note that this is a subjective evaluation criterion, where the reviewers will evaluate what the implementation of this algorithm brings to the community (regardless of the quality of the code).
  2. How readable/clean is the implementation? How well does the submission respect Scikit-Learn/Geomstats/Pytorch's APIs? If applicable: does it run across backends?
  3. Is the submission well-written? Do the docstrings help understand the methods?
  4. How informative are the tests on synthetic datasets, the benchmarks, and the real-world application?

Note that these criteria do not reward new learning algorithms, nor learning algorithms that outperform the state-of-the-art --- but rather clean code and exhaustive tests that will foster reproducible research in our field.

Selected Geomstats maintainers and collaborators, as well as each team whose submission respects the guidelines, will vote once via a Google Form to express their preference for the 3 best submissions according to each criterion. Note that each team gets only one vote, even if the team has several members.

The 3 preferences must all be different: e.g. one cannot select the same Jupyter notebook for both first and second place. Such irregular votes will be discarded. A link to a Google Form will be provided to record the votes. Voters will be required to provide an email address to identify themselves. The individual votes will remain secret; only the final ranking will be published.
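For reference, the core of the Condorcet method is a pairwise majority comparison: a submission wins if it is preferred to every other submission in head-to-head counts. A minimal sketch follows; the exact tallying and tie-breaking used by the organizers may differ:

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate preferred to every other in pairwise majorities, or None."""

    def beats(a, b):
        # Count ballots that rank a above b.
        a_wins = sum(1 for ranking in ballots if ranking.index(a) < ranking.index(b))
        return a_wins > len(ballots) - a_wins

    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None  # No Condorcet winner (the pairwise preferences form a cycle).
```

When preferences cycle (A beats B, B beats C, C beats A), no Condorcet winner exists and a tie-breaking rule is needed.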

Questions?

Feel free to contact us through GitHub issues on this repository, on Geomstats repository or through Geomstats slack. Alternatively, you can contact Nina Miolane at [email protected].

Comments
  • Question about what algorithms would count

    Hi,

    I was wondering whether a couple of algorithms that are about learning metrics and embeddings would be within the scope.

    Specifically, if the following two algorithms (either individually or collectively) would be within scope

    1. TreeRep from paper. This is an algorithm that takes in a metric and outputs a tree.
    2. Tree embeddings in Hyperbolic space from this paper. This is an algorithm that takes a weighted tree and then embeds into the hyperbolic manifold.

    Thanks, Rishi

    opened by rsonthal 8
  • NeuroSEED for Small Open Reading Frame Proteins Submission

    All the files for the code are in a branch called "master". There is one folder containing all the code and subfolders necessary to run it.

    opened by xiongjeffrey 3
  • Challenge submission: Sasaki Metric and Applications in Geodesic Analysis

    Dear Challenge Team,

    we are happy to contribute our project Sasaki Metric and Applications in Geodesic Analysis to the ICLR Challenge 2022.

    Best regards, Felix Ambellan, Martin Hanik, Esfandiar Nava-Yazdani, and Christoph von Tycowicz

    opened by vontycowicz 1
  • NeuroSEED for Small Open Reading Frame Proteins

    Unfortunately, I don't have access to the Geomstats Slack and I am unsure how to accurately submit a pull request. The link to our research folder is below. https://github.com/xiongjeffrey/NeuroSEED

    opened by xiongjeffrey 0
  • autodiff fails on svd in pre_shape.py

    Dear geomstats team,

    we are trying to perform geodesic regression in Kendall shape space but encountered the issue that the current implementation is not compatible with autodiff functionality. In particular, the align method in geomstats/geometry/pre_shape.py employs singular value decomposition for which autodiff fails if a full set of left/right singular vectors are requested. However, providing the flag 'full_matrices=False' avoids this pitfall and should yield the same alignment.

    We added the flag and, indeed, are now able to run regression. We will submit the modified pre_shape.py along with our project so that it does not rely on short-notice updates of geomstats.

    Best regards, Christoph

    opened by vontycowicz 1