Reading Group @mila-iqia on Computational Optimal Transport for Machine Learning Applications

Overview

Computational Optimal Transport for Machine Learning Reading Group

Over the last few years, optimal transport (OT) has quickly become a central topic in machine learning. OT is now routinely used in many areas of ML, ranging from the theoretical use of OT flows to control learning algorithms to the inference of high-dimensional cell trajectories in genomics. This reading group aims to keep participants up to date with the latest research happening in this area.

Logistics

For the Winter 2022 term, meetings will be held weekly on Mondays from 14:00 to 15:00 EST via Zoom (for now).

  • Zoom Link.

  • The password will be provided on Slack before every meeting.

  • Meetings will be recorded by default. Recordings are available to Mila members at this link. Presenters can email [email protected] to opt out of being recorded.

  • Reading group participants are expected to read each paper beforehand.

Schedule

| Date     | Topic                                                                    | Presenters             | Slides         |
|----------|--------------------------------------------------------------------------|------------------------|----------------|
| 01/17/22 | Introduction to Optimal Transport for Machine Learning                   | Alex Tong, Ali Harakeh | Part 1, Part 2 |
| 01/24/22 | Learning with minibatch Wasserstein: asymptotic and gradient properties  | Kilian Fatras          | --             |
| 01/31/22 | --                                                                       | --                     | --             |
| 02/07/22 | --                                                                       | --                     | --             |
| 02/14/22 | --                                                                       | --                     | --             |
| 02/21/22 | --                                                                       | --                     | --             |
| 02/28/22 | --                                                                       | --                     | --             |

Paper Presentation Instructions

Volunteer to Present

  • All participants are encouraged to volunteer to present at the reading group.

  • Volunteers can choose a paper from this list of suggested papers, or any other paper that is related to optimal transport in machine learning.

  • To volunteer, please send the paper title, link, and your preferred presentation date to the Slack channel #volunteer-to-present or email [email protected].

Presentation Instructions

  • Presentations should last at most 40 minutes. During the presentation, organizers will act as moderators and relay questions from the Zoom chat as they come up. Aim to finish in 35-40 minutes so that 15 minutes remain for general discussion.

  • Presentations should roughly adhere to the following outline:

    1. 5-10 minutes: Problem setup and positioning within the literature.
    2. 10-15 minutes: Contributions/Novel technical points.
    3. 10-15 minutes: Weak points, open questions, and future directions.

Useful References

This is a list of useful references, including code, textbooks, and presentations.

Code

  • POT: Python Optimal Transport: This open-source Python library provides several solvers for optimization problems related to optimal transport in signal processing, image processing, and machine learning. It has the most efficient exact OT solvers (see the first sketch after this list).
  • GeomLoss: The GeomLoss library provides efficient GPU implementations of kernel norms, Hausdorff divergences, and debiased Sinkhorn divergences. It has the most scalable dual OT solvers, embedded within the Sinkhorn divergence computation (see the second sketch after this list).
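As a quick orientation, here is a minimal sketch of computing OT quantities with POT. It is an illustrative example, not part of the library's documentation: it assumes POT is installed (pip install pot), and the sample sizes, weights, and regularization strength are arbitrary choices.

import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))                    # source samples
y = rng.normal(loc=1.0, size=(60, 2))           # target samples

a = np.full(50, 1.0 / 50)                       # uniform source weights
b = np.full(60, 1.0 / 60)                       # uniform target weights
M = ot.dist(x, y)                               # squared Euclidean cost matrix

P = ot.emd(a, b, M)                             # exact transport plan
exact_cost = float(np.sum(P * M))               # squared 2-Wasserstein cost estimate
sinkhorn_cost = ot.sinkhorn2(a, b, M, reg=0.1)  # entropy-regularized OT cost

A similarly minimal GeomLoss sketch (assuming torch and geomloss are installed) evaluates the debiased Sinkhorn divergence between two point clouds and backpropagates through it; the blur value is again an illustrative choice, not a recommendation.

import torch
from geomloss import SamplesLoss

x = torch.randn(50, 2, requires_grad=True)              # source point cloud
y = torch.randn(60, 2) + 1.0                            # target point cloud

loss_fn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)  # debiased Sinkhorn divergence
divergence = loss_fn(x, y)                              # differentiable scalar
divergence.backward()                                   # gradients flow back to x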

Textbooks

@article{peyre2019computational,
  title={Computational optimal transport: With applications to data science},
  author={Peyr{\'e}, Gabriel and Cuturi, Marco and others},
  journal={Foundations and Trends{\textregistered} in Machine Learning},
  volume={11},
  number={5-6},
  pages={355--607},
  year={2019},
  publisher={Now Publishers, Inc.}}

Workshops and Presentations

Organizers

Modeled after the Causal Representation Learning Reading Group.

Ali Harakeh
Postdoctoral Research Fellow @mila-iqia