PyTorch Lightning Optical Flow models, scripts, and pretrained weights.

Overview

PyTorch Lightning Optical Flow

Introduction

This is a collection of state-of-the-art deep models for estimating optical flow. The main goal is to provide a unified framework where multiple models can be trained and tested more easily.

The work and code from many others are present here. I tried to make sure everything is properly referenced, but please let me know if I missed something.

This is still under development, so some things may not work as intended. I plan to add more models in the future, as well as to keep improving the platform.

Available models

Read more details about the models at https://ptlflow.readthedocs.io/en/latest/models/models_list.html.

Results

You can see a table with the main evaluation results of the available models here. More results are also available in the folder docs/source/results.

Disclaimer: these results were obtained by evaluating the available models within this framework on my machine. Your results may differ due to differences in hardware and software. I also do not guarantee that the results of each model will match those presented in the respective papers or other original sources. If you need to replicate the original results from a paper, you should use the original implementations.

Getting started

Please take a look at the documentation to learn how to install and use PTLFlow.
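
For quick reference, below is a minimal inference sketch adapted from the Colab demo. The file names are placeholders, and the exact output shapes are assumptions; check the documentation for the authoritative usage.

    import cv2
    import ptlflow
    from ptlflow.utils.io_adapter import IOAdapter

    # Load a pretrained model (the checkpoint is downloaded on first use).
    model = ptlflow.get_model('raft_small', pretrained_ckpt='things')
    model.eval()

    # Two consecutive frames as HxWx3 arrays (placeholder file names).
    images = [cv2.imread('frame1.png'), cv2.imread('frame2.png')]

    # IOAdapter pads/rescales the inputs to a size the model accepts.
    io_adapter = IOAdapter(model, images[0].shape[:2])
    inputs = io_adapter.prepare_inputs(images)

    # The output is a dict; the predicted flow is stored under 'flows'.
    predictions = model(inputs)
    predictions = io_adapter.unpad_and_unscale(predictions)
    flows = predictions['flows']  # assumed shape: (batch, 1, 2, H, W)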

You can also check the notebooks below, which run on Google Colab, for some practical examples:

Licenses

The original code of this repository is licensed under the Apache 2.0 license.

Each model may be subject to a different license. The license of each model is included in its respective folder. It is your responsibility to make sure that your project complies with all the licenses and conditions involved.

The external pretrained weights all have different licenses, which are listed in their respective folders.

The pretrained weights that were trained within this project are available under the CC BY-NC-SA 4.0 license, which I believe covers the licenses of the datasets used in training. That said, I am not a legal expert, so if you plan to use them for any purpose other than research, you should check all the licenses involved yourself. Additionally, the datasets used for training usually require users to cite the original papers, so be sure to include the respective references in your work.

Contributing

Contributions are welcome! Please check CONTRIBUTING.md to see how to contribute.

Citing

BibTeX

@misc{morimitsu2021ptlflow,
  author = {Henrique Morimitsu},
  title = {PyTorch Lightning Optical Flow},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/hmorimitsu/ptlflow}}
}

Acknowledgements

  • This README file is heavily inspired by the one from the timm repository.
  • Some parts of the code were inspired by or taken from FlowNetPytorch.
  • flownet2-pytorch was another important source.
  • The current main training routine is based on RAFT.
Comments
  • Inference on a whole video (or batch of image pairs) efficiently?

    I would like to process a whole video efficiently, i.e., not calling model.forward() once for every pair of images, but instead batching things together. However, I can't quite figure out how to do that with IOAdapter (which I would like to use to ensure, e.g., that the correct padding is applied). Is this possible? I tried formatting my video into a batch of image pairs, of shape (batch, 2, h, w, c), but this didn't seem to be supported by IOAdapter.

    opened by zplizzi 4
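
    A possible approach for the question above, sketched below and not taken from the repository: prepare each consecutive pair with IOAdapter (so padding and scaling stay correct) and concatenate the resulting tensors along the batch dimension before a single forward call. The 'images' key and the (1, 2, 3, H, W) shape returned by prepare_inputs are assumptions; check the IOAdapter source before relying on them.

    import cv2
    import torch
    import ptlflow
    from ptlflow.utils.io_adapter import IOAdapter

    # Placeholder list of paths to consecutive video frames.
    frame_paths = ['frame0.png', 'frame1.png', 'frame2.png']
    frames = [cv2.imread(p) for p in frame_paths]

    model = ptlflow.get_model('raft_small', pretrained_ckpt='things')
    model.eval()
    io_adapter = IOAdapter(model, frames[0].shape[:2])

    # Prepare each pair separately so IOAdapter handles the padding, then batch them.
    pair_tensors = []
    for i in range(len(frames) - 1):
        inputs = io_adapter.prepare_inputs([frames[i], frames[i + 1]])
        pair_tensors.append(inputs['images'])  # assumed shape: (1, 2, 3, H, W)

    batched = {'images': torch.cat(pair_tensors, dim=0)}  # (num_pairs, 2, 3, H, W)
    with torch.no_grad():
        predictions = model(batched)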
  • AttributeError: 'IOAdapter' object has no attribute 'unpad'

    Following the colab example, the line:

    # Some padding may have been added during prepare_inputs. The line below ensures that the padding is removed
    # to make the predictions have the same size as the original images.
    predictions = io_adapter.unpad(predictions)
    

    gives the error

    AttributeError: 'IOAdapter' object has no attribute 'unpad'

    A simple fix seems to be to replace the line with

    predictions = io_adapter.unpad_and_unscale(predictions)

    which makes the example run. Is this correct?

    opened by duckduck-sys 4
  • question about running from source code

    Hi, I want to make some modifications to this repo. However, after cloning the repo and running train.py, it says No module named 'pytorch_lightning'. I guess the command pip install dist/ptlflow-*.whl might help, but it downloads a torch version >= 1.7, which my GPU does not support. Is there any way I can bypass this error?

    opened by btwbtm 3
  • train on new data

    Hi!

    I'm trying to train the model on my own training data but I get the following error:

    !python train.py raft_small \
      --gpus 1 \
      --train_dataset overfit-sintel \
      --pretrained_ckpt things \
      --val_dataset none \
      --train_batch_size 1 \
      --train_crop_size 512 128 \
      --max_epochs 100 \
      --lr 1e-3
    
    /usr/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
      return f(*args, **kwds)
    /usr/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
      return f(*args, **kwds)
    ERROR: torch_scatter not found. CSV requires torch_scatter library to run. Check instructions at: https://github.com/rusty1s/pytorch_scatter
    Global seed set to 1234
    Downloading: "https://github.com/hmorimitsu/ptlflow/releases/download/weights1/raft_small-things-b7d9f997.ckpt" to /root/.cache/torch/hub/ptlflow/checkpoints/raft_small-things-b7d9f997.ckpt
    100% 3.81M/3.81M [00:00<00:00, 26.3MB/s]
    GPU available: True, used: True
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
    Traceback (most recent call last):
      File "train.py", line 151, in <module>
        train(args)
      File "train.py", line 111, in train
        trainer.fit(model)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 741, in fit
        self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
        return trainer_fn(*args, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
        self._run(model, ckpt_path=ckpt_path)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1145, in _run
        self.accelerator.setup(self)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/accelerators/gpu.py", line 46, in setup
        return super().setup(trainer)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 93, in setup
        self.setup_optimizers(trainer)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 352, in setup_optimizers
        trainer=trainer, model=self.lightning_module
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 245, in init_optimizers
        return trainer.init_optimizers(model)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/optimizers.py", line 35, in init_optimizers
        optim_conf = self.call_hook("configure_optimizers", pl_module=pl_module)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1501, in call_hook
        output = model_fx(*args, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/ptlflow/models/base_model/base_model.py", line 339, in configure_optimizers
        self.train_dataloader()  # Just to initialize dataloader variables
      File "/usr/local/lib/python3.7/dist-packages/ptlflow/models/base_model/base_model.py", line 405, in train_dataloader
        dataset = getattr(self, f'_get_{dataset_name}_dataset')(True, *parsed_vals[2:])
      File "/usr/local/lib/python3.7/dist-packages/ptlflow/models/base_model/base_model.py", line 904, in _get_overfit_dataset
        get_occlusion_mask=False)
      File "/usr/local/lib/python3.7/dist-packages/ptlflow/data/datasets.py", line 1025, in __init__
        f'{passd}, {seq_name}: {len(image_paths)-1} vs {len(flow_paths)}')
    AssertionError: clean, .ipynb_checkpoints: -1 vs 0
    

    I prepared the data as in the example:

    (screenshot of the prepared dataset folder structure)

    Inference works well, but training does not.

    Thank you!

    opened by esgomezm 2
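
    For reference, the assertion above suggests that the dataset scanner picked up a hidden .ipynb_checkpoints folder (created by Jupyter) inside the Sintel-style directory and treated it as an empty sequence. A hedged cleanup sketch, assuming a hypothetical datasets/sintel root (adjust the path to your setup):

    import shutil
    from pathlib import Path

    # Remove Jupyter checkpoint folders that may be mistaken for image sequences.
    dataset_root = Path('datasets/sintel')  # placeholder path
    for ckpt_dir in dataset_root.rglob('.ipynb_checkpoints'):
        shutil.rmtree(ckpt_dir)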
  • error in train with ptlflow_demo_train.ipynb

    Hi, when I run this code in Colab, I get the following error. Please advise. I did not change anything.

    !python train.py raft_small \
      --gpus 1 \
      --train_dataset overfit-sintel \
      --val_dataset none \
      --train_batch_size 1 \
      --max_epochs 100 \
      --lr 1e-3

    ERROR: torch_scatter not found. CSV requires torch_scatter library to run. Check instructions at: https://github.com/rusty1s/pytorch_scatter
    Global seed set to 1234
    GPU available: True, used: True
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
    01/02/2022 08:49:01 - WARNING: --train_crop_size is not set. It will be set as (432, 1024).
    01/02/2022 08:49:01 - INFO: Loading 1 samples from Sintel_clean dataset.
    Traceback (most recent call last):
      File "train.py", line 152, in <module>
        train(args)
      File "train.py", line 111, in train
        trainer.fit(model)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 732, in fit
        self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 682, in _call_and_handle_interrupt
        return trainer_fn(*args, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 768, in _fit_impl
        results = self._run(model, ckpt_path=ckpt_path)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1155, in _run
        self.strategy.setup(self)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/strategies/single_device.py", line 76, in setup
        super().setup(trainer)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/strategies/strategy.py", line 118, in setup
        self.setup_optimizers(trainer)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/strategies/strategy.py", line 108, in setup_optimizers
        self.lightning_module
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/optimizer.py", line 174, in _init_optimizers_and_lr_schedulers
        optim_conf = model.trainer._call_lightning_module_hook("configure_optimizers", pl_module=model)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1535, in _call_lightning_module_hook
        output = fn(*args, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/ptlflow/models/base_model/base_model.py", line 358, in configure_optimizers
        optimizer, self.args.lr, total_steps=self.args.max_steps, pct_start=0.05, cycle_momentum=False, anneal_strategy='linear')
      File "/usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py", line 1452, in __init__
        raise ValueError("Expected positive integer total_steps, but got {}".format(total_steps))
    ValueError: Expected positive integer total_steps, but got -1

    opened by Cyrus1993 2
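
    For context on the error above: configure_optimizers appears to build a OneCycleLR scheduler from args.max_steps, and Lightning leaves max_steps at -1 when only --max_epochs is given, which triggers the ValueError. A possible workaround, assuming train.py exposes the standard Lightning Trainer flags, is to set the step budget explicitly (10000 below is an arbitrary example value):

    !python train.py raft_small \
      --gpus 1 \
      --train_dataset overfit-sintel \
      --val_dataset none \
      --train_batch_size 1 \
      --max_steps 10000 \
      --lr 1e-3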
  • results of raft on sintel after training on kitti

    Hi Henrique, thanks for this great repo! I have a question about the evaluation results of RAFT on Sintel. The results on Sintel are 6.293/4.687 after training on KITTI. In the RAFT paper, the authors reported 1.61/2.86 (Table 1). I understand the model was fine-tuned on KITTI only after training on Sintel, so its performance on Sintel drops. I also obtained similar results after this training procedure.

    In order to achieve the results reported in the RAFT paper, it seems we should mix the Sintel data with KITTI ("When evaluating on the Sintel (test) set, we finetune on the combined clean and final passes of the training set along with KITTI and HD1K data."). Do you have plans to do so? Thanks.

    opened by askerlee 2
  • FastFlowNet - Hard and un-reproducible convergence

    Hi Henrique,

    Thank you so much for sharing this collection of optical models. It helps me get on with these models quickly.

    I've been training FastFlowNet on the FlyingChairs dataset with your default configuration. I found that convergence is difficult and usually not reproducible. Sometimes the training converges after 16 epochs (45k steps), sometimes after 47 epochs (130k steps), and sometimes it does not converge at all.

    I'm attaching the loss curves for convergence starting at 16 epochs and at 47 epochs as examples.

    (loss curve: convergence starting at 16 epochs)

    (loss curve: convergence starting at 47 epochs)

    Did you see this phenomenon when you were training the model?

    Besides, I compared your loss calculation with the ones in FastFlowNet's and PWC-Net's original papers. In both papers, the loss of each pyramid level is multiplied by a weight from the following sequence:

    self._weights = [0.005, 0.01, 0.02, 0.08, 0.32]
    

    with 0.005 multiplying the loss of the highest-resolution pyramid level (at 1/4 of the original image resolution) and 0.32 multiplying the loss of the lowest-resolution level (at 1/64 of the original resolution).

    In your implementation, the weight sequence is reversed and the values are replaced with a proportional sequence, i.e.:

    self._weights = [0.32, 0.16, 0.08, 0.04, 0.02]
    

    So in your implementation, 0.32 is applied to the highest-resolution pyramid level.

    Do you have any reason for making this change? Is it because the original weight sequence is even harder to make converge?

    I would really appreciate it if you could advise.

    Best Regards! David

    opened by magsail 1
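
    To make the weight discussion above concrete, here is a generic multi-scale loss sketch (not the repository's actual implementation): each pyramid level's endpoint error is scaled by the corresponding weight, so reversing the sequence shifts the emphasis between coarse and fine levels.

    import torch
    import torch.nn.functional as F

    def multiscale_loss(flow_preds, flow_gt, weights=(0.32, 0.16, 0.08, 0.04, 0.02)):
        # flow_preds: list of predicted flows, highest resolution first (assumption).
        # flow_gt: ground-truth flow at full resolution, shape (B, 2, H, W).
        total = 0.0
        for pred, w in zip(flow_preds, weights):
            # Downsample the ground truth to the prediction's resolution and
            # rescale the flow magnitudes by the same factor.
            scale = pred.shape[-1] / flow_gt.shape[-1]
            gt = F.interpolate(flow_gt, size=pred.shape[-2:], mode='bilinear',
                               align_corners=False) * scale
            total = total + w * torch.norm(pred - gt, p=2, dim=1).mean()
        return total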