Code accompanying our submission to the NeurIPS 2021 Traffic4cast challenge

Overview

Traffic forecasting on traffic movie snippets

This repo contains all code to reproduce our approach to the IARAI Traffic4cast 2021 challenge. In the challenge, traffic data is provided in movie format, i.e. a rasterised map with volume and average speed values evolving over time. The code is based on (and forked from) the code provided by the competition organizers, which can be found here. For further information on the data and the challenge, we also refer to the competition website or GitHub.

Installation and setup

To install the repository and all required packages, run

git clone https://github.com/NinaWie/NeurIPS2021-traffic4cast.git
cd NeurIPS2021-traffic4cast

conda env update -f environment.yaml
conda activate t4c

export PYTHONPATH="$PYTHONPATH:$PWD"

Instructions on installation with GPU support can be found in the yaml file.

To reproduce the results and train or test on the original data, download the data and extract it to the subfolder data/raw.
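
To verify that the data is in place, a file can be opened directly with h5py. The snippet below is a minimal sketch, assuming the Traffic4cast 2021 HDF5 layout (one array per file with shape (time slots, height, width, channels)); the file name is only an example and is not created by this repo.

# Minimal sanity check for the extracted raw data (sketch, not part of the repo).
# The example file name below is an assumption; adapt it to a file that exists in data/raw.
from pathlib import Path

import h5py

example_file = Path("data/raw/ANTWERP/training/2019-01-02_ANTWERP_8ch.h5")

with h5py.File(example_file, "r") as f:
    key = list(f.keys())[0]  # the challenge files contain a single dataset
    data = f[key]
    # Expected something like (288, 495, 436, 8): 288 five-minute slots,
    # a 495 x 436 grid, and 8 channels (volume and speed per heading).
    print("dataset:", key, "shape:", data.shape, "dtype:", data.dtype)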

Test model

Download the weights of our best model here and put them in a new folder named trained_models in the main directory. The path to the checkpoint should then be NeurIPS2021-traffic4cast/trained_models/ckpt_upp_patch_d100.pt.

To create a submission on the test data, run

DEVICE=cpu
DATA_RAW_PATH="data/raw"
STRIDE=10

python baselines/baselines_cli.py --model_str=up_patch --resume_checkpoint='trained_models/ckpt_upp_patch_d100.pt' --radius=50 --stride=$STRIDE --epochs=0 --batch_size=1 --num_workers=0 --data_raw_path=$DATA_RAW_PATH --device=$DEVICE --submit

Notes:

  • For our best submission (score 59.93), a stride of 10 is used. This means that patches are extracted from the test data in a very densely overlapping manner; see the sketch after these notes. However, many more patches per sample have to be predicted, so the runtime increases significantly. We therefore recommend using a stride of 50 for testing (score 60.13 on the leaderboard).
  • In our paper, we define d as the side length of each patch. In this codebase, we set a radius instead. The best-performing model was trained with radius 50, corresponding to d=100.
  • The --submit flag must be added to the arguments whenever a submission should be created.
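
The following sketch illustrates the patching scheme described in the notes above. It is an illustration only, not the repository's implementation: a frame is cut into overlapping patches of side length d = 2 * radius, a hypothetical per-patch prediction function is applied, and overlapping predictions are averaged back into the full frame.

# Sketch of the overlapping-patch prediction scheme (illustration, not the repo's code).
import numpy as np

def predict_patched(frame, model_fn, radius=50, stride=10):
    """Predict a full frame by averaging overlapping per-patch predictions."""
    d = 2 * radius  # patch side length, d = 100 for radius 50
    h, w, c = frame.shape
    out_sum = np.zeros((h, w, c), dtype=np.float32)
    counts = np.zeros((h, w, 1), dtype=np.float32)
    ys = list(range(0, h - d + 1, stride))
    xs = list(range(0, w - d + 1, stride))
    # cover the borders even if (h - d) is not a multiple of the stride
    if ys[-1] != h - d:
        ys.append(h - d)
    if xs[-1] != w - d:
        xs.append(w - d)
    for y in ys:
        for x in xs:
            pred = model_fn(frame[y:y + d, x:x + d])  # hypothetical per-patch prediction
            out_sum[y:y + d, x:x + d] += pred
            counts[y:y + d, x:x + d] += 1.0
    return out_sum / counts  # average over all overlapping patches

A smaller stride (e.g. 10) produces many more patches per frame than a stride of 50, which is why the runtime increases so much for the best-scoring setting.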

Train

To train a model from scratch with our approach, run

DEVICE=cpu
DATA_RAW_PATH="data/raw"

python baselines/baselines_cli.py --model_str=up_patch --radius=50 --epochs=1000 --limit=100 --val_limit=10 --batch_size=8 --checkpoint_name='_upp_50_retrained' --num_workers=0 --data_raw_path=$DATA_RAW_PATH --device=$DEVICE

Notes:

  • The model will be saved in a folder called ckpt_upp_50_retrained, as specified with the checkpoint_name argument. The checkpoints will be saved every 50 epochs and whenever a better validation score is achieved (best.pt). Later, training can be resumed (or the model can be tested) by setting --resume_checkpoint='ckpt_upp_50_retrained/best.pt'.
  • No submission will be created after the run. Add the flag --submit in order to create a submission.
  • The stride argument is not necessary for training, since it is only relevant for test data. The validation MSE is computed on the patches, not a full city.
  • In order to use our dataset, the number of workers must be set to 0; otherwise, the random seed is set such that the same files are loaded in every epoch. This is due to the setup of the PatchT4CDataset, where files are randomly loaded every epoch and then kept in memory (a sketch of this loading scheme follows these notes).
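
The snippet below sketches the loading behaviour described in the last note. It is an illustration with hypothetical names (EpochSubsetDataset, load_fn), not the actual PatchT4CDataset: a new random subset of files is drawn each epoch and kept in memory, which is why the sampling has to happen in the main process (num_workers=0).

# Illustration of per-epoch random file loading kept in memory (sketch only).
import random

import numpy as np
import torch
from torch.utils.data import Dataset


class EpochSubsetDataset(Dataset):  # hypothetical name, for illustration only
    """Keeps a random subset of files in memory and re-draws it every epoch."""

    def __init__(self, file_paths, files_per_epoch=100, load_fn=np.load):
        self.file_paths = list(file_paths)
        self.files_per_epoch = files_per_epoch
        self.load_fn = load_fn
        self.samples = []
        self.resample()

    def resample(self):
        # Draw a new random subset of files and load their contents into memory.
        chosen = random.sample(self.file_paths, k=min(self.files_per_epoch, len(self.file_paths)))
        self.samples = [torch.from_numpy(self.load_fn(p)) for p in chosen]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]

With num_workers=0 the DataLoader uses this object directly in the main process, so re-drawing the subset once per epoch refreshes the data as intended; with multiple workers, each worker would operate on its own copy of the dataset and the refresh would not take effect consistently.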

Reproduce experiments

In our short paper, further experiments comparing model architectures and different strides are shown. To reproduce the experiment on stride values, execute the following steps:

  • Run python baselines/naive_shifted_stats.py to create artificial test data from the city of Antwerp
  • Adapt the paths in the script
  • Run python test_script.py
  • Analyse the output CSV file results_test_script.csv

For the other experiments, training and validation losses are written regularly to a file results.json during training (stored in the same folder as the checkpoints).
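
To inspect these losses, the file can be read with a few lines of Python. This is a sketch only: the key names "train_loss" and "val_loss" are assumptions, so check the structure of your results.json first.

# Sketch for plotting the logged losses; the key names below are assumptions.
import json

import matplotlib.pyplot as plt

with open("ckpt_upp_50_retrained/results.json") as f:
    results = json.load(f)

plt.plot(results["train_loss"], label="train")      # assumed key
plt.plot(results["val_loss"], label="validation")   # assumed key
plt.xlabel("epoch")
plt.ylabel("MSE")
plt.legend()
plt.savefig("loss_curves.png")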

Other approaches

  • In naive_shifted_stats we implemented a naive approach to the temporal challenge: averages from the previous year are adapted to 2020 with a simple factor that depends on the shift of the input hour. However, the statistics first have to be computed for each city.
  • Further options were added to the configs file, for example u_patch, which is the normal U-Net with patching, and models from the segmentation_models_pytorch (smp) PyPI package. For the latter, smp must be installed with pip install segmentation_models_pytorch (see the example below).
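
As an example of the smp option, a U-Net from segmentation_models_pytorch could be instantiated roughly as follows. The channel counts are assumptions based on the challenge format (12 input frames and 6 output frames with 8 channels each); adapt them to the configs in this repository.

# Example of building an smp model for this task (the channel counts are assumptions).
import segmentation_models_pytorch as smp
import torch

model = smp.Unet(
    encoder_name="resnet34",   # any smp encoder can be used
    encoder_weights=None,      # pretrained weights expect 3 input channels, so train from scratch
    in_channels=12 * 8,        # stacked input frames x channels (assumed)
    classes=6 * 8,             # stacked output frames x channels (assumed)
)

dummy = torch.zeros(1, 96, 512, 448)  # spatial size padded to a multiple of 32 for the encoder
print(model(dummy).shape)             # -> torch.Size([1, 48, 512, 448])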