Action Recognition for Self-Driving Cars

Overview


[demo image]

This repo contains the code for the 2021 Fall semester project "Action Recognition for Self-Driving Cars" at the EPFL VITA lab. For experiment results, please refer to the project report and presentation slides in docs. A demo video is available here.

This project utilizes a simple yet effective architecture (called poseact) to classify multiple actions.

The model has been tested on three datasets: TCG, TITAN and CASR.

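The classifier takes 2D poses as input; below is a minimal, illustrative sketch of what such a keypoint-based action classifier can look like. The layer sizes, class count and plain-MLP design are assumptions for illustration only, not the exact poseact architecture in models/.

import torch
import torch.nn as nn

class KeypointClassifier(nn.Module):
    """Illustrative MLP over flattened 2D keypoints (17 COCO joints: x, y, confidence)."""
    def __init__(self, n_keypoints=17, n_classes=5, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_keypoints * 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, keypoints):
        # keypoints: (batch, n_keypoints, 3) -> logits over action classes
        return self.net(keypoints.flatten(start_dim=1))

logits = KeypointClassifier()(torch.randn(8, 17, 3))  # shape (8, 5)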

Preparation and Installation

This project mainly depends on PyTorch. If you wish to start by extracting poses from images, you will also need OpenPifPaf (along with the posetrack plugin); please also refer to this section for the following steps. In case you wish to skip extracting your own poses and directly start from the poses used in this repo, you can download this folder. It contains the poses extracted from the TITAN and CASR datasets as well as a trained model for the TITAN dataset. For the poses in the TCG dataset, please refer to the official repo.

First, clone this repo and install it in editable mode. If you have downloaded the folder above, please put its contents into poseact/out/.

git clone https://github.com/vita-epfl/pose-action-recognition.git
cd pose-action-recognition
python -m pip install -e .

Project Structure and Usage

poseact
	|___ data # create this folder to store your datasets, or create a symlink 
	|___ models 
	|___ test # debug tests, may also be helpful for basic usage
	|___ tools # preprocessing and analyzing tools, usage stated in the scripts 
	|___ utils # utility functions, such as datasets, losses and metrics 
	|___ xxxx_train.py # training scripts for TCG, TITAN and CASR
	|___ python_wrapper.sh # script for submitting jobs to EPFL IZAR cluster, same for debug.sh
	|___ predictor.py  # a visualization tool with the model trained on TITAN dataset 

It's advised to cd poseact and conda activate pytorch before running the experiments.

To submit jobs to the EPFL IZAR cluster (or similar clusters managed by Slurm), you can use the script python_wrapper.sh. Just think of it as "the python on the cluster". To submit to the debug node of IZAR, you can use debug.sh.

Here is an example of training a model on the TITAN dataset. --imbalance focal means using the focal loss, --gamma 0 sets the gamma value of the focal loss to 0 (because I find 0 works better :=), --merge_cls means selecting a suitable set of actions from the original action hierarchy, and --relative_kp means using relative coordinates of the keypoints; see the presentation slides for the intuition. You can specify a name for the task with --task_name, which will be used to name the saved model if you use --save_model.

sbatch python_wrapper.sh titan_train.py --imbalance focal --gamma 0 --merge_cls --relative_kp --task_name Relative_KP --save_model
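
For intuition on --relative_kp: expressing keypoints relative to the pose itself removes the dependence on where the person appears in the image. A rough sketch of the idea is below; using the mean joint position as the reference point is an assumption, and the repo's actual transformation may differ.

import torch

def to_relative_keypoints(keypoints):
    """keypoints: (batch, n_joints, 2) absolute pixel coordinates.
    Returns coordinates centered on the pose, making the input translation-invariant."""
    center = keypoints.mean(dim=1, keepdim=True)  # assumed reference: mean joint position
    return keypoints - center

rel = to_relative_keypoints(torch.randn(8, 17, 2))  # same shape, pose-centered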

To use the temporal model, pass --model_type sequence; you may need to adjust the number of epochs, batch size and learning rate. To use the pifpaf track ID instead of the ground truth track ID, use --track_method pifpaf.

sbatch python_wrapper.sh titan_train.py --model_type sequence --num_epoch 100 --imbalance focal --track_method gt --batch_size 128 --gamma 0 --lr 0.001
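
For reference, a temporal model in this setting typically encodes each frame's keypoints and classifies the whole track. The sketch below uses an LSTM purely as an illustration; the choice of LSTM, the layer sizes and the class count are assumptions, not the repo's actual --model_type sequence implementation.

import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    """Illustrative temporal model: per-frame keypoint encoder followed by an LSTM over the track."""
    def __init__(self, n_keypoints=17, n_classes=5, hidden=128):
        super().__init__()
        self.frame_encoder = nn.Linear(n_keypoints * 2, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, tracks):
        # tracks: (batch, n_frames, n_keypoints, 2)
        feats = torch.relu(self.frame_encoder(tracks.flatten(start_dim=2)))
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])  # one set of logits per track

logits = SequenceClassifier()(torch.randn(4, 30, 17, 2))  # shape (4, 5)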

For all available training options, please refer to the comments and docstrings in the training scripts.

All the datasets have a train-validate-test split, so after training you should see a summary of the evaluation.

Here is an example:

In general, overall accuracy 0.8614 avg Jaccard 0.6069 avg F1 0.7409

For valid_action actions accuracy 0.8614 Jaccard score 0.6069 f1 score 0.9192 mAP 0.7911
Precision for each class: [0.885 0.697 0.72  0.715 0.87]
Recall for each class: [0.956 0.458 0.831 0.549 0.811]
F1 score for each class: [0.919 0.553 0.771 0.621 0.839]
Average Precision for each class is [0.9687, 0.6455, 0.8122, 0.6459, 0.883]
Confusion matrix (elements in a row share the same true label, those in the same columns share predicted):
The corresponding classes are {'walking': 0, 'standing': 1, 'sitting': 2, 'bending': 3, 'biking': 4, 'motorcycling': 4}
[[31411  1172    19   142   120]
 [ 3556  3092    12    45    41]
 [   12     1   157     0    19]
 [  231   160     3   512    26]
 [  268     9    27    17  1375]]
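
A summary like the one above can be reproduced from raw predictions with standard metrics. Here is a minimal sketch using scikit-learn on dummy labels; the repo computes its metrics in utils/ and may use its own implementations.

import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             jaccard_score, precision_score, recall_score)

y_true = np.array([0, 1, 2, 3, 4, 0, 1])  # dummy labels, for illustration only
y_pred = np.array([0, 1, 2, 3, 4, 1, 1])

print("accuracy", accuracy_score(y_true, y_pred))
print("avg Jaccard", jaccard_score(y_true, y_pred, average="macro"))
print("avg F1", f1_score(y_true, y_pred, average="macro"))
print("precision per class", precision_score(y_true, y_pred, average=None))
print("recall per class", recall_score(y_true, y_pred, average=None))
print("confusion matrix\n", confusion_matrix(y_true, y_pred))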

After training and saving the model (to out/trained/), you can use the predictor to visualize results on TITAN (all sequences). Feel free to change the checkpoint to your own trained model; only the file name is needed, because models are assumed to be in out/trained/.

sbatch python_wrapper.sh predictor.py --function titanseqs --save_dir out/recognition --ckpt TITAN_Relative_KP803217.pth

It's also possible to run on a single sequence with --function titan_single --seq_idx <Number>

or run on a single image with --function image --image_path <path/to/your/image.png>

More about the TITAN dataset

For the TITAN dataset, we first extract poses from the images with OpenPifPaf, and then match the poses to the ground truth according to the IoU of bounding boxes. After that, we store the poses sequence by sequence, frame by frame and person by person; you will find the corresponding classes in titan_dataset.py.
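
A condensed sketch of this IoU matching step is shown below. The greedy assignment and the 0.3 threshold are assumptions for illustration; the actual matching lives in the repo's tools and may differ.

import numpy as np

def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2); returns intersection-over-union."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_poses_to_gt(pose_boxes, gt_boxes, threshold=0.3):
    """Greedily assign each detected pose to the ground-truth box with the highest IoU."""
    matches = {}
    if len(gt_boxes) == 0:
        return matches
    for i, pb in enumerate(pose_boxes):
        scores = [iou(pb, gb) for gb in gt_boxes]
        j = int(np.argmax(scores))
        if scores[j] >= threshold:  # threshold value is an assumption
            matches[i] = j
    return matches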

Preparing poses for TITAN and CASR

This part may be a bit cumbersome, and it's advised to use the prepared poses in this folder. If you want to extract the poses yourself, please also download that folder, because poseact/out/titan_clip/example.png is needed as an input to OpenPifPaf.

First, install OpenPifPaf and the posetrack plugin.

For TITAN, download the dataset to poseact/data/TITAN and then

cd poseact
conda activate pytorch # activate the python environment
# run single frame pose detection, wait for the program to complete
sbatch python_wrapper.sh tools/run_pifpaf_on_titan.py --mode single --n_process 6
# run pose tracking, required for temporal model with pifpaf track ID, wait for the program to complete
sbatch python_wrapper.sh tools/run_pifpaf_on_titan.py --mode track --n_process 6
# make the pickle file for single frame model 
python utils/titan_dataset.py --function pickle --mode single
# make the pickle file from pifpaf posetrack result
python utils/titan_dataset.py --function pickle --mode track 

For CASR, you should agree to the terms and conditions required by the authors of CASR.

The CASR dataset needs some preprocessing; please create the folder poseact/scratch (or link it to the scratch space on IZAR) and then

cd poseact
conda activate pytorch # activate the python environment
sbatch tools/casr_download.sh # wait for the whole process to complete, takes a long time 
sbatch python_wrapper.sh tools/run_pifpaf_on_casr.py --n_process 6 # wait for this process to complete, again a long time 
python ./utils/casr_dataset.py # now you should have the file out/CASR_pifpaf.pkl

Credits

The poses are extracted with OpenPifPaf.

The model is inspired by MonoLoco and the heuristics are from this work.

The code for the TCG dataset is adapted from the official repo.
