Efficient Two-Step Networks for Temporal Action Segmentation (Neurocomputing 2021)


This repository provides a PyTorch implementation of the paper Efficient Two-Step Networks for Temporal Action Segmentation.

Requirements

* Python 3.8.5
* PyTorch 1.8.1

You can install the required packages with requirements.txt:
pip install -r requirements.txt
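
To confirm the environment is set up, you can run a quick check (the version string below only reflects the install listed above; CUDA availability depends on your machine):

# check_env.py -- sanity check of the PyTorch installation (illustrative only)
import torch

print("PyTorch version:", torch.__version__)   # expected: 1.8.1
print("CUDA available:", torch.cuda.is_available())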

Datasets

  • Download the data provided by MS-TCN, which contains the I3D features (without fine-tuning) and the ground-truth labels for the three datasets (~30 GB).
  • Extract it so that the dataset folder sits in the same directory as train.py.

Directory structure

├── config
│   ├── 50salads
│   ├── breakfast
│   └── gtea
├── csv
│   ├── 50salads
│   ├── breakfast
│   └── gtea
├── dataset
│   ├── 50salads/...
│   ├── breakfast/...
│   └── gtea
│       ├── features/
│       ├── groundTruth/
│       ├── splits/
│       └── mapping.txt
├── libs
├── result
├── utils
├── requirements.txt
├── train.py
├── eval.py
└── README.md
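
As a quick sanity check that the data are in place, you can load one feature/label pair. The file name below is only an example, and the (2048, num_frames) feature layout is what the MS-TCN release normally uses; adjust both to your copy of the data.

# inspect_sample.py -- inspect one GTEA sample (file name is a hypothetical example)
import numpy as np

feat = np.load("dataset/gtea/features/S1_Cheese_C1.npy")
with open("dataset/gtea/groundTruth/S1_Cheese_C1.txt") as f:
    labels = [line.strip() for line in f if line.strip()]   # one action label per frame

print("feature shape:", feat.shape)   # typically (2048, num_frames)
print("labeled frames:", len(labels))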

Training and Testing of ETSN

Setup

First, convert the ground-truth files into NumPy arrays:

python utils/generate_gt_array.py ./dataset
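
Conceptually, this step maps each per-frame label string to its class index from mapping.txt and saves the sequence as a .npy array. The sketch below only illustrates that idea; it is not the actual utils/generate_gt_array.py, and the file names and output folder are assumptions.

# gt_to_array_sketch.py -- illustrative conversion of a ground-truth text file to a numpy array
import numpy as np

# mapping.txt lines in the MS-TCN release look like "<class_id> <class_name>"
with open("dataset/gtea/mapping.txt") as f:
    class_to_id = {name: int(idx) for idx, name in (line.split() for line in f if line.strip())}

with open("dataset/gtea/groundTruth/S1_Cheese_C1.txt") as f:   # hypothetical sample
    frame_labels = [line.strip() for line in f if line.strip()]

gt_array = np.array([class_to_id[name] for name in frame_labels], dtype=np.int64)
np.save("gt_arr/S1_Cheese_C1.npy", gt_array)   # output location is an assumption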

Then, run the script below to generate the csv files for the data loader:

python utils/builda_dataset.py ./dataset
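
The generated csv files essentially enumerate feature/ground-truth path pairs so the data loader knows what to read. A rough sketch of that idea follows (the real utils/builda_dataset.py may record additional columns such as split membership):

# build_csv_sketch.py -- sketch of listing feature / ground-truth pairs for the data loader
import csv
from pathlib import Path

dataset_dir = Path("dataset/gtea")
rows = []
for feat_path in sorted((dataset_dir / "features").glob("*.npy")):
    gt_path = dataset_dir / "groundTruth" / (feat_path.stem + ".txt")
    rows.append({"feature": str(feat_path), "label": str(gt_path)})

with open("csv/gtea/all_samples.csv", "w", newline="") as f:   # output name is an assumption
    writer = csv.DictWriter(f, fieldnames=["feature", "label"])
    writer.writeheader()
    writer.writerows(rows)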

Training

You can train a model by adjusting the settings in the corresponding configuration file and running:

python train.py ./config/xxx/xxx/config.yaml
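
Training is driven entirely by the YAML file you pass in, so running another experiment usually means copying a config and editing a few values. The snippet below is only an illustration of that workflow; the key names (e.g. "split") and paths are assumptions, not the repository's exact schema.

# clone_config_sketch.py -- illustrative: derive a config for another split (keys are assumptions)
import copy
import yaml

with open("config/gtea/split1/config.yaml") as f:   # hypothetical path layout
    base_cfg = yaml.safe_load(f)

cfg = copy.deepcopy(base_cfg)
cfg["split"] = 2                                    # hypothetical key
with open("config/gtea/split2/config.yaml", "w") as f:
    yaml.safe_dump(cfg, f)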

Evaluation

After training, you can evaluate the resulting model by running:

python eval.py ./result/xxx/xxx/config.yaml test

We also provide trained ETSN models on Google Drive. Extract the archive so that the result folder is in the same directory as train.py.
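
For reference, temporal action segmentation is usually reported with frame-wise accuracy, segmental edit score, and segmental F1@{10, 25, 50}, computed here with the MS-TCN evaluation code. A minimal frame-wise accuracy computation over aligned prediction and ground-truth arrays looks like this (eval.py itself reports the full metric set):

# framewise_acc_sketch.py -- minimal frame-wise accuracy, for illustration only
import numpy as np

def framewise_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of frames whose predicted class matches the ground truth."""
    assert pred.shape == gt.shape
    return float((pred == gt).mean())

# toy example
pred = np.array([0, 0, 1, 1, 2, 2])
gt = np.array([0, 0, 1, 2, 2, 2])
print(framewise_accuracy(pred, gt))   # 0.8333...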

Average cross-validation results

python utils/average_cv_results.py [result_dir]
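
Averaging simply collects the per-split scores under [result_dir] and takes their mean. The sketch below assumes each split stores its test scores as a small CSV of named metrics; the actual file layout written by eval.py may differ.

# average_cv_sketch.py -- sketch of averaging metrics across cross-validation splits
import csv
import glob
from collections import defaultdict

sums, counts = defaultdict(float), defaultdict(int)
for path in glob.glob("result/gtea/*/test_scores.csv"):   # hypothetical per-split file name
    with open(path) as f:
        for row in csv.DictReader(f):
            for metric, value in row.items():
                sums[metric] += float(value)
                counts[metric] += 1

for metric in sums:
    print(metric, sums[metric] / counts[metric])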

Citation

If you find our code useful, please cite our paper.

@article{LI2021373,
  author  = {Yunheng Li and Zhuben Dong and Kaiyuan Liu and Lin Feng and Lianyu Hu and Jie Zhu and Li Xu and Yuhan Wang and Shenglan Liu},
  journal = {Neurocomputing},
  title   = {Efficient Two-Step Networks for Temporal Action Segmentation},
  year    = {2021},
  volume  = {454},
  pages   = {373-381},
  issn    = {0925-2312},
  doi     = {10.1016/j.neucom.2021.04.121},
  url     = {https://www.sciencedirect.com/science/article/pii/S0925231221006998},
}

Contact

For any questions, please raise an issue or contact the authors.

Acknowledgement

We thank MS-TCN for the extracted I3D features, the backbone network, and the evaluation code.

We also thank Yuchi Ishikawa for sharing his PyTorch re-implementation of MS-TCN.
