
Overview

ASL-Skeleton3D and ASL-Phono Datasets Generator


The ASL-Skeleton3D contains a representation based on mapping the coordinates of the signers in the ASLLVD dataset into three-dimensional space. The ASL-Phono, in turn, introduces a novel linguistics-based representation, which describes the signs in the ASLLVD dataset in terms of a set of attributes of American Sign Language phonology.

This is the source code used to generate the ASL-Skeleton3D and ASL-Phono datasets, which are based on the American Sign Language Lexicon Video Dataset (ASLLVD).

Learn more about the datasets:

  • Paper: "ASL-Skeleton3D and ASL-Phono: Two Novel Datasets for the American Sign Language" -> CIn

Download

Download the processed datasets by using the links below:

Generate

If you prefer generating the datasets by yourself, this section presents the requirements, setup and procedures to execute the code.

The generation process comprises the phases below, starting with the retrieval of the original ASLLVD samples and then computing additional properties from them:

  • download: original samples (video sequences) are obtained from the ASLLVD.
  • segment: signs are segmented from the original samples.
  • skeleton: signer skeletons are estimated.
  • normalize: the coordinates of the skeletons are normalized.
  • phonology: the phonological attributes are extracted.

Requirements

To generate the datasets, your system will need the following software configured:

OpenPose requires additional hardware and software to be configured, which might include an NVIDIA GPU and the related drivers. Please check this link for the full list.

Recommended

If you prefer running a Docker container with the software requirements already configured, check out the link below -- just make sure to have a GPU available to your Docker environment:
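
As a rough illustration only -- the actual image name and tag come from the link above and are not reproduced here -- running such a container with GPU access typically relies on the NVIDIA Container Toolkit and Docker's --gpus flag:

# <image-with-the-requirements> is a placeholder for the published image
$ docker run --gpus all -it --rm \
    -v "$(pwd)":/workspace \
    <image-with-the-requirements> bash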

Installation

Once the requirements are in place, check out the source code and execute the following command, which will set up your virtual environment and install the dependencies:

$ poetry install
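
If the entry point exposes the usual argparse help -- an assumption, since only the -c (or --config) parameter is documented here -- you can quickly verify that the environment was set up correctly:

$ poetry run python main.py --help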

Configuration

There is a set of files in the folder ./config that will help you configure the parameters for generating the datasets. A good starting point is the ./config/template.yaml file, which contains a basic structure with all the properties documented.

You will also find other predefined configurations that might help you generate the datasets. Just remember to always review the comments inside the files to fine-tune the execution for your environment.
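
For a custom run, one simple approach is to copy the template and point the generator at your copy (the file name below is just an example):

$ cp ./config/template.yaml ./config/my-config.yaml
$ poetry run python main.py -c ./config/my-config.yaml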

Generation

ASL-Skeleton3D

The ASL-Skeleton3D is generated using the configuration predefined in the file ./config/asl-skeleton3d.yaml. To start processing the dataset, execute the following command, passing this file via the parameter -c (or --config):

$ poetry run python main.py -c ./config/asl-skeleton3d.yaml

The resulting dataset will be located in the folder configured as output for the phase normalize, which by default is set to ../work/dataset/normalized.

ASL-Phono

The ASL-Phono is generated using the configuration predefined in the file ./config/asl-phono.yaml. To start processing the dataset, execute the following command, passing this file via the parameter -c (or --config):

$ poetry run python main.py -c ./config/asl-phono.yaml

The resulting dataset will be located in the folder configured as output for the phase phonology, which by default is set to ../work/dataset/phonology.

Logs

The logs from the dataset processing will be recorded in the file ./output.log.
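
Generating the datasets can take a while, so a convenient way to follow the progress is to tail that file during the execution:

$ tail -f ./output.log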

Deprecated datasets

Previously, we introduced the dataset ASLLVD-Skeleton, which is now being replaced by the ASL-Skeleton3D. Read more about the old dataset in the links:

Citation

Please cite the following paper if you use this repository in your research.

@article{asl-datasets-2021,
  title     = {ASL-Skeleton3D and ASL-Phono: Two Novel Datasets for the American Sign Language},
  author    = {Cleison Correia de Amorim and Cleber Zanchettin},
  year      = {2021},
}

Contact

For any questions, feel free to contact me at:
