Merel-MoCap-GAIL

Overview

An implementation of Merel et al.'s paper on generative adversarial imitation learning (GAIL) using motion capture (MoCap) data:

Learning human behaviors from motion capture by adversarial imitation
Josh Merel, Yuval Tassa, Dhruva TB, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg Wayne, Nicolas Heess
arXiv preprint arXiv:1707.02201, 2017

Acknowledgements

This code is based on an earlier version developed by Ruben Villegas.

Clone the Repository

This repo contains two submodules (baselines and dm_control), so make sure you clone with --recursive:

git clone --recursive https://github.com/ywchao/merel-mocap-gail.git

Installation

Make sure the following are installed.

  • Our own branch of baselines provided as a submodule

    1. Change the directory:

      cd baselines
    2. Go through the installation steps in the baselines README without re-cloning the repo.

  • An old version of dm_control provided as a submodule

    1. Change the directory:

      cd dm_control
    2. Go through the installation steps in the dm_control README without re-cloning the repo. This requires installing MuJoCo. Also make sure to install the cloned version:

      pip install .

    Note that we have only tested with this version; the code might work with newer versions, but this is not guaranteed. A quick import check is sketched after this list.

  • Matplotlib
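
As a quick check that the submodules and MuJoCo are wired up correctly, the sketch below (not part of the repo's scripts) loads the humanoid_CMU run environment, the name that appears in the training output directories in the steps below, and steps it once. The exact observation keys may differ between dm_control versions.

  import numpy as np
  from dm_control import suite  # the pinned submodule version

  # Load the environment used for training: the humanoid_CMU domain, run task.
  env = suite.load(domain_name="humanoid_CMU", task_name="run")
  timestep = env.reset()

  # Step once with a zero action to confirm the MuJoCo bindings work.
  action_spec = env.action_spec()
  timestep = env.step(np.zeros(action_spec.shape))
  print("observation keys:", list(timestep.observation.keys()))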

Training and Visualization

  1. Download the CMU MoCap dataset:

    ./scripts/download_cmu_mocap.sh

    This will populate the data folder with cmu_mocap.

  2. Preprocess data. We use the walk sequences from subject 8 as described in the paper.

    ./scripts/data_collect.sh

    The output will be saved in data/cmu_mocap.npz. A sketch for inspecting this file with NumPy follows the step list.

  3. Visualize the processed MoCap sequences in dm_control:

    ./scripts/data_visualize.sh

    The output will be saved in data/cmu_mocap_vis.

  4. Start training:

    ./scripts/train.sh 0 1

    Note that:

    • The first argument sets the random seed, and the second argument sets the number of MoCap sequences to use.
    • For now we use only sequence 1; training with all sequences is covered in step 7.
    • The command above runs training with random seed 0. In practice we recommend running multiple training jobs with different seeds in parallel, as the training outcome is often sensitive to the seed value; a sketch for launching several seeds follows the step list.

    The output will be saved in output.

  5. Monitor training with TensorBoard:

    tensorboard --logdir=output --port=6006

    Below are the curves of episode length, rewards, and true rewards, obtained with four different random seeds (a sketch for plotting such curves with matplotlib follows the step list):

  6. Visualize trained humanoid:

    ./scripts/visualize.sh \
      output/trpo_gail.obs_only.transition_limitation_1.humanoid_CMU_run.g_step_3.d_step_1.policy_entcoeff_0.adversary_entcoeff_0.001.seed_0.num_timesteps_5.00e+07/checkpoints/model.ckpt-30000 \
      output/trpo_gail.obs_only.transition_limitation_1.humanoid_CMU_run.g_step_3.d_step_1.policy_entcoeff_0.adversary_entcoeff_0.001.seed_0.num_timesteps_5.00e+07/vis_model.ckpt-30000.mp4 \
      0 \
      1

    The arguments are the model checkpoint path, the output video (mp4) path, the random seed, and the number of sequences used.

    Below is a sample visualization:

  7. To train with all sequences from subject 8, replace 1 with -1 in step 4:

    ./scripts/train.sh 0 -1

    Similarly, for visualization, replace 1 with -1 and update the paths:

    ./scripts/visualize.sh \
      output/trpo_gail.obs_only.transition_limitation_-1.humanoid_CMU_run.g_step_3.d_step_1.policy_entcoeff_0.adversary_entcoeff_0.001.seed_0.num_timesteps_5.00e+07/checkpoints/model.ckpt-50000 \
      output/trpo_gail.obs_only.transition_limitation_-1.humanoid_CMU_run.g_step_3.d_step_1.policy_entcoeff_0.adversary_entcoeff_0.001.seed_0.num_timesteps_5.00e+07/vis_model.ckpt-50000.mp4 \
      0 \
      -1

    Note that training takes longer to converge when using all sequences:

    A sample visualization:
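
For step 2, the preprocessed file can be inspected with NumPy. This is a minimal sketch rather than part of the repo's scripts; the array names stored in data/cmu_mocap.npz depend on the preprocessing code, so they are printed rather than assumed.

  import numpy as np

  # Open the preprocessed MoCap data produced by ./scripts/data_collect.sh.
  data = np.load("data/cmu_mocap.npz", allow_pickle=True)

  # List the arrays stored in the archive together with their shapes.
  for name in data.files:
      print(name, data[name].shape)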
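
For step 4, one simple way to launch several training jobs with different seeds in parallel, as recommended above, is a small Python wrapper around scripts/train.sh. This is a sketch under the assumption that your machine can run the jobs concurrently; adjust the seed list and the number of sequences as needed.

  import subprocess

  SEEDS = [0, 1, 2, 3]   # seeds to try; the training outcome is sensitive to this value
  NUM_SEQUENCES = "1"    # use "-1" to train on all subject-8 sequences

  # Launch one training job per seed; each run writes to its own directory under output/.
  procs = [
      subprocess.Popen(["./scripts/train.sh", str(seed), NUM_SEQUENCES])
      for seed in SEEDS
  ]
  for p in procs:
      p.wait()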
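
For step 5, the logged scalars can also be pulled into matplotlib (already a dependency) for custom plots, for example to overlay curves from several seeds. This is a sketch, not part of the repo; the scalar tag names and the layout of event files under output depend on the baselines logger, so the snippet prints the available tags, and the tag name in the example call is only an assumption.

  import glob
  import matplotlib.pyplot as plt
  from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

  def plot_scalar(run_glob, tag):
      """Plot one scalar tag from every run directory matching run_glob."""
      for run_dir in sorted(glob.glob(run_glob)):
          acc = EventAccumulator(run_dir)
          acc.Reload()
          print(run_dir, "available tags:", acc.Tags()["scalars"])
          if tag in acc.Tags()["scalars"]:
              events = acc.Scalars(tag)
              plt.plot([e.step for e in events], [e.value for e in events], label=run_dir)
      plt.xlabel("step")
      plt.ylabel(tag)
      plt.legend()
      plt.show()

  # Example call: compare runs with different seeds. "EpLenMean" is an assumed tag name;
  # check the printed tag list, and point the glob at the directories holding event files.
  plot_scalar("output/*seed_*", tag="EpLenMean")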
