Learning Time-Critical Responses for Interactive Character Control

Abstract

This code implements the paper Learning Time-Critical Responses for Interactive Character Control. The system implements a teacher-student framework to learn time-critically responsive policies, which guarantee the time-to-completion between user inputs and their associated responses regardless of the size and composition of the motion database. The code is written in Java and Python, based on TensorFlow 1.14.
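
As a rough illustration of the teacher-student idea, the sketch below (plain NumPy, not the repository's actual TensorFlow code) trains a toy student policy by supervised regression onto a frozen teacher's actions; the state vector stands in for the character state plus the remaining time budget, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
W_teacher = rng.standard_normal((5, 2))   # frozen stand-in for the trained teacher

def teacher_policy(state):
    # The teacher maps states to actions; here just a fixed nonlinear map.
    return np.tanh(state @ W_teacher)

W_student = np.zeros((5, 2))              # linear student, for illustration only
for _ in range(1000):
    state = rng.standard_normal((64, 5))  # last feature: remaining time-to-completion
    target = teacher_policy(state)        # the teacher's action is the training label
    pred = state @ W_student
    W_student -= 0.1 * state.T @ (pred - target) / 64  # SGD step on mean squared error
```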

Publications

Kyungho Lee, Sehee Min, Sunmin Lee, and Jehee Lee. 2021. Learning Time-Critical Responses for Interactive Character Control. ACM Trans. Graph. 40, 4, 147. (SIGGRAPH 2021)

Project page: http://mrl.snu.ac.kr/research/ProjectAgile/Agile.html

Paper: http://mrl.snu.ac.kr/research/ProjectAgile/AGILE_2021_SIGGRAPH_author.pdf

Youtube: https://www.youtube.com/watch?v=rQKuvxg5ZHc

How to install

This code is implemented in Java and Python, and was developed using Eclipse on Windows. A 64-bit Windows environment is required to run it.

Requirements

Install JDK 1.8

Java SE Development Kit 8 Downloads

Install Eclipse

Install Eclipse IDE for Java Developers

Install Python 3.6

https://www.python.org/downloads/release/python-368/

Install PyDev in Eclipse

https://www.pydev.org/download.html

Install CUDA 10.0 and cuDNN for CUDA 10.0

CUDA Toolkit 10.0 Archive

NVIDIA cuDNN

Install Visual C++ Redistributable for VS2012

Laplacian Motion Editing (PmQmJNI.dll) is implemented in C++, and the Visual C++ 2012 runtime is required to run it.

Visual C++ Redistributable for Visual Studio 2012 Update 4
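
For intuition only: Laplacian editing deforms a motion path so that pinned samples reach new positions while the local shape (second differences) is preserved. The 1D least-squares sketch below illustrates that standard formulation; it is an assumption-laden illustration, not the actual C++ implementation inside PmQmJNI.dll.

```python
import numpy as np

def laplacian_edit(path, constraints, w=10.0):
    """Deform `path` (n,) so pinned samples hit new values while
    second differences (local shape) are preserved."""
    n = len(path)
    # Discrete Laplacian: (n-2) x n second-difference operator.
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i], L[i, i + 1], L[i, i + 2] = 1.0, -2.0, 1.0
    A = [L]
    b = [L @ path]                       # preserve the original local detail
    for idx, value in constraints.items():
        row = np.zeros(n)
        row[idx] = w
        A.append(row[None, :])
        b.append(np.array([w * value]))  # soft positional constraint
    A = np.vstack(A)
    b = np.concatenate(b)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Pin the endpoints of a straight path to new values; the interior follows smoothly.
edited = laplacian_edit(np.linspace(0.0, 1.0, 20), {0: 0.0, 19: 2.0})
```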

Install JEP (Java Embedded Python)

Java Embedded Python

This library requires parts of a Visual Studio installation. It is unclear exactly which components are needed; likely candidates are the .NET Framework 3.5 and the VC++ 2015.3 v14.00 (v140) toolset. Installing Visual Studio 2017 or later may help.

Install TensorFlow 1.14.0

pip install tensorflow-gpu==1.14.0
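
After installing, an optional sanity check (standard TF 1.x API) can confirm the version and GPU visibility:

```python
import tensorflow as tf

print(tf.__version__)              # expected: 1.14.0
print(tf.test.is_gpu_available())  # expected: True with CUDA 10.0 + cuDNN installed
```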

Install this repository

We recommend cloning through Git in the Eclipse environment.

  1. Open the Git Perspective in Eclipse
  2. Paste the repository URL and clone the repository ( 'https://git.ncsoft.net/scm/private_khlee/private-khlee-test.git' )
  3. Select all projects in the Working Tree
  4. Right-click, select Import Projects, and import the existing Eclipse projects.

Or you can download the repository as a ZIP file, extract it, and import it using File->Import->General->Existing Projects into Workspace in Eclipse.

Install third party library

This code uses Interactive Character Animation by Learning Multi-Objective Control to train the student policy.

Download the required third-party library files (ThirdPartyDlls.zip) and extract them into the mrl.motion.critical folder.

Dataset

The full dataset used in the paper cannot be published due to copyright issues; this repository contains only a minimal motion dataset for algorithm validation. The SNU Motion Database was used for the martial arts movements, and the CMU Motion Database for locomotion.

How to run

Eclipse

All of the instructions below assume execution from Eclipse. Executable Java files are grouped in the package mrl.motion.critical.run of the project mrl.motion.critical.

  • You can open a source file directly with Ctrl+Shift+R.
  • You can run the currently open source file with Ctrl+F11.
  • You can configure program arguments in the Run->Run Configurations menu.

Pre-trained student policy

You can see the pre-trained network by running RuntimeMartialArtsControlModule.java. The pre-trained network file is located at mrl.python.neural\train\martial_arts_sp_da.

  • 1, 2: walk, run
  • 3, 4, 5, 6: martial arts actions
  • q, w, e, r, t: control the critical response time

How to train

  1. Data Annotation & Configuration
    • You can check the motion data list and annotation information by executing MAnnotationRun.java.
  2. Model Configuration
    • The action list, the critical response time of each action, the user input model, and the error metric are defined in MartialArtsConfig.java.
  3. Preprocessing
    • You can precompute the data table for pruning by executing DP_Preprocessing.java.
    • The data file will be located at mrl.motion.critical\output\dp_cache.
  4. Training teacher policy
    • You can train the teacher policy by executing LearningTeacherPolicy.java.
    • The result will be located at mrl.motion.critical\train_rl.
  5. Training data for student policy
    • You can generate training data for the student policy by executing StudentPolicyDataGeneration.java.
    • The result will be located at mrl.python.neural\train
  6. Training student policy
    • You can train the student policy by executing mrl.python.neural\train_rl.py.
    • You need to set program arguments in the Run->Run Configurations menu.
      • Example argument format: martial_arts_sp new 0.0001 (see the sketch after this list).
  7. Running student policy
    • You can see the trained student policy by running RuntimeMartialArtsControlModule.java.
    • This class will load the student policy located at mrl.python.neural\train.
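
For reference, the sketch below shows how train_rl.py's three program arguments might be consumed; this is inferred only from the example above, and the actual parsing inside the script may differ.

```python
import sys

# Hypothetical argument handling, e.g. "martial_arts_sp new 0.0001".
model_name = sys.argv[1]            # name of the student policy run
mode = sys.argv[2]                  # "new" appears to start training from scratch
learning_rate = float(sys.argv[3])  # optimizer step size
print(model_name, mode, learning_rate)
```
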
Owner

Movement Research Lab

Our research group explores new ways of understanding, representing, and animating human movements.