PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision
Kehong Gong*, Bingbing Li*, Jianfeng Zhang*, Tao Wang*, Jing Huang, Bi Mi, Jiashi Feng, Xinchao Wang
CVPR 2022 (Oral Presentation, arXiv)
Framework
PoseTriplet contains three components: an estimator, an imitator, and a hallucinator.
The three components form a dual-loop during the training process, complementing and strengthening one another.
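As a rough illustration, the dual-loop could be organized as in the sketch below. All object and method names are placeholders for exposition only; they do not correspond to the repository's actual API.

```python
# Conceptual sketch of the dual-loop co-evolution.
# All objects and method names are illustrative placeholders,
# not the repository's actual API.

def co_evolve(estimator, imitator, hallucinator, project_to_2d,
              videos_2d, num_rounds=3):
    """One way the three components could co-evolve over several rounds."""
    for _ in range(num_rounds):
        # Loop A: the estimator lifts 2D inputs to 3D, and the imitator
        # (a physics-based RL policy) refines them into plausible motions.
        poses_3d = [estimator.predict(v) for v in videos_2d]
        imitated = [imitator.imitate(p) for p in poses_3d]

        # Loop B: the hallucinator augments the imitated motions into a
        # larger, more diverse set of trajectories, which are projected
        # back to 2D to supervise the estimator.
        hallucinated = hallucinator.generate(imitated)
        paired = [(project_to_2d(m), m) for m in hallucinated]
        estimator.train(paired)

        # The imitator and hallucinator are updated in turn, so every
        # component benefits from the others' progress.
        imitator.update(imitated)
        hallucinator.update(hallucinated)
    return estimator, imitator, hallucinator
```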
Improvement through co-evolving
Here is the imitated motion from different rounds. The estimator and imitator improve over the rounds of training, so the imitated motion becomes more accurate and realistic from round 1 to round 3.
Video demo
04806-supp.mp4
Comparison
Here we compare our results with two recent works, Yu et al. and Hu et al.
Installation
- Please refer to README_env.md for the Python environment setup.
Data Preparation
- Please refer to estimator/README.md for the preparation of the dataset files.
Training
Please refer to script-summary for the training process. We also provide a checkpoint folder here with better performance, which suggests that this framework has the potential to reach the same performance as fully-supervised approaches.
Note: the checkpoint for the RL policy is not included due to the size limitation; please follow the training code to train the policy.
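If you only want to evaluate the released estimator weights, loading them might look roughly like the snippet below; the file name and the state-dict layout are assumptions, so check the checkpoint folder and the training scripts for the actual format.

```python
# Rough sketch of loading a released estimator checkpoint.
# The path and the state-dict layout are assumptions; consult the
# checkpoint folder and training scripts for the actual format.
import torch

ckpt = torch.load('checkpoint/ckpt_best.pth', map_location='cpu')  # hypothetical file name
state_dict = ckpt.get('model_state_dict', ckpt)  # some checkpoints wrap the weights

model = build_estimator()            # placeholder for the repo's model constructor
model.load_state_dict(state_dict)
model.eval()
```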
Inference
We provide inference code here. Please follow the instructions and download the pretrained model to run inference on videos.
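For orientation, a video-inference pipeline typically looks like the sketch below: an off-the-shelf 2D detector produces per-frame keypoints, which the pretrained estimator lifts to 3D. The function names and the padding scheme are illustrative assumptions, not the repository's actual entry points.

```python
# Illustrative video-inference pipeline (function names are placeholders;
# follow the inference instructions in the repo for the actual entry point).
import numpy as np

def infer_video(video_path, detector_2d, estimator_3d, receptive_field=27):
    """Lift per-frame 2D keypoints from a video to 3D poses."""
    keypoints_2d = detector_2d(video_path)        # (T, J, 2) array of 2D joints

    # Pad the sequence so a temporal model sees full context at both ends
    # (a common choice for video-based lifters; the actual padding may differ).
    pad = receptive_field // 2
    padded = np.concatenate([keypoints_2d[:1].repeat(pad, axis=0),
                             keypoints_2d,
                             keypoints_2d[-1:].repeat(pad, axis=0)], axis=0)

    poses_3d = estimator_3d(padded)               # (T, J, 3) array of 3D joints
    return poses_3d
```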
Talk
Here is a slidestalk presentation (slides in English, spoken in Chinese).
Citation
If you find this code useful for your research, please consider citing the following paper:
@inproceedings{gong2022posetriplet,
title = {PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision},
author = {Gong, Kehong and Li, Bingbing and Zhang, Jianfeng and Wang, Tao and Huang, Jing and Mi, Michael Bi and Feng, Jiashi and Wang, Xinchao},
booktitle = {CVPR},
year = {2022}
}