Dressing in Order (DiOr)
The official implementation of "Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing" by Aiyu Cui, Daniel McKee, and Svetlana Lazebnik (ICCV 2021).
- [2021/08] Please check the latest version of our paper for updated and clarified implementation details.
  - Clarification: the facial component was not added to the skin encoding, contrary to what our CVPR 2021 workshop paper stated due to a minor typo. However, this doesn't affect our conclusions or the comparison with prior work, because the skin encoding is an independent design component.
- [2021/07] To appear in ICCV 2021.
- [2021/06] Best paper award at the Computer Vision for Fashion, Art and Design Workshop, CVPR 2021.
Supported Try-on Applications
Supported Editing Applications
More results
Play with `demo.ipynb`!
Get Started
Please follow the installation instructions in GFLA to set up the environment. Then run

```bash
pip install -r requirements.txt
```

If you only want to run inference, you can use a later version of PyTorch and you don't need to install GFLA's custom CUDA functions; just specify `--frozen_flownet`.
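For example, an inference-only setup might look like the sketch below; the entry-script name is illustrative (not from this repo's docs), and the point is simply that `--frozen_flownet` is appended to whatever command you run:

```bash
# Inference-only setup (sketch): a recent stock PyTorch is fine, and GFLA's
# custom CUDA ops never need to be compiled when the flow network stays frozen.
pip install torch torchvision
pip install -r requirements.txt

# Append --frozen_flownet to your inference command
# (script name below is illustrative, not part of this repo's docs):
python run_inference.py --frozen_flownet
```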
Dataset
We run experiments on the DeepFashion dataset. To set up the dataset (a consolidated command sketch follows this list):

- Download and unzip `img_highres.zip` from the DeepFashion In-shop dataset into `$DATA_ROOT`.
- Download the train/val split and the pre-processed keypoint annotations from the GFLA source or the PATN source, and put the `.csv` and `.lst` files in `$DATA_ROOT`.
- Run `python tools/generate_fashion_dataset.py` to split the data. (Please specify `$DATA_ROOT` accordingly.)
- Get the human parsing for every image (LIP labels; these populate the `trainM_lip` and `testM_lip` folders shown below).
- Download `standard_test_anns.txt` for fast visualization.
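Putting the steps above together, a minimal sketch (all paths and archive locations are placeholders; adjust `$DATA_ROOT` for your machine):

```bash
# Dataset setup sketch; every path here is a placeholder.
export DATA_ROOT=/path/to/deepfashion

# 1. In-shop images downloaded from DeepFashion
unzip img_highres.zip -d $DATA_ROOT

# 2. Split files and keypoint annotations from the GFLA/PATN sources
cp fashion-pairs-*.csv fashion-annotation-*.csv train.lst test.lst $DATA_ROOT/

# 3. Split images into train/ and test/
python tools/generate_fashion_dataset.py
```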
After processing, your dataset folder should be formatted like:
```
+ $DATA_ROOT
| + train (all training images)
| | - xxx.jpg
| | ...
| + trainM_lip (human parse of all training images)
| | - xxx.png
| | ...
| + test (all test images)
| | - xxx.jpg
| | ...
| + testM_lip (human parse of all test images)
| | - xxx.png
| | ...
| - fashion-pairs-train.csv (paired poses for training)
| - fashion-pairs-test.csv (paired poses for test)
| - fashion-annotation-train.csv (keypoints for training images)
| - fashion-annotation-test.csv (keypoints for test images)
| - train.lst
| - test.lst
| - standard_test_anns.txt
```
Run Demo
Please download the pretrained weights from here and unzip them at `checkpoints/`.

After downloading the pretrained model and setting up the data, you can try out our applications in the notebook `demo.ipynb`.

(The checkpoints above are reproduced, so quantitative evaluation may differ slightly from the reported results. To get the original results, please check our released generated images here.)

(`DIORv1_64` was trained with a minor difference in code, but it may give better visual results in some applications. To try it, specify `--netG diorv1`; see the sketch below.)
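A short setup sketch (the archive name is a placeholder, and any Jupyter front end works):

```bash
# Fetch and unpack the pretrained weights, then open the demo notebook (sketch).
mkdir -p checkpoints
unzip dior_pretrained_weights.zip -d checkpoints/  # archive name is a placeholder
jupyter notebook demo.ipynb                        # set --netG diorv1 in the demo's
                                                   # options to try DIORv1_64
```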
Training
Warmup the Global Flow Field Estimator
Note: if you don't want to warm up the Global Flow Field Estimator yourself, you can download the pretrained GFLA weights from here and extract the estimator's weights from them (a sketch of this option follows the command below).
Otherwise, run

```bash
sh scripts/run_pose.sh
```
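If you take the pretrained-GFLA route, placing the weights might look like this sketch; the destination folder and filename are assumptions, so check `scripts/run_train.sh` for the checkpoint names the pipeline actually loads:

```bash
# Reuse GFLA's pretrained flow network instead of warming up (sketch).
# Destination path and filename below are assumptions, not from this repo's docs.
mkdir -p checkpoints/$EXP_NAME
cp /path/to/gfla_flow_weights.pth checkpoints/$EXP_NAME/
```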
Training
After warming up the flownet, train the pipeline with

```bash
sh scripts/run_train.sh
```

Run `tensorboard --logdir checkpoints/$EXP_NAME/train` to monitor training. Resetting the discriminators may help when training gets stuck in a local minimum.
Evaluations
To download our generated images (256x176, as reported in the paper): here.
SSIM, FID and LPIPS
To run evaluation (SSIM, FID, and LPIPS) on the pose transfer task:

```bash
sh scripts/run_eval.sh
```
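As a sanity check independent of the repo's script, FID between two image folders can be computed with the common `pytorch-fid` package; this is an assumption of tooling rather than the paper's exact protocol, so numbers may differ slightly:

```bash
# Independent FID check between ground-truth and generated folders (sketch).
# $RESULTS_DIR is a placeholder for wherever your generated images land.
pip install pytorch-fid
python -m pytorch_fid $DATA_ROOT/test $RESULTS_DIR
```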
Cite us!
If you find this work helpful, please consider starring this repo and citing us:
```bibtex
@article{cui2021dressing,
  title={Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing},
  author={Cui, Aiyu and McKee, Daniel and Lazebnik, Svetlana},
  journal={arXiv preprint arXiv:2104.07021},
  year={2021}
}
```
Acknowledgements
This repository is built upon GFLA, pytorch-CycleGAN-and-pix2pix, PATN, and MUNIT. Please be aware of their licenses when using the code.
Many thanks to these pioneering researchers for their great work!