3D-aware GANs based on NeRF (arXiv).

Overview

CIPS-3D

This repository will contain the code of the paper,
CIPS-3D: A 3D-Aware Generator of GANs Based on Conditionally-Independent Pixel Synthesis.

We plan to publish the training code here in December. But if the GitHub stars reach two hundred, I will move the date up. Stay tuned 🕙 .

Demo videos

demo1.mp4
demo2.mp4
demo_animal_finetuned.mp4
demo3.mp4
demo4.mp4
demo5.mp4

Mirror symmetry problem

The mirror-symmetry problem refers to the sudden change in the direction of the bangs near a yaw angle of π/2. We propose an auxiliary discriminator to solve this problem (please see the paper).

Note that in the initial stage of training, the auxiliary discriminator must dominate the generator more than the main discriminator does; otherwise, if the main discriminator dominates, the mirror-symmetry problem will still occur. In practice, progressive training guarantees this. We have trained from scratch many times, and adding the auxiliary discriminator stably solves the mirror-symmetry problem. If you find any problems with this idea, please open an issue.
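For intuition, here is a minimal sketch of a generator loss in which the auxiliary discriminator outweighs the main one early in training. This is not the repository's actual training loop; the names main_d and aux_d and the annealing schedule are assumptions.

```python
import torch.nn.functional as F

def generator_loss(main_d, aux_d, fake_imgs, step, warmup_steps=10000):
    # Non-saturating GAN loss from each discriminator.
    main_loss = F.softplus(-main_d(fake_imgs)).mean()
    aux_loss = F.softplus(-aux_d(fake_imgs)).mean()
    # Let the auxiliary discriminator dominate early in training,
    # annealing its extra weight away as training progresses (assumed schedule).
    aux_weight = max(1.0, 4.0 * (1.0 - step / warmup_steps))
    return main_loss + aux_weight * aux_loss
```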

Envs


Training


Citation

If you find our work useful in your research, please cite:


@article{zhou2021CIPS3D,
  title = {{{CIPS}}-{{3D}}: A {{3D}}-{{Aware Generator}} of {{GANs Based}} on {{Conditionally}}-{{Independent Pixel Synthesis}}},
  shorttitle = {{{CIPS}}-{{3D}}},
  author = {Zhou, Peng and Xie, Lingxi and Ni, Bingbing and Tian, Qi},
  year = {2021},
  eprint = {2110.09788},
  eprinttype = {arxiv},
  primaryclass = {cs, eess},
  archiveprefix = {arXiv}
}

Acknowledgments

Comments
  • CUDA error: out of memory

Hi, there is a CUDA error: out of memory issue (even with batch size = 1) when I try to run the training script with this command: CUDA_VISIBLE_DEVICES=2 python -c "import sys; sys.path.append('./'); from exp.tests.test_cips3d import Testing_ffhq_exp; Testing_ffhq_exp().test_train_ffhq(debug=False)" --tl_opts batch_size 1 img_size 32 total_iters 80000

I am running on a V100 GPU with 32 GB of memory. What should I do? By the way, I really appreciate your work; it's a great paper. 👏

    opened by longnhatne 7
  • Problem about reproducing the results

    Hi, PeterouZh,

I'm reproducing your results at the same pace as you. Honestly speaking, this model takes about 40 hours to reach FID 15.97 at 64x64 with 8 A100 GPUs. When I change the resolution to 128x128, the FID reaches 23.58; I'm still training it, and it has only reached FID 20.03 so far.

How can this model reach the FID of 6.XX described in the paper? Are we missing some key things? It looks like this model can only reach an FID of 10+ at 256 resolution, because performance improves very slowly once the FID reaches 16 at 64x64 resolution.

By the way, I tried to reproduce your results a few weeks ago, but I ran into problems with moxing. Does moxing provide very important tricks for this work?

    opened by 0three 7
  • The quality of generated images for FFHQ

    Hello,

Thanks for sharing your source code and pre-trained weights. I am trying to generate high-quality images from the FFHQ pre-trained model. However, the quality of the generated images is not as good as stated in the paper; I could not reproduce the results.

    I am using the pre-trained weights from here https://github.com/PeterouZh/CIPS-3D/releases/tag/v0.0.2

    The command I tried: python exp/cips3d/scripts/sample_images.py --tl_config_file exp/cips3d/configs/ffhq_exp.yaml --tl_command sample_images

Generated images: (three sample images were attached)

    Do you have any idea regarding the problem?

    opened by enisimsar 6
  • How can I get an image resolution greater than 256?

Hi! You did a great job; thanks for such a great paper and for promptly publishing the CIPS-3D code.

I've already gotten good results with your pipeline, but for images at a resolution of 64x64. Now I'm waiting for the results of generating images at a resolution of 128x128, and I will then train for higher-resolution images.

Do I understand correctly that, in order to get 512x512 images, I need to convert the original FFHQ dataset once again through your script dataset_tool.py, specifying a resize to 512? And after that, do I need to run the training pipeline with lower values for the generator and discriminator learning rates?

    Thanks!

    opened by gofixyourself 4
• > I want to test some other images on your model, but I don't know how to do it. If I have an image sequence with pose data, how do I test?

I want to test some other images on your model, but I don't know how to do it. If I have an image sequence with pose data, how do I test?

1. Align the images in the way of StyleGAN. You can refer to this script: align_images.py.
2. Project the aligned images into the W space, also known as GAN inversion. Unlike common 2D inversion, you should set an appropriate yaw/pitch/fov for the CIPS-3D generator so that the initial pose of G(w) is consistent with the image to be inverted (a rough sketch of this step follows below).
3. After you get the w of the image, you can reconstruct images of different styles using G'(w). G' can be obtained by interpolating generators of different domains.

    Hope this helps.

    Originally posted by @PeterouZh in https://github.com/PeterouZh/CIPS-3D/issues/7#issuecomment-963163677
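As a rough illustration of step 2 above, here is a hedged sketch of W-space projection with an explicit initial pose. The generator interface used here (mean_latent(), synthesis(w, yaw, pitch, fov)) is an assumed stand-in, not the repository's exact API.

```python
import torch
import torch.nn.functional as F

def project_to_w(G, target_img, yaw=0.0, pitch=0.0, fov=12.0, steps=500, lr=0.01):
    # Start from the average latent and optimize it to reconstruct target_img.
    w = G.mean_latent().clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Fix the camera so the pose of G(w) matches the image being inverted.
        pred = G.synthesis(w, yaw=yaw, pitch=pitch, fov=fov)
        loss = F.mse_loss(pred, target_img)  # an LPIPS term usually helps too
        loss.backward()
        opt.step()
    return w.detach()
```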

    opened by zhywanna 2
  • Configuration environment issues

Hi, good job!

    I have a problem, please help me.

pip install -e torch_fidelity_lib
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /media/sdb/wd/test_code/CIPS-3D/torch_fidelity_lib
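This error can simply mean the checkout is incomplete, but if the code is present and only the packaging metadata is missing, a minimal setup.py sketch like the one below would make pip install -e work. This is a guessed workaround (the package name is an assumption), not the repository's own file.

```python
# Hypothetical minimal setup.py for torch_fidelity_lib; the package
# name and layout are assumptions about the checkout, not verified.
from setuptools import setup, find_packages

setup(
    name="torch-fidelity",
    version="0.0.1",
    packages=find_packages(),
)
```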

    opened by Stephanie-ustc 2
• Can the pretrained model be used in finetune_photo2cartoon.sh?

I loaded the FFHQ pre-trained model from the pre-trained checkpoints and changed finetune_dir to point at it in finetune_photo2cartoon.sh, but it does not seem to work. I want to know whether the pre-trained model can be used in finetune_photo2cartoon.sh.

    opened by Benwang-chen 1
  • A few questions

Dear Dr. Zhou, thanks for sharing your great work, and congratulations on completing your Ph.D.! I have a few questions and hope for your reply.

1. I found a command in another issue (https://github.com/PeterouZh/CIPS-3D/issues/31#issue-1196645855): python exp/cips3d/scripts/sample_images.py --tl_config_file exp/cips3d/configs/ffhq_exp.yaml --tl_command sample_images. But I can't find those arguments in sample_images.py and am confused about how he knew to use them. I also found some packages imported from the tl2 library, but failed to find any documentation. I wonder if there are any instructions I missed in addition to the README.
2. I saw two generator files in /CIPS-3D/exp/cips3d/models, generator.py and generator_v1.py. Which one should I use?
3. Which class in the generator files is the complete generator module? I want to do some inversion tests and am not sure whether it's the class GeneratorNerfINR. And is the G_ema.pth or generator.pth in the checkpoint the corresponding parameters that I can directly load into it (a hedged loading sketch follows below)?
4. What is the use of state_dict.pth in the checkpoint?

By the way, I think using Chinese would be more convenient for us. Thanks!
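For question 3, here is a hedged guess at how loading the EMA weights could look. The import path and class name come from the question above and are not verified against the repo; the argument-free constructor is a placeholder.

```python
import torch

# Hypothetical loading sketch: G_ema.pth is assumed to hold a plain
# state_dict for the full generator (class GeneratorNerfINR, per the
# question above). The real code builds the generator from the yaml config.
from exp.cips3d.models.generator import GeneratorNerfINR

G = GeneratorNerfINR()  # placeholder: real code passes config arguments
G.load_state_dict(torch.load("G_ema.pth", map_location="cpu"))
G.eval()  # EMA (exponential moving average) weights usually sample best
```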

    opened by zhywanna 1
  • Output images with gradient during inference

    Hi there,

I am trying to output the image together with its gradient. However, I found that the default testing code calls whole_grad_forward (https://github.com/PeterouZh/CIPS-3D/blob/aee40251a02c34e58d3002bcb845151c41b538f0/exp/dev/nerf_inr/models/generator_nerf_inr_v16.py#L1395), which removes the gradient. If I comment out the torch.no_grad(), it runs out of memory. Is there a way to output the image with the gradient? Thanks
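A generic workaround (not something this repository documents) is to render the image in pixel chunks and wrap each chunk in torch.utils.checkpoint, which recomputes activations during backward instead of storing them. A minimal sketch with a stand-in model call:

```python
import torch
from torch.utils.checkpoint import checkpoint

def render_with_grad(model, rays, chunk=4096):
    # Render in chunks so peak memory stays bounded while gradients
    # still flow; checkpoint() trades extra compute for memory.
    outputs = []
    for i in range(0, rays.shape[0], chunk):
        part = rays[i:i + chunk]
        # use_reentrant=False needs a recent PyTorch and lets gradients
        # flow to model parameters even if the inputs don't require grad.
        outputs.append(checkpoint(model, part, use_reentrant=False))
    return torch.cat(outputs, dim=0)
```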

    opened by lelechen63 1
  • closed

    Hi,

Thanks for the great work. I am trying to invert an image into w/z using the pretrained model. Would you release the pretrained discriminator to enable this inversion feature? Thanks

    opened by lelechen63 1
  • Question about the input of shallow nerf network

I know NeRF is a view-dependent synthesis method because of its direction input. However, in your code I find you don't use it. Why does CIPS-3D still work? Can novel-view synthesis be achieved with only the world coordinates as input? Why?
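For context, the question is about a radiance field conditioned on position only. Below is a toy sketch of such a direction-free MLP; it is purely illustrative, not the paper's shallow NeRF architecture.

```python
import torch.nn as nn

class CoordOnlyNeRF(nn.Module):
    # Maps a 3D position alone to (color feature, density). Without a
    # view-direction input the predicted color is view-independent
    # (Lambertian), but novel views still differ because each camera
    # casts different rays through the volume.
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.feat = nn.Linear(hidden, 3)   # color / feature head
        self.sigma = nn.Linear(hidden, 1)  # volume density head

    def forward(self, xyz):
        h = self.mlp(xyz)
        return self.feat(h), self.sigma(h)
```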

    opened by shoutOutYangJie 1
  • Why not train from scratch?

Hello, and thank you for your open-source code.

In the README you explain that the training pipeline for high resolutions is 32 -> 64 -> 128 -> 256, with each stage fine-tuned from the model of the previous resolution. This training strategy is indeed much easier than direct training, but have you tried training directly at 256 resolution? Could similar results be obtained by adjusting the training parameters?

    opened by BlingHe 0
• How to view G model effects? web_demo.py only shows 3 identical pics

    How to view G model effects?

When I run web_demo.py like this, the web page only displays 3 identical pictures, and one picture displays nothing (it is black).

(Screenshots of the output and of web_demo.py were attached.)

    opened by jojoWd 0
• Can I put my face photo into your pre-trained web_demo to generate a 3D video?

Hello, thank you for your contribution. I tried to run your web_demo. I saw you say, "Thus current stylization is limited to randomly generated images. To edit a real image, we need to project the image to the latent space of the generator." So I can't import other face images to produce the effect shown in the demo videos? Thank you.

    opened by lemonsstyle 0
  • How to set the near and far plane in NeRF network?

Thanks for your excellent work. I am curious why you set ray_near and ray_end to 0.88 and 1.12 (and likewise for other variables such as h_stddev). Are these values set empirically?
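For reference, ray_near and ray_end simply bound where depths are sampled along each ray. Below is a minimal stratified-sampling sketch using the values from the question; the function name is mine, not the repository's.

```python
import torch

def sample_depths(n_rays, n_samples, ray_near=0.88, ray_end=1.12):
    # Left edges of evenly spaced bins between the near and far planes.
    t = torch.linspace(ray_near, ray_end, n_samples + 1)[:-1]
    t = t.expand(n_rays, n_samples).clone()
    # Jitter each sample within its bin (stratified sampling).
    t += torch.rand_like(t) * (ray_end - ray_near) / n_samples
    return t
```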

    opened by cwchenwang 1
  • add web demo/model to Huggingface

    Hi, would you be interested in adding CIPS-3D to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community.

Examples from other organizations:
Keras: https://huggingface.co/keras-io
Microsoft: https://huggingface.co/microsoft
Facebook: https://huggingface.co/facebook

Example Spaces with repos:
GitHub: https://github.com/salesforce/BLIP
Space: https://huggingface.co/spaces/salesforce/BLIP
GitHub: https://github.com/facebookresearch/omnivore
Space: https://huggingface.co/spaces/akhaliq/omnivore

And here are guides for adding Spaces/models/datasets to your org:

How to add a Space: https://huggingface.co/blog/gradio-spaces
How to add models: https://huggingface.co/docs/hub/adding-a-model
Uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

    Please let us know if you would be interested and if you have any questions, we can also help with the technical implementation.

    opened by AK391 1