[2020]GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis

2022-07-05 06:17:00 Dark blue blue blue

This paper improves on NeRF, mainly by introducing a GAN, which removes NeRF's requirement for camera pose labels.

The overall structure is very intuitive: it is a standard conditional GAN, with the NeRF component placed inside the generator.

First, the generator's input consists of the camera parameters, including position, orientation, focal length, distance, and so on; all of these are sampled at random from uniform distributions.
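For intuition, here is a minimal NumPy sketch of sampling a camera pose from simple uniform distributions. The ranges and the look-at construction are illustrative assumptions, not the paper's exact pose distribution.

```python
import numpy as np

def sample_camera_pose(radius_range=(8.0, 12.0),
                       elevation_range=(0.0, np.pi / 4),
                       azimuth_range=(0.0, 2 * np.pi)):
    """Sample a random camera pose on a sphere around the object (illustrative ranges)."""
    radius = np.random.uniform(*radius_range)        # distance to the object
    elevation = np.random.uniform(*elevation_range)  # polar angle above the equator
    azimuth = np.random.uniform(*azimuth_range)      # rotation around the up-axis

    # Camera position in world coordinates (object assumed at the origin).
    cam_pos = radius * np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])

    # Simple "look-at" rotation: the camera's forward axis points at the origin.
    forward = -cam_pos / np.linalg.norm(cam_pos)
    up = np.array([0.0, 0.0, 1.0])
    right = np.cross(up, forward)
    right /= np.linalg.norm(right)
    true_up = np.cross(forward, right)
    rotation = np.stack([right, true_up, forward], axis=1)  # 3x3 camera orientation
    return cam_pos, rotation
```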

These are then fed into a ray sampler, which determines where the rays fall (i.e. which pixel locations are rendered) and how many rays are cast.
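Below is a rough sketch of such a ray sampler: it picks a sparse K x K grid of pixel locations (a random center and scale), and one ray is cast through each location. The uniform ranges for scale and center are assumptions for illustration.

```python
import numpy as np

def sample_ray_patch(height, width, patch_size=32):
    """Pick a sparse K x K pattern of pixel coordinates to cast rays through."""
    # Random spacing between sampled pixels and a random patch center.
    scale = np.random.uniform(1.0, min(height, width) / patch_size)
    span = scale * (patch_size - 1)
    cx = np.random.uniform(span / 2, width - 1 - span / 2)
    cy = np.random.uniform(span / 2, height - 1 - span / 2)

    # K x K grid of pixel coordinates; the number of rays is patch_size ** 2.
    xs = cx + scale * (np.arange(patch_size) - (patch_size - 1) / 2)
    ys = cy + scale * (np.arange(patch_size) - (patch_size - 1) / 2)
    grid_x, grid_y = np.meshgrid(xs, ys)
    return np.stack([grid_x, grid_y], axis=-1)  # (K, K, 2) pixel locations
```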

The conditional radiance field is then queried in two steps (a minimal network sketch follows the list):

1. Points are sampled along each ray to determine the sample locations. The positional information is combined with a randomly sampled shape code and fed into the neural network to learn a shape representation, which is used to predict the density at each sample point.

2. The ray (viewing direction) information is combined with the shape representation above and a randomly sampled appearance (texture) code to jointly predict the color of each sample point.
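A minimal PyTorch sketch of this two-branch conditional radiance field is shown below. Layer sizes and code dimensions are hypothetical, and positional encoding of the inputs is omitted for brevity.

```python
import torch
import torch.nn as nn

class ConditionalRadianceField(nn.Module):
    """Sketch of a GRAF-style conditional radiance field (simplified)."""

    def __init__(self, pos_dim=3, dir_dim=3, z_shape_dim=128, z_app_dim=128, hidden=256):
        super().__init__()
        # Shape branch: sample position + shape code -> shape features.
        self.shape_net = nn.Sequential(
            nn.Linear(pos_dim + z_shape_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        # Color branch: shape features + view direction + appearance code -> RGB.
        self.color_net = nn.Sequential(
            nn.Linear(hidden + dir_dim + z_app_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, x, d, z_shape, z_app):
        # x: (N, 3) sample positions, d: (N, 3) ray directions,
        # z_shape / z_app: (N, code_dim) latent codes broadcast to every sample.
        h = self.shape_net(torch.cat([x, z_shape], dim=-1))
        sigma = torch.relu(self.density_head(h))                 # density depends on shape only
        rgb = self.color_net(torch.cat([h, d, z_app], dim=-1))   # color also sees appearance
        return sigma, rgb
```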

Given the density and color of the sample points, the final result is rendered by volume rendering; this rendered patch is the generator's output.
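The volume rendering step is the standard NeRF-style alpha compositing along each ray; a small PyTorch sketch:

```python
import torch

def volume_render(sigma, rgb, deltas):
    """Alpha-composite colors along each ray.

    sigma:  (R, S) densities for R rays with S samples each
    rgb:    (R, S, 3) colors at the same sample points
    deltas: (R, S) distances between consecutive samples
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)          # per-sample opacity
    # Accumulated transmittance: probability the ray reaches each sample unoccluded.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha * trans                           # contribution of each sample
    return (weights[..., None] * rgb).sum(dim=1)      # (R, 3) final pixel colors
```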

Patches are then sampled from real images and fed into the discriminator together with the generated results, so that the discriminator can learn the distribution of real images.
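As a sketch of the adversarial part, the losses below use a standard non-saturating GAN formulation on real versus rendered patches; the exact loss and regularization used in the paper may differ, and `discriminator` is assumed to return logits.

```python
import torch
import torch.nn.functional as F

def gan_losses(discriminator, real_patch, fake_patch):
    """Adversarial losses on real vs. rendered image patches (illustrative)."""
    # Discriminator: push real patches toward 1 and generated patches toward 0.
    d_real = discriminator(real_patch)
    d_fake = discriminator(fake_patch.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

    # Generator: fool the discriminator into predicting 1 on generated patches.
    g_fake = discriminator(fake_patch)
    g_loss = F.binary_cross_entropy_with_logits(g_fake, torch.ones_like(g_fake))
    return d_loss, g_loss
```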

Notes:

1. To speed up training, both generation and real-image sampling only cover a subset of pixels in the image; the entire image is never generated in one pass.
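To make the real and generated patches comparable, the same sparse pixel pattern can be extracted from a real image, e.g. by bilinear sampling. The helper below is a hypothetical sketch assuming a (K, K, 2) float tensor of pixel locations like the one produced by the ray sampler above.

```python
import torch
import torch.nn.functional as F

def extract_real_patch(image, pixel_coords):
    """Pull a sparse K x K pixel pattern from a real image.

    image:        (1, 3, H, W) real image tensor
    pixel_coords: (K, K, 2) float tensor of (x, y) pixel locations
    Returns a (1, 3, K, K) patch matching the rendered patch's pattern.
    """
    _, _, h, w = image.shape
    # Convert pixel coordinates to the [-1, 1] range expected by grid_sample.
    grid = pixel_coords.clone()
    grid[..., 0] = 2.0 * grid[..., 0] / (w - 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(image, grid.unsqueeze(0), align_corners=True)
```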

2. When I first read this paper, it was actually hard to understand why a smooth transition between viewpoints can be learned without any constraints on the camera pose. My personal take is that this is the effect of the radiance field: the radiance field implicitly builds a 3D object at essentially unlimited resolution, and the simplest way to build a plausible 3D object is to follow the real-world 3D structure, which is why the viewpoint transitions stay smooth. However, if the number of reference images is too small, or the object's structure is too simple, I suspect some strange artifacts would appear.
