[2020]GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis
2022-07-05 06:17:00 【Dark blue blue blue】
This paper improves on NeRF by introducing a GAN, which removes NeRF's requirement for camera-pose annotations.

The overall structure is very intuitive: it is a standard conditional GAN, with the NeRF component placed inside the generator.
First, the input to the generator consists of the camera parameters, including position, viewing direction, focal length, distance, and so on; all of these are drawn completely at random from uniform distributions.
These are then fed into a ray sampler, which determines where the rays fall in the image and how many rays are cast.
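As a rough sketch of these two steps (not the authors' code: the hemisphere ranges, image size, and the `sample_camera_pose` / `sample_patch_coords` helpers are all illustrative assumptions), the camera pose can be drawn from simple distributions and a small K×K patch of ray coordinates chosen at a random scale and offset:

```python
import math
import torch

def sample_camera_pose(radius_range=(8.0, 12.0)):
    """Sample a random camera position on the upper hemisphere, looking at the
    origin. The ranges here are illustrative, not the paper's exact settings."""
    radius = torch.empty(1).uniform_(*radius_range)
    theta = torch.empty(1).uniform_(0.0, 2.0 * math.pi)      # azimuth
    phi = torch.empty(1).uniform_(0.1, 0.5 * math.pi)        # elevation (upper hemisphere)
    cam_pos = radius * torch.cat([
        torch.cos(theta) * torch.sin(phi),
        torch.sin(theta) * torch.sin(phi),
        torch.cos(phi),
    ])
    return cam_pos  # a full pipeline would also build the look-at rotation and intrinsics

def sample_patch_coords(img_size=128, patch_size=32):
    """Pick a K x K grid of pixel coordinates at a random scale and offset, so
    that only a small patch of rays needs to be rendered per training step."""
    scale = torch.empty(1).uniform_(patch_size / img_size, 1.0).item()
    span = scale * (img_size - 1)
    u0 = torch.empty(1).uniform_(0.0, img_size - 1 - span).item()
    v0 = torch.empty(1).uniform_(0.0, img_size - 1 - span).item()
    us = torch.linspace(0.0, span, patch_size) + u0
    vs = torch.linspace(0.0, span, patch_size) + v0
    return torch.stack(torch.meshgrid(us, vs, indexing="ij"), dim=-1)  # (K, K, 2)
```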
The conditional radiance field then takes two kinds of input:

1. Sample along each ray to determine the locations of the sample points. The point positions are combined with a randomly sampled shape code and fed into the neural network to learn a shape representation, which is used to predict the density of the target points.
2. Combine the information about where each ray falls with the shape representation above, together with a randomly sampled appearance (texture) code, to jointly predict the color of the target points (see the sketch after this list).
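A minimal sketch of this two-branch conditioning, with the positional encoding, hidden sizes, and layer counts chosen for illustration rather than taken from the paper (the shape code `z_shape` and appearance code `z_app` are assumed to be already broadcast to every sample point):

```python
import math
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=10):
    """NeRF-style sin/cos encoding of coordinates; x has shape (..., 3)."""
    freqs = (2.0 ** torch.arange(num_freqs, device=x.device)) * math.pi
    angles = x.unsqueeze(-1) * freqs                     # (..., 3, num_freqs)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)

class ConditionalRadianceField(nn.Module):
    """Shape code -> density branch; appearance code + ray direction -> color branch."""
    def __init__(self, dim_shape=128, dim_app=128, hidden=256, num_freqs=10):
        super().__init__()
        pe = 3 * 2 * num_freqs                           # encoded size of a 3-vector
        self.shape_net = nn.Sequential(
            nn.Linear(pe + dim_shape, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        self.color_net = nn.Sequential(
            nn.Linear(hidden + pe + dim_app, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3),
        )

    def forward(self, xyz, view_dir, z_shape, z_app):
        # z_shape / z_app are assumed to carry one copy per sample point.
        h = self.shape_net(torch.cat([positional_encoding(xyz), z_shape], dim=-1))
        sigma = torch.relu(self.density_head(h))         # density depends on shape only
        rgb_in = torch.cat([h, positional_encoding(view_dir), z_app], dim=-1)
        rgb = torch.sigmoid(self.color_net(rgb_in))      # color also uses the appearance code
        return sigma, rgb
```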
Given the density and color of the target points, the final output, i.e. the generator's result, is rendered via volume rendering.
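The compositing itself is the standard NeRF alpha-compositing along each ray; roughly (a sketch assuming the per-sample distances `deltas` are precomputed):

```python
import torch

def composite(sigma, rgb, deltas):
    """Alpha-composite the samples along each ray into a pixel color.
    sigma: (R, S, 1), rgb: (R, S, 3), deltas: (R, S, 1) inter-sample distances."""
    alpha = 1.0 - torch.exp(-sigma * deltas)                        # per-sample opacity
    ones = torch.ones_like(alpha[:, :1])
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    weights = alpha * trans                                          # contribution of each sample
    return (weights * rgb).sum(dim=1)                                # (R, 3) pixel colors
```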
Patches sampled from real images are then fed into the discriminator together with the generated patches, so that the discriminator can learn the distribution of real images.
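A hedged sketch of this patch-level adversarial setup (the `extract_real_patch` helper and the non-saturating loss are my own illustrative choices; the paper's exact loss and regularizers may differ): real images are bilinearly sampled at the same continuous patch coordinates as the generated rays, and both patches go to the discriminator.

```python
import torch
import torch.nn.functional as F

def extract_real_patch(real_images, coords, img_size):
    """Bilinearly sample real images at the same continuous patch coordinates
    that the generated rays used. real_images: (B, 3, H, W); coords: (K, K, 2)."""
    grid = coords / (img_size - 1) * 2.0 - 1.0                       # normalize to [-1, 1]
    grid = grid.unsqueeze(0).expand(real_images.shape[0], -1, -1, -1)
    return F.grid_sample(real_images, grid, align_corners=True)     # (B, 3, K, K)

def gan_losses(discriminator, fake_patch, real_patch):
    """Non-saturating GAN losses on K x K patches (one common choice; the
    paper's exact loss and regularizers may differ)."""
    loss_d = (F.softplus(-discriminator(real_patch)) +
              F.softplus(discriminator(fake_patch.detach()))).mean()
    loss_g = F.softplus(-discriminator(fake_patch)).mean()
    return loss_d, loss_g
```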
Notes:
1. To speed up training, both generation and real-image sampling only cover a subset of the image's pixels (a patch); the entire image is never generated in one pass.
2. This was actually hard to understand on a first read: why can a smooth view transition be learned without any constraint on the camera position? Personally, I think this is the effect of the radiance field. The radiance field implicitly builds a 3D object at effectively infinite resolution, and the simplest way to build a plausible 3D object is to follow the real-world 3D structure, which in turn guarantees that transitions between viewpoints are smooth. However, if the number of reference images is too small, or the object's structure is too simple, I suspect some strange artifacts will appear.