[2020]GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis
2022-07-05 06:17:00 【Dark blue blue blue】
This paper improves on NeRF by introducing a GAN, thereby removing NeRF's requirement for camera-pose labels.
The overall structure is very intuitive: a standard conditional GAN, with the NeRF component placed inside the generator.
First, the generator's input consists of the camera parameters (position, orientation, focal length, distance, and so on), all sampled at random from a uniform distribution.
These are then fed to a ray sampler, which determines where the rays fall and how many rays are cast.
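The two steps above can be sketched as follows. This is an illustrative sketch, not the paper's code: the hemisphere ranges, image size, and patch size are assumed values, and the paper additionally varies the patch scale, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_camera_pose(radius=2.0):
    """Sample a camera on the upper hemisphere, looking at the origin.
    Elevation and azimuth are drawn from uniform distributions (assumed ranges)."""
    azimuth = rng.uniform(0.0, 2.0 * np.pi)
    elevation = rng.uniform(0.0, 0.5 * np.pi)
    # Camera position in world coordinates; its norm is always `radius`.
    pos = radius * np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])
    return pos

def sample_patch_coords(img_size=64, patch_size=16):
    """Pick a random K x K patch of pixel coordinates: only these rays are
    rendered, not the full image (sparse ray sampling)."""
    u = rng.integers(0, img_size - patch_size)
    v = rng.integers(0, img_size - patch_size)
    ys, xs = np.meshgrid(np.arange(v, v + patch_size),
                         np.arange(u, u + patch_size), indexing="ij")
    return np.stack([xs, ys], axis=-1)  # (K, K, 2) integer pixel coordinates

pos = sample_camera_pose()
coords = sample_patch_coords()
```

One ray is then cast through each of the `K * K` selected pixels from the sampled camera position.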
The conditional radiance field then takes two kinds of input:
1. Sampling along each ray determines the positions of the sample points. Each position is combined with a randomly sampled shape code and fed into the neural network to learn a shape representation, which is used to predict the density of the target point.
2. The viewing-direction information of the ray is combined with the shape representation above and a randomly sampled appearance (texture) code to predict the color of the target point.
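The two branches can be sketched as a conditional field function. This is a toy stand-in, assuming 64-dimensional codes and random-weight MLPs in place of the learned networks; only the information flow matches the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, sizes):
    """Tiny random-weight ReLU MLP, a stand-in for the learned networks."""
    for n_out in sizes:
        W = rng.standard_normal((x.shape[-1], n_out)) * 0.1
        x = np.maximum(x @ W, 0.0)
    return x

def conditional_radiance_field(xyz, view_dir, z_shape, z_app):
    """Two-branch field: the shape code conditions density, the appearance
    code conditions color (illustrative sketch, not the paper's architecture)."""
    # 1) Shape branch: position + shape code -> shape feature; density from it.
    h = mlp(np.concatenate([xyz, z_shape], axis=-1), [64, 64])
    sigma = h[..., :1]  # non-negative after ReLU, used as density
    # 2) Appearance branch: shape feature + view direction + appearance code -> RGB.
    c_in = np.concatenate([h, view_dir, z_app], axis=-1)
    rgb = 1.0 / (1.0 + np.exp(-mlp(c_in, [64, 3])))  # sigmoid into [0, 1]
    return sigma, rgb
```

Note that the appearance code never touches the density branch, which is what lets shape and texture be varied independently.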
Given the density and color of the sample points, the final result is produced by volume rendering; this is the generator's output.
Patches sampled from real images are then fed to the discriminator together with the generated results, so that the discriminator learns the distribution of real images.
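The volume-rendering step is the standard NeRF quadrature, which can be sketched as:

```python
import numpy as np

def volume_render(sigmas, rgbs, deltas):
    """Composite per-sample densities/colors along one ray into a pixel color.

    sigmas: (N,) densities, rgbs: (N, 3) colors, deltas: (N,) sample spacings.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                 # opacity per sample
    # Transmittance: probability the ray reaches each sample unblocked.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return (weights[:, None] * rgbs).sum(axis=0)
```

A near-opaque first sample (large density) occludes everything behind it, so the pixel takes that sample's color.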
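The adversarial part is an ordinary GAN objective on patch logits; a minimal sketch of the non-saturating losses (the exact loss variant used in the paper may differ):

```python
import numpy as np

def bce_with_logits(logits, target):
    """Numerically stable binary cross-entropy on raw logits."""
    return np.mean(np.maximum(logits, 0) - logits * target +
                   np.log1p(np.exp(-np.abs(logits))))

def gan_losses(d_real_logits, d_fake_logits):
    """Discriminator pushes real patches toward 1 and rendered patches toward 0;
    the generator pushes its rendered patches toward 1."""
    d_loss = (bce_with_logits(d_real_logits, 1.0) +
              bce_with_logits(d_fake_logits, 0.0))
    g_loss = bce_with_logits(d_fake_logits, 1.0)
    return d_loss, g_loss
```

When the discriminator confidently separates real from fake, its own loss is near zero while the generator's loss is large, which is the gradient signal that drives the radiance field toward the real-image distribution.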
Notes:
1. To speed up training, both generation and real-image sampling synthesize only a subset of the pixels (a patch) rather than the entire image at once.
2. When I first read this paper I found it hard to understand why a smooth view transition can be learned without any constraint on the camera position. My own take is that this is an effect of the radiance field: it implicitly builds a 3D object at effectively infinite resolution, and the simplest way to build a plausible 3D object is to follow the real-world 3D structure, which in turn guarantees that transitions between viewpoints are smooth. But if there are too few reference images, or the object's structure is too simple, I suspect some strange artifacts will appear.