(Paper translation) Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks
2022-07-30 13:50:00 【xiongxyowo】
Only the method part is translated
III. Formulation
Our goal is to learn mapping functions between two domains $X$ and $Y$ given training samples $\{x_i\}_{i=1}^{N}$ where $x_i \in X$ and $\{y_j\}_{j=1}^{M}$ where $y_j \in Y$. As shown in Figure 3(a), our model includes two mappings $G: X \to Y$ and $F: Y \to X$. In addition, we introduce two adversarial discriminators $D_X$ and $D_Y$, where $D_X$ aims to distinguish between images $\{x\}$ and translated images $\{F(y)\}$; likewise, $D_Y$ aims to distinguish between $\{y\}$ and $\{G(x)\}$. Our objective contains two kinds of terms: an adversarial loss (Adversarial Loss), which matches the distribution of generated images to the data distribution of the target domain; and a cycle consistency loss (Cycle Consistency Loss), which prevents the learned mappings $G$ and $F$ from contradicting each other.
Adversarial Loss
We apply adversarial losses to both mapping functions. For the mapping function $G: X \to Y$ and its discriminator $D_Y$, we express the objective as:

$$\mathcal{L}_{\text{GAN}}(G, D_Y, X, Y) = \mathbb{E}_{y \sim p_{\text{data}}(y)}[\log D_Y(y)] + \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log(1 - D_Y(G(x)))]$$

where $G$ tries to generate images $G(x)$ that look similar to images from domain $Y$, while $D_Y$ aims to distinguish translated samples $G(x)$ from real samples $y$. We introduce a similar adversarial loss for the mapping function $F: Y \to X$ and its discriminator $D_X$ as well: namely, $\mathcal{L}_{\text{GAN}}(F, D_X, Y, X)$.
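To make the objective above concrete, here is a minimal numerical sketch of $\mathcal{L}_{\text{GAN}}$, estimated over a mini-batch of discriminator scores rather than full expectations. The function name `gan_loss` and the use of plain Python lists are illustrative assumptions, not the paper's implementation (which trains deep networks end to end):

```python
import math

def gan_loss(d_real, d_fake):
    """Monte-Carlo estimate of L_GAN(G, D_Y, X, Y):
    E_y[log D_Y(y)] + E_x[log(1 - D_Y(G(x)))].

    d_real: discriminator scores D_Y(y) in (0, 1) for real samples y.
    d_fake: discriminator scores D_Y(G(x)) in (0, 1) for translated samples.
    """
    real_term = sum(math.log(p) for p in d_real) / len(d_real)
    fake_term = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return real_term + fake_term

# D_Y maximizes this objective; G minimizes it (it only influences d_fake).
# A confident discriminator scores real samples near 1 and fakes near 0,
# which yields a higher objective value than an undecided one at 0.5:
confident = gan_loss([0.9, 0.95], [0.1, 0.05])
undecided = gan_loss([0.5, 0.5], [0.5, 0.5])
```

Note that $G$ cannot touch the first term, which is why implementations often train the generator on just the $\log(1 - D_Y(G(x)))$ part (or a least-squares variant, as the CycleGAN authors do in practice).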
Cycle Consistency Loss
In theory, adversarial training can learn mappings $G$ and $F$ that produce outputs distributed identically to the target domains $Y$ and $X$ (strictly speaking, this requires $G$ and $F$ to be stochastic functions). However, with large enough capacity, a network can map the same set of input images to any random permutation of images in the target domain, and any of these learned mappings can induce an output distribution that matches the target distribution. To further reduce the space of possible mapping functions, we argue that the learned mapping functions should be cycle-consistent: as shown in Figure 3(b), for each image $x$ from domain $X$, the image translation cycle should be able to bring $x$ back to the original image, i.e., $x \to G(x) \to F(G(x)) \approx x$. We call this forward cycle consistency. Similarly, as shown in Figure 3(c), for each image $y$ from domain $Y$, $G$ and $F$ should also satisfy backward cycle consistency: $y \to F(y) \to G(F(y)) \approx y$. We incentivize this behavior using a cycle consistency loss:

$$\mathcal{L}_{\text{cyc}}(G, F) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\lVert F(G(x)) - x \rVert_1] + \mathbb{E}_{y \sim p_{\text{data}}(y)}[\lVert G(F(y)) - y \rVert_1].$$

In preliminary experiments, we also tried replacing the L1 norm in this loss with an adversarial loss between $F(G(x))$ and $x$, and between $G(F(y))$ and $y$, but did not observe improved performance. The behavior induced by the cycle consistency loss can be observed in the arXiv version.
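The cycle consistency term is simply an L1 reconstruction penalty in each direction. A minimal sketch, treating images as flat lists of pixel values and using toy invertible mappings in place of trained generators (the lambdas `G` and `F` below are hypothetical stand-ins, not the paper's networks):

```python
def l1(a, b):
    """L1 distance ||a - b||_1 between two flattened images."""
    return sum(abs(ai - bi) for ai, bi in zip(a, b))

def cycle_loss(G, F, xs, ys):
    """Monte-Carlo estimate of L_cyc(G, F):
    E_x[||F(G(x)) - x||_1] + E_y[||G(F(y)) - y||_1]."""
    forward = sum(l1(F(G(x)), x) for x in xs) / len(xs)   # x -> G(x) -> F(G(x))
    backward = sum(l1(G(F(y)), y) for y in ys) / len(ys)  # y -> F(y) -> G(F(y))
    return forward + backward

# Toy mappings: G shifts pixel values up by 5, F shifts them back down.
G = lambda img: [p + 5 for p in img]
F = lambda img: [p - 5 for p in img]

xs = [[10, 20], [30, 40]]
ys = [[90, 80]]
# G and F invert each other exactly, so both cycles reconstruct perfectly:
perfect = cycle_loss(G, F, xs, ys)  # -> 0

# An F that ignores G's shift breaks the cycle and is penalized:
broken = cycle_loss(G, lambda img: img, xs, ys)
```

The point of the toy example: the loss is zero exactly when $F \circ G$ and $G \circ F$ act as identities on the samples, which is the constraint the paper uses to rule out degenerate permutation mappings.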
Full Objective
Our full objective is:

$$\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{\text{GAN}}(G, D_Y, X, Y) + \mathcal{L}_{\text{GAN}}(F, D_X, Y, X) + \lambda \mathcal{L}_{\text{cyc}}(G, F)$$

where $\lambda$ controls the relative importance of the two objectives. We aim to solve:

$$G^{\ast}, F^{\ast} = \arg\min_{G, F} \max_{D_X, D_Y} \mathcal{L}(G, F, D_X, D_Y)$$

Notice that our model can be viewed as training two "autoencoders": we learn one autoencoder $F \circ G: X \to X$ jointly with another $G \circ F: Y \to Y$. However, these autoencoders each have a special internal structure: they map an image to itself via an intermediate representation, where that intermediate representation is a translation of the image into another domain. Such a setup can also be seen as a special case of "adversarial autoencoders", which use an adversarial loss to train the bottleneck layer of an autoencoder to match an arbitrary target distribution. In our case, the target distribution for the $X \to X$ autoencoder is that of domain $Y$. In Section 5.1.3, we compare our method against ablations of the full objective, and show empirically that both objectives play a key role in arriving at high-quality results.
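Putting the pieces together, the full objective is just the sum of the two adversarial terms and the $\lambda$-weighted cycle term. A sketch with scalar loss values standing in for the expectations (the function name and the default `lam=10.0` are assumptions for illustration; the published CycleGAN experiments use $\lambda = 10$, but that value is not stated in this excerpt):

```python
def full_objective(l_gan_G, l_gan_F, l_cyc, lam=10.0):
    """L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y)
                         + L_GAN(F, D_X, Y, X)
                         + lambda * L_cyc(G, F)

    G and F are trained to minimize this quantity while D_X and D_Y
    maximize it, giving the min-max problem
    G*, F* = arg min_{G,F} max_{D_X,D_Y} L(G, F, D_X, D_Y).
    """
    return l_gan_G + l_gan_F + lam * l_cyc

# With lambda = 10, a mean cycle error of 0.2 contributes 2.0 to the
# total, so the reconstruction constraint dominates the two GAN terms:
total = full_objective(-1.0, -1.2, 0.2)  # -1.0 + -1.2 + 10 * 0.2
```

A large $\lambda$ makes the optimizer prioritize cycle reconstruction over fooling the discriminators, which is the mechanism that pins down one mapping out of the many distribution-matching candidates described above.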