Alias-Free Generative Adversarial Networks


We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects.

We trace the root cause to careless signal processing that causes aliasing in the generator network. Interpreting all signals in the network as continuous, we derive generally applicable, small architectural changes that guarantee that unwanted information cannot leak into the hierarchical synthesis process. The resulting networks match the FID of StyleGAN2 but differ dramatically in their internal representations, and they are fully equivariant to translation and rotation even at subpixel scales. Our results pave the way for generative models better suited for video and animation.
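To make the continuous-signal viewpoint concrete, the sketch below contrasts a pointwise nonlinearity applied naively at the sampling rate with an alias-suppressed variant that upsamples, applies the nonlinearity, low-pass filters back to the original band, and downsamples, then checks how well each version commutes with a subpixel translation. This is a minimal 1-D NumPy/SciPy illustration of the general principle only, not the paper's architecture; the helpers `frac_shift`, `ideal_lowpass`, `naive_relu`, and `filtered_relu`, the upsampling factor, and the test signal are all assumptions made for this example.

```python
import numpy as np
from scipy import signal


def frac_shift(x, s):
    """Subpixel translation of a periodic, band-limited signal via the
    Fourier shift theorem (hypothetical helper for this illustration)."""
    f = np.fft.fftfreq(len(x))
    return np.real(np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * s)))


def ideal_lowpass(x, keep_frac):
    """Ideal low-pass for a periodic signal: zero every FFT bin whose
    frequency exceeds keep_frac * Nyquist."""
    X = np.fft.fft(x)
    f = np.abs(np.fft.fftfreq(len(x)))
    X[f > keep_frac * 0.5] = 0.0
    return np.real(np.fft.ifft(X))


def naive_relu(x):
    # Pointwise ReLU at the original sampling rate: the kink creates
    # frequencies above Nyquist that fold (alias) back into the band.
    return np.maximum(x, 0.0)


def filtered_relu(x, up=8):
    # Alias-suppressed variant: upsample so the nonlinearity's new high
    # frequencies have headroom, apply the same pointwise op, remove
    # everything above the original Nyquist, then downsample.
    hi = signal.resample(x, up * len(x))   # FFT-based (periodic) upsampling
    hi = np.maximum(hi, 0.0)
    hi = ideal_lowpass(hi, 1.0 / up)
    return hi[::up]


# Band-limited, periodic test signal with strong content near Nyquist,
# where aliasing is most damaging.
n = 256
t = np.arange(n)
x = np.sin(2 * np.pi * 100 / n * t) + 0.5 * np.sin(2 * np.pi * 117 / n * t)

shift = 0.37  # a subpixel translation
for name, op in [("naive ReLU", naive_relu), ("filtered ReLU", filtered_relu)]:
    # Translation equivariance: shifting then applying the op should equal
    # applying the op then shifting. Aliasing breaks this for the naive op.
    err = np.max(np.abs(op(frac_shift(x, shift)) - frac_shift(op(x), shift)))
    print(f"{name:13s} max equivariance error: {err:.4f}")
```

Under these assumptions, the naive nonlinearity shows a clearly larger equivariance error than the filtered one, which is the sense in which suppressing aliasing keeps details attached to the signal rather than to the sampling grid.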
