Paper Title
Character Generation through Self-Supervised Vectorization
Paper Authors
Paper Abstract
The prevalent approach in self-supervised image generation is to operate on pixel-level representations. While this approach can produce high-quality images, it cannot benefit from the simplicity and innate quality of vectorization. Here we present a drawing agent that operates on a stroke-level representation of images. At each time step, the agent first assesses the current canvas and decides whether to stop or keep drawing. When a 'draw' decision is made, the agent outputs a program indicating the stroke to be drawn. As a result, it produces a final raster image by drawing the strokes on a canvas, using a minimal number of strokes and dynamically deciding when to stop. We train our agent through reinforcement learning on the MNIST and Omniglot datasets for unconditional generation and parsing (reconstruction) tasks. We utilize our parsing agent for exemplar generation and type-conditioned concept generation in the Omniglot challenge without any further training. We present successful results on all three generation tasks and the parsing task. Crucially, we do not need any stroke-level or vector supervision; we only use raster images for training.
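The draw/stop loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `toy_policy` is a hypothetical stand-in for the learned policy network (which in the paper conditions both the stop decision and the stroke program on the current canvas), and `render_stroke` stands in for the differentiable or external renderer; here strokes are simple line segments.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_stroke(canvas, stroke):
    """Rasterize one stroke onto the canvas.
    Here a stroke is a straight segment (x0, y0, x1, y1) in pixel coords;
    the actual stroke parameterization in the paper may differ."""
    x0, y0, x1, y1 = stroke
    n = max(abs(x1 - x0), abs(y1 - y0)) + 1
    xs = np.linspace(x0, x1, n).round().astype(int)
    ys = np.linspace(y0, y1, n).round().astype(int)
    canvas[ys, xs] = 1.0
    return canvas

def toy_policy(canvas, t, max_steps=5):
    """Hypothetical stand-in for the learned policy: inspect the canvas,
    then either stop or emit the next stroke to draw."""
    if t >= max_steps or canvas.mean() > 0.2:
        return "stop", None
    stroke = tuple(rng.integers(0, canvas.shape[0], size=4))
    return "draw", stroke

def generate(size=28):
    """Run the agent loop: assess canvas, decide stop/draw, render stroke."""
    canvas = np.zeros((size, size), dtype=np.float32)
    strokes = []
    t = 0
    while True:
        action, stroke = toy_policy(canvas, t)
        if action == "stop":
            break
        canvas = render_stroke(canvas, stroke)
        strokes.append(stroke)
        t += 1
    return canvas, strokes

canvas, strokes = generate()
print(f"drew {len(strokes)} strokes on a {canvas.shape} canvas")
```

The key property mirrored here is that the number of strokes is not fixed in advance: the loop terminates only when the policy emits a 'stop' action, so the stroke count adapts to the canvas state.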