Paper Title

Neural Cell Video Synthesis via Optical-Flow Diffusion

Authors

Manuel Serna-Aguilera, Khoa Luu, Nathaniel Harris, Min Zou

Abstract

The biomedical imaging world is notorious for working with small amounts of data, which frustrates state-of-the-art efforts from the computer vision and deep learning communities. With large datasets, it is far easier to make the kind of progress seen on natural image distributions. The same holds for microscopy videos of neuron cells moving in a culture: growing and maintaining a culture for days is difficult, and the materials and equipment are expensive to acquire. In this work, we explore how to alleviate this data scarcity problem by synthesizing the videos themselves. We therefore build on recent video diffusion models to synthesize cell videos from our training dataset. We then analyze the model's strengths and consistent shortcomings to guide improvements in video generation quality. To that end, we propose modifying the denoising function and adding motion information (dense optical flow), so that the model has more context about how video frames transition over time and how each pixel changes over time.
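The abstract only sketches the idea of conditioning the denoiser on dense optical flow. A minimal illustration of that idea, assuming an OpenCV Farneback flow estimator and a hypothetical PyTorch denoising network (`denoiser`) that accepts the flow concatenated to the noisy frames as extra channels, might look like the following; it is a sketch under those assumptions, not the authors' implementation:

```python
# Sketch (not the paper's code): condition a video denoising step on dense
# optical flow computed between consecutive frames.
import cv2
import numpy as np
import torch

def dense_flow(frames: np.ndarray) -> np.ndarray:
    """Farneback dense optical flow for a (T, H, W) uint8 grayscale video.

    Returns a (T-1, H, W, 2) float32 array of per-pixel (dx, dy) displacements.
    """
    flows = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        # args: prev, next, flow, pyr_scale, levels, winsize,
        #       iterations, poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)
    return np.stack(flows)

# Hypothetical conditioning step: concatenate the flow as extra channels so the
# denoiser sees how each pixel moves between frames.
frames = (np.random.rand(8, 64, 64) * 255).astype(np.uint8)   # stand-in video
flow = torch.from_numpy(dense_flow(frames)).permute(0, 3, 1, 2)  # (T-1, 2, H, W)
noisy = torch.randn(7, 1, 64, 64)                                 # noisy frames x_t
cond_input = torch.cat([noisy, flow], dim=1)                      # (T-1, 3, H, W)
# denoiser = ...                      # a UNet-style network taking 3 input channels
# eps_pred = denoiser(cond_input, t)  # predict noise, conditioned on motion
```

The key design point the abstract hints at is simply that motion (the flow field) is fed to the model alongside the noisy frames, so the denoising function is informed about inter-frame dynamics rather than inferring them from pixels alone.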
