Paper Title


The applicability of transperceptual and deep learning approaches to the study and mimicry of complex cartilaginous tissues

Paper Authors

Waghorne, J., Howard, C., Hu, H., Pang, J., Peveler, W. J., Harris, L., Barrera, O.

Paper Abstract


Complex soft tissues, for example the knee meniscus, play a crucial role in mobility and joint health, but when damaged are incredibly difficult to repair and replace. This is due to their highly hierarchical and porous nature, which in turn leads to their unique mechanical properties. In order to design tissue substitutes, the internal architecture of the native tissue needs to be understood and replicated. Here we explore a combined audio-visual approach - so called transperceptual - to generate artificial architectures mimicking the native ones. The proposed method uses both traditional imagery and sound generated from each image as a means of rapidly comparing and contrasting the porosity and pore size within the samples. We have trained and tested a generative adversarial network (GAN) on the 2D image stacks. The impact of the training set of images on the similarity of the artificial to the original dataset was assessed by analyzing two samples. The first consists of n=478 pairs of audio and image files for which the images were downsampled to 64 $\times$ 64 pixels; the second consists of n=7640 pairs of audio and image files for which the full resolution of 256 $\times$ 256 pixels is retained, but each image is divided into 16 squares to satisfy the 64 $\times$ 64 pixel limit required by the GAN. We reconstruct the 2D stacks of artificially generated datasets into 3D objects and run image analysis algorithms to statistically characterize the architectural parameters - pore size, tortuosity and pore connectivity - and compare them with the original dataset. Results show that the artificially generated dataset that undergoes downsampling performs better in terms of parameter matching. Our audio-visual approach has the potential to be extended to larger datasets to explore how similarities and differences can be audibly recognized across multiple samples.
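The tiling step described in the abstract (splitting each full-resolution 256 $\times$ 256 slice into 16 non-overlapping 64 $\times$ 64 squares to satisfy the GAN's input-size limit) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the function name `tile_image` and the zero-filled placeholder slice are assumptions for demonstration.

```python
import numpy as np

def tile_image(img, tile=64):
    """Split a square 2D image into (H//tile * W//tile) non-overlapping tiles.

    Hypothetical helper illustrating the preprocessing described in the
    abstract: a 256x256 slice becomes 16 tiles of 64x64 pixels.
    """
    h, w = img.shape
    assert h % tile == 0 and w % tile == 0, "image must divide evenly into tiles"
    # Reshape into a grid of tiles, then flatten the grid dimensions.
    return (img.reshape(h // tile, tile, w // tile, tile)
               .swapaxes(1, 2)
               .reshape(-1, tile, tile))

# Placeholder stand-in for one full-resolution 2D slice of the image stack.
slice_256 = np.arange(256 * 256, dtype=np.uint16).reshape(256, 256)
tiles = tile_image(slice_256)
print(tiles.shape)  # (16, 64, 64)
```

Because the tiles are non-overlapping, the inverse reshape recovers the original slice exactly, so generated tiles can later be stitched back into full-resolution slices before the 3D reconstruction step.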
