Paper Title
Pollen Grain Microscopic Image Classification Using an Ensemble of Fine-Tuned Deep Convolutional Neural Networks
Paper Authors
Paper Abstract
Pollen grain micrograph classification has multiple applications in medicine and biology. Automatic pollen grain image classification can alleviate the problems of manual categorisation such as subjectivity and time constraints. While a number of computer-based methods have been introduced in the literature to perform this task, their classification performance needs to be improved for them to be useful in practice. In this paper, we present an ensemble approach for pollen grain microscopic image classification into four categories: Corylus avellana well-developed pollen grains, Corylus avellana anomalous pollen grains, Alnus well-developed pollen grains, and non-pollen (debris) instances. In our approach, we develop a classification strategy based on the fusion of four state-of-the-art fine-tuned convolutional neural networks, namely the EfficientNetB0, EfficientNetB1, EfficientNetB2, and SeResNeXt-50 deep models. These models are trained with images of three fixed sizes (224x224, 240x240, and 260x260 pixels), and their prediction probability vectors are then fused in an ensemble method to form the final classification vector for a given pollen grain image. Our proposed method yields excellent classification performance, obtaining an accuracy of 94.48% and a weighted F1-score of 94.54% on the ICPR 2020 Pollen Grain Classification Challenge training dataset based on five-fold cross-validation. Evaluated on the test set of the challenge, our approach achieves very competitive performance in comparison with the top-ranked approaches, with an accuracy and a weighted F1-score of 96.28% and 96.30%, respectively.
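To illustrate the fusion step described in the abstract, the following Python sketch combines the per-model softmax probability vectors into a single classification vector for one image. The probability values are made up for illustration, and the unweighted-average fusion rule is an assumption; the abstract only states that the four models' prediction probability vectors are fused.

```python
import numpy as np

# Hypothetical softmax probability vectors for a single pollen grain image,
# one vector per backbone, over the four classes:
# [Corylus avellana well-developed, Corylus avellana anomalous,
#  Alnus well-developed, non-pollen (debris)].
model_probs = np.array([
    [0.70, 0.15, 0.10, 0.05],  # EfficientNetB0 (224x224 input)
    [0.65, 0.20, 0.10, 0.05],  # EfficientNetB1 (240x240 input)
    [0.60, 0.25, 0.10, 0.05],  # EfficientNetB2 (260x260 input)
    [0.75, 0.10, 0.10, 0.05],  # SeResNeXt-50
])

# Fuse the per-model probability vectors into one classification vector.
# A simple unweighted average is assumed here; other fusion rules
# (weighted averaging, voting) would follow the same pattern.
ensemble_probs = model_probs.mean(axis=0)
predicted_class = int(np.argmax(ensemble_probs))

print("ensemble probabilities:", ensemble_probs)
print("predicted class index:", predicted_class)
```

Averaging keeps the fused output a valid probability vector, so the final prediction is simply the argmax over the four classes.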