Paper Title
Mixture of linear experts model for censored data: A novel approach with scale-mixture of normal distributions
Paper Authors
Paper Abstract
The classical mixture of linear experts (MoE) model is one of the most widely used statistical frameworks for modeling, classifying, and clustering data. Built on a normality assumption for the error terms, adopted for mathematical and computational convenience, the classical MoE model faces two challenges: 1) it is sensitive to atypical observations and outliers, and 2) it may produce misleading inferential results for censored data. This paper aims to resolve both challenges simultaneously by proposing a novel robust MoE model for model-based clustering and discriminant analysis of censored data, in which the unobserved error terms follow the scale-mixture of normal class of distributions. Based on this novel model, we develop an analytical expectation-maximization (EM) type algorithm to obtain the maximum likelihood parameter estimates. Simulation studies are carried out to examine the performance, effectiveness, and robustness of the proposed methodology. Finally, real data are used to illustrate the superiority of the new model.
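For context, a minimal sketch of the classical mixture of linear experts model that the abstract refers to, written in standard notation; the symbols ($G$, $\pi_g$, $\boldsymbol{\alpha}$, $\boldsymbol{\beta}_g$, $\sigma_g$) are assumed here for illustration and are not taken from the paper:

\[
f(y_i \mid \mathbf{x}_i) \;=\; \sum_{g=1}^{G} \pi_g(\mathbf{x}_i;\boldsymbol{\alpha})\,
\phi\!\left(y_i;\ \mathbf{x}_i^{\top}\boldsymbol{\beta}_g,\ \sigma_g^{2}\right),
\qquad
\pi_g(\mathbf{x}_i;\boldsymbol{\alpha}) \;=\;
\frac{\exp(\mathbf{x}_i^{\top}\boldsymbol{\alpha}_g)}{\sum_{h=1}^{G}\exp(\mathbf{x}_i^{\top}\boldsymbol{\alpha}_h)},
\]

where $\phi(\cdot;\mu,\sigma^2)$ denotes the normal density. As described in the abstract, the proposed model replaces this normal error density with a member of the scale-mixture of normal family and adjusts the likelihood contribution of censored responses; the EM-type algorithm developed in the paper provides maximum likelihood estimates under that generalized specification.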