Title
Asymptotically Optimal Knockoff Statistics via the Masked Likelihood Ratio
Authors
Abstract
In feature selection problems, knockoffs are synthetic controls for the original features. Employing knockoffs allows analysts to use nearly any variable importance measure or "feature statistic" to select features while rigorously controlling false positives. However, it is not clear which statistic maximizes power. In this paper, we argue that state-of-the-art lasso-based feature statistics often prioritize features that are unlikely to be discovered, leading to low power in real applications. Instead, we introduce masked likelihood ratio (MLR) statistics, which prioritize features according to one's ability to distinguish each feature from its knockoff. Although no single feature statistic is uniformly most powerful in all situations, we show that MLR statistics asymptotically maximize the number of discoveries under a user-specified Bayesian model of the data. (Like all feature statistics, MLR statistics always provide frequentist error control.) This result places no restrictions on the problem dimensions and makes no parametric assumptions; instead, we require a "local dependence" condition that depends only on known quantities. In simulations and three real applications, MLR statistics outperform state-of-the-art feature statistics, including in settings where the Bayesian model is misspecified. We implement MLR statistics in the Python package knockpy; our implementation is often faster than computing a cross-validated lasso.
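As background for how any feature statistic (MLR, lasso-based, or otherwise) yields frequentist error control, the sketch below implements the standard knockoff+ selection rule of Barber and Candès in pure Python. It is an illustration, not code from the paper: the statistics `W` and the function names are hypothetical, and in practice one would use the knockpy package rather than this hand-rolled version.

```python
# Minimal sketch of the knockoff+ selection rule (Barber & Candes).
# The feature statistics W are assumed precomputed: W[j] > 0 suggests
# feature j is more important than its knockoff, W[j] < 0 the reverse.

def knockoff_threshold(W, fdr):
    """Smallest t with (1 + #{W_j <= -t}) / #{W_j >= t} <= fdr."""
    candidates = sorted({abs(w) for w in W if w != 0})
    for t in candidates:
        negatives = sum(1 for w in W if w <= -t)
        positives = sum(1 for w in W if w >= t)
        if positives > 0 and (1 + negatives) / positives <= fdr:
            return t
    return float("inf")  # no valid threshold: select nothing

def knockoff_select(W, fdr=0.1):
    """Indices of features selected at the given FDR level."""
    t = knockoff_threshold(W, fdr)
    return [j for j, w in enumerate(W) if w >= t]

# Example: large positive statistics are selected; the lone negative
# statistic pushes the threshold up.
W = [5.0, 4.0, 3.0, 2.0, -1.0, 6.0, 7.0, 8.0]
print(knockoff_select(W, fdr=0.2))  # → [0, 1, 2, 3, 5, 6, 7]
```

The "+1" in the numerator is what makes the procedure control the FDR exactly rather than approximately; the paper's contribution is the choice of `W` (MLR statistics), which plugs into this same rule.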