Paper Title

Realization of Causal Representation Learning to Adjust Confounding Bias in Latent Space

Authors

Jia Li, Xiang Li, Xiaowei Jia, Michael Steinbach, Vipin Kumar

Abstract

Causal DAGs (directed acyclic graphs) are usually considered in a 2D plane. Edges indicate the directions of causal effects and imply the corresponding passage of time. Due to the natural restrictions of statistical models, effect estimation is usually approximated by averaging individuals' correlations, i.e., observational changes over a specific time window. However, in the context of machine learning on large-scale problems with complex DAGs, such slight biases can snowball and distort global models; more importantly, this has practically impeded the development of AI, for instance through the weak generalizability of causal models. In this paper, we redefine the causal DAG as a \emph{do-DAG}, in which variables' values are no longer time-stamp-dependent and timelines can be seen as axes. Through a geometric interpretation of the multi-dimensional do-DAG, we identify \emph{Causal Representation Bias} and its necessary factors, differentiating it from common confounding biases. Accordingly, we propose a deep learning (DL)-based framework as a general solution, along with a realization method and experiments to verify its feasibility.
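The confounding bias the abstract refers to can be illustrated with a minimal, generic sketch; this is not the paper's do-DAG method, just the standard backdoor-adjustment setting it builds on. In a hypothetical linear DAG with confounder Z (Z → X, Z → Y, X → Y), regressing Y on X alone absorbs the backdoor path and overestimates the effect, while also conditioning on Z recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical linear DAG with a confounder: Z -> X, Z -> Y, and X -> Y.
# The true causal effect of X on Y is 2.0 by construction.
z = rng.normal(size=n)
x = 1.5 * z + rng.normal(size=n)
y = 2.0 * x + 3.0 * z + rng.normal(size=n)

# Naive estimate: regress Y on X alone; the backdoor path X <- Z -> Y
# inflates the estimated slope above the true effect.
naive = np.polyfit(x, y, 1)[0]

# Adjusted estimate: regress Y on X and Z jointly, blocking the backdoor path.
design = np.column_stack([x, z, np.ones(n)])
adjusted = np.linalg.lstsq(design, y, rcond=None)[0][0]

print(f"naive={naive:.2f}, adjusted={adjusted:.2f}")  # naive ≈ 3.4, adjusted ≈ 2.0
```

With a complex DAG, many such slightly biased local estimates feed into downstream model components, which is the "snowball" effect the abstract describes.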
