Paper Title

Robust Deep Learning for Autonomous Driving

Author

Corbière, Charles

Abstract

The last decade's research in artificial intelligence has had a significant impact on the advancement of autonomous driving. Yet, safety remains a major concern when it comes to deploying such systems in high-risk environments. The objective of this thesis is to develop methodological tools which provide reliable uncertainty estimates for deep neural networks. First, we introduce a new criterion to reliably estimate model confidence: the true class probability (TCP). We show that TCP offers better properties for failure prediction than current uncertainty measures. Since the true class is by essence unknown at test time, we propose to learn the TCP criterion from data with an auxiliary model, introducing a specific learning scheme adapted to this context. The relevance of the proposed approach is validated on image classification and semantic segmentation datasets. Then, we extend our learned confidence approach to the task of domain adaptation, where it improves the selection of pseudo-labels in self-training methods. Finally, we tackle the challenge of jointly detecting misclassified and out-of-distribution samples by introducing a new uncertainty measure based on evidential models and defined on the simplex.
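To make the TCP criterion concrete, here is a minimal sketch contrasting it with the usual maximum class probability (MCP) baseline on toy softmax outputs. The NumPy setup and function names are illustrative assumptions, not code from the thesis; the thesis itself learns to predict TCP with an auxiliary model since true labels are unavailable at test time.

```python
# Sketch: True Class Probability (TCP) vs. Maximum Class Probability (MCP).
# Assumes softmax outputs are available as a NumPy array (illustrative only).
import numpy as np

def mcp(probs: np.ndarray) -> np.ndarray:
    """Maximum Class Probability: confidence assigned to the predicted class."""
    return probs.max(axis=1)

def tcp(probs: np.ndarray, true_labels: np.ndarray) -> np.ndarray:
    """True Class Probability: softmax probability assigned to the true class.
    Only computable when labels are known; at test time an auxiliary model
    is trained to estimate this value."""
    return probs[np.arange(len(true_labels)), true_labels]

# Toy example: the second sample is misclassified yet has a high MCP,
# while its TCP is low, which is what makes TCP suited to failure prediction.
probs = np.array([[0.85, 0.10, 0.05],
                  [0.70, 0.20, 0.10]])
labels = np.array([0, 1])
print("MCP:", mcp(probs))          # [0.85, 0.70] -> both predictions look confident
print("TCP:", tcp(probs, labels))  # [0.85, 0.20] -> flags the second sample as a failure
```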
