Paper Title
DeepCAVE: An Interactive Analysis Tool for Automated Machine Learning
Paper Authors
Paper Abstract
Automated Machine Learning (AutoML) is used more than ever before to support users in determining efficient hyperparameters, neural architectures, or even full machine learning pipelines. However, due to a lack of transparency, users tend to mistrust the optimization process and its results, so manual tuning remains widespread. We introduce DeepCAVE, an interactive framework for analyzing and monitoring state-of-the-art AutoML optimization procedures easily and ad hoc. By aiming for complete and accessible transparency, DeepCAVE builds a bridge between users and AutoML and contributes to establishing trust. Our framework's modular and easily extensible design provides users with automatically generated texts, tables, and graphical visualizations. We demonstrate the value of DeepCAVE in an exemplary use case of outlier detection, in which our framework makes it easy to identify problems, compare multiple runs, and interpret the optimization process. The package is freely available on GitHub at https://github.com/automl/DeepCAVE.
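
For readers who want to try the tool, a minimal sketch of how one might install the package and start the interactive dashboard is given below. The exact commands (in particular the deepcave --open entry point) are assumptions based on typical Python packaging and are not stated in the abstract; consult the linked repository for the authoritative instructions.

    # assumed installation from PyPI; package name inferred from the repository name
    pip install deepcave
    # assumed CLI entry point that launches the interactive dashboard in the browser
    deepcave --open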