Paper Title


Process Knowledge-Infused AI: Towards User-level Explainability, Interpretability, and Safety

Authors

Sheth, Amit, Gaur, Manas, Roy, Kaushik, Venkataraman, Revathy, Khandelwal, Vedant

Abstract

AI systems have been widely adopted across various domains in the real world. However, in high-value, sensitive, or safety-critical applications, such as self-management for personalized health or food recommendation with a specific purpose (e.g., allergy-aware recipe recommendations), their adoption is unlikely. First, the AI system needs to follow guidelines or well-defined processes set by experts; data alone will not be adequate. For example, to diagnose the severity of depression, mental healthcare providers use the Patient Health Questionnaire (PHQ-9). So if an AI system were to be used for diagnosis, the medical guideline implied by the PHQ-9 would need to be used. Likewise, a nutritionist's knowledge and steps would need to be used by an AI system that guides a diabetic patient in developing a food plan. Second, the black-box nature typical of many current AI systems will not work; the AI system will need to be able to give the user user-understandable explanations, constructed using concepts that humans understand and are familiar with. This is key to eliciting confidence and trust in the AI system. For such applications, in addition to data and domain knowledge, the AI system needs to have access to and use Process Knowledge: an ordered set of steps that the AI system needs to use or adhere to.
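The PHQ-9 example in the abstract can be made concrete: the questionnaire's scoring rule is itself a small piece of process knowledge. The following is a minimal sketch, not from the paper; the function and variable names are illustrative, while the severity bands follow the standard published PHQ-9 scoring guideline (nine items scored 0–3, total 0–27).

```python
# Hypothetical sketch: encoding the PHQ-9 scoring guideline as explicit
# process knowledge that an AI system must adhere to, rather than learning
# a diagnosis from data alone.

# Clinician-defined severity bands for the PHQ-9 total score (0-27).
PHQ9_SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def phq9_severity(item_scores):
    """Apply the PHQ-9 scoring process to nine item scores (each 0-3)."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 expects nine item scores, each in {0, 1, 2, 3}")
    total = sum(item_scores)
    for low, high, label in PHQ9_SEVERITY_BANDS:
        if low <= total <= high:
            return total, label

print(phq9_severity([1, 2, 1, 0, 2, 1, 1, 0, 1]))  # total 9 -> (9, 'mild')
```

Because the guideline is explicit, the system's output can be explained in the clinician's own terms ("total score 9 falls in the mild band"), which is exactly the user-level explainability the abstract argues for.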
