Paper Title
Human-AI Guidelines in Practice: Leaky Abstractions as an Enabler in Collaborative Software Teams
Paper Authors
Paper Abstract
In conventional software development, user experience (UX) designers and engineers collaborate through separation of concerns (SoC): designers create human interface specifications, and engineers build to those specifications. However, we argue that Human-AI systems thwart SoC because human needs must shape the design of the AI interface, the underlying AI sub-components, and training data. How do designers and engineers currently collaborate on AI and UX design? To find out, we interviewed 21 industry professionals (UX researchers, AI engineers, data scientists, and managers) across 14 organizations about their collaborative work practices and associated challenges. We find that hidden information encapsulated by SoC challenges collaboration across design and engineering concerns. Practitioners describe inventing ad-hoc representations exposing low-level design and implementation details (which we characterize as leaky abstractions) to "puncture" SoC and share information across expertise boundaries. We identify how leaky abstractions are employed to collaborate at the AI-UX boundary and formalize a process of creating and using leaky abstractions.