Paper Title

Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams

Authors

Devitt, Susannah Kate

Abstract

This chapter explores moral responsibility for civilian harms by human-artificial intelligence (AI) teams. Although militaries may have some bad apples responsible for war crimes and some mad apples unable to be responsible for their actions during a conflict, increasingly militaries may 'cook' their good apples by putting them in untenable decision-making environments through the processes of replacing human decision-making with AI determinations in war making. Responsibility for civilian harm in human-AI military teams may be contested, risking operators becoming detached, being extreme moral witnesses, becoming moral crumple zones or suffering moral injury from being part of larger human-AI systems authorised by the state. Acknowledging military ethics, human factors and AI work to date as well as critical case studies, this chapter offers new mechanisms to map out conditions for moral responsibility in human-AI teams. These include: 1) new decision responsibility prompts for critical decision method in a cognitive task analysis, and 2) applying an AI workplace health and safety framework for identifying cognitive and psychological risks relevant to attributions of moral responsibility in targeting decisions. Mechanisms such as these enable militaries to design human-centred AI systems for responsible deployment.
