Paper Title
Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest
Paper Authors
Paper Abstract
Large neural networks can now generate jokes, but do they really "understand" humor? We challenge AI models with three tasks derived from the New Yorker Cartoon Caption Contest: matching a joke to a cartoon, identifying a winning caption, and explaining why a winning caption is funny. These tasks encapsulate progressively more sophisticated aspects of "understanding" a cartoon; key elements are the complex, often surprising relationships between images and captions and the frequent inclusion of indirect and playful allusions to human experience and culture. We investigate both multimodal and language-only models: the former are challenged with the cartoon images directly, while the latter are given multifaceted descriptions of the visual scene to simulate human-level visual understanding. We find that both types of models struggle at all three tasks. For example, our best multimodal models fall 30 accuracy points behind human performance on the matching task, and, even when provided ground-truth visual scene descriptors, human-authored explanations are preferred head-to-head over the best machine-authored ones (few-shot GPT-4) in more than 2/3 of cases. We release models, code, leaderboard, and corpus, which includes newly-gathered annotations describing the image's locations/entities, what's unusual in the scene, and an explanation of the joke.