Paper Title
Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought
Authors
Abstract
Large language models (LLMs) have shown remarkable reasoning capabilities given chain-of-thought prompts (examples with intermediate reasoning steps). Existing benchmarks measure reasoning ability indirectly, by evaluating accuracy on downstream tasks such as mathematical reasoning. However, it is unclear how these models obtain the answers and whether they rely on simple heuristics rather than the generated chain-of-thought. To enable systematic exploration of the reasoning ability of LLMs, we present a new synthetic question-answering dataset called PrOntoQA, where each example is generated from a synthetic world model represented in first-order logic. This allows us to parse the generated chain-of-thought into symbolic proofs for formal analysis. Our analysis of InstructGPT and GPT-3 shows that LLMs are quite capable of making correct individual deduction steps, and so are generally capable of reasoning, even in fictional contexts. However, they have difficulty with proof planning: when multiple valid deduction steps are available, they are not able to systematically explore the different options.
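To make the setup concrete, the following is a minimal, hypothetical sketch (in Python, not taken from the paper) of what a PrOntoQA-style example and its step-by-step verification might look like: a tiny fictional ontology is encoded as universally quantified implications, rendered into a natural-language context, and a model's chain of thought is checked one deduction step at a time by modus ponens. The class, function, and template names below (Rule, render_context, check_chain_of_thought, the fictional terms) are illustrative assumptions, not the authors' actual generation or parsing code.

```python
# Illustrative sketch only: a toy, PrOntoQA-style example generator and
# step-by-step proof checker. Names and templates are assumptions for
# exposition, not the paper's implementation.

from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    """A universally quantified implication: forall x. premise(x) -> conclusion(x)."""
    premise: str
    conclusion: str


def render_context(rules: list[Rule], entity: str, start: str) -> str:
    """Render the world model as natural-language context sentences."""
    sentences = [f"Every {r.premise} is a {r.conclusion}." for r in rules]
    sentences.append(f"{entity} is a {start}.")
    return " ".join(sentences)


def check_chain_of_thought(rules: list[Rule], entity: str, start: str,
                           steps: list[str]) -> bool:
    """Check that each deduction step follows from known facts by modus ponens."""
    known = {start}                      # predicates proved so far for the entity
    rule_map = {r.premise: r.conclusion for r in rules}
    for claimed in steps:                # each step claims: entity is a <claimed>
        # The step is valid iff some already-known predicate implies the claimed one.
        if not any(rule_map.get(p) == claimed for p in known):
            return False                 # step does not follow from known facts
        known.add(claimed)
    return True


if __name__ == "__main__":
    # A tiny fictional ontology, in the spirit of the paper's fictional contexts.
    rules = [Rule("wumpus", "yumpus"), Rule("yumpus", "zumpus")]
    print(render_context(rules, "Alex", "wumpus"))
    # A correct chain of thought: wumpus -> yumpus -> zumpus.
    print(check_chain_of_thought(rules, "Alex", "wumpus", ["yumpus", "zumpus"]))  # True
    # An invalid chain that skips an intermediate step.
    print(check_chain_of_thought(rules, "Alex", "wumpus", ["zumpus"]))            # False
```

In this toy setting, "proof planning" would correspond to choosing which applicable rule to expand next when several premises match; the checker above only validates individual steps, mirroring the abstract's distinction between step-level correctness and systematic exploration of alternative deductions.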