Paper Title


The Go Transformer: Natural Language Modeling for Game Play

Authors

Matthew Ciolino, David Noever, Josh Kalin

Abstract


This work applies natural language modeling to generate plausible strategic moves in the ancient game of Go. We train the Generative Pretrained Transformer (GPT-2) to mimic the style of Go champions as archived in Smart Game Format (SGF), which offers a text description of move sequences. The trained model further generates valid but previously unseen strategies for Go. Because GPT-2 preserves punctuation and spacing, the raw output of the text generator provides inputs to game visualization and creative patterns, such as the Sabaki project's game engine using auto-replays. Results demonstrate that language modeling can capture both the sequencing format of championship Go games and their strategic formations. Compared to random game boards, the GPT-2 fine-tuning shows efficient opening move sequences favoring corner play over less advantageous center and side play. Game generation as a language modeling task offers novel approaches to more than 40 other board games where historical text annotation provides training data (e.g., Amazons & Connect 4/6).
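The abstract notes that SGF gives a plain-text description of move sequences, which is what lets a text model like GPT-2 ingest and emit whole games. As a minimal sketch (not the authors' pipeline), the snippet below shows how the SGF move notation the model learns to reproduce maps onto board coordinates; the parser only handles the basic `;B[..]` / `;W[..]` move properties, and the sample game fragment is hypothetical.

```python
import re

def parse_sgf_moves(sgf_text):
    """Extract (color, column, row) tuples from an SGF move sequence.

    SGF encodes moves as ;B[dd] or ;W[pq], where the two lowercase
    letters are board coordinates ('a' = 0 ... 's' = 18 on a 19x19 board).
    """
    moves = []
    for color, coord in re.findall(r";([BW])\[([a-s]{2})\]", sgf_text):
        col = ord(coord[0]) - ord("a")
        row = ord(coord[1]) - ord("a")
        moves.append((color, col, row))
    return moves

# A hypothetical fragment in the style the model is trained on:
sample = "(;GM[1]SZ[19];B[pd];W[dp];B[qp];W[dd])"
print(parse_sgf_moves(sample))
# → [('B', 15, 3), ('W', 3, 15), ('B', 16, 15), ('W', 3, 3)]
```

Because the generated text stays in this same notation, checking validity (legal coordinates, alternating colors, corner-oriented openings) reduces to simple string parsing, which is what makes feeding GPT-2's raw output into a replay engine like Sabaki straightforward.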
