Paper Title

SAICL: Student Modelling with Interaction-level Auxiliary Contrastive Tasks for Knowledge Tracing and Dropout Prediction

Paper Authors

Park, Jungbae, Kim, Jinyoung, Kwon, Soonwoo, Lee, Sang Wan

Paper Abstract

Knowledge tracing and dropout prediction are crucial for online education, both to estimate students' knowledge states and to prevent dropout. While traditional systems interacting with students suffer from data sparsity and overfitting, recent sample-level contrastive learning helps alleviate these issues. One major limitation of sample-level approaches is that they regard a student's behavioral interaction sequence as a single bundle, so they often fail to encode temporal context and track its dynamic changes, making it hard to find optimal representations for knowledge tracing and dropout prediction. To exploit temporal context within the sequence, this study introduces a novel student modeling framework, SAICL (Student modeling with Auxiliary Interaction-level Contrastive Learning). In detail, SAICL can utilize both of the proposed self-supervised and supervised interaction-level contrastive objectives: MilCPC (Multi-Interaction-Level Contrastive Predictive Coding) and SupCPC (Supervised Contrastive Predictive Coding). While previous sample-level contrastive methods for student modeling are highly dependent on data augmentation, SAICL requires no data augmentation and shows better performance in both self-supervised and supervised settings. By combining cross-entropy with the contrastive objectives, the proposed SAICL achieves knowledge tracing and dropout prediction performance comparable to other state-of-the-art models without compromising inference cost.
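
As a rough illustration of the kind of auxiliary objective the abstract describes, the sketch below shows a CPC-style InfoNCE loss over interaction-level representations added to a standard cross-entropy prediction loss. This is a minimal sketch under assumptions: the paper's exact MilCPC/SupCPC formulations, negative-sampling scheme, and loss weighting are not given in the abstract, so the function names and the coefficient `lam` are illustrative only, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def interaction_cpc_loss(context, future, temperature=0.1):
    # Hypothetical CPC-style InfoNCE term: each context vector should score
    # its own future interaction representation higher than the other
    # in-batch interactions (used here as negatives).
    context = F.normalize(context, dim=-1)   # (batch, dim)
    future = F.normalize(future, dim=-1)     # (batch, dim)
    logits = context @ future.t() / temperature          # (batch, batch)
    targets = torch.arange(context.size(0), device=context.device)
    return F.cross_entropy(logits, targets)

def saicl_style_loss(pred_logits, labels, context, future, lam=0.1):
    # Main task loss (e.g., response-correctness or dropout prediction)
    # plus the auxiliary contrastive term, mirroring the abstract's idea
    # of combining cross-entropy with contrastive objectives.
    ce = F.binary_cross_entropy_with_logits(pred_logits, labels.float())
    return ce + lam * interaction_cpc_loss(context, future)
```

Note that an auxiliary term of this kind only affects training; at inference time only the main prediction head is used, which is consistent with the abstract's claim that inference cost is not compromised.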
