2023-08-21

[Research Highlight] Distractor Generation based on Text2Text Language Models with Pseudo Kullback-Leibler Divergence Regulation

Core Technology: Advanced Research and Resource Integration Platform for AI Core Technology [Yao-Chung Fan, Associate Professor, Department of Computer Science and Engineering]
 
Paper Title (English): Distractor Generation based on Text2Text Language Models with Pseudo Kullback-Leibler Divergence Regulation
Paper Title (Chinese): 基於Pseudo Kullback-Leibler Divergence控制與生成語言模型之干擾項生成
Conference: Findings of the Association for Computational Linguistics: ACL 2023 (listed as an indicator conference)
Publication Year / Volume / Pages: 2023, pp. 12477–12497
Authors: Hui-Juan Wang, Kai-Yu Hsieh, Han-Cheng Yu, Jui-Ching Tsou, Yu-An Shih, Chen-Hua Huang, Yao-Chung Fan (范耀中)
DOI: 10.18653/v1/2023.findings-acl.790
Abstract: In this paper, we address the task of cloze-style multiple-choice question (MCQ) distractor generation. Our study features the following designs. First, we propose to formulate cloze distractor generation as a Text2Text task. Second, we propose a pseudo Kullback-Leibler divergence for regulating the generation to account for the item discrimination index in educational assessment. Third, we explore a candidate augmentation strategy and multi-task training with cloze-related tasks to further boost generation performance. Through experiments on benchmark datasets, our best-performing model advances the state-of-the-art result from 10.81 to 22.00 (p@1 score).
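As a rough illustration of the Text2Text formulation described in the abstract, the sketch below fine-tunes a generic T5 checkpoint to map a cloze stem plus its correct answer to a single distractor string, and adds a generic KL-style regularization term against a frozen reference model. The prompt template, the t5-small checkpoint, the frozen-reference regularizer, and the 0.1 weight are illustrative assumptions only; the paper's pseudo Kullback-Leibler divergence tied to the item discrimination index is a task-specific quantity that this minimal sketch does not reproduce.

```python
# Minimal, hypothetical sketch (not the paper's code) of a Text2Text
# cloze-distractor training step with a generic KL-style regularizer.
import copy
import torch
import torch.nn.functional as F
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Frozen copy used only as a reference distribution for the regularizer.
reference = copy.deepcopy(model).eval()
for p in reference.parameters():
    p.requires_grad_(False)

# Text2Text formulation: the cloze stem (with a blank) and the correct answer
# form the source sequence; one distractor string is the generation target.
# The template below is an assumed example, not the paper's exact format.
source = "generate distractor: The capital of France is <extra_id_0> . answer: Paris"
target = "London"

enc = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

out = model(**enc, labels=labels)          # teacher-forced forward pass
ce_loss = out.loss                         # standard seq2seq cross-entropy

with torch.no_grad():
    ref_out = reference(**enc, labels=labels)

# Generic KL term over per-token output distributions; it only marks where a
# divergence-based regulator would enter the objective.
log_p = F.log_softmax(out.logits, dim=-1)
q = F.softmax(ref_out.logits, dim=-1)
kl_reg = F.kl_div(log_p, q, reduction="batchmean")

loss = ce_loss + 0.1 * kl_reg              # 0.1 is an arbitrary weight
loss.backward()
```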
Relevance to the AI project's research themes: Multiple-choice questions (MCQs) are a key type of data for strengthening pretrained language models. This study addresses that issue by proposing a Text2Text framework for distractor generation, thereby expanding the training data available to language models. Research and development along this line is central to the reading-comprehension capability of our current research output, ShenNong GPT (神農GPT).
Publication Date: 2023-07-09 – 2023-07-14
 