2024-06-24

[Academic Highlight: Top Conference Paper] Distractor Generation Based on Text2Text Models and Pseudo KL Divergence

AI Core Technology: Advanced Research and Resource Integration Platform for AI Technology
Department of Computer Science and Engineering / Yao-Chung Fan / Associate Professor

 
Paper Title (Chinese): 基於 Text2Text 模型與 Pseudo KL Divergence 之干擾選項生成
Paper Title (English): Distractor Generation based on Text2Text Language Models with Pseudo Kullback-Leibler Divergence Regulation
Venue: Findings of the Association for Computational Linguistics: ACL 2023 (indexed venue)
Publication: The 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Toronto, Canada, July 9–14, 2023
Authors: Hui-Juan Wang, Kai-Yu Hsieh, Han-Cheng Yu, Jui-Ching Tsou, Yu-An Shih, Chen-Hua Huang, Yao-Chung Fan (范耀中)
DOI: 10.18653/v1/2023.findings-acl.790
Abstract (English): In this paper, we address the task of cloze-style multiple-choice question (MCQ) distractor generation. Our study is featured by the following designs. First, we propose to formulate cloze distractor generation as a Text2Text task. Second, we propose a pseudo Kullback-Leibler Divergence for regulating the generation to consider the item discrimination index in educational evaluation. Third, we explore a candidate augmentation strategy and multi-task training with cloze-related tasks to further boost generation performance. Through experiments with benchmark datasets, our best performing model advances the state-of-the-art result from 10.81 to 22.00 (P@1 score).
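The abstract reports results with the P@1 score, i.e., precision at rank 1: the fraction of test items whose top-ranked generated distractor matches one of the gold distractors. A minimal sketch of this metric is shown below; the function name and the toy data are hypothetical illustrations, not taken from the paper.

```python
def precision_at_1(generated, gold):
    """P@1: percentage of items whose top-ranked generated distractor
    appears in the item's gold distractor set (case-insensitive)."""
    hits = 0
    for preds, refs in zip(generated, gold):
        refs_norm = {r.strip().lower() for r in refs}
        if preds and preds[0].strip().lower() in refs_norm:
            hits += 1
    return 100.0 * hits / len(generated)

# Toy example: two cloze items, each with ranked generated distractors.
generated = [["Paris", "Rome"], ["dog", "cat"]]
gold = [["Rome", "Berlin"], ["dog", "fish"]]
print(precision_at_1(generated, gold))  # 50.0 (only the second item hits)
```

This illustrates only the metric, not the paper's Text2Text generation model or the pseudo KL regulation term itself.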
Date posted: 2023/7/9–7/14
 