XLM-E: Cross-lingual Language Model Pre-training via ELECTRA
In this paper, we introduce ELECTRA-style tasks to cross-lingual language model pre-training. Specifically, we present two pre-training tasks, namely multilingual replaced token detection and translation replaced token detection. In addition, we pre-train the model, named XLM-E, on both multilingual and parallel corpora. Our model outperforms the baseline models on various cross-lingual understanding tasks at a much lower computation cost. Moreover, analysis shows that XLM-E tends to achieve better cross-lingual transferability.
Comment: ACL-2022
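The abstract names the two ELECTRA-style pre-training tasks but not their mechanics. The following is a minimal PyTorch sketch of replaced token detection: a small generator proposes replacement tokens, and the discriminator (the model being pre-trained) classifies each position as original or replaced. All names, model sizes, and the simplified corruption scheme (no [MASK] tokens, no joint generator MLM loss) are illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch of ELECTRA-style replaced token detection (RTD),
# the task family XLM-E builds on. Sizes and names are illustrative.
import torch
import torch.nn as nn

VOCAB_SIZE, HIDDEN, MAX_LEN = 1000, 64, 32

class TinyEncoder(nn.Module):
    """Small Transformer encoder, reused for generator and discriminator."""
    def __init__(self, hidden=HIDDEN, layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, hidden)
        self.pos = nn.Embedding(MAX_LEN, hidden)
        block = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)

    def forward(self, ids):
        pos = torch.arange(ids.size(1), device=ids.device)
        return self.encoder(self.embed(ids) + self.pos(pos))

generator = TinyEncoder(layers=1)          # small MLM that proposes replacements
gen_head = nn.Linear(HIDDEN, VOCAB_SIZE)   # token logits at corrupted positions
discriminator = TinyEncoder(layers=2)      # the model being pre-trained
disc_head = nn.Linear(HIDDEN, 1)           # per-token original-vs-replaced logit

def rtd_loss(ids, mask_prob=0.15):
    """One RTD step: corrupt `ids` with generator samples, then train the
    discriminator to flag which positions were replaced. (A real setup would
    also train the generator with an MLM loss; omitted here for brevity.)"""
    masked = torch.rand(ids.shape, device=ids.device) < mask_prob
    corrupted = ids.clone()
    with torch.no_grad():                  # generator samples replacement tokens
        logits = gen_head(generator(ids))
        samples = torch.distributions.Categorical(logits=logits).sample()
    corrupted[masked] = samples[masked]
    # A sampled token may coincide with the original; those count as "original".
    labels = (corrupted != ids).float()
    disc_logits = disc_head(discriminator(corrupted)).squeeze(-1)
    return nn.functional.binary_cross_entropy_with_logits(disc_logits, labels)

batch = torch.randint(0, VOCAB_SIZE, (4, MAX_LEN))   # toy monolingual batch
print(rtd_loss(batch).item())
```

Multilingual replaced token detection applies this loss to monolingual text in many languages; translation replaced token detection would compute the same loss over the concatenation of a translation pair, so the discriminator can exploit cross-lingual context when detecting replacements.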
Title: XLM-E: Cross-lingual Language Model Pre-training via ELECTRA
Authors / Contributors: Chi, Zewen; Huang, Shaohan; Dong, Li; Ma, Shuming; Zheng, Bo; Singhal, Saksham; Bajaj, Payal; Song, Xia; Mao, Xian-Ling; Huang, Heyan; Wei, Furu
Published: 2021
Media type: report