taesiri committed on
Commit
c7c2c37
1 Parent(s): 13ff8d3

Upload abstract/2307.01163.txt with huggingface_hub

Files changed (1)
  1. abstract/2307.01163.txt +1 -0
abstract/2307.01163.txt ADDED
@@ -0,0 +1 @@
+ Pretrained language models (PLMs) are today the primary model for natural language processing. Despite their impressive downstream performance, it can be difficult to apply PLMs to new languages, a barrier to making their capabilities universally accessible. While prior work has shown that it is possible to address this issue by learning a new embedding layer for the new language, doing so is both data and compute inefficient. We propose to use an active forgetting mechanism during pretraining, as a simple way of creating PLMs that can quickly adapt to new languages. Concretely, by resetting the embedding layer every K updates during pretraining, we encourage the PLM to improve its ability to learn new embeddings within a limited number of updates, similar to a meta-learning effect. Experiments with RoBERTa show that models pretrained with our forgetting mechanism not only demonstrate faster convergence during language adaptation but also outperform standard ones in a low-data regime, particularly for languages that are distant from English.
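
The abstract's core mechanism, resetting the embedding layer every K updates while the transformer body keeps training, is simple to prototype. Below is a minimal, hypothetical PyTorch/transformers sketch of that idea; the value of K, the `reset_embeddings` and `dummy_batches` helpers, and the optimizer settings are illustrative assumptions, not the paper's implementation.

```python
# Sketch of active forgetting: every K optimizer updates, re-initialize the
# token-embedding matrix while leaving the rest of the model untouched.
# K, the helpers, and the synthetic data are assumptions for illustration.
import torch
from transformers import RobertaConfig, RobertaForMaskedLM

K = 5  # reset interval, kept tiny for this demo; a real run would use a much larger value

config = RobertaConfig()
model = RobertaForMaskedLM(config)  # randomly initialized RoBERTa for MLM pretraining
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)


def reset_embeddings(model: RobertaForMaskedLM, std: float) -> None:
    """Re-initialize the input embedding matrix in place ("forget" the embeddings)."""
    emb = model.get_input_embeddings()
    emb.weight.data.normal_(mean=0.0, std=std)


def dummy_batches(num_steps: int, vocab_size: int, batch_size: int = 8, seq_len: int = 32):
    """Yield random token batches so the loop runs without real data (labels = inputs)."""
    for _ in range(num_steps):
        ids = torch.randint(0, vocab_size, (batch_size, seq_len))
        yield {"input_ids": ids, "labels": ids}


for step, batch in enumerate(dummy_batches(num_steps=10, vocab_size=config.vocab_size)):
    loss = model(**batch).loss  # standard pretraining update
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if (step + 1) % K == 0:
        # Active forgetting: only the embedding layer is reset; the body keeps its weights.
        reset_embeddings(model, std=config.initializer_range)
```

Because the reset hits only the embedding layer, the body is repeatedly forced to cope with fresh embeddings, which is what the abstract credits for the faster adaptation to new languages.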