Hailay committed
Commit 87d8aea
Parent: a76c508

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -16,7 +16,7 @@ library_name: transformers
 ---
 
 # XLM-R and EXLMR Model
-
+We introduce the EXLMR model, an extension of XLM-R that expands its tokenizer vocabulary to incorporate new languages and alleviate out-of-vocabulary (OOV) issues. We initialize the embeddings for the newly added vocabulary so that the model can leverage the new tokens effectively. Our approach not only benefits low-resource languages but also improves performance on high-resource languages that were part of the original XLM-R model.
 ## Model Overview
 
 The **XLM-R** (Cross-lingual Language Model - RoBERTa) is a multilingual model trained on 100 languages. The **EXLMR** (Extended XLM-RoBERTa) is an extended version designed to improve performance on low-resource languages spoken in Ethiopia, including Amharic, Tigrinya, and Afaan Oromo.
@@ -32,7 +32,7 @@ The **XLM-R** (Cross-lingual Language Model - RoBERTa) is a multilingual model t
 
 
 EXLMR addresses tokenization issues inherent to the XLM-R model, such as out-of-vocabulary (OOV) tokens and over-tokenization, especially for low-resource languages.
-Fine-tuning on specific datasets will help adapt the model to particular tasks and improve its performance.You can use this model with the `transformers` library for various NLP tasks.
+Fine-tuning on specific datasets will help adapt the model to particular tasks and improve its performance. You can use this model with the `transformers` library for various NLP tasks.
 ```python
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 import torch
@@ -44,7 +44,7 @@ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
 model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
 
 
-EXLMR has been designed with specific support for underrepresented languages, particularly those spoken in Ethiopia (such as Amharic, Tigrinya, and Afaan Oromo). Like XLM-RoBERTa, EXLMR can be finetuned to handle multiple languages simultaneously, making it effective for cross-lingual tasks such as machine translation, multilingual text classification, and question answering.EXLMR-base follows the same architecture as RoBERTa-base, with 12 layers, 768 hidden dimensions, and 12 attention heads, totaling approximately 270M parameters.
+EXLMR has been designed to support underrepresented languages, particularly those spoken in Ethiopia (such as Amharic, Tigrinya, and Afaan Oromo). Like XLM-RoBERTa, EXLMR can be fine-tuned to handle multiple languages simultaneously, making it effective for cross-lingual tasks such as machine translation, multilingual text classification, and question answering. EXLMR-base follows the same architecture as RoBERTa-base, with 12 layers, 768 hidden dimensions, and 12 attention heads, totaling approximately 270M parameters.
 
 |Model|Vocabulary Size|
 |---|---|
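
The paragraph added in this commit says EXLMR expands the XLM-R tokenizer vocabulary and initializes embeddings for the new entries. A minimal sketch of that kind of extension with the standard `transformers` API follows; the token list, checkpoint, and initialization details are illustrative assumptions, not the authors' actual procedure:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Start from the original XLM-R checkpoint.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# Hypothetical new tokens for Ethiopian-language text; EXLMR's actual added
# vocabulary is learned from corpora, not hand-picked like this.
new_tokens = ["ሰላም", "ዓለም"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix so the new token ids have rows; the library
# initializes the new rows (randomly or by mean resizing, depending on the
# transformers version), and they are then trained further on new-language data.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; vocabulary size is now {len(tokenizer)}")
```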
 
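The README's usage snippet is truncated by the diff hunks, so the value of `checkpoint` never appears. A minimal completion, assuming a placeholder checkpoint id and an arbitrary input sentence, might look like this:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Placeholder: the diff does not show the real value of `checkpoint`.
checkpoint = "path-or-id-of-the-EXLMR-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# Roughly "Hello, world" in Ge'ez script (Amharic/Tigrinya).
inputs = tokenizer("ሰላም ዓለም", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # index of the predicted class
```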