Datasets:
Tasks: Fill-Mask
Formats: csv
Sub-tasks: masked-language-modeling
Size: 1M - 10M
ArXiv:
Tags: afrolm, active learning, language modeling, research papers, natural language processing, self-active learning
License:
bonadossou committed
Commit daaa87d • 1 Parent(s): b25c891
Update README.md
README.md CHANGED
````diff
@@ -84,6 +84,8 @@ tokenizer = XLMRobertaTokenizer.from_pretrained("bonadossou/afrolm_active_learni
 tokenizer.model_max_length = 256
 ```
 
+The `AutoTokenizer` class does not load our tokenizer successfully, so we recommend using the `XLMRobertaTokenizer` class directly. Depending on your task, load the corresponding variant of the model. Read the [XLMRoberta Documentation](https://huggingface.co/docs/transformers/model_doc/xlm-roberta).
+
 ## Reproducing our result: Training and Evaluation
 
 - To train the network, run `python active_learning.py`. You can also wrap it around a `bash` script.
````
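The added paragraph recommends loading the tokenizer with `XLMRobertaTokenizer` instead of `AutoTokenizer` and then picking a model variant that matches the task. A minimal sketch of what that looks like end to end, assuming the full repository id is `bonadossou/afrolm_active_learning` (the hunk header above truncates it) and that the dataset's Fill-Mask task is the target, so the masked-LM variant is used:

```python
# Minimal sketch. Assumptions: repo id "bonadossou/afrolm_active_learning"
# (truncated in the hunk header above) and a fill-mask use case.
from transformers import XLMRobertaTokenizer, XLMRobertaForMaskedLM

# Load the tokenizer directly with XLMRobertaTokenizer, as the diff recommends;
# AutoTokenizer reportedly fails to load it for this repo.
tokenizer = XLMRobertaTokenizer.from_pretrained("bonadossou/afrolm_active_learning")
tokenizer.model_max_length = 256

# Choose the model class that matches your task; for masked language modeling
# that is the ForMaskedLM variant.
model = XLMRobertaForMaskedLM.from_pretrained("bonadossou/afrolm_active_learning")

inputs = tokenizer("AfroLM is a <mask> language model.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```

For other tasks the same pattern applies with a different head class (for example a sequence- or token-classification variant), as described in the linked XLM-RoBERTa documentation.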