This model is deprecated. New Filipino Transformer models trained on much larger corpora are available.
Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance.
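The recommended replacements load through the standard Hugging Face `transformers` auto classes; a minimal sketch, assuming `transformers` and a compatible backend (e.g. PyTorch) are installed:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Either replacement checkpoint works here; the large variant
# trades speed for accuracy.
model_id = "jcblaise/roberta-tagalog-base"  # or "jcblaise/roberta-tagalog-large"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
```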

---

# BERT Tagalog Base Uncased (Whole Word Masking)
Tagalog version of BERT trained on a large preprocessed text corpus scraped and sourced from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community. This particular version uses whole word masking.
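Whole word masking means that when one WordPiece of a word is selected for masking during pretraining, every piece of that word is masked together rather than independently. A minimal sketch of the idea (the `##` continuation prefix follows WordPiece convention; the helper below is illustrative, not this model's actual training code):

```python
import random

def whole_word_mask(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Mask all subword pieces of a chosen word together.

    Subword continuations carry a '##' prefix (WordPiece convention),
    so consecutive '##' tokens belong to the preceding word.
    """
    # Group token indices into whole words.
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    # Decide masking per word, then mask every piece of that word.
    masked = list(tokens)
    for word in words:
        if random.random() < mask_prob:
            for i in word:
                masked[i] = mask_token
    return masked

random.seed(0)
# "ma" + "##ganda" form one word, so they are masked (or kept) as a unit.
print(whole_word_mask(["ma", "##ganda", "ang", "umaga"], mask_prob=0.5))
```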