---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---

# RoBERTa Turkish medium Character-level 16k (uncased)

Pretrained model on the Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretraining corpus is the Turkish split of OSCAR, further filtered and cleaned.

The model architecture is similar to bert-medium (8 layers, 8 attention heads, and a hidden size of 512). The tokenization is character-level, which means that text is split into individual characters. The vocabulary size is 16.7k.
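For illustration, the architecture described above roughly corresponds to the following configuration. This is a minimal sketch, assuming standard `RobertaConfig` defaults for everything else; the exact vocabulary size and position-embedding count are assumptions, not values taken from this model's files.

```python
from transformers import RobertaConfig, RobertaForMaskedLM

# Assumed hyperparameters matching the description above (not the exact
# config shipped with this model).
config = RobertaConfig(
    vocab_size=16_700,             # "16.7k" vocabulary (assumed exact value)
    num_hidden_layers=8,           # 8 layers, as in bert-medium
    num_attention_heads=8,         # 8 attention heads
    hidden_size=512,               # hidden size 512
    max_position_embeddings=1026,  # assumed: 1024 positions plus offset
)
model = RobertaForMaskedLM(config)
print(sum(p.numel() for p in model.parameters()))  # rough parameter count
```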

## Note that this model does not include a tokenizer file because it uses ByT5Tokenizer. The following code can be used for tokenization; the example max length (1024) can be changed:

```python
from transformers import ByT5Tokenizer

# Load the byte-level tokenizer and map ByT5's additional special tokens
# onto the special tokens expected by a RoBERTa-style model.
tokenizer = ByT5Tokenizer.from_pretrained("google/byt5-small")
tokenizer.mask_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][0]
tokenizer.cls_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][1]
tokenizer.bos_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][1]
tokenizer.sep_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][2]
tokenizer.eos_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][2]
tokenizer.pad_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][3]
tokenizer.unk_token = tokenizer.special_tokens_map_extended['additional_special_tokens'][3]
tokenizer.model_max_length = 1024  # example max length; can be changed
```
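As a usage sketch, the tokenizer configured above can be paired with the model for masked-token prediction. The model identifier below is a placeholder for this repository's Hub path, not a confirmed name, and since the model is character-level, each masked position is predicted as a single character/byte.

```python
import torch
from transformers import AutoModelForMaskedLM

# Placeholder id: replace with this repository's actual Hub path.
model = AutoModelForMaskedLM.from_pretrained("<this-model-repo>")

text = "Bu " + tokenizer.mask_token + " bir denemedir."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and decode its top prediction (one character).
mask_pos = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```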

The details can be found in this paper:
https://arxiv.org/...

### BibTeX entry and citation info

```bibtex
@article{}
```