pantheon committed on
Commit
3174a64
1 Parent(s): 5b1a1d0
README.md ADDED
@@ -0,0 +1,32 @@
+ ---
+ language: tr
+ ---
+
+ # Turkish Language Models with Huggingface's Transformers
+
+ As the R&D Team at Loodos, we release cased and uncased versions of the most recent Turkish language models. More details about the pretrained models and their evaluation on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models).
+
+ # Turkish BERT-Base (cased)
+
+ This is a BERT-Base model with 12 encoder layers and a hidden size of 768, trained on a cased Turkish dataset.
+
+ ## Usage
+
+ Using the AutoModel and AutoTokenizer classes from Transformers, you can load the model as shown below.
+
+ ```python
+ from transformers import AutoModel, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("loodos/bert-base-turkish-cased")
+
+ model = AutoModel.from_pretrained("loodos/bert-base-turkish-cased")
+ ```
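+
+ As a minimal sketch of what the model returns (the example sentence is illustrative and a recent Transformers version is assumed), you can then encode a sentence and read off its contextual embeddings:
+
+ ```python
+ import torch
+
+ # Tokenize an example Turkish sentence and run a forward pass without gradients.
+ inputs = tokenizer("Merhaba dünya!", return_tensors="pt")
+ with torch.no_grad():
+     outputs = model(**inputs)
+
+ # Contextual token embeddings: (batch_size, sequence_length, hidden_size=768).
+ print(outputs.last_hidden_state.shape)
+ ```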
+
+ ## Details and Contact
+
+ You can contact us to ask a question, open an issue, or give feedback via our GitHub [repo](https://github.com/Loodos/turkish-language-models).
+
+ ## Acknowledgments
+
+ Many thanks to the TFRC Team for providing us with Cloud TPUs through the TensorFlow Research Cloud to train our models.
+
config.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "attention_probs_dropout_prob": 0.1,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "type_vocab_size": 2,
+   "vocab_size": 32000
+ }
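The config above describes a standard BERT-Base architecture (12 layers, 12 attention heads, hidden size 768, 32k WordPiece vocabulary). As a minimal sketch (assuming the `loodos/bert-base-turkish-cased` model ID from the README), the same values can be read back through `AutoConfig`:

```python
from transformers import AutoConfig

# Load the configuration file shipped with the model from the Hub.
config = AutoConfig.from_pretrained("loodos/bert-base-turkish-cased")

print(config.num_hidden_layers)    # 12
print(config.num_attention_heads)  # 12
print(config.hidden_size)          # 768
print(config.vocab_size)           # 32000
```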
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e87a39081555f6ca1c9104884c96c7265d1b073bc0674fdd58ff66ca31bbb991
+ size 445058649
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb165cfb9cb53afa8d7bea84fcc73f77763ec08ec0a6d5205b403d54cd79eb41
+ size 442731288
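Both `pytorch_model.bin` and `tf_model.h5` are stored as Git LFS pointer files; the SHA-256 and size fields above describe the actual weight blobs. As a minimal sketch (assuming the `huggingface_hub` client and the model ID from the README), a single weight file can be fetched directly:

```python
from huggingface_hub import hf_hub_download

# Download the PyTorch weights referenced by the LFS pointer above.
weights_path = hf_hub_download(
    repo_id="loodos/bert-base-turkish-cased",
    filename="pytorch_model.bin",
)
print(weights_path)  # local cache path of the ~445 MB checkpoint
```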
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "model_max_length": 512, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff