Ahmet committed
Commit 996fb38
1 Parent(s): 7a9ae4a

upload model
README.md CHANGED
@@ -21,6 +21,12 @@ This model was adapted from [ytu-ce-cosmos/turkish-mini-bert-uncased](https://hu
 - [nli_tr](https://huggingface.co/datasets/nli_tr)
 - [emrecan/stsb-mt-turkish](https://huggingface.co/datasets/emrecan/stsb-mt-turkish)
 
+:warning: **All texts were manually lowercased,** [as stated](https://huggingface.co/ytu-ce-cosmos/turkish-tiny-bert-uncased#%E2%9A%A0-uncased-use-requires-manual-lowercase-conversion) by the model's authors:
+
+```python
+text.replace("I", "ı").lower()
+```
+
 ## Usage (Sentence-Transformers)
 
 Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
@@ -85,10 +91,10 @@ print(sentence_embeddings)
 Achieved results on the [STS-b](https://huggingface.co/datasets/emrecan/stsb-mt-turkish) test split are given below:
 
 ```txt
-Cosine-Similarity : Pearson: 0.7039 Spearman: 0.6850
-Manhattan-Distance: Pearson: 0.6774 Spearman: 0.6740
-Euclidean-Distance: Pearson: 0.6770 Spearman: 0.6731
-Dot-Product-Similarity: Pearson: 0.6716 Spearman: 0.6559
+Cosine-Similarity : Pearson: 0.8117 Spearman: 0.8074
+Manhattan-Distance: Pearson: 0.8029 Spearman: 0.7972
+Euclidean-Distance: Pearson: 0.8028 Spearman: 0.7977
+Dot-Product-Similarity: Pearson: 0.7563 Spearman: 0.7435
 ```
 
 
@@ -97,9 +103,9 @@ The model was trained with the parameters:
 
 **DataLoader**:
 
-`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
+`torch.utils.data.dataloader.DataLoader` of length 45 with parameters:
 ```
-{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
+{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
 ```
 
 **Loss**:
@@ -109,8 +115,8 @@ The model was trained with the parameters:
 Parameters of the fit()-Method:
 ```
 {
-    "epochs": 5,
-    "evaluation_steps": 45,
+    "epochs": 10,
+    "evaluation_steps": 4,
     "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
     "max_grad_norm": 1,
     "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
@@ -128,7 +134,7 @@ Parameters of the fit()-Method:
 ## Full Model Architecture
 ```
 SentenceTransformer(
-  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
+  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
   (1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
 )
 ```
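The lowercasing rule the commit adds to the README can be wrapped in a small helper. A minimal sketch — the function name is mine, but the replace-then-lower rule is exactly the one-liner from the README: plain `str.lower()` maps `"I"` to `"i"`, while Turkish needs the dotless `"ı"`, so the capital `I` must be replaced first.

```python
def lowercase_turkish(text: str) -> str:
    """Lowercase Turkish text for the uncased model.

    Replace the capital dotless "I" with "ı" before calling lower(),
    as the model's authors require for their uncased checkpoints.
    """
    return text.replace("I", "ı").lower()


print(lowercase_turkish("Iğdır VE Ankara"))  # ığdır ve ankara
```

Sentences should be passed through this helper before `model.encode(...)`, since the model was trained on text lowercased this way.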
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "e5_b64_turkish_mini_bert_uncased-mean-nli\\",
+  "_name_or_path": "output/ytu_ce_cosmos-turkish_mini_bert_uncased-b128-e10-nli/",
   "architectures": [
     "BertModel"
   ],
config_sentence_transformers.json CHANGED
@@ -2,6 +2,6 @@
   "__version__": {
     "sentence_transformers": "2.2.2",
     "transformers": "4.28.0",
-    "pytorch": "2.0.1+cu118"
+    "pytorch": "2.1.0+cu121"
   }
 }
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a2fd81857a006e7c045e1716e3d4a92bd0783ee6f1cece36b63533498e09c376
-size 46223689
+oid sha256:7b8f1f189b3b2f879d341806d765e0856cf2869b1a1a76ab42c441fe86ac983b
+size 46224134
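The `pytorch_model.bin` entry above is a Git LFS pointer file: the weights themselves are stored content-addressed by their SHA-256 digest (`oid sha256:...`). A minimal sketch of checking a downloaded file against the pointer's oid — the helper name and chunk size are mine, only the digest format comes from the pointer:

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Stream the file in chunks so large checkpoints need not fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Compare against the oid recorded in the LFS pointer, e.g.
# sha256_of("pytorch_model.bin") == "7b8f1f189b3b2f879d341806d765e0856cf2869b1a1a76ab42c441fe86ac983b"
```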
sentence_bert_config.json CHANGED
@@ -1,4 +1,4 @@
 {
-  "max_seq_length": 75,
+  "max_seq_length": 256,
   "do_lower_case": false
 }
tokenizer.json CHANGED
@@ -2,7 +2,7 @@
   "version": "1.0",
   "truncation": {
     "direction": "Right",
-    "max_length": 75,
+    "max_length": 256,
     "strategy": "LongestFirst",
     "stride": 0
   },
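After this commit, `sentence_bert_config.json` and `tokenizer.json` agree on a 256-token limit with right-side truncation. A minimal sketch of what the `"direction": "Right"` setting means for an over-long input — the token ids are illustrative, and real truncation happens inside the tokenizer, not in user code:

```python
MAX_LENGTH = 256  # the new limit set by this commit (raised from 75)


def truncate_right(token_ids: list[int], max_length: int = MAX_LENGTH) -> list[int]:
    # "direction": "Right" keeps the first max_length tokens and drops the tail.
    return token_ids[:max_length]


print(len(truncate_right(list(range(300)))))  # 256
```

In practice this means only the first 256 tokens of each sentence contribute to the embedding; longer inputs are silently cut on the right.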