---
language:
- tr
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- nli_tr
- emrecan/stsb-mt-turkish
license: mit
library_name: sentence-transformers
base_model: ytu-ce-cosmos/turkish-medium-bert-uncased
---
# turkish-medium-bert-uncased-mean-nli-stsb-tr
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 512-dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model was adapted from [ytu-ce-cosmos/turkish-medium-bert-uncased](https://huggingface.co/ytu-ce-cosmos/turkish-medium-bert-uncased) and fine-tuned on these datasets:
- [nli_tr](https://huggingface.co/datasets/nli_tr)
- [emrecan/stsb-mt-turkish](https://huggingface.co/datasets/emrecan/stsb-mt-turkish)
:warning: **All texts were manually lowercased,** [as stated](https://huggingface.co/ytu-ce-cosmos/turkish-medium-bert-uncased#%E2%9A%A0-uncased-use-requires-manual-lowercase-conversion) by the model's authors:
```python
text.replace("I", "ı").lower()
```
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Bu örnek bir cümle", "Her cümle dönüştürülür"]
model = SentenceTransformer('atasoglu/turkish-medium-bert-uncased-mean-nli-stsb-tr')
embeddings = model.encode(sentences)
print(embeddings)
```
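The embeddings can then be compared for the semantic search and clustering use cases mentioned above. A minimal sketch using `util.cos_sim` from the same library (the sentence pair is illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('atasoglu/turkish-medium-bert-uncased-mean-nli-stsb-tr')

sentences = ["bu örnek bir cümle", "her cümle dönüştürülür"]  # already lowercased
embeddings = model.encode(sentences, convert_to_tensor=True)

# pairwise cosine similarities; higher values mean more similar sentences
scores = util.cos_sim(embeddings, embeddings)
print(scores)
```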
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ["Bu örnek bir cümle", "Her cümle dönüştürülür"]
# manually lowercase the inputs, as required (see the warning above)
sentences = [s.replace("I", "ı").lower() for s in sentences]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('atasoglu/turkish-medium-bert-uncased-mean-nli-stsb-tr')
model = AutoModel.from_pretrained('atasoglu/turkish-medium-bert-uncased-mean-nli-stsb-tr')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling (mean pooling here)
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
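To compare these embeddings, e.g. for semantic search, cosine similarity can be computed directly in PyTorch. A short sketch continuing from the snippet above:
```python
import torch.nn.functional as F

# L2-normalize so the dot product equals cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T
print(cosine_scores)
```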
## Evaluation Results
Results achieved on the [STS-b](https://huggingface.co/datasets/emrecan/stsb-mt-turkish) test split are given below:
```txt
Cosine-Similarity:       Pearson: 0.8329  Spearman: 0.8336
Manhattan-Distance:      Pearson: 0.8193  Spearman: 0.8188
Euclidean-Distance:      Pearson: 0.8198  Spearman: 0.8195
Dot-Product-Similarity:  Pearson: 0.7888  Spearman: 0.7822
```
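These figures come from the library's `EmbeddingSimilarityEvaluator` (also referenced in the training parameters below). A hedged sketch of how they could be reproduced, assuming the test split exposes `sentence1`/`sentence2`/`score` columns with gold scores on a 0-5 scale:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer('atasoglu/turkish-medium-bert-uncased-mean-nli-stsb-tr')
test = load_dataset("emrecan/stsb-mt-turkish", split="test")

def lower_tr(s):
    return s.replace("I", "ı").lower()  # manual lowercasing, as required

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=[lower_tr(s) for s in test["sentence1"]],
    sentences2=[lower_tr(s) for s in test["sentence2"]],
    scores=[s / 5.0 for s in test["score"]],  # assumed 0-5 scale, rescaled to [0, 1]
    name="stsb-mt-turkish-test",
)
print(evaluator(model))
```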
## Training
The model was trained with the following parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 4,
"evaluation_steps": 9,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
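The card lists only this final STS fine-tuning stage; the NLI stage is not parameterized here. Below is a hedged sketch of the STS stage using the listed parameters and the library's `fit()` API, again assuming `sentence1`/`sentence2`/`score` columns with 0-5 gold scores (the NLI step is omitted):
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses, models

# build the model from the base checkpoint with mean pooling (see the architecture below)
word = models.Transformer("ytu-ce-cosmos/turkish-medium-bert-uncased", max_seq_length=256)
pooling = models.Pooling(word.get_word_embedding_dimension(), pooling_mode_mean_tokens=True)
model = SentenceTransformer(modules=[word, pooling])

def lower_tr(s):
    return s.replace("I", "ı").lower()  # manual lowercasing, as required

# assumed column names; gold scores rescaled from 0-5 to [0, 1] for CosineSimilarityLoss
train = load_dataset("emrecan/stsb-mt-turkish", split="train")
examples = [
    InputExample(texts=[lower_tr(r["sentence1"]), lower_tr(r["sentence2"])],
                 label=r["score"] / 5.0)
    for r in train
]

train_dataloader = DataLoader(examples, shuffle=True, batch_size=64)
train_loss = losses.CosineSimilarityLoss(model)

# AdamW is the default optimizer_class; the listed evaluator and
# evaluation_steps are omitted here for brevity
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=36,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    scheduler="WarmupLinear",
)
```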
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
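The 256-token input limit and the 512-dimensional mean-pooled output shown above can be verified programmatically:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('atasoglu/turkish-medium-bert-uncased-mean-nli-stsb-tr')
print(model)                                     # module list, as shown above
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 512
```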
## Citing & Authors