---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language: ko
---
# ko-sroberta-nli

This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)

Using this model becomes easy when you have sentence-transformers installed:

```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer

sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]

model = SentenceTransformer('jhgan/ko-sroberta-nli')
embeddings = model.encode(sentences)
print(embeddings)
```
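
Because the card targets sentence similarity, a quick way to check the embeddings is to score the two example sentences against each other. This is a minimal sketch (an addition, not part of the original card) using the `util.cos_sim` helper that ships with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jhgan/ko-sroberta-nli')
embeddings = model.encode(["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."])

# Pairwise cosine similarities; the result is a 2x2 tensor whose diagonal is 1.0.
scores = util.cos_sim(embeddings, embeddings)
print(scores)
```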
## Usage (HuggingFace Transformers)

Without sentence-transformers, you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sroberta-nli')
model = AutoModel.from_pretrained('jhgan/ko-sroberta-nli')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
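
Continuing from the snippet above, similarity scores can also be computed from the pooled embeddings directly in PyTorch. A minimal sketch (an addition, not part of the original card): L2-normalize the embeddings so that their dot products equal cosine similarities.

```python
import torch.nn.functional as F

# After normalization, the dot product of two rows is their cosine similarity.
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T
print(cosine_scores)
```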
## Evaluation Results

Results from training on the KorNLI training set and then evaluating on the KorSTS evaluation set:
- Cosine Pearson: 82.83
- Cosine Spearman: 83.85
- Euclidean Pearson: 82.87
- Euclidean Spearman: 83.29
- Manhattan Pearson: 82.88
- Manhattan Spearman: 83.28
- Dot Pearson: 80.34
- Dot Spearman: 79.69
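
These correlations can be reproduced with the `EmbeddingSimilarityEvaluator` referenced in the Training section below. The sketch here uses two hypothetical KorSTS-style pairs as placeholders; the real evaluation runs over the full KorSTS evaluation set, with gold scores rescaled from the 0-5 range to [0, 1].

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Hypothetical placeholder pairs; substitute the full KorSTS evaluation set.
sentences1 = ["한 남자가 음식을 먹는다.", "비행기가 이륙하고 있다."]
sentences2 = ["한 남자가 빵을 먹는다.", "비행기가 착륙하고 있다."]
gold_scores = [3.8 / 5.0, 2.4 / 5.0]  # rescaled to [0, 1]

model = SentenceTransformer('jhgan/ko-sroberta-nli')
evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores)

# Returns the main correlation score (a dict of metrics in newer library versions).
print(evaluator(model))
```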
## Training

The model was trained with the parameters:

**DataLoader**:

`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8885 with parameters:

```
{'batch_size': 64}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:

```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:

```
{
    "epochs": 1,
    "evaluation_steps": 1000,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 889,
    "weight_decay": 0.01
}
```
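
As a rough illustration of how these parameters fit together, here is a hedged sketch of the classic `fit()` training loop. The backbone name `klue/roberta-base` and the single placeholder triplet are assumptions; the actual training data is KorNLI, and a real run needs the full example list.

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer('klue/roberta-base')  # assumed Korean RoBERTa backbone

# Placeholder (anchor, entailment, contradiction) triplet standing in for KorNLI.
train_examples = [
    InputExample(texts=["한 남자가 음식을 먹는다.", "한 남자가 밥을 먹는다.", "한 남자가 잠을 잔다."]),
]

# NoDuplicatesDataLoader keeps duplicate sentences out of a batch, which matters
# because MultipleNegativesRankingLoss treats the other in-batch examples as negatives.
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim is the default

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=889,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```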
## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors

- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). KorNLI and KorSTS: New benchmark datasets for Korean natural language understanding. arXiv preprint arXiv:2004.03289.
- Reimers, Nils and Iryna Gurevych. "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks." ArXiv abs/1908.10084 (2019).
- Reimers, Nils and Iryna Gurevych. "Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation." EMNLP (2020).