---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
kpf-sbert-v1
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
It is the jinmang2/kpfbert model fine-tuned as a SentenceBERT model.
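A minimal usage sketch with the sentence-transformers library; the repository id `bongsoo/kpf-sbert-v1` and the example sentences are assumptions for illustration, not taken from this card:

```python
from sentence_transformers import SentenceTransformer, util

# Repository id assumed from the card title/author; adjust if the hub id differs.
model = SentenceTransformer("bongsoo/kpf-sbert-v1")

sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences)      # shape: (2, 768)

# Cosine similarity between the two sentences.
print(util.cos_sim(embeddings[0], embeddings[1]))
```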
Evaluation Results
- The corpora used to measure performance are the Korean (kor) and English (en) evaluation sets below.
  - Korean: korsts (1,379 sentence pairs) and klue-sts (519 sentence pairs)
  - English: stsb_multi_mt (1,376 sentence pairs) and glue(stsb) (1,500 sentence pairs)
  - The performance metric is cosine Spearman correlation (a small evaluation sketch follows the table below).
- See here for the evaluation code.
| Model | korsts | klue-sts | glue(stsb) | stsb_multi_mt(en) |
|---|---|---|---|---|
| distiluse-base-multilingual-cased-v2 | 0.7475 | 0.7855 | 0.8193 | 0.8075 |
| paraphrase-multilingual-mpnet-base-v2 | 0.8201 | 0.7993 | 0.8907 | 0.8682 |
| bongsoo/albert-small-kor-sbert-v1 | 0.8305 | 0.8588 | 0.8419 | 0.7965 |
| bongsoo/klue-sbert-v1.0 | 0.8529 | 0.8952 | 0.8813 | 0.8469 |
| bongsoo/kpf-sbert-v1.0 | 0.8590 | 0.8924 | 0.8840 | 0.8531 |
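A sketch of how a cosine Spearman score like those above can be computed with sentence-transformers' `EmbeddingSimilarityEvaluator`; the dataset id `klue`/`sts`, its field names, and the 0-5 to 0-1 score rescaling are assumptions about the evaluation setup, not details stated in this card:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator, SimilarityFunction

model = SentenceTransformer("bongsoo/kpf-sbert-v1")

# klue-sts validation split (519 pairs, matching the table above); gold scores
# are on a 0-5 scale and are rescaled to [0, 1] for the evaluator.
sts = load_dataset("klue", "sts", split="validation")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=sts["sentence1"],
    sentences2=sts["sentence2"],
    scores=[ex["label"] / 5.0 for ex in sts["labels"]],
    main_similarity=SimilarityFunction.COSINE,
    name="klue-sts-dev",
)
print(evaluator(model))  # Spearman correlation of the cosine similarities
```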
For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net
Training
- The jinmang2/kpfbert model was trained through the sequence sts(10)-distil(10)-nli(3)-sts(10).
The model was trained with the parameters:
Common
- do_lower_case=1, correct_bios=0, pooling_mode=mean (see the pooling setup sketch below)
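The common settings correspond to a mean-pooling SentenceTransformer wrapped around the kpfbert encoder. A sketch of that setup using the standard sentence-transformers modules (the exact construction code is not given in this card; `max_seq_length` is shown for the STS stage):

```python
from sentence_transformers import SentenceTransformer, models

# Base encoder plus mean pooling, matching do_lower_case=1 and pooling_mode=mean above.
word_embedding = models.Transformer(
    "jinmang2/kpfbert",
    max_seq_length=72,        # 72 for the STS stage, 128 for distillation/NLI
    do_lower_case=True,
)
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),   # 768
    pooling_mode="mean",
)
model = SentenceTransformer(modules=[word_embedding, pooling])
```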
1. STS
- Corpus: korsts(5,749) + kluestsV1.1(11,668) + stsb_multi_mt(5,749) + mteb/sickr-sts(9,927) + glue stsb(5,749) (total: 38,842)
- Parameters: lr: 1e-4, eps: 1e-6, warm_step: 10%, epochs: 10, train_batch: 128, eval_batch: 64, max_token_len: 72 (a training sketch follows this step)
- See here for the training code.
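A sketch of the STS stage with the parameters above, assuming the usual CosineSimilarityLoss recipe (the loss function is not stated in this card) and a toy placeholder corpus in place of the real 38,842 pairs:

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, losses

# Placeholder pairs; the actual stage uses korsts + kluestsV1.1 + stsb_multi_mt
# + mteb/sickr-sts + glue stsb, with similarity scores normalized to [0, 1].
sts_samples = [
    ("A man is playing guitar.", "A person plays a guitar.", 0.9),
    ("A dog runs in the park.", "A cat sleeps on the sofa.", 0.1),
]
train_examples = [InputExample(texts=[s1, s2], label=score) for s1, s2, score in sts_samples]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=128)
train_loss = losses.CosineSimilarityLoss(model)   # `model` from the pooling sketch above

model.fit(
    train_objectives=[(train_loader, train_loss)],
    epochs=10,
    warmup_steps=int(len(train_loader) * 10 * 0.1),   # warm_step = 10%
    optimizer_params={"lr": 1e-4, "eps": 1e-6},
)
```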
2. Distillation
- Teacher model: paraphrase-multilingual-mpnet-base-v2 (max_token_len: 128)
- Corpus: news_talk_en_ko_train.tsv (English-Korean conversation/news parallel corpus: 1.38M)
- Parameters: lr: 5e-5, eps: 1e-8, epochs: 10, train_batch: 128, eval/test_batch: 64, max_token_len: 128 (matched to the teacher model, which uses 128); a distillation sketch follows this step
- See here for the training code.
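A sketch of the distillation stage, assuming the standard sentence-transformers multilingual-distillation recipe (student mimics the teacher's embeddings via MSELoss); the local path of news_talk_en_ko_train.tsv and the choice of MSELoss are assumptions:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.datasets import ParallelSentencesDataset

teacher = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")
student = model                      # the STS-stage model from the sketches above
student.max_seq_length = 128         # match the teacher's max_token_len

# Tab-separated parallel sentences: the teacher embeds the source column and the
# student is trained to reproduce those embeddings for every column.
data = ParallelSentencesDataset(student_model=student, teacher_model=teacher)
data.load_data("news_talk_en_ko_train.tsv", max_sentence_length=128)

train_loader = DataLoader(data, shuffle=True, batch_size=128)
train_loss = losses.MSELoss(model=student)

student.fit(
    train_objectives=[(train_loader, train_loss)],
    epochs=10,
    optimizer_params={"lr": 5e-5, "eps": 1e-8},
)
```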
3. NLI
- Corpus: training (967,852): kornli(550,152), kluenli(24,998), glue-mnli(392,702) / evaluation (3,519): korsts(1,500), kluests(519), gluests(1,500)
- Parameters: lr: 3e-5, eps: 1e-8, warm_step: 10%, epochs: 3, train/eval_batch: 64, max_token_len: 128 (a training sketch follows this step)
- See here for the training code.
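A sketch of the NLI stage; the card lists the corpora and hyperparameters but not the loss, so MultipleNegativesRankingLoss over (anchor, entailment, contradiction) triplets, a common choice for NLI-based SBERT training, is assumed here with a placeholder triplet in place of the real 967,852 pairs:

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, losses

# Placeholder triplet; the actual stage uses kornli + kluenli + glue-mnli.
nli_triplets = [
    ("A man is eating food.", "A man is eating something.", "The man is driving a car."),
]
train_examples = [InputExample(texts=[a, p, n]) for a, p, n in nli_triplets]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model)   # `model` from the earlier sketches

model.fit(
    train_objectives=[(train_loader, train_loss)],
    epochs=3,
    warmup_steps=int(len(train_loader) * 3 * 0.1),   # warm_step = 10%
    optimizer_params={"lr": 3e-5, "eps": 1e-8},
)
```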
Citing & Authors
bongsoo