Sentence Similarity
Safetensors
Japanese
distilbert
feature-extraction
hpprc committed on
Commit
9115bba
1 Parent(s): 8e9818e

Update README.md

Files changed (1)
  1. README.md +66 -86
README.md CHANGED
@@ -12,36 +12,8 @@ pipeline_tag: sentence-similarity
 license: apache-2.0
 ---
 
-# SentenceTransformer based on cl-nagoya/ruri-small-pt
 
-This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [cl-nagoya/ruri-small-pt](https://huggingface.co/cl-nagoya/ruri-small-pt). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
-
-## Model Details
-
-### Model Description
-- **Model Type:** Sentence Transformer
-- **Base model:** [cl-nagoya/ruri-small-pt](https://huggingface.co/cl-nagoya/ruri-small-pt) <!-- at revision 7fc406373e1b317cddbf9962bb2d55270dca7ea8 -->
-- **Maximum Sequence Length:** 512 tokens
-- **Output Dimensionality:** 768 tokens
-- **Similarity Function:** Cosine Similarity
-<!-- - **Training Dataset:** Unknown -->
-<!-- - **Language:** Unknown -->
-<!-- - **License:** Unknown -->
-
-### Model Sources
-
-- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
-- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
-- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
-
-### Full Model Architecture
-
-```
-MySentenceTransformer(
-  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
-  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
-)
-```
 
 ## Usage
 
@@ -55,64 +27,86 @@ pip install -U sentence-transformers
 
 Then you can load this model and run inference.
 ```python
 from sentence_transformers import SentenceTransformer
 
 # Download from the 🤗 Hub
-model = SentenceTransformer("cl-nagoya/ruri-small-55-alpha0.0-0")
-# Run inference
 sentences = [
-    'The weather is lovely today.',
-    "It's so sunny outside!",
-    'He drove to the stadium.',
 ]
-embeddings = model.encode(sentences)
-print(embeddings.shape)
-# [3, 768]
-
-# Get the similarity scores for the embeddings
-similarities = model.similarity(embeddings, embeddings)
-print(similarities.shape)
-# [3, 3]
-```
-
-<!--
-### Direct Usage (Transformers)
-
-<details><summary>Click to see the direct usage in Transformers</summary>
-
-</details>
--->
-
-<!--
-### Downstream Usage (Sentence Transformers)
-
-You can finetune this model on your own dataset.
-
-<details><summary>Click to expand</summary>
-
-</details>
--->
-
-<!--
-### Out-of-Scope Use
-
-*List how the model may foreseeably be misused and address what users ought not to do with the model.*
--->
-
-<!--
-## Bias, Risks and Limitations
-
-*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
--->
-
-<!--
-### Recommendations
-
-*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
--->
 
 ## Training Details
 
 ### Framework Versions
 - Python: 3.10.13
 - Sentence Transformers: 3.0.0
@@ -122,24 +116,10 @@ You can finetune this model on your own dataset.
 - Datasets: 2.19.1
 - Tokenizers: 0.19.1
 
-## Citation
 
 ### BibTeX
 
-<!--
-## Glossary
-
-*Clearly define terms in order to be accessible across audiences.*
--->
-
-<!--
-## Model Card Authors
-
-*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
--->
-
-<!--
-## Model Card Contact
-
-*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--->
 
 license: apache-2.0
 ---
 
+# Ruri: Japanese General Text Embeddings
 
 ## Usage
 
 Then you can load this model and run inference.
 ```python
+import torch.nn.functional as F
 from sentence_transformers import SentenceTransformer
 
 # Download from the 🤗 Hub
+model = SentenceTransformer("cl-nagoya/ruri-small", trust_remote_code=True)
+
+# Don't forget to add the prefix "クエリ: " for query-side or "文章: " for passage-side texts.
 sentences = [
+    "クエリ: 瑠璃色はどんな色?",
+    "文章: 瑠璃色(るりいろ)は、紫みを帯びた濃い青。名は、半貴石の瑠璃(ラピスラズリ、英: lapis lazuli)による。JIS慣用色名では「こい紫みの青」(略号 dp-pB)と定義している[1][2]。",
+    "クエリ: ワシやタカのように、鋭いくちばしと爪を持った大型の鳥類を総称して「何類」というでしょう?",
+    "文章: ワシ、タカ、ハゲワシ、ハヤブサ、コンドル、フクロウが代表的である。これらの猛禽類はリンネ前後の時代(17~18世紀)には鷲類・鷹類・隼類及び梟類に分類された。ちなみにリンネは狩りをする鳥を単一の目(もく)にまとめ、vultur(コンドル、ハゲワシ)、falco(ワシ、タカ、ハヤブサなど)、strix(フクロウ)、lanius(モズ)の4属を含めている。",
 ]
+
+embeddings = model.encode(sentences, convert_to_tensor=True)
+print(embeddings.size())
+# [4, 768]
+
+similarities = F.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
+print(similarities)
+# [[1.0000, 0.9453, 0.6860, 0.7225],
+#  [0.9453, 1.0000, 0.6852, 0.7005],
+#  [0.6860, 0.6852, 1.0000, 0.8567],
+#  [0.7225, 0.7005, 0.8567, 1.0000]]
+```
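The snippet above builds the similarity matrix by hand with `F.cosine_similarity` and broadcasting. Equivalently, Sentence Transformers 3.0 (the version pinned under Framework Versions below) exposes a built-in `model.similarity` helper, which the previous revision of this card used; a minimal sketch:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("cl-nagoya/ruri-small", trust_remote_code=True)

# The prefix convention still applies: "クエリ: " for queries, "文章: " for passages.
sentences = [
    "クエリ: 瑠璃色はどんな色?",
    "文章: 瑠璃色(るりいろ)は、紫みを帯びた濃い青。",
]
embeddings = model.encode(sentences)

# model.similarity applies the model's configured similarity function
# (cosine similarity for this model) and returns a [2, 2] score matrix.
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [2, 2]
```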
 
+## Benchmarks
+
+### JMTEB
+Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).
+
+|Model|#Param.|Retrieval|STS|Classification|Reranking|Clustering|PairClassification|Avg.|
+|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+|[cl-nagoya/sup-simcse-ja-base](https://huggingface.co/cl-nagoya/sup-simcse-ja-base)|111M|49.64|82.05|73.47|91.83|51.79|62.57|68.56|
+|[cl-nagoya/sup-simcse-ja-large](https://huggingface.co/cl-nagoya/sup-simcse-ja-large)|337M|37.62|83.18|73.73|91.48|50.56|62.51|66.51|
+|[cl-nagoya/unsup-simcse-ja-base](https://huggingface.co/cl-nagoya/unsup-simcse-ja-base)|111M|40.23|78.72|73.07|91.16|44.77|62.44|65.07|
+|[cl-nagoya/unsup-simcse-ja-large](https://huggingface.co/cl-nagoya/unsup-simcse-ja-large)|337M|40.53|80.56|74.66|90.95|48.41|62.49|66.27|
+|[pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja)|133M|59.02|78.71|76.82|91.90|49.78|66.39|70.44|
+||||||||||
+|[sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)|472M|40.12|76.56|72.66|91.63|44.88|62.33|64.70|
+|[intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small)|118M|67.27|80.07|67.62|93.03|46.91|62.19|69.52|
+|[intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base)|278M|68.21|79.84|69.30|92.85|48.26|62.26|70.12|
+|[intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large)|560M|70.98|79.70|72.89|92.96|51.24|62.15|71.65|
+||||||||||
+|OpenAI/text-embedding-ada-002|-|64.38|79.02|69.75|93.04|48.30|62.40|69.48|
+|OpenAI/text-embedding-3-small|-|66.39|79.46|73.06|92.92|51.06|62.27|70.86|
+|OpenAI/text-embedding-3-large|-|74.48|82.52|77.58|93.58|53.32|62.35|73.97|
+||||||||||
+|**[Ruri-Small](https://huggingface.co/cl-nagoya/ruri-small)**|68M|69.41|82.79|76.22|93.00|51.19|62.11|71.53|
+|[Ruri-Base](https://huggingface.co/cl-nagoya/ruri-base)|111M|69.82|82.87|75.58|92.91|54.16|62.38|71.91|
+|[Ruri-Large](https://huggingface.co/cl-nagoya/ruri-large)|337M|73.02|83.13|77.43|92.99|51.82|62.29|73.31|
+
+## Model Details
+
+### Model Description
+- **Model Type:** Sentence Transformer
+- **Base model:** [cl-nagoya/ruri-pt-small](https://huggingface.co/cl-nagoya/ruri-pt-small)
+- **Maximum Sequence Length:** 512 tokens
+- **Output Dimensionality:** 768
+- **Similarity Function:** Cosine Similarity
+- **Language:** Japanese
+- **License:** Apache 2.0
+<!-- - **Training Dataset:** Unknown -->
+
+### Full Model Architecture
+
+```
+SentenceTransformer(
+  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
+  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+)
+```
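Because the pooling module shown here is plain attention-masked mean pooling over DistilBERT token embeddings, inference can also be approximated with raw `transformers`. This is a minimal sketch under that assumption, not an official recipe from this card; any extra behavior behind `trust_remote_code=True` is not replicated:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cl-nagoya/ruri-small")
model = AutoModel.from_pretrained("cl-nagoya/ruri-small")  # DistilBertModel

# Prefixes as above: "クエリ: " for queries, "文章: " for passages.
texts = [
    "クエリ: 瑠璃色はどんな色?",
    "文章: 瑠璃色(るりいろ)は、紫みを帯びた濃い青。",
]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # [batch, seq_len, 768]

# Mean over non-padding tokens only, mirroring pooling_mode_mean_tokens=True.
mask = batch["attention_mask"].unsqueeze(-1).float()     # [batch, seq_len, 1]
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

print(F.cosine_similarity(embeddings[0], embeddings[1], dim=0))
```

The 512-token cap matches the maximum sequence length listed above; longer inputs are truncated.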
 
 ## Training Details
 
+
 ### Framework Versions
 - Python: 3.10.13
 - Sentence Transformers: 3.0.0
 - Datasets: 2.19.1
 - Tokenizers: 0.19.1
 
+<!-- ## Citation
 
 ### BibTeX
+-->
 
+## License
+This model is published under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).