add readme

- README.md +145 -5
- README_JA.md +140 -0

README.md
CHANGED

---
language:
- ja
- en
license_name: sarahina-non-commercial-license
license_link: LICENSE
tags:
- transformers
- sentence-similarity
- feature-extraction
- sentence-transformers
pipeline_tag: sentence-similarity
inference: false
datasets:
- hpprc/emb
- hpprc/mqa-ja
- sentence-transformers/NQ-retrieval
- izumi-lab/llm-japanese-dataset
- shunk031/JGLUE
- cl-nagoya/ruri-dataset-ft
---

# sarashina-embedding-v1-1b

**[日本語のREADME/Japanese README](https://huggingface.co/sbintuitions/sarashina-embedding-v1-1b/blob/main/README_JA.md)**

"sarashina-embedding-v1-1b" is a Japanese text embedding model based on the 1.2B-parameter Japanese LLM "Sarashina".
We trained this model with multi-stage contrastive learning, and it achieved the state-of-the-art average score across the 16 datasets of [JMTEB](https://huggingface.co/datasets/sbintuitions/JMTEB) (Japanese Massive Text Embedding Benchmark).

This model maps sentences and paragraphs to a 1792-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1792 dimensions
- **Similarity Function:** Cosine Similarity
- **Language:** Japanese
- **License:** [Sarashina Model NonCommercial License Agreement](https://huggingface.co/sbintuitions/sarashina-embedding-v1-1b/blob/main/LICENSE)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: LlamaModel
  (1): Pooling({'word_embedding_dimension': 1792, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': False})
)
```

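The `Pooling` module above uses last-token pooling (`pooling_mode_lasttoken`): the sentence embedding is the hidden state of the final non-padding token. Below is a minimal sketch of that operation in plain PyTorch, assuming right-padded batches; it is illustrative only, since the `SentenceTransformer` wrapper handles this internally.

```python
import torch

def last_token_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Select the hidden state of the last non-padding token in each sequence.

    hidden_states:  (batch, seq_len, hidden_dim) output of the transformer
    attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding
    """
    # For right-padded sequences, the last real token sits at index (num_tokens - 1).
    last_index = attention_mask.sum(dim=1) - 1
    return hidden_states[torch.arange(hidden_states.size(0)), last_index]

# Toy shapes only; the real model produces 1792-dimensional hidden states.
states = torch.randn(2, 5, 1792)
mask = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])
print(last_token_pool(states, mask).shape)  # torch.Size([2, 1792])
```
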
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sbintuitions/sarashina-embedding-v1-1b")
# Run inference
sentences = [
    '更級日記は、平安時代中期に菅原孝標女によって書かれた回想録です。',
    'Sarashinaは、SB Intuitionsが開発した日本語大規模言語モデルです。これまでに7B, 13B, 70B, 8x70Bのモデルが公開されています。',
    '更科蕎麦とはなんですか?'
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1792]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

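Because similarity is plain cosine over the embeddings, semantic search reduces to encoding a query and a corpus and ranking with `model.similarity`. A minimal sketch follows; the corpus and query strings are made-up examples, not from the model card.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sbintuitions/sarashina-embedding-v1-1b")

# Hypothetical corpus and query, for illustration only.
corpus = [
    '更級日記は、平安時代中期に菅原孝標女によって書かれた回想録です。',
    'Sarashinaは、SB Intuitionsが開発した日本語大規模言語モデルです。',
]
query = '平安時代の文学作品について教えてください。'

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine similarities between the query and each corpus sentence: shape [1, 2]
scores = model.similarity(query_embedding, corpus_embeddings)
best = scores[0].argmax().item()
print(corpus[best], scores[0][best].item())
```
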
**Note**

- You do not need to add prefixes such as "Query: " and "Document: " at the beginning of the input sentence.
- This model is licensed under the [Sarashina Model NonCommercial License Agreement](https://huggingface.co/sbintuitions/sarashina-embedding-v1-1b/blob/main/LICENSE), which has restrictions on commercial use. If you are interested in utilizing this model for your business, please feel free to contact us through our [contact page](https://www.sbintuitions.co.jp/#contact).

## Training

sarashina-embedding-v1-1b is created through the following two-stage learning process:

### Stage 1: Weakly-supervised Learning

To achieve generic text embedding performance across a wide range of domains, we performed contrastive training on weakly-supervised data consisting of our own web-crawled data and open data.

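For illustration, the snippet below sketches what contrastive training with in-batch negatives looks like in the Sentence Transformers API, using `MultipleNegativesRankingLoss`. This is a sketch of the general technique, not the exact loss, data, or hyperparameters used for this model; the base checkpoint path and the training pair are placeholders.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder path; the actual base is the Sarashina 1.2B LLM.
model = SentenceTransformer("path/to/base-checkpoint")

# Weakly-supervised (query, passage) pairs; every other pair in the
# batch serves as a negative (in-batch negatives).
train_examples = [
    InputExample(texts=["更級日記とは何ですか?", "更級日記は平安時代中期の回想録です。"]),
    # ... millions of pairs in practice
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```
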
#### Dataset

|dataset|counts|
|:-:|:-:|
|AutoWikiQA|50,521,135|
|web-crawled data|47,370,649|
|MQA|12,941,472|
|llm-japanese-dataset|9,074,340|
|wikipedia|5,555,212|
|Quiz dataset|988,478|
|Natural Questions|132,796|
|JSQuAD|62,859|
|snow|62,758|
|JaQuAD|31,746|
|mkqa|3,318|
|||
|**total**|**126,744,763**|

### Stage 2: Supervised Fine-tuning

To enable the model to learn more accurate query-document similarity, we performed supervised fine-tuning using the following datasets.

#### Dataset

|dataset|counts|
|:-:|:-:|
|JSNLI|141,388|
|NU-MNLI|67,987|
|Mr. TyDi (only Japanese subset)|3,697|
|Natural Question (sampled)|20,000|
|||
|**total**|**233,072**|

## Benchmarks

### [JMTEB](https://huggingface.co/datasets/sbintuitions/JMTEB)

|Model |Max Tokens|Avg. | Retrieval | STS | Classification | Reranking | Clustering | PairClassification |
|:----------------------------------------------|:----------|:----------|:------------|:----------|:-----------------|:------------|:-------------|:---------------------|
| OpenAI/text-embedding-3-large | 8191 |74.05 | 74.48 | 82.52 | 77.58 | 93.58 | 53.32 | 62.35 |
| [cl-nagoya/ruri-large](https://huggingface.co/cl-nagoya/ruri-large) | 512 |73.31 | 73.02 | **83.13** | 77.43 | 92.99 | 51.82 | 62.29 |
| [pkshatech/GLuCoSE-base-ja-v2](https://huggingface.co/pkshatech/GLuCoSE-base-ja-v2) | 512 |72.23 | 73.36 | 82.96 | 74.21 | 93.01 | 48.65 | **62.37** |
| [pkshatech/RoSEtta-base-ja](https://huggingface.co/pkshatech/RoSEtta-base-ja) |1024 |72.04 | 73.21 | 81.39 | 72.41 | 92.69 | 53.23 | 61.74 |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 512|70.90 | 70.98 | 79.70 | 72.89 | 92.96 | 51.24 | 62.15 |
|||
|[**sarashina-embedding-v1-1b**](https://huggingface.co/sbintuitions/sarashina-embedding-v1-1b) (this model)|**8192**|**75.50**|**77.61**|82.71|**78.37**|**93.74**|**53.86**|62.00|

## License

This model is licensed under the [Sarashina Model NonCommercial License Agreement](https://huggingface.co/sbintuitions/sarashina-embedding-v1-1b/blob/main/LICENSE).

**If you are interested in using this model for commercial purposes, please feel free to contact us through our [contact page](https://www.sbintuitions.co.jp/#contact).**

README_JA.md
ADDED

---
language:
- ja
license_name: sarahina-non-commercial-license
license_link: LICENSE
tags:
- transformers
- sentence-similarity
- feature-extraction
- sentence-transformers
inference: false
---

# sarashina-embedding-v1-1b

"sarashina-embedding-v1-1b" is a Japanese text embedding model based on the 1.2B-parameter Japanese LLM "Sarashina".

We trained this model with multi-stage contrastive learning, and it achieved the highest average score (as of 2024/12/1) across the 16 datasets of [JMTEB](https://huggingface.co/datasets/sbintuitions/JMTEB) (Japanese Massive Text Embedding Benchmark).

This model maps sentences and paragraphs to a 1792-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

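Since all embeddings live in a single dense vector space, they can be fed directly to standard clustering algorithms. A minimal sketch using scikit-learn's `KMeans`; the sentences and cluster count are arbitrary examples, not part of the model card.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer("sbintuitions/sarashina-embedding-v1-1b")

# Arbitrary example sentences.
sentences = [
    '更級日記は、平安時代中期に書かれた回想録です。',
    '源氏物語は平安時代の長編物語です。',
    'Sarashinaは日本語大規模言語モデルです。',
    '大規模言語モデルは自然言語処理で広く使われています。',
]
embeddings = model.encode(sentences)

# Group the 1792-dimensional vectors into two clusters.
kmeans = KMeans(n_clusters=2, n_init="auto", random_state=0)
labels = kmeans.fit_predict(embeddings)
print(labels)  # e.g. [0 0 1 1]
```
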
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1792 dimensions
- **Similarity Function:** Cosine Similarity
- **Language:** Japanese
- **License:** [Sarashina Model NonCommercial License Agreement](https://huggingface.co/sbintuitions/sarashina-embedding-v1-1b/blob/main/LICENSE)

### Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: LlamaModel
  (1): Pooling({'word_embedding_dimension': 1792, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': False})
)
```

## Usage

### Using Sentence Transformers

First, install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then load the model and run inference:

```python
from sentence_transformers import SentenceTransformer

# Download the model from the 🤗 Hub
model = SentenceTransformer("sbintuitions/sarashina-embedding-v1-1b")
# Run inference
sentences = [
    '更級日記は、平安時代中期に菅原孝標女によって書かれた回想録です。',
    'Sarashinaは、SB Intuitionsが開発した日本語大規模言語モデルです。これまでに7B, 13B, 70B, 8x70Bのモデルが公開されています。',
    '更科蕎麦とはなんですか?'
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1792]

# Get the similarity scores between the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

**Note**

- You do not need to add prefixes such as "Query: " and "Document: " at the beginning of the input sentence.
- This model is licensed under the [Sarashina Model NonCommercial License Agreement](https://huggingface.co/sbintuitions/sarashina-embedding-v1-1b/blob/main/LICENSE), which has restrictions on commercial use. If you are interested in utilizing this model for your business, please feel free to contact us through our [contact page](https://www.sbintuitions.co.jp/#contact).

## Training

sarashina-embedding-v1-1b was trained through the following two-stage process:

### Stage 1: Weakly-supervised Learning

To achieve generic text embedding performance across a wide range of domains, we performed contrastive learning on weakly-supervised data consisting of our own web-crawled data and open data.

#### Dataset

|dataset|counts|
|:-:|:-:|
|AutoWikiQA|50,521,135|
|web-crawled data|47,370,649|
|MQA|12,941,472|
|llm-japanese-dataset|9,074,340|
|wikipedia|5,555,212|
|Quiz dataset|988,478|
|Natural Questions|132,796|
|JSQuAD|62,859|
|snow|62,758|
|JaQuAD|31,746|
|mkqa|3,318|
|||
|**total**|**126,744,763**|

### Stage 2: Supervised Fine-tuning

To enable the model to learn more accurate query-document similarity, we performed fine-tuning on the following datasets (one possible contrastive setup is sketched after the table).

#### Dataset

|dataset|counts|
|:-:|:-:|
|JSNLI|141,388|
|NU-MNLI|67,987|
|Mr. TyDi (only Japanese subset)|3,697|
|Natural Question (sampled)|20,000|
|||
|**total**|**233,072**|

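The NLI corpora above (JSNLI, NU-MNLI) naturally supply entailment pairs as positives and contradiction pairs as hard negatives. The sketch below shows how such triplets plug into `MultipleNegativesRankingLoss`; this is an assumption for illustration, not a statement of the actual training recipe, and the checkpoint path is a placeholder.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("path/to/stage1-checkpoint")  # placeholder

# (anchor, entailment, contradiction) triplets from NLI data; the
# contradiction acts as an explicit hard negative, while the rest of
# the batch still supplies in-batch negatives.
train_examples = [
    InputExample(texts=[
        "男性がギターを弾いている。",  # premise (anchor)
        "人が楽器を演奏している。",    # entailment (positive)
        "男性は何も持っていない。",    # contradiction (hard negative)
    ]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```
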
## Benchmarks

### [JMTEB](https://huggingface.co/datasets/sbintuitions/JMTEB)

|Model |Max Tokens|Avg. | Retrieval | STS | Classification | Reranking | Clustering | PairClassification |
|:----------------------------------------------|:----------|:----------|:------------|:----------|:-----------------|:------------|:-------------|:---------------------|
| OpenAI/text-embedding-3-large | 8191 |74.05 | 74.48 | 82.52 | 77.58 | 93.58 | 53.32 | 62.35 |
| [cl-nagoya/ruri-large](https://huggingface.co/cl-nagoya/ruri-large) | 512 |73.31 | 73.02 | **83.13** | 77.43 | 92.99 | 51.82 | 62.29 |
| [pkshatech/GLuCoSE-base-ja-v2](https://huggingface.co/pkshatech/GLuCoSE-base-ja-v2) | 512 |72.23 | 73.36 | 82.96 | 74.21 | 93.01 | 48.65 | **62.37** |
| [pkshatech/RoSEtta-base-ja](https://huggingface.co/pkshatech/RoSEtta-base-ja) |1024 |72.04 | 73.21 | 81.39 | 72.41 | 92.69 | 53.23 | 61.74 |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 512|70.90 | 70.98 | 79.70 | 72.89 | 92.96 | 51.24 | 62.15 |
|||
|[**sarashina-embedding-v1-1b**](https://huggingface.co/sbintuitions/sarashina-embedding-v1-1b) (this model)|**8192**|**75.50**|**77.61**|82.71|**78.37**|**93.74**|**53.86**|62.00|

## License

This model is published under the [Sarashina Model NonCommercial License Agreement](https://huggingface.co/sbintuitions/sarashina-embedding-v1-1b/blob/main/LICENSE).

**If you are interested in commercial use of this model, please feel free to contact us through our [contact page](https://www.sbintuitions.co.jp/#contact).**