LamaDiab committed
Commit b7cc62a · verified · 1 Parent(s): 70c75c8

Updating model weights

Files changed (1)
  1. README.md +395 -144
README.md CHANGED
@@ -1,173 +1,424 @@
  ---
- language: en
- license: apache-2.0
- library_name: sentence-transformers
  tags:
  - sentence-transformers
- - feature-extraction
  - sentence-similarity
- - transformers
- datasets:
- - s2orc
- - flax-sentence-embeddings/stackexchange_xml
- - ms_marco
- - gooaq
- - yahoo_answers_topics
- - code_search_net
- - search_qa
- - eli5
- - snli
- - multi_nli
- - wikihow
- - natural_questions
- - trivia_qa
- - embedding-data/sentence-compression
- - embedding-data/flickr30k-captions
- - embedding-data/altlex
- - embedding-data/simple-wiki
- - embedding-data/QQP
- - embedding-data/SPECTER
- - embedding-data/PAQ_pairs
- - embedding-data/WikiAnswers
  pipeline_tag: sentence-similarity
  ---
 
- # all-MiniLM-L6-v2
- This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
 
- ## Usage (Sentence-Transformers)
- Using this model is easy once you have [sentence-transformers](https://www.SBERT.net) installed:
 
  ```
  pip install -U sentence-transformers
  ```
 
- Then you can use the model like this:
  ```python
  from sentence_transformers import SentenceTransformer
- sentences = ["This is an example sentence", "Each sentence is converted"]
 
- model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
  embeddings = model.encode(sentences)
- print(embeddings)
  ```
 
- ## Usage (HuggingFace Transformers)
- Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
 
- ```python
- from transformers import AutoTokenizer, AutoModel
- import torch
- import torch.nn.functional as F
-
- # Mean Pooling - take attention mask into account for correct averaging
- def mean_pooling(model_output, attention_mask):
-     token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
-     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
-     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-
- # Sentences we want sentence embeddings for
- sentences = ['This is an example sentence', 'Each sentence is converted']
-
- # Load model from HuggingFace Hub
- tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
- model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
-
- # Tokenize sentences
- encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
-
- # Compute token embeddings
- with torch.no_grad():
-     model_output = model(**encoded_input)
-
- # Perform pooling
- sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
-
- # Normalize embeddings
- sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
-
- print("Sentence embeddings:")
- print(sentence_embeddings)
- ```
 
- ------
-
- ## Background
-
- The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
- contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
- dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
-
- We developed this model during the
- [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
- organized by Hugging Face, as part of the project
- [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project (7 TPU v3-8s), as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
-
- ## Intended uses
-
- Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
- the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
-
- By default, input text longer than 256 word pieces is truncated.
-
- ## Training procedure
-
- ### Pre-training
-
- We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
-
- ### Fine-tuning
-
- We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between all possible sentence pairs in the batch.
- We then apply the cross-entropy loss by comparing with the true pairs.
-
- #### Hyperparameters
-
- We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
- We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
- a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
-
- #### Training data
-
- We used a concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
- We sampled each dataset with a weighted probability, the configuration for which is detailed in the `data_config.json` file.
-
- | Dataset | Paper | Number of training tuples |
140
- |--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
141
- | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
142
- | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
143
- | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
144
- | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
145
- | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
146
- | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
147
- | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
148
- | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
149
- | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
150
- | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
151
- | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
152
- | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
153
- | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
154
- | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
155
- | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
156
- | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
157
- | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
158
- | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
159
- | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
160
- | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
161
- | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
162
- | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
163
- | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
164
- | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
165
- | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
166
- | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
167
- | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
168
- | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
169
- | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
170
- | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
171
- | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
172
- | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
173
- | **Total** | | **1,170,060,424** |
 
1
  ---
 
 
 
2
  tags:
3
  - sentence-transformers
 
4
  - sentence-similarity
5
+ - feature-extraction
6
+ - dense
7
+ - generated_from_trainer
8
+ - dataset_size:704378
9
+ - loss:MultipleNegativesSymmetricRankingLoss
10
+ base_model: sentence-transformers/all-MiniLM-L6-v2
11
+ widget:
12
+ - source_sentence: must kindergarten backpack mermazing 2 cases
13
+ sentences:
14
+ - wide leg popline pants b22
15
+ - ' kindergarten mermazing backpack '
16
+ - bag
17
+ - source_sentence: derby cap toe shoes - brown
18
+ sentences:
19
+ - natural leather shoes
20
+ - shoe
21
+ - 925 sterling silver heart ear studs with genuine european crystals
22
+ - source_sentence: rembrandt's eyes
23
+ sentences:
24
+ - art book
25
+ - ' rembrandt''s eyes book'
26
+ - canvas frame 100% cotton 350 gsm 20 cm triangle m e5303t
27
+ - source_sentence: essence multi task concealer 15 natural nude
28
+ sentences:
29
+ - face make-up
30
+ - ' essence concealer'
31
+ - rowntrees fruit pastilles
32
+ - source_sentence: parker ingenuity ct black lacquer so959210
33
+ sentences:
34
+ - lagu-family barber shop toy
35
+ - ' pen'
36
+ - pen
37
  pipeline_tag: sentence-similarity
38
+ library_name: sentence-transformers
39
+ metrics:
40
+ - cosine_accuracy
41
+ model-index:
42
+ - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
43
+ results:
44
+ - task:
45
+ type: triplet
46
+ name: Triplet
47
+ dataset:
48
+ name: Unknown
49
+ type: unknown
50
+ metrics:
51
+ - type: cosine_accuracy
52
+ value: 0.9562519788742065
53
+ name: Cosine Accuracy
54
  ---
55
 
56
+ # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
57
+
58
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
59
 
60
+ ## Model Details
 
61
 
62
+ ### Model Description
63
+ - **Model Type:** Sentence Transformer
64
+ - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
65
+ - **Maximum Sequence Length:** 256 tokens
66
+ - **Output Dimensionality:** 384 dimensions
67
+ - **Similarity Function:** Cosine Similarity
68
+ <!-- - **Training Dataset:** Unknown -->
69
+ <!-- - **Language:** Unknown -->
70
+ <!-- - **License:** Unknown -->
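The figures above can be read directly off the loaded model. A quick sanity-check sketch (it assumes the checkpoint can be downloaded from the Hub):

```python
from sentence_transformers import SentenceTransformer

# Load the fine-tuned checkpoint and confirm the properties listed in the Model Description.
model = SentenceTransformer("LamaDiab/MiniLM-V15Data-128BATCH-SemanticEngine")

print(model.max_seq_length)                      # 256 tokens
print(model.get_sentence_embedding_dimension())  # 384 dimensions
print(model.similarity_fn_name)                  # "cosine"
```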
71
 
72
+ ### Model Sources
73
+
74
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
75
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/huggingface/sentence-transformers)
76
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
77
+
78
+ ### Full Model Architecture
79
+
80
+ ```
81
+ SentenceTransformer(
82
+ (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
83
+ (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
84
+ (2): Normalize()
85
+ )
86
  ```
87
+
88
+ ## Usage
89
+
90
+ ### Direct Usage (Sentence Transformers)
91
+
92
+ First install the Sentence Transformers library:
93
+
94
+ ```bash
95
  pip install -U sentence-transformers
96
  ```
97
 
98
+ Then you can load this model and run inference.
99
  ```python
100
  from sentence_transformers import SentenceTransformer
 
101
 
102
+ # Download from the 🤗 Hub
103
+ model = SentenceTransformer("LamaDiab/MiniLM-V15Data-128BATCH-SemanticEngine")
104
+ # Run inference
105
+ sentences = [
106
+ 'parker ingenuity ct black lacquer so959210',
107
+ ' pen',
108
+ 'lagu-family barber shop toy',
109
+ ]
110
  embeddings = model.encode(sentences)
111
+ print(embeddings.shape)
112
+ # [3, 384]
113
+
114
+ # Get the similarity scores for the embeddings
115
+ similarities = model.similarity(embeddings, embeddings)
116
+ print(similarities)
117
+ # tensor([[1.0000, 0.3281, 0.1032],
118
+ # [0.3281, 1.0000, 0.0042],
119
+ # [0.1032, 0.0042, 1.0000]])
120
  ```
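Beyond pairwise similarity, the same embeddings can back a small semantic-search step, which is the kind of query-to-catalogue matching the widget examples above illustrate. A minimal sketch (the query and candidate strings are taken from those widget examples; a real catalogue would be encoded once and cached):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("LamaDiab/MiniLM-V15Data-128BATCH-SemanticEngine")

query = "derby cap toe shoes - brown"
candidates = [
    "natural leather shoes",
    "shoe",
    "925 sterling silver heart ear studs with genuine european crystals",
]

# Encode the query and the candidates, then rank candidates by cosine similarity.
query_embedding = model.encode([query])
candidate_embeddings = model.encode(candidates)
scores = model.similarity(query_embedding, candidate_embeddings)[0]

for candidate, score in sorted(zip(candidates, scores.tolist()), key=lambda pair: pair[1], reverse=True):
    print(f"{score:.4f}  {candidate}")
```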
121
 
122
+ <!--
123
+ ### Direct Usage (Transformers)
124
 
125
+ <details><summary>Click to see the direct usage in Transformers</summary>
 
 
 
126
 
127
+ </details>
128
+ -->
 
 
 
129
 
130
+ <!--
131
+ ### Downstream Usage (Sentence Transformers)
132
+
133
+ You can finetune this model on your own dataset.
134
+
135
+ <details><summary>Click to expand</summary>
136
+
137
+ </details>
138
+ -->
139
+
140
+ <!--
141
+ ### Out-of-Scope Use
142
+
143
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
144
+ -->
145
+
146
+ ## Evaluation
147
+
148
+ ### Metrics
149
+
150
+ #### Triplet
151
+
152
+ * Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
153
+
154
+ | Metric | Value |
155
+ |:--------------------|:-----------|
156
+ | **cosine_accuracy** | **0.9563** |
157
+
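The score above is produced by the evaluator during training. A minimal sketch of running the same evaluator yourself (the triplet strings are copied from the evaluation-dataset samples shown later in this card; the key in the returned dict follows the evaluator's `name`):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("LamaDiab/MiniLM-V15Data-128BATCH-SemanticEngine")

# Each anchor should embed closer to its positive than to its negative.
evaluator = TripletEvaluator(
    anchors=["pilot mechanical pencil progrex h-127 - 0.7 mm"],
    positives=["0.7 mm pencil"],
    negatives=["tracing sketch a3 70 gr 50 sheets"],
    name="dev",
)
results = evaluator(model)
print(results)  # e.g. {"dev_cosine_accuracy": ...}
```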
158
+ <!--
159
+ ## Bias, Risks and Limitations
160
+
161
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
162
+ -->
163
+
164
+ <!--
165
+ ### Recommendations
166
+
167
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
168
+ -->
169
+
170
+ ## Training Details
171
+
172
+ ### Training Dataset
173
+
174
+ #### Unnamed Dataset
175
+
176
+ * Size: 704,378 training samples
177
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>itemCategory</code>
178
+ * Approximate statistics based on the first 1000 samples:
179
+ | | anchor | positive | itemCategory |
180
+ |:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
181
+ | type | string | string | string |
182
+ | details | <ul><li>min: 3 tokens</li><li>mean: 8.06 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 5.35 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.93 tokens</li><li>max: 9 tokens</li></ul> |
183
+ * Samples:
184
+ | anchor | positive | itemCategory |
185
+ |:-------------------------------------------------------------|:--------------------------------------------------|:-------------------------------------|
186
+ | <code>rilastil sunlaude comfort dye fluid spf50 50 ml</code> | <code>spf50 sunscreen</code> | <code>sunscreen</code> |
187
+ | <code>lemon and powder leather slippers</code> | <code>genuine cow leather</code> | <code>slipper</code> |
188
+ | <code>erastapex trio</code> | <code>erastapex trio olmesartan medoxomil</code> | <code>blood disorder medicine</code> |
189
+ * Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
190
+ ```json
191
+ {
192
+ "scale": 20.0,
193
+ "similarity_fct": "cos_sim",
194
+ "gather_across_devices": false
195
+ }
196
+ ```
197
+
198
+ ### Evaluation Dataset
199
+
200
+ #### Unnamed Dataset
201
+
202
+ * Size: 9,509 evaluation samples
203
+ * Columns: <code>anchor</code>, <code>positive</code>, <code>negative</code>, and <code>itemCategory</code>
204
+ * Approximate statistics based on the first 1000 samples:
205
+ | | anchor | positive | negative | itemCategory |
206
+ |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
207
+ | type | string | string | string | string |
208
+ | details | <ul><li>min: 3 tokens</li><li>mean: 9.63 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 6.17 tokens</li><li>max: 150 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.79 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.88 tokens</li><li>max: 10 tokens</li></ul> |
209
+ * Samples:
210
+ | anchor | positive | negative | itemCategory |
211
+ |:---------------------------------------------------------------------|:----------------------------------|:----------------------------------------------------------|:------------------------------------|
212
+ | <code>pilot mechanical pencil progrex h-127 - 0.7 mm</code> | <code>0.7 mm pencil</code> | <code>tracing sketch a3 70 gr 50 sheets</code> | <code>pencil</code> |
213
+ | <code>superior drawing marker -pen - set of 12 colors - 2 nib</code> | <code> marker pen set </code> | <code>wunder chocolate strawberry ganache & coulis</code> | <code>marker</code> |
214
+ | <code>first person singular author: haruki murakami</code> | <code>haruki murakami book</code> | <code>dark hot chocolate sugar free</code> | <code>literature and fiction</code> |
215
+ * Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
216
+ ```json
217
+ {
218
+ "scale": 20.0,
219
+ "similarity_fct": "cos_sim",
220
+ "gather_across_devices": false
221
+ }
222
+ ```
223
+
224
+ ### Training Hyperparameters
225
+ #### Non-Default Hyperparameters
226
+
227
+ - `eval_strategy`: steps
228
+ - `per_device_train_batch_size`: 128
229
+ - `per_device_eval_batch_size`: 128
230
+ - `learning_rate`: 2e-05
231
+ - `weight_decay`: 0.001
232
+ - `num_train_epochs`: 5
233
+ - `warmup_ratio`: 0.2
234
+ - `fp16`: True
235
+ - `dataloader_num_workers`: 1
236
+ - `dataloader_prefetch_factor`: 2
237
+ - `dataloader_persistent_workers`: True
238
+ - `push_to_hub`: True
239
+ - `hub_model_id`: LamaDiab/MiniLM-V15Data-128BATCH-SemanticEngine
240
+ - `hub_strategy`: all_checkpoints
241
+
242
+ #### All Hyperparameters
243
+ <details><summary>Click to expand</summary>
244
+
245
+ - `overwrite_output_dir`: False
246
+ - `do_predict`: False
247
+ - `eval_strategy`: steps
248
+ - `prediction_loss_only`: True
249
+ - `per_device_train_batch_size`: 128
250
+ - `per_device_eval_batch_size`: 128
251
+ - `per_gpu_train_batch_size`: None
252
+ - `per_gpu_eval_batch_size`: None
253
+ - `gradient_accumulation_steps`: 1
254
+ - `eval_accumulation_steps`: None
255
+ - `torch_empty_cache_steps`: None
256
+ - `learning_rate`: 2e-05
257
+ - `weight_decay`: 0.001
258
+ - `adam_beta1`: 0.9
259
+ - `adam_beta2`: 0.999
260
+ - `adam_epsilon`: 1e-08
261
+ - `max_grad_norm`: 1.0
262
+ - `num_train_epochs`: 5
263
+ - `max_steps`: -1
264
+ - `lr_scheduler_type`: linear
265
+ - `lr_scheduler_kwargs`: {}
266
+ - `warmup_ratio`: 0.2
267
+ - `warmup_steps`: 0
268
+ - `log_level`: passive
269
+ - `log_level_replica`: warning
270
+ - `log_on_each_node`: True
271
+ - `logging_nan_inf_filter`: True
272
+ - `save_safetensors`: True
273
+ - `save_on_each_node`: False
274
+ - `save_only_model`: False
275
+ - `restore_callback_states_from_checkpoint`: False
276
+ - `no_cuda`: False
277
+ - `use_cpu`: False
278
+ - `use_mps_device`: False
279
+ - `seed`: 42
280
+ - `data_seed`: None
281
+ - `jit_mode_eval`: False
282
+ - `use_ipex`: False
283
+ - `bf16`: False
284
+ - `fp16`: True
285
+ - `fp16_opt_level`: O1
286
+ - `half_precision_backend`: auto
287
+ - `bf16_full_eval`: False
288
+ - `fp16_full_eval`: False
289
+ - `tf32`: None
290
+ - `local_rank`: 0
291
+ - `ddp_backend`: None
292
+ - `tpu_num_cores`: None
293
+ - `tpu_metrics_debug`: False
294
+ - `debug`: []
295
+ - `dataloader_drop_last`: False
296
+ - `dataloader_num_workers`: 1
297
+ - `dataloader_prefetch_factor`: 2
298
+ - `past_index`: -1
299
+ - `disable_tqdm`: False
300
+ - `remove_unused_columns`: True
301
+ - `label_names`: None
302
+ - `load_best_model_at_end`: False
303
+ - `ignore_data_skip`: False
304
+ - `fsdp`: []
305
+ - `fsdp_min_num_params`: 0
306
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
307
+ - `fsdp_transformer_layer_cls_to_wrap`: None
308
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
309
+ - `deepspeed`: None
310
+ - `label_smoothing_factor`: 0.0
311
+ - `optim`: adamw_torch
312
+ - `optim_args`: None
313
+ - `adafactor`: False
314
+ - `group_by_length`: False
315
+ - `length_column_name`: length
316
+ - `ddp_find_unused_parameters`: None
317
+ - `ddp_bucket_cap_mb`: None
318
+ - `ddp_broadcast_buffers`: False
319
+ - `dataloader_pin_memory`: True
320
+ - `dataloader_persistent_workers`: True
321
+ - `skip_memory_metrics`: True
322
+ - `use_legacy_prediction_loop`: False
323
+ - `push_to_hub`: True
324
+ - `resume_from_checkpoint`: None
325
+ - `hub_model_id`: LamaDiab/MiniLM-V15Data-128BATCH-SemanticEngine
326
+ - `hub_strategy`: all_checkpoints
327
+ - `hub_private_repo`: None
328
+ - `hub_always_push`: False
329
+ - `hub_revision`: None
330
+ - `gradient_checkpointing`: False
331
+ - `gradient_checkpointing_kwargs`: None
332
+ - `include_inputs_for_metrics`: False
333
+ - `include_for_metrics`: []
334
+ - `eval_do_concat_batches`: True
335
+ - `fp16_backend`: auto
336
+ - `push_to_hub_model_id`: None
337
+ - `push_to_hub_organization`: None
338
+ - `mp_parameters`:
339
+ - `auto_find_batch_size`: False
340
+ - `full_determinism`: False
341
+ - `torchdynamo`: None
342
+ - `ray_scope`: last
343
+ - `ddp_timeout`: 1800
344
+ - `torch_compile`: False
345
+ - `torch_compile_backend`: None
346
+ - `torch_compile_mode`: None
347
+ - `include_tokens_per_second`: False
348
+ - `include_num_input_tokens_seen`: False
349
+ - `neftune_noise_alpha`: None
350
+ - `optim_target_modules`: None
351
+ - `batch_eval_metrics`: False
352
+ - `eval_on_start`: False
353
+ - `use_liger_kernel`: False
354
+ - `liger_kernel_config`: None
355
+ - `eval_use_gather_object`: False
356
+ - `average_tokens_across_devices`: False
357
+ - `prompts`: None
358
+ - `batch_sampler`: batch_sampler
359
+ - `multi_dataset_batch_sampler`: proportional
360
+ - `router_mapping`: {}
361
+ - `learning_rate_mapping`: {}
362
+
363
+ </details>
364
+
365
+ ### Training Logs
366
+ | Epoch | Step | Training Loss | Validation Loss | cosine_accuracy |
367
+ |:------:|:-----:|:-------------:|:---------------:|:---------------:|
368
+ | 0.0002 | 1 | 3.0984 | - | - |
369
+ | 0.1817 | 1000 | 2.7134 | 1.3784 | 0.9391 |
370
+ | 0.3634 | 2000 | 2.1597 | 1.2863 | 0.9412 |
371
+ | 1.1176 | 3000 | 1.8694 | 1.2364 | 0.9423 |
372
+ | 1.2993 | 4000 | 1.6564 | 1.1890 | 0.9449 |
373
+ | 2.0534 | 5000 | 1.4993 | 1.1735 | 0.9468 |
374
+ | 2.2351 | 6000 | 1.3577 | 1.1353 | 0.9508 |
375
+ | 2.4169 | 7000 | 1.2577 | 1.1203 | 0.9535 |
376
+ | 3.1710 | 8000 | 1.1667 | 1.1059 | 0.9549 |
377
+ | 3.3527 | 9000 | 1.1052 | 1.1047 | 0.9571 |
378
+ | 4.1069 | 10000 | 1.0559 | 1.1142 | 0.9553 |
379
+ | 4.2886 | 11000 | 1.0006 | 1.1014 | 0.9563 |
380
+
381
+
382
+ ### Framework Versions
383
+ - Python: 3.11.13
384
+ - Sentence Transformers: 5.1.2
385
+ - Transformers: 4.53.3
386
+ - PyTorch: 2.6.0+cu124
387
+ - Accelerate: 1.9.0
388
+ - Datasets: 4.4.1
389
+ - Tokenizers: 0.21.2
390
+
391
+ ## Citation
392
+
393
+ ### BibTeX
394
+
395
+ #### Sentence Transformers
396
+ ```bibtex
397
+ @inproceedings{reimers-2019-sentence-bert,
398
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
399
+ author = "Reimers, Nils and Gurevych, Iryna",
400
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
401
+ month = "11",
402
+ year = "2019",
403
+ publisher = "Association for Computational Linguistics",
404
+ url = "https://arxiv.org/abs/1908.10084",
405
+ }
406
+ ```
407
 
408
+ <!--
409
+ ## Glossary
 
 
 
 
 
 
 
410
 
411
+ *Clearly define terms in order to be accessible across audiences.*
412
+ -->
 
413
 
414
+ <!--
415
+ ## Model Card Authors
416
 
417
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
418
+ -->
419
 
420
+ <!--
421
+ ## Model Card Contact
 
422
 
423
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
424
+ -->