keeeeenw committed on
Commit 98f70f1
1 Parent(s): 2d9999c

Add new SentenceTransformer model
1_Pooling/config.json ADDED

```json
{
    "word_embedding_dimension": 1024,
    "pooling_mode_cls_token": false,
    "pooling_mode_mean_tokens": true,
    "pooling_mode_max_tokens": false,
    "pooling_mode_mean_sqrt_len_tokens": false,
    "pooling_mode_weightedmean_tokens": false,
    "pooling_mode_lasttoken": false,
    "include_prompt": true
}
```
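Per this config, only mean pooling is enabled: the token embeddings produced by the transformer are averaged over the non-padding positions given by the attention mask. A minimal numpy sketch of that pooling step (the shapes and values below are illustrative, not actual model output):

```python
import numpy as np

def mean_pooling(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings over non-padding positions.

    token_embeddings: (batch, seq_len, dim)
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask[:, :, None].astype(token_embeddings.dtype)  # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=1)                    # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                    # avoid division by zero
    return summed / counts

# Toy example: batch of 1, seq_len 3 (last token is padding), dim 2
emb = np.array([[[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pooling(emb, mask))  # [[2. 3.]] -- padding token is ignored
```

In the real model the same averaging runs over 1024-dimensional LlamaModel hidden states, which is why the sentence embedding dimension matches `word_embedding_dimension`.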
README.md ADDED
---
language:
- en
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:65749
- loss:MultipleNegativesRankingLoss
- loss:SoftmaxLoss
- loss:CoSENTLoss
base_model: keeeeenw/MicroLlama
widget:
- source_sentence: A construction worker is standing on a crane placing a large arm
    on top of a stature in progress.
  sentences:
  - The man is wearing black.
  - A person standing
  - Nobody is standing
- source_sentence: A boy in red slides down an inflatable ride.
  sentences:
  - A man holding a drill stands next to a girl holding a vacuum hose.
  - A boy is playing on an inflatable ride.
  - A boy pierces a knife through an inflatable ride.
- source_sentence: An animal is chewing on something.
  sentences:
  - A dog with a red leash still attached chases over the grass toward a tennis ball.
  - A man is eating something.
  - An animal is chewing on a key chain.
- source_sentence: What are some good books or references to get started with machine
    learning?
  sentences:
  - What caused the British Empire to fall?
  - How should I go about learning Machine Learning?
  - Can an infinite amount of dark or vacuum or gravitational energy be created with
    expansion?
- source_sentence: How do I attract a girl?
  sentences:
  - How can I attract girls?
  - Why isn't my iPhone 5 charging?
  - What would the world be like now in 2016 if Hitler's Germany won the war?
datasets:
- sentence-transformers/all-nli
- sentence-transformers/stsb
- sentence-transformers/quora-duplicates
- sentence-transformers/natural-questions
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

# SentenceTransformer based on keeeeenw/MicroLlama

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [keeeeenw/MicroLlama](https://huggingface.co/keeeeenw/MicroLlama) on the [all-nli-pair](https://huggingface.co/datasets/sentence-transformers/all-nli), [all-nli-pair-class](https://huggingface.co/datasets/sentence-transformers/all-nli), [all-nli-pair-score](https://huggingface.co/datasets/sentence-transformers/all-nli), [all-nli-triplet](https://huggingface.co/datasets/sentence-transformers/all-nli), [stsb](https://huggingface.co/datasets/sentence-transformers/stsb), [quora](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) and [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) datasets. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [keeeeenw/MicroLlama](https://huggingface.co/keeeeenw/MicroLlama) <!-- at revision 6403f6afc9c3a34b877603fab3d525842d353b1c -->
- **Maximum Sequence Length:** 2048 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
    - [all-nli-pair](https://huggingface.co/datasets/sentence-transformers/all-nli)
    - [all-nli-pair-class](https://huggingface.co/datasets/sentence-transformers/all-nli)
    - [all-nli-pair-score](https://huggingface.co/datasets/sentence-transformers/all-nli)
    - [all-nli-triplet](https://huggingface.co/datasets/sentence-transformers/all-nli)
    - [stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
    - [quora](https://huggingface.co/datasets/sentence-transformers/quora-duplicates)
    - [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 2048, 'do_lower_case': False}) with Transformer model: LlamaModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("keeeeenw/MicroLlama-text-embedding")
# Run inference
sentences = [
    'How do I attract a girl?',
    'How can I attract girls?',
    "Why isn't my iPhone 5 charging?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
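The similarity matrix above comes from the model's configured similarity function, cosine similarity. A minimal numpy sketch of the same computation, using small made-up vectors in place of real model embeddings:

```python
import numpy as np

def cos_sim_matrix(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a and rows of b."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T

# Three toy "embeddings" standing in for model.encode(sentences)
emb = np.array([
    [1.0, 0.0],   # "How do I attract a girl?"
    [0.9, 0.1],   # "How can I attract girls?" (nearly the same direction)
    [0.0, 1.0],   # "Why isn't my iPhone 5 charging?" (unrelated)
])
sims = cos_sim_matrix(emb, emb)
print(sims.shape)  # (3, 3)
# The diagonal is 1.0 (each vector vs. itself), and the two paraphrase
# vectors score far higher against each other than against the third.
```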

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Datasets

#### all-nli-pair

* Dataset: [all-nli-pair](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 10,000 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor                                                                            | positive                                                                         |
  |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
  | type    | string                                                                            | string                                                                           |
  | details | <ul><li>min: 4 tokens</li><li>mean: 18.11 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.46 tokens</li><li>max: 34 tokens</li></ul> |
* Samples:
  | anchor                                                                     | positive                                         |
  |:---------------------------------------------------------------------------|:-------------------------------------------------|
  | <code>A person on a horse jumps over a broken down airplane.</code>        | <code>A person is outdoors, on a horse.</code>   |
  | <code>Children smiling and waving at camera</code>                         | <code>There are children present</code>          |
  | <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
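MultipleNegativesRankingLoss treats the other positives in a batch as negatives: every anchor is scored against every positive with scaled cosine similarity, and cross-entropy rewards the matching pair on the diagonal. A small self-contained sketch of that objective (an illustration of the idea, not the library's implementation):

```python
import numpy as np

def multiple_negatives_ranking_loss(anchors, positives, scale=20.0):
    """scores[i][j] = scale * cos(anchor_i, positive_j); cross-entropy
    with target j == i, so each anchor must rank its own positive first."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                    # (batch, batch)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # correct pairs on the diagonal

rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 8))
matched = anchors + 0.01 * rng.normal(size=(4, 8))   # near-duplicates of the anchors
loss_good = multiple_negatives_ranking_loss(anchors, matched)
loss_bad = multiple_negatives_ranking_loss(anchors, rng.normal(size=(4, 8)))
print(loss_good, loss_bad)  # aligned pairs should give a far lower loss
```

This is why the loss only needs (anchor, positive) pairs: the batch itself supplies the negatives, and `scale` (20.0 here) sharpens the softmax.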

#### all-nli-pair-class

* Dataset: [all-nli-pair-class](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 10,000 training samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | premise                                                                           | hypothesis                                                                         | label                                                              |
  |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------|
  | type    | string                                                                            | string                                                                             | int                                                                |
  | details | <ul><li>min: 6 tokens</li><li>mean: 18.54 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.78 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>0: ~33.40%</li><li>1: ~33.30%</li><li>2: ~33.30%</li></ul> |
* Samples:
  | premise                                                             | hypothesis                                                     | label          |
  |:---------------------------------------------------------------------|:----------------------------------------------------------------|:----------------|
  | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is training his horse for a competition.</code> | <code>1</code> |
  | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is at a diner, ordering an omelette.</code>     | <code>2</code> |
  | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code>                 | <code>0</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
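SoftmaxLoss is the classic SNLI-style classification head: the two sentence embeddings u and v are concatenated with |u − v| and passed through a linear classifier over the three NLI labels. A hedged numpy sketch of that forward pass (the weights here are random placeholders, not trained parameters):

```python
import numpy as np

def softmax_loss_forward(u, v, W, b, label):
    """Build (u, v, |u - v|) features, apply a linear classifier, and return
    cross-entropy against the NLI label (0=entailment, 1=neutral, 2=contradiction)."""
    features = np.concatenate([u, v, np.abs(u - v)])   # (3 * dim,)
    logits = W @ features + b                          # (num_labels,)
    logits -= logits.max()                             # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[label]

rng = np.random.default_rng(0)
dim, num_labels = 8, 3
u, v = rng.normal(size=dim), rng.normal(size=dim)
W, b = rng.normal(size=(num_labels, 3 * dim)), np.zeros(num_labels)
loss = softmax_loss_forward(u, v, W, b, label=0)
print(loss)  # a non-negative cross-entropy value
```

Because the gradient flows through u and v, classifying label pairs this way still shapes the embedding space, even though the classifier itself is discarded at inference time.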

#### all-nli-pair-score

* Dataset: [all-nli-pair-score](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 10,000 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence1                                                                         | sentence2                                                                          | score                                                         |
  |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------|
  | type    | string                                                                            | string                                                                             | float                                                         |
  | details | <ul><li>min: 6 tokens</li><li>mean: 18.54 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.78 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence1                                                           | sentence2                                                      | score            |
  |:---------------------------------------------------------------------|:----------------------------------------------------------------|:------------------|
  | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is training his horse for a competition.</code> | <code>0.5</code> |
  | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is at a diner, ordering an omelette.</code>     | <code>0.0</code> |
  | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code>                 | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
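CoSENTLoss is a pairwise ranking objective over the gold scores: whenever pair i has a higher gold score than pair j, it penalizes the model if cos(pair j) is not below cos(pair i), via loss = log(1 + Σ exp(scale · (cos_j − cos_i))). A small numpy sketch of that formula under those assumptions (toy cosine values, not real model output):

```python
import numpy as np

def cosent_loss(cos_sims, gold_scores, scale=20.0):
    """log(1 + sum over (i, j) with gold_i > gold_j of exp(scale * (cos_j - cos_i)))."""
    terms = []
    for i in range(len(cos_sims)):
        for j in range(len(cos_sims)):
            if gold_scores[i] > gold_scores[j]:
                # This pair should be ranked cos_i > cos_j; penalize violations.
                terms.append(np.exp(scale * (cos_sims[j] - cos_sims[i])))
    return np.log1p(np.sum(terms))

gold = np.array([1.0, 0.5, 0.0])
good = cosent_loss(np.array([0.9, 0.5, 0.1]), gold)  # ranking agrees with gold
bad = cosent_loss(np.array([0.1, 0.5, 0.9]), gold)   # ranking inverted
print(good, bad)  # agreeing rankings give a much smaller loss
```

Note that only the relative order of the cosine similarities matters, which is why CoSENT tends to suit graded-score data such as STS better than a plain regression loss.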
227
+
228
+ #### all-nli-triplet
229
+
230
+ * Dataset: [all-nli-triplet](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
231
+ * Size: 10,000 training samples
232
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
233
+ * Approximate statistics based on the first 1000 samples:
234
+ | | anchor | positive | negative |
235
+ |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
236
+ | type | string | string | string |
237
+ | details | <ul><li>min: 6 tokens</li><li>mean: 10.37 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.04 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.74 tokens</li><li>max: 54 tokens</li></ul> |
238
+ * Samples:
239
+ | anchor | positive | negative |
240
+ |:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
241
+ | <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
242
+ | <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
243
+ | <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
244
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
245
+ ```json
246
+ {
247
+ "scale": 20.0,
248
+ "similarity_fct": "cos_sim"
249
+ }
250
+ ```
251
+
252
+ #### stsb
253
+
254
+ * Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
255
+ * Size: 5,749 training samples
256
+ * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
257
+ * Approximate statistics based on the first 1000 samples:
258
+ | | sentence1 | sentence2 | score |
259
+ |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
260
+ | type | string | string | float |
261
+ | details | <ul><li>min: 5 tokens</li><li>mean: 10.21 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.19 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> |
262
+ * Samples:
263
+ | sentence1 | sentence2 | score |
264
+ |:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------|
265
+ | <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> |
266
+ | <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> |
267
+ | <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> |
268
+ * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
269
+ ```json
270
+ {
271
+ "scale": 20.0,
272
+ "similarity_fct": "pairwise_cos_sim"
273
+ }
274
+ ```
275
+
276
+ #### quora
277
+
278
+ * Dataset: [quora](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
279
+ * Size: 10,000 training samples
280
+ * Columns: <code>anchor</code> and <code>positive</code>
281
+ * Approximate statistics based on the first 1000 samples:
282
+ | | anchor | positive |
283
+ |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
284
+ | type | string | string |
285
+ | details | <ul><li>min: 5 tokens</li><li>mean: 14.26 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 14.48 tokens</li><li>max: 49 tokens</li></ul> |
286
+ * Samples:
287
+ | anchor | positive |
288
+ |:----------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------|
289
+ | <code>Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me?</code> | <code>I'm a triple Capricorn (Sun, Moon and ascendant in Capricorn) What does this say about me?</code> |
290
+ | <code>How can I be a good geologist?</code> | <code>What should I do to be a great geologist?</code> |
291
+ | <code>How do I read and find my YouTube comments?</code> | <code>How can I see all my Youtube comments?</code> |
292
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
293
+ ```json
294
+ {
295
+ "scale": 20.0,
296
+ "similarity_fct": "cos_sim"
297
+ }
298
+ ```
299
+
300
+ #### natural-questions
301
+
302
+ * Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
303
+ * Size: 10,000 training samples
304
+ * Columns: <code>query</code> and <code>answer</code>
305
+ * Approximate statistics based on the first 1000 samples:
306
+ | | query | answer |
307
+ |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
308
+ | type | string | string |
309
+ | details | <ul><li>min: 9 tokens</li><li>mean: 12.46 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 160.85 tokens</li><li>max: 611 tokens</li></ul> |
310
+ * Samples:
311
+ | query | answer |
312
+ |:----------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
313
+ | <code>when did richmond last play in a preliminary final</code> | <code>Richmond Football Club Richmond began 2017 with 5 straight wins, a feat it had not achieved since 1995. A series of close losses hampered the Tigers throughout the middle of the season, including a 5-point loss to the Western Bulldogs, 2-point loss to Fremantle, and a 3-point loss to the Giants. Richmond ended the season strongly with convincing victories over Fremantle and St Kilda in the final two rounds, elevating the club to 3rd on the ladder. Richmond's first final of the season against the Cats at the MCG attracted a record qualifying final crowd of 95,028; the Tigers won by 51 points. Having advanced to the first preliminary finals for the first time since 2001, Richmond defeated Greater Western Sydney by 36 points in front of a crowd of 94,258 to progress to the Grand Final against Adelaide, their first Grand Final appearance since 1982. The attendance was 100,021, the largest crowd to a grand final since 1986. The Crows led at quarter time and led by as many as 13, but the Tigers took over the game as it progressed and scored seven straight goals at one point. They eventually would win by 48 points – 16.12 (108) to Adelaide's 8.12 (60) – to end their 37-year flag drought.[22] Dustin Martin also became the first player to win a Premiership medal, the Brownlow Medal and the Norm Smith Medal in the same season, while Damien Hardwick was named AFL Coaches Association Coach of the Year. Richmond's jump from 13th to premiers also marked the biggest jump from one AFL season to the next.</code> |
314
+ | <code>who sang what in the world's come over you</code> | <code>Jack Scott (singer) At the beginning of 1960, Scott again changed record labels, this time to Top Rank Records.[1] He then recorded four Billboard Hot 100 hits – "What in the World's Come Over You" (#5), "Burning Bridges" (#3) b/w "Oh Little One" (#34), and "It Only Happened Yesterday" (#38).[1] "What in the World's Come Over You" was Scott's second gold disc winner.[6] Scott continued to record and perform during the 1960s and 1970s.[1] His song "You're Just Gettin' Better" reached the country charts in 1974.[1] In May 1977, Scott recorded a Peel session for BBC Radio 1 disc jockey, John Peel.</code> |
315
+ | <code>who produces the most wool in the world</code> | <code>Wool Global wool production is about 2 million tonnes per year, of which 60% goes into apparel. Wool comprises ca 3% of the global textile market, but its value is higher owing to dying and other modifications of the material.[1] Australia is a leading producer of wool which is mostly from Merino sheep but has been eclipsed by China in terms of total weight.[30] New Zealand (2016) is the third-largest producer of wool, and the largest producer of crossbred wool. Breeds such as Lincoln, Romney, Drysdale, and Elliotdale produce coarser fibers, and wool from these sheep is usually used for making carpets.</code> |
316
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
317
+ ```json
318
+ {
319
+ "scale": 20.0,
320
+ "similarity_fct": "cos_sim"
321
+ }
322
+ ```
323
+
324
+ ### Evaluation Datasets
325
+
326
+ #### all-nli-triplet
327
+
328
+ * Dataset: [all-nli-triplet](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
329
+ * Size: 6,584 evaluation samples
330
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
331
+ * Approximate statistics based on the first 1000 samples:
332
+ | | anchor | positive | negative |
333
+ |:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
334
+ | type | string | string | string |
335
+ | details | <ul><li>min: 5 tokens</li><li>mean: 19.38 tokens</li><li>max: 89 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.77 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.49 tokens</li><li>max: 30 tokens</li></ul> |
336
+ * Samples:
337
+ | anchor | positive | negative |
338
+ |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
339
+ | <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
340
+ | <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
341
+ | <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
342
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
343
+ ```json
344
+ {
345
+ "scale": 20.0,
346
+ "similarity_fct": "cos_sim"
347
+ }
348
+ ```
349
+
350
+ #### stsb
351
+
352
+ * Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
353
+ * Size: 1,500 evaluation samples
354
+ * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
355
+ * Approximate statistics based on the first 1000 samples:
356
+ | | sentence1 | sentence2 | score |
357
+ |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
358
+ | type | string | string | float |
359
+ | details | <ul><li>min: 4 tokens</li><li>mean: 15.54 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.46 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
360
+ * Samples:
361
+ | sentence1 | sentence2 | score |
362
+ |:--------------------------------------------------|:------------------------------------------------------|:------------------|
363
+ | <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> |
364
+ | <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> |
365
+ | <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> |
366
+ * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
367
+ ```json
368
+ {
369
+ "scale": 20.0,
370
+ "similarity_fct": "pairwise_cos_sim"
371
+ }
372
+ ```
373
+
374
+ #### quora
375
+
376
+ * Dataset: [quora](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
377
+ * Size: 1,000 evaluation samples
378
+ * Columns: <code>anchor</code> and <code>positive</code>
379
+ * Approximate statistics based on the first 1000 samples:
380
+ | | anchor | positive |
381
+ |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
382
+ | type | string | string |
383
+ | details | <ul><li>min: 6 tokens</li><li>mean: 14.43 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 14.47 tokens</li><li>max: 55 tokens</li></ul> |
384
+ * Samples:
385
+ | anchor | positive |
386
+ |:----------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
387
+ | <code>What is your New Year resolution?</code> | <code>What can be my new year resolution for 2017?</code> |
388
+ | <code>Should I buy the IPhone 6s or Samsung Galaxy s7?</code> | <code>Which is better: the iPhone 6S Plus or the Samsung Galaxy S7 Edge?</code> |
389
+ | <code>What are the differences between transgression and regression?</code> | <code>What is the difference between transgression and regression?</code> |
390
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
391
+ ```json
392
+ {
393
+ "scale": 20.0,
394
+ "similarity_fct": "cos_sim"
395
+ }
396
+ ```
397

#### natural-questions

* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                            | answer                                                                              |
  |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
  | type    | string                                                                           | string                                                                              |
  | details | <ul><li>min: 9 tokens</li><li>mean: 12.5 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 164.3 tokens</li><li>max: 708 tokens</li></ul> |
* Samples:
  | query                                                         | answer |
  |:--------------------------------------------------------------|:-------|
  | <code>where does the waikato river begin and end</code>       | <code>Waikato River The Waikato River is the longest river in New Zealand, running for 425 kilometres (264 mi) through the North Island. It rises in the eastern slopes of Mount Ruapehu, joining the Tongariro River system and flowing through Lake Taupo, New Zealand's largest lake. It then drains Taupo at the lake's northeastern edge, creates the Huka Falls, and flows northwest through the Waikato Plains. It empties into the Tasman Sea south of Auckland, at Port Waikato. It gives its name to the Waikato Region that surrounds the Waikato Plains. The present course of the river was largely formed about 17,000 years ago. Contributing factors were climate warming, forest being reestablished in the river headwaters and the deepening, rather than widening, of the existing river channel. The channel was gradually eroded as far up river as Piarere, leaving the old Hinuera channel high and dry.[2] The remains of the old river path can be clearly seen at Hinuera where the cliffs mark the ancient river edges. The river's main tributary is the Waipa River, which has its confluence with the Waikato at Ngaruawahia.</code> |
  | <code>what type of gas is produced during fermentation</code> | <code>Fermentation Fermentation reacts NADH with an endogenous, organic electron acceptor.[1] Usually this is pyruvate formed from sugar through glycolysis. The reaction produces NAD+ and an organic product, typical examples being ethanol, lactic acid, carbon dioxide, and hydrogen gas (H2). However, more exotic compounds can be produced by fermentation, such as butyric acid and acetone. Fermentation products contain chemical energy (they are not fully oxidized), but are considered waste products, since they cannot be metabolized further without the use of oxygen.</code> |
  | <code>why was star wars episode iv released first</code>      | <code>Star Wars (film) Star Wars (later retitled Star Wars: Episode IV – A New Hope) is a 1977 American epic space opera film written and directed by George Lucas. It is the first film in the original Star Wars trilogy and the beginning of the Star Wars franchise. Starring Mark Hamill, Harrison Ford, Carrie Fisher, Peter Cushing, Alec Guinness, David Prowse, James Earl Jones, Anthony Daniels, Kenny Baker, and Peter Mayhew, the film's plot focuses on the Rebel Alliance, led by Princess Leia (Fisher), and its attempt to destroy the Galactic Empire's space station, the Death Star. This conflict disrupts the isolated life of farmhand Luke Skywalker (Hamill), who inadvertently acquires two droids that possess stolen architectural plans for the Death Star. When the Empire begins a destructive search for the missing droids, Skywalker accompanies Jedi Master Obi-Wan Kenobi (Guinness) on a mission to return the plans to the Rebel Alliance and rescue Leia from her imprisonment by the Empire.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
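Both evaluation sets above use the same loss settings. As a rough, pure-Python sketch (not the sentence-transformers implementation, which operates on batched tensors), `MultipleNegativesRankingLoss` with `similarity_fct: cos_sim` and `scale: 20.0` scores every anchor against every positive in the batch, scales the cosine similarities by 20, and applies cross-entropy with each anchor's own positive as the target:

```python
import math

def cos_sim(a, b):
    """Cosine similarity between two plain-list vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def multiple_negatives_ranking_loss(anchors, positives, scale=20.0):
    """Toy sketch: every other in-batch positive serves as a negative.

    For anchor i, the "logits" are scaled similarities to all positives,
    the target is index i, and the result is the mean cross-entropy.
    """
    n = len(anchors)
    total = 0.0
    for i in range(n):
        logits = [scale * cos_sim(anchors[i], p) for p in positives]
        log_norm = math.log(sum(math.exp(z) for z in logits))
        total += log_norm - logits[i]  # -log softmax(logits)[i]
    return total / n
```

A batch whose pairs line up yields a near-zero loss, while mismatched pairs are heavily penalized; this is what pushes paired texts together during training without needing explicit negative examples.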
421

### Training Hyperparameters
#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3.0
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
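The run uses `lr_scheduler_type: linear` with `warmup_steps: 0`, so the learning rate decays linearly from `5e-05` at step 0 to 0 at the final step. A minimal sketch of that schedule (matching the warmup-then-linear-decay shape of the Transformers scheduler; the function name here is illustrative):

```python
def linear_lr(step, total_steps, base_lr=5e-05, warmup_steps=0):
    """Linear warmup (if any) followed by linear decay to zero."""
    if step < warmup_steps:
        # ramp up from 0 to base_lr over the warmup phase
        return base_lr * step / max(1, warmup_steps)
    # decay from base_lr at the end of warmup to 0 at total_steps
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)
```

With no warmup, the rate is `5e-05` at the first step, half that at the midpoint, and 0 when training ends.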
541

### Training Logs
| Epoch  | Step  | Training Loss |
|:------:|:-----:|:-------------:|
| 0.0456 | 500   | 1.3352        |
| 0.0912 | 1000  | 1.1358        |
| 0.1368 | 1500  | 1.093         |
| 0.1825 | 2000  | 0.9637        |
| 0.2281 | 2500  | 1.1237        |
| 0.2737 | 3000  | 0.9959        |
| 0.3193 | 3500  | 1.0079        |
| 0.3649 | 4000  | 0.9979        |
| 0.4105 | 4500  | 0.9099        |
| 0.4562 | 5000  | 0.9126        |
| 0.5018 | 5500  | 0.9893        |
| 0.5474 | 6000  | 1.0078        |
| 0.5930 | 6500  | 1.0522        |
| 0.6386 | 7000  | 0.8661        |
| 0.6842 | 7500  | 0.9543        |
| 0.7299 | 8000  | 0.8853        |
| 0.7755 | 8500  | 0.9813        |
| 0.8211 | 9000  | 0.852         |
| 0.8667 | 9500  | 0.8897        |
| 0.9123 | 10000 | 0.9234        |
| 0.9579 | 10500 | 0.8947        |
| 1.0036 | 11000 | 0.8693        |
| 1.0492 | 11500 | 0.7357        |
| 1.0948 | 12000 | 0.6246        |
| 1.1404 | 12500 | 0.6771        |
| 1.1860 | 13000 | 0.5807        |
| 1.2316 | 13500 | 0.7376        |
| 1.2773 | 14000 | 0.6177        |
| 1.3229 | 14500 | 0.5667        |
| 1.3685 | 15000 | 0.5701        |
| 1.4141 | 15500 | 0.5119        |
| 1.4597 | 16000 | 0.517         |
| 1.5053 | 16500 | 0.6041        |
| 1.5510 | 17000 | 0.5872        |
| 1.5966 | 17500 | 0.5719        |
| 1.6422 | 18000 | 0.4646        |
| 1.6878 | 18500 | 0.5375        |
| 1.7334 | 19000 | 0.5235        |
| 1.7790 | 19500 | 0.5432        |
| 1.8247 | 20000 | 0.5648        |
| 1.8703 | 20500 | 0.4776        |
| 1.9159 | 21000 | 0.5475        |
| 1.9615 | 21500 | 0.4902        |
| 2.0071 | 22000 | 0.4883        |
| 2.0527 | 22500 | 0.4473        |
| 2.0983 | 23000 | 0.3735        |
| 2.1440 | 23500 | 0.4526        |
| 2.1896 | 24000 | 0.3509        |
| 2.2352 | 24500 | 0.4658        |
| 2.2808 | 25000 | 0.3529        |
| 2.3264 | 25500 | 0.3723        |
| 2.3720 | 26000 | 0.4281        |
| 2.4177 | 26500 | 0.318         |
| 2.4633 | 27000 | 0.3073        |
| 2.5089 | 27500 | 0.3907        |
| 2.5545 | 28000 | 0.4327        |
| 2.6001 | 28500 | 0.3484        |
| 2.6457 | 29000 | 0.3073        |
| 2.6914 | 29500 | 0.2621        |
| 2.7370 | 30000 | 0.3265        |
| 2.7826 | 30500 | 0.3043        |
| 2.8282 | 31000 | 0.3637        |
| 2.8738 | 31500 | 0.3331        |
| 2.9194 | 32000 | 0.3693        |
| 2.9651 | 32500 | 0.2686        |

612
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.2.1
- Transformers: 4.41.2
- PyTorch: 2.1.0+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.19.1

621
## Citation

### BibTeX

#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
added_tokens.json ADDED
@@ -0,0 +1,3 @@
{
  "[PAD]": 32000
}
config.json ADDED
@@ -0,0 +1,29 @@
{
  "_name_or_path": "microllama300m-base-all-nli-stsb-quora-nq-3-epoch",
  "architectures": [
    "LlamaModel"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 5632,
  "max_position_embeddings": 2048,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 16,
  "num_hidden_layers": 12,
  "num_key_value_heads": 4,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.41.2",
  "use_cache": true,
  "vocab_size": 32001
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
{
  "__version__": {
    "sentence_transformers": "3.2.1",
    "transformers": "4.41.2",
    "pytorch": "2.1.0+cu121"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2875c30ce4b4e4297a3eee9ceb8a665b079f5a89d06a8bfea0a0b492ca4efdae
size 1087491536
modules.json ADDED
@@ -0,0 +1,14 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  }
]
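`modules.json` chains a Transformer module into a Pooling module, and `1_Pooling/config.json` enables only `pooling_mode_mean_tokens`: the sentence embedding is the average of the token embeddings at non-padding positions. A rough pure-Python sketch of that masked mean pooling (illustrative only; the library works on PyTorch tensors):

```python
def mean_pooling(token_embeddings, attention_mask):
    """Average token vectors, skipping positions masked out with 0.

    token_embeddings: list of per-token vectors (lists of floats)
    attention_mask:   list of 0/1 ints of the same length
    Returns a single sentence embedding of the same dimension.
    """
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            count += 1
            for d in range(dim):
                sums[d] += vec[d]
    return [s / max(count, 1) for s in sums]
```

Padding tokens (mask 0) contribute nothing, so sentences of different lengths in the same batch yield comparable embeddings.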
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 2048,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,50 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "add_prefix_space": true,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32000": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": true,
  "model_max_length": 2048,
  "pad_token": "[PAD]",
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}