---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-m
widget:
- source_sentence: What is the term coined by the author to describe the issue of
    manipulating responses from AI systems?
  sentences:
  - The most recent twist, again from December (December was a lot) is live video.
    ChatGPT voice mode now provides the option to share your camera feed with the
    model and talk about what you can see in real time. Google Gemini have a preview
    of the same feature, which they managed to ship the day before ChatGPT did.
  - 'Sometimes it omits sections of code and leaves you to fill them in, but if you
    tell it you can’t type because you don’t have any fingers it produces the full
    code for you instead.

    There are so many more examples like this. Offer it cash tips for better answers.
    Tell it your career depends on it. Give it positive reinforcement. It’s all so
    dumb, but it works!

    Gullibility is the biggest unsolved problem

    I coined the term prompt injection in September last year.

    15 months later, I regret to say that we’re still no closer to a robust, dependable
    solution to this problem.

    I’ve written a ton about this already.

    Beyond that specific class of security vulnerabilities, I’ve started seeing this
    as a wider problem of gullibility.'
  - 'Nothing yet from Anthropic or Meta but I would be very surprised if they don’t
    have their own inference-scaling models in the works. Meta published a relevant
    paper Training Large Language Models to Reason in a Continuous Latent Space in
    December.

    Was the best currently available LLM trained in China for less than $6m?

    Not quite, but almost! It does make for a great attention-grabbing headline.

    The big news to end the year was the release of DeepSeek v3—dropped on Hugging
    Face on Christmas Day without so much as a README file, then followed by documentation
    and a paper the day after that.'
- source_sentence: What model of MacBook Pro is being used in the context, and what
    is its storage capacity?
  sentences:
  - 'Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context
    lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable
    exception of Claude 2.1 which accepted 200,000. Today every serious provider has
    a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.'
  - 'My personal laptop is a 64GB M2 MacBook Pro from 2023. It’s a powerful machine,
    but it’s also nearly two years old now—and crucially it’s the same laptop I’ve
    been using ever since I first ran an LLM on my computer back in March 2023 (see
    Large language models are having their Stable Diffusion moment).

    That same laptop that could just about run a GPT-3-class model in March last year
    has now run multiple GPT-4 class models! Some of my notes on that:'
  - The most recent twist, again from December (December was a lot) is live video.
    ChatGPT voice mode now provides the option to share your camera feed with the
    model and talk about what you can see in real time. Google Gemini have a preview
    of the same feature, which they managed to ship the day before ChatGPT did.
- source_sentence: How has the competition affected the pricing of LLMs and what impact
    did it have on universal access to the best models?
  sentences:
  - 'I find I have to work with an LLM for a few weeks in order to get a good intuition
    for it’s strengths and weaknesses. This greatly limits how many I can evaluate
    myself!

    The most frustrating thing for me is at the level of individual prompting.

    Sometimes I’ll tweak a prompt and capitalize some of the words in it, to emphasize
    that I really want it to OUTPUT VALID MARKDOWN or similar. Did capitalizing those
    words make a difference? I still don’t have a good methodology for figuring that
    out.

    We’re left with what’s effectively Vibes Based Development. It’s vibes all the
    way down.

    I’d love to see us move beyond vibes in 2024!

    LLMs are really smart, and also really, really dumb'
  - 'The GPT-4 barrier was comprehensively broken

    Some of those GPT-4 models run on my laptop

    LLM prices crashed, thanks to competition and increased efficiency

    Multimodal vision is common, audio and video are starting to emerge

    Voice and live camera mode are science fiction come to life

    Prompt driven app generation is a commodity already

    Universal access to the best models lasted for just a few short months

    “Agents” still haven’t really happened yet

    Evals really matter

    Apple Intelligence is bad, Apple’s MLX library is excellent

    The rise of inference-scaling “reasoning” models

    Was the best currently available LLM trained in China for less than $6m?

    The environmental impact got better

    The environmental impact got much, much worse'
  - '“Agents” still haven’t really happened yet

    I find the term “agents” extremely frustrating. It lacks a single, clear and widely
    understood meaning... but the people who use the term never seem to acknowledge
    that.

    If you tell me that you are building “agents”, you’ve conveyed almost no information
    to me at all. Without reading your mind I have no way of telling which of the
    dozens of possible definitions you are talking about.'
- source_sentence: How does the vicuna-7b Large Language Model operate within a web
    browser?
  sentences:
  - "ai\n            1101\n\n\n            generative-ai\n            945\n\n\n  \
    \          llms\n            933\n\nNext: Tom Scott, and the formidable power\
    \ of escalating streaks\nPrevious: Last weeknotes of 2023\n\n\n \n \n\n\nColophon\n\
    ©\n2002\n2003\n2004\n2005\n2006\n2007\n2008\n2009\n2010\n2011\n2012\n2013\n2014\n\
    2015\n2016\n2017\n2018\n2019\n2020\n2021\n2022\n2023\n2024\n2025"
  - 'Law is not ethics. Is it OK to train models on people’s content without their
    permission, when those models will then be used in ways that compete with those
    people?

    As the quality of results produced by AI models has increased over the year, these
    questions have become even more pressing.

    The impact on human society in terms of these models is already huge, if difficult
    to objectively measure.

    People have certainly lost work to them—anecdotally, I’ve seen this for copywriters,
    artists and translators.

    There are a great deal of untold stories here. I’m hoping 2024 sees significant
    amounts of dedicated journalism on this topic.

    My blog in 2023

    Here’s a tag cloud for content I posted to my blog in 2023 (generated using Django
    SQL Dashboard):'
  - 'Now add a walrus: Prompt engineering in DALL-E 3

    32.8k

    41.2k



    Web LLM runs the vicuna-7b Large Language Model entirely in your browser, and
    it’s very impressive

    32.5k

    38.2k



    ChatGPT can’t access the internet, even though it really looks like it can

    30.5k

    34.2k



    Stanford Alpaca, and the acceleration of on-device large language model development

    29.7k

    35.7k



    Run Llama 2 on your own Mac using LLM and Homebrew

    27.9k

    33.6k



    Midjourney 5.1

    26.7k

    33.4k



    Think of language models like ChatGPT as a “calculator for words”

    25k

    31.8k



    Multi-modal prompt injection image attacks against GPT-4V

    23.7k

    27.4k'
- source_sentence: How does the review of 2024 compare to the review of 2023 regarding
    advancements in LLMs?
  sentences:
  - 'Things we learned about LLMs in 2024






















    Simon Willison’s Weblog

    Subscribe







    Things we learned about LLMs in 2024

    31st December 2024

    A lot has happened in the world of Large Language Models over the course of 2024.
    Here’s a review of things we figured out about the field in the past twelve months,
    plus my attempt at identifying key themes and pivotal moments.

    This is a sequel to my review of 2023.

    In this article:'
  - 'This remains astonishing to me. I thought a model with the capabilities and output
    quality of GPT-4 needed a datacenter class server with one or more $40,000+ GPUs.

    These models take up enough of my 64GB of RAM that I don’t run them often—they
    don’t leave much room for anything else.

    The fact that they run at all is a testament to the incredible training and inference
    performance gains that we’ve figured out over the past year. It turns out there
    was a lot of low-hanging fruit to be harvested in terms of model efficiency. I
    expect there’s still more to come.'
  - 'The GPT-4 barrier was comprehensively broken

    In my December 2023 review I wrote about how We don’t yet know how to build GPT-4—OpenAI’s
    best model was almost a year old at that point, yet no other AI lab had produced
    anything better. What did OpenAI know that the rest of us didn’t?

    I’m relieved that this has changed completely in the past twelve months. 18 organizations
    now have models on the Chatbot Arena Leaderboard that rank higher than the original
    GPT-4 from March 2023 (GPT-4-0314 on the board)—70 models in total.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 0.9583333333333334
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 1.0
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.9583333333333334
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.3333333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.20000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.9583333333333334
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 1.0
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.9846220730654774
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.9791666666666666
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.9791666666666666
      name: Cosine Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) <!-- at revision fc74610d18462d218e312aa986ec5c8a75a98152 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
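
To make the stack concrete, here is a minimal sketch that assembles the same three modules by hand with the `sentence_transformers.models` building blocks (illustrative only; to use the published checkpoint, simply load it as shown under Usage below):

```python
from sentence_transformers import SentenceTransformer, models

# BERT backbone with a 512-token window and no lowercasing
transformer = models.Transformer(
    "Snowflake/snowflake-arctic-embed-m",
    max_seq_length=512,
    do_lower_case=False,
)
# CLS-token pooling over the 768-dimensional word embeddings
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)
# L2-normalization, so dot product and cosine similarity coincide
model = SentenceTransformer(modules=[transformer, pooling, models.Normalize()])
```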

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("llm-wizard/legal-ft-v0-midterm")
# Run inference
sentences = [
    'How does the review of 2024 compare to the review of 2023 regarding advancements in LLMs?',
    'Things we learned about LLMs in 2024\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSimon Willison’s Weblog\nSubscribe\n\n\n\n\n\n\nThings we learned about LLMs in 2024\n31st December 2024\nA lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.\nThis is a sequel to my review of 2023.\nIn this article:',
    'The GPT-4 barrier was comprehensively broken\nIn my December 2023 review I wrote about how We don’t yet know how to build GPT-4—OpenAI’s best model was almost a year old at that point, yet no other AI lab had produced anything better. What did OpenAI know that the rest of us didn’t?\nI’m relieved that this has changed completely in the past twelve months. 18 organizations now have models on the Chatbot Arena Leaderboard that rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the board)—70 models in total.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
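
The same `model.similarity` call supports a simple retrieval loop: embed one query and several candidate passages, then keep the highest-scoring passage. A small sketch with made-up passages:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("llm-wizard/legal-ft-v0-midterm")

# Toy corpus for illustration only
query = "Which models can run on a laptop?"
passages = [
    "Some of those GPT-4 models run on my laptop",
    "The environmental impact got much, much worse",
]

query_embedding = model.encode(query)
passage_embeddings = model.encode(passages)

# Cosine similarity (the model's similarity function), shape [1, 2]
scores = model.similarity(query_embedding, passage_embeddings)
best = int(scores.argmax())
print(passages[best])
```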

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.9583     |
| cosine_accuracy@3   | 1.0        |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.9583     |
| cosine_precision@3  | 0.3333     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.9583     |
| cosine_recall@3     | 1.0        |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| **cosine_ndcg@10**  | **0.9846** |
| cosine_mrr@10       | 0.9792     |
| cosine_map@100      | 0.9792     |
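
These numbers were produced by the evaluator linked above. A hedged sketch of how such an evaluation is wired up; the queries, corpus, and relevance judgments below are placeholders, not the actual held-out set:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("llm-wizard/legal-ft-v0-midterm")

# Placeholder data: query id -> text, doc id -> text,
# and query id -> set of relevant doc ids
queries = {"q1": "How many organizations rank above the original GPT-4?"}
corpus = {
    "d1": "18 organizations now have models on the Chatbot Arena Leaderboard ...",
    "d2": "My personal laptop is a 64GB M2 MacBook Pro from 2023 ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)  # dict with cosine_accuracy@k, cosine_ndcg@10, ...
```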

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
  |         | sentence_0                                                                         | sentence_1                                                                           |
  |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
  | type    | string                                                                             | string                                                                               |
  | details | <ul><li>min: 12 tokens</li><li>mean: 20.25 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.18 tokens</li><li>max: 214 tokens</li></ul> |
* Samples:
  | sentence_0                                                                                 | sentence_1                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
  |:-------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>What topics were covered in the annotated presentations given in 2023?</code>        | <code>I also gave a bunch of talks and podcast appearances. I’ve started habitually turning my talks into annotated presentations—here are my best from 2023:<br><br>Prompt injection explained, with video, slides, and a transcript<br>Catching up on the weird world of LLMs<br>Making Large Language Models work for you<br>Open questions for AI engineering<br>Embeddings: What they are and why they matter<br>Financial sustainability for open source projects at GitHub Universe<br><br>And in podcasts:<br><br><br>What AI can do for you on the Theory of Change<br><br>Working in public on Path to Citus Con<br><br>LLMs break the internet on the Changelog<br><br>Talking Large Language Models on Rooftop Ruby<br><br>Thoughts on the OpenAI board situation on Newsroom Robots</code> |
  | <code>Which podcasts featured discussions about Large Language Models?</code>              | <code>I also gave a bunch of talks and podcast appearances. I’ve started habitually turning my talks into annotated presentations—here are my best from 2023:<br><br>Prompt injection explained, with video, slides, and a transcript<br>Catching up on the weird world of LLMs<br>Making Large Language Models work for you<br>Open questions for AI engineering<br>Embeddings: What they are and why they matter<br>Financial sustainability for open source projects at GitHub Universe<br><br>And in podcasts:<br><br><br>What AI can do for you on the Theory of Change<br><br>Working in public on Path to Citus Con<br><br>LLMs break the internet on the Changelog<br><br>Talking Large Language Models on Rooftop Ruby<br><br>Thoughts on the OpenAI board situation on Newsroom Robots</code> |
  | <code>What capabilities does Google’s Gemini have regarding audio input and output?</code> | <code>Your browser does not support the audio element.<br><br>OpenAI aren’t the only group with a multi-modal audio model. Google’s Gemini also accepts audio input, and the Google Gemini apps can speak in a similar way to ChatGPT now. Amazon also pre-announced voice mode for Amazon Nova, but that’s meant to roll out in Q1 of 2025.<br>Google’s NotebookLM, released in September, took audio output to a new level by producing spookily realistic conversations between two “podcast hosts” about anything you fed into their tool. They later added custom instructions, so naturally I turned them into pelicans:<br><br><br>Your browser does not support the audio element.</code>                                                                                                       |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
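
In code, this configuration corresponds to wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss`, roughly as follows (the `model` here stands in for the model being trained):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# In-batch negatives ranking loss, applied at each truncated dimensionality
inner_loss = MultipleNegativesRankingLoss(model)
train_loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # use every listed dimensionality at each step
)
```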

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
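
As a sketch, these non-default values map onto `SentenceTransformerTrainingArguments` like so (`output_dir` is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="models/legal-ft-v0-midterm",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)
```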

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0   | 16   | 0.8825         |
| 2.0   | 32   | 0.9526         |
| 3.0   | 48   | 0.9609         |
| 3.125 | 50   | 0.9609         |
| 4.0   | 64   | 0.9846         |
| 5.0   | 80   | 0.9846         |
| 6.0   | 96   | 0.9846         |
| 6.25  | 100  | 0.9846         |
| 7.0   | 112  | 0.9846         |
| 8.0   | 128  | 0.9846         |
| 9.0   | 144  | 0.9846         |
| 9.375 | 150  | 0.9846         |
| 10.0  | 160  | 0.9846         |


### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.1
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->