---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:164
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
  - source_sentence: What significant multi-modal models were released in 2024
    sentences:
      - >-
        In 2024, almost every significant model vendor released multi-modal
        models. We saw the Claude 3 series from Anthropic in March, Gemini 1.5
        Pro in April (images, audio and video), then September brought Qwen2-VL
        and Mistral’s Pixtral 12B and Meta’s Llama 3.2 11B and 90B vision
        models. We got audio input and output from OpenAI in October, then
        November saw SmolVLM from Hugging Face and December saw image and video
        models from Amazon Nova.

        In October I upgraded my LLM CLI tool to support multi-modal models via
        attachments. It now has plugins for a whole collection of different
        vision models.
      - >-
        When @v0 first came out we were paranoid about protecting the prompt
        with all kinds of pre and post processing complexity.

        We completely pivoted to let it rip. A prompt without the evals, models,
        and especially UX is like getting a broken ASML machine without a manual
      - >-
        Terminology aside, I remain skeptical as to their utility based, once
        again, on the challenge of gullibility. LLMs believe anything you tell
        them. Any systems that attempts to make meaningful decisions on your
        behalf will run into the same roadblock: how good is a travel agent, or
        a digital assistant, or even a research tool if it can’t distinguish
        truth from fiction?

        Just the other day Google Search was caught serving up an entirely fake
        description of the non-existant movie “Encanto 2”. It turned out to be
        summarizing an imagined movie listing from a fan fiction wiki.
  - source_sentence: What is the advantage of a 64GB Mac for running models
    sentences:
      - >-
        The boring yet crucial secret behind good system prompts is test-driven
        development. You don’t write down a system prompt and find ways to test
        it. You write down tests and find a system prompt that passes them.


        It’s become abundantly clear over the course of 2024 that writing good
        automated evals for LLM-powered systems is the skill that’s most needed
        to build useful applications on top of these models. If you have a
        strong eval suite you can adopt new models faster, iterate better and
        build more reliable and useful product features than your competition.

        Vercel’s Malte Ubl:
      - >-
        On paper, a 64GB Mac should be a great machine for running models due to
        the way the CPU and GPU can share the same memory. In practice, many
        models are released as model weights and libraries that reward NVIDIA’s
        CUDA over other platforms.

        The llama.cpp ecosystem helped a lot here, but the real breakthrough has
        been Apple’s MLX library, “an array framework for Apple Silicon”. It’s
        fantastic.

        Apple’s mlx-lm Python library supports running a wide range of
        MLX-compatible models on my Mac, with excellent performance.
        mlx-community on Hugging Face offers more than 1,000 models that have
        been converted to the necessary format.
      - >-
        OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was
        freely available from its launch in June. This was a momentus change,
        because for the previous year free users had mostly been restricted to
        GPT-3.5 level models, meaning new users got a very inaccurate mental
        model of what a capable LLM could actually do.

        That era appears to have ended, likely permanently, with OpenAI’s launch
        of ChatGPT Pro. This $200/month subscription service is the only way to
        access their most capable model, o1 Pro.

        Since the trick behind the o1 series (and the future models it will
        undoubtedly inspire) is to expend more compute time to get better
        results, I don’t think those days of free access to the best available
        models are likely to return.
  - source_sentence: >-
      What is the main innovation discussed in the context regarding model
      scaling?
    sentences:
      - >-
        The biggest innovation here is that it opens up a new way to scale a
        model: instead of improving model performance purely through additional
        compute at training time, models can now take on harder problems by
        spending more compute on inference.

        The sequel to o1, o3 (they skipped “o2” for European trademark reasons)
        was announced on 20th December with an impressive result against the
        ARC-AGI benchmark, albeit one that likely involved more than $1,000,000
        of compute time expense!

        o3 is expected to ship in January. I doubt many people have real-world
        problems that would benefit from that level of compute expenditure—I
        certainly don’t!—but it appears to be a genuine next step in LLM
        architecture for taking on much harder problems.
      - >-
        Meanwhile, it’s increasingly common for end users to develop wildly
        inaccurate mental models of how these things work and what they are
        capable of. I’ve seen so many examples of people trying to win an
        argument with a screenshot from ChatGPT—an inherently ludicrous
        proposition, given the inherent unreliability of these models crossed
        with the fact that you can get them to say anything if you prompt them
        right.
      - >-
        I think this means that, as individual users, we don’t need to feel any
        guilt at all for the energy consumed by the vast majority of our
        prompts. The impact is likely neglible compared to driving a car down
        the street or maybe even watching a video on YouTube.

        Likewise, training. DeepSeek v3 training for less than $6m is a
        fantastic sign that training costs can and should continue to drop.

        For less efficient models I find it useful to compare their energy usage
        to commercial flights. The largest Llama 3 model cost about the same as
        a single digit number of fully loaded passenger flights from New York to
        London. That’s certainly not nothing, but once trained that model can be
        used by millions of people at no extra training cost.
  - source_sentence: What new feature was introduced in ChatGPT's voice mode in December?
    sentences:
      - >-
        Nothing yet from Anthropic or Meta but I would be very surprised if they
        don’t have their own inference-scaling models in the works. Meta
        published a relevant paper Training Large Language Models to Reason in a
        Continuous Latent Space in December.

        Was the best currently available LLM trained in China for less than $6m?

        Not quite, but almost! It does make for a great attention-grabbing
        headline.

        The big news to end the year was the release of DeepSeek v3—dropped on
        Hugging Face on Christmas Day without so much as a README file, then
        followed by documentation and a paper the day after that.
      - >-
        Then in December, the Chatbot Arena team introduced a whole new
        leaderboard for this feature, driven by users building the same
        interactive app twice with two different models and voting on the
        answer. Hard to come up with a more convincing argument that this
        feature is now a commodity that can be effectively implemented against
        all of the leading models.

        I’ve been tinkering with a version of this myself for my Datasette
        project, with the goal of letting users use prompts to build and iterate
        on custom widgets and data visualizations against their own data. I also
        figured out a similar pattern for writing one-shot Python programs,
        enabled by uv.
      - >-
        The most recent twist, again from December (December was a lot) is live
        video. ChatGPT voice mode now provides the option to share your camera
        feed with the model and talk about what you can see in real time. Google
        Gemini have a preview of the same feature, which they managed to ship
        the day before ChatGPT did.
  - source_sentence: >-
      Why is it important to learn how to work with unreliable technology like
      LLMs?
    sentences:
      - >-
        Longer inputs dramatically increase the scope of problems that can be
        solved with an LLM: you can now throw in an entire book and ask
        questions about its contents, but more importantly you can feed in a lot
        of example code to help the model correctly solve a coding problem. LLM
        use-cases that involve long inputs are far more interesting to me than
        short prompts that rely purely on the information already baked into the
        model weights. Many of my tools were built using this pattern.
      - >-
        There’s a flipside to this too: a lot of better informed people have
        sworn off LLMs entirely because they can’t see how anyone could benefit
        from a tool with so many flaws. The key skill in getting the most out of
        LLMs is learning to work with tech that is both inherently unreliable
        and incredibly powerful at the same time. This is a decidedly
        non-obvious skill to acquire!

        There is so much space for helpful education content here, but we need
        to do do a lot better than outsourcing it all to AI grifters with
        bombastic Twitter threads.

        Knowledge is incredibly unevenly distributed

        Most people have heard of ChatGPT by now. How many have heard of Claude?
      - >-
        I think people who complain that LLM improvement has slowed are often
        missing the enormous advances in these multi-modal models. Being able to
        run prompts against images (and audio and video) is a fascinating new
        way to apply these models.

        Voice and live camera mode are science fiction come to life

        The audio and live video modes that have started to emerge deserve a
        special mention.

        The ability to talk to ChatGPT first arrived in September 2023, but it
        was mostly an illusion: OpenAI used their excellent Whisper
        speech-to-text model and a new text-to-speech model (creatively named
        tts-1) to enable conversations with the ChatGPT mobile apps, but the
        actual model just saw text.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.875
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.9583333333333334
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.875
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3194444444444444
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.20000000000000004
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.10000000000000002
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.875
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.9583333333333334
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9455223360506796
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9270833333333334
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9270833333333334
            name: Cosine Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
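
For reference, here is a minimal sketch of what the three modules above compute, written against the raw transformers API; the SentenceTransformer call in the Usage section below is the supported path, and model_id simply points at this repository.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "KireetiKunam/legal-ft-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)

batch = tokenizer(["example sentence"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state    # (0): Transformer, max_seq_length=512
embedding = hidden[:, 0]                        # (1): Pooling with pooling_mode_cls_token=True
embedding = F.normalize(embedding, p=2, dim=1)  # (2): Normalize to unit length
print(embedding.shape)  # torch.Size([1, 1024])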

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("KireetiKunam/legal-ft-2")
# Run inference
sentences = [
    'Why is it important to learn how to work with unreliable technology like LLMs?',
    'There’s a flipside to this too: a lot of better informed people have sworn off LLMs entirely because they can’t see how anyone could benefit from a tool with so many flaws. The key skill in getting the most out of LLMs is learning to work with tech that is both inherently unreliable and incredibly powerful at the same time. This is a decidedly non-obvious skill to acquire!\nThere is so much space for helpful education content here, but we need to do do a lot better than outsourcing it all to AI grifters with bombastic Twitter threads.\nKnowledge is incredibly unevenly distributed\nMost people have heard of ChatGPT by now. How many have heard of Claude?',
    'I think people who complain that LLM improvement has slowed are often missing the enormous advances in these multi-modal models. Being able to run prompts against images (and audio and video) is a fascinating new way to apply these models.\nVoice and live camera mode are science fiction come to life\nThe audio and live video modes that have started to emerge deserve a special mention.\nThe ability to talk to ChatGPT first arrived in September 2023, but it was mostly an illusion: OpenAI used their excellent Whisper speech-to-text model and a new text-to-speech model (creatively named tts-1) to enable conversations with the ChatGPT mobile apps, but the actual model just saw text.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
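
Because the model was trained on question–passage pairs, a natural use is semantic search. Here is a small sketch on top of the model loaded above, with an illustrative query and passages:

query = "What is the advantage of a 64GB Mac for running models?"
passages = [
    "On paper, a 64GB Mac should be a great machine for running models "
    "due to the way the CPU and GPU can share the same memory.",
    "OpenAI made GPT-4o free for all users in May.",
]
query_emb = model.encode([query])
passage_embs = model.encode(passages)
scores = model.similarity(query_emb, passage_embs)  # cosine scores, shape [1, 2]
best = scores.argmax().item()
print(passages[best])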

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.875
cosine_accuracy@3 0.9583
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.875
cosine_precision@3 0.3194
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.875
cosine_recall@3 0.9583
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.9455
cosine_mrr@10 0.9271
cosine_map@100 0.9271
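
The precision@k values of exactly 1/k alongside recall@k of 1.0 are consistent with each evaluation query having a single relevant chunk. Scores like these are produced by the InformationRetrievalEvaluator; the sketch below uses hypothetical stand-ins for the held-out queries and corpus:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("KireetiKunam/legal-ft-2")
queries = {"q1": "What is the advantage of a 64GB Mac for running models?"}
corpus = {
    "d1": "On paper, a 64GB Mac should be a great machine for running models.",
    "d2": "OpenAI made GPT-4o free for all users in May.",
}
relevant_docs = {"q1": {"d1"}}  # one relevant chunk per query

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)  # dict with cosine_accuracy@k, cosine_ndcg@10, ...
print(results["cosine_ndcg@10"])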

Training Details

Training Dataset

Unnamed Dataset

  • Size: 164 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 164 samples:
    • sentence_0: string; min 3, mean 15.43, max 28 tokens
    • sentence_1: string; min 43, mean 130.65, max 204 tokens
  • Samples:
    • sentence_0: What key themes were identified in the review of LLMs in 2024?
      sentence_1: Things we learned about LLMs in 2024
        Simon Willison’s Weblog
        Subscribe
        Things we learned about LLMs in 2024
        31st December 2024
        A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.
        This is a sequel to my review of 2023.
        In this article:
    • sentence_0: What pivotal moments in the field of LLMs were highlighted in the article?
      sentence_1: (same chunk as the previous sample)
    • sentence_0: What advancements have been made in multimodal vision technology?
      sentence_1: The GPT-4 barrier was comprehensively broken
        Some of those GPT-4 models run on my laptop
        LLM prices crashed, thanks to competition and increased efficiency
        Multimodal vision is common, audio and video are starting to emerge
        Voice and live camera mode are science fiction come to life
        Prompt driven app generation is a commodity already
        Universal access to the best models lasted for just a few short months
        “Agents” still haven’t really happened yet
        Evals really matter
        Apple Intelligence is bad, Apple’s MLX library is excellent
        The rise of inference-scaling “reasoning” models
        Was the best currently available LLM trained in China for less than $6m?
        The environmental impact got better
        The environmental impact got much, much worse
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
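
Because MatryoshkaLoss supervises nested prefixes of each embedding, vectors from this model can be truncated to the dims listed above at a modest quality cost. Here is a sketch using the truncate_dim option of SentenceTransformer:

from sentence_transformers import SentenceTransformer

# 256 is one of the matryoshka_dims this model was trained with
small_model = SentenceTransformer("KireetiKunam/legal-ft-2", truncate_dim=256)
emb = small_model.encode(["truncated example sentence"])
print(emb.shape)  # (1, 256)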
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
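
Taken together with the loss configuration above, the training run can be approximated as follows. The single pair shown is a hypothetical stand-in for the 164-pair dataset, which is not published with this card:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.trainer import SentenceTransformerTrainer
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
train_dataset = Dataset.from_dict({
    "sentence_0": ["What key themes were identified in the review of LLMs in 2024?"],
    "sentence_1": ["Things we learned about LLMs in 2024 ..."],
})

# MultipleNegativesRankingLoss treats other in-batch passages as negatives;
# MatryoshkaLoss applies it at each of the nested dimensions.
inner_loss = losses.MultipleNegativesRankingLoss(model)
loss = losses.MatryoshkaLoss(model, inner_loss,
                             matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-2",
    num_train_epochs=10,
    per_device_train_batch_size=10,
)
SentenceTransformerTrainer(model=model, args=args,
                           train_dataset=train_dataset, loss=loss).train()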

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_ndcg@10
1.0 17 0.9382
2.0 34 0.9161
2.9412 50 0.9270
3.0 51 0.9270
4.0 68 0.9283
5.0 85 0.9437
5.8824 100 0.9455
6.0 102 0.9455
7.0 119 0.9455
8.0 136 0.9455
8.8235 150 0.9455
9.0 153 0.9455
10.0 170 0.9455

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.3.1
  • Tokenizers: 0.21.0
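
To approximate this environment, pinning the library versions above should suffice; note that the +cu124 PyTorch build comes from the CUDA-specific wheel index rather than plain PyPI.

pip install "sentence-transformers==3.4.1" "transformers==4.48.3" "accelerate==1.3.0" "datasets==3.3.1" "tokenizers==0.21.0" "torch==2.5.1"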

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}