
SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L12-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L12-v2
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 33.4M parameters (F32 safetensors)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
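
Because the final Normalize() module scales every embedding to unit length, cosine similarity between two embeddings reduces to a plain dot product. A quick sketch to check this, using two made-up sentences:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Trelis/all-MiniLM-L12-v2-ft-triplets-10Qs")

# Two made-up sentences; encode() returns a (2, 384) numpy array by default
emb = model.encode([
    "A Touch includes contact on the ball.",
    "Players may Interchange at any time.",
])

# Each row has unit length thanks to the Normalize() module
print(np.linalg.norm(emb, axis=1))  # ~ [1. 1.]

# So the dot product of two rows equals their cosine similarity
print(emb[0] @ emb[1])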

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Trelis/all-MiniLM-L12-v2-ft-triplets-10Qs")
# Run inference
sentences = [
    'What is the ruling if the referee causes obstruction on either an attacking or defending player, including when the ball makes contact with the referee?',
    'fringement occurs in the In-Goal Area. \n16.4\tPlayers in the Defending Team may not obstruct or interfere with an attacking \nplayer.\nRuling = A Penalty to the non-offending Team at the point of the Infringement or on the \nseven (7) metre line if the Infringement occurs in the In-Goal Area. \n16.5\tShould a supporting, attacking player cause an apparent and involuntary or \naccidental Obstruction and the player in Possession ceases movement to allow \na Touch to be made, the Touch is to count.\n16.6\tIf the Referee causes Obstruction on either an attacking player or a defending \nplayer including when the ball makes contact with the Referee, play should \npause and recommence with a Rollball at the Mark where the interference \noccurred and the Touch count remains unchanged.\n17\u2002 Interchange  \n17.1\tPlayers may Interchange at any time. \n17.2\tThere is no limit to the number of times a player may Interchange.\n17.3\tInterchange players must remain in their Interchange Area for the duration of \nthe match.\n17.4\tInterchanges may only occur after the player leaving the Field of Play has \nentered the Interchange Area. \n17.5\tPlayers leaving or entering the Field of Play shall not hinder or obstruct play.\nRuling = A Penalty to the non-offending Team at the point of the Infringement.\n17.6\tPlayers entering the Field of Play must take up an Onside position before \nbecoming involved in play.\nFIT Playing Rules - 5th Edition\n14\nCOPYRIGHT © Touch Football Australia 2020\nRuling = A Penalty to the non-offending Team at the point of the Infringement.\n17.7\tWhen an intercept has occurred or a line break made, players are not permitted \nto Interchange until the next Touch has been made or ball becomes Dead.\nRuling A = If a player enters the Field of Play and prevents the scoring of a Try, a Penalty Try \nwill be awarded and the offending player sent to the Sin Bin.\nRuling B = If a player enters the Field of Play but does not impede the scoring of a Try the \noffending player will be sent to the Sin Bin.\n17.8\tFollowing a Try, players may Interchange at will, without having to wait for',
    ' Player\nThe player who replaces another player during Interchange. There is \na maximum of eight (8) substitute players in any Team and except \nwhen interchanging, in the Sin Bin, dismissed or on the Field of Play, \nthey must remain in the Substitution Box.\nTap and Tap Penalty\nThe method of commencing the match, recommencing the match \nafter Half Time and after a Try has been scored. The Tap is also the \nmethod of recommencing play when a Penalty is awarded. The Tap \nis taken by placing the ball on the ground at or behind the Mark, \nreleasing both hands from the ball, tapping the ball gently with either \nfoot or touching the foot on the ball. The ball must not roll or move \nmore than one (1) metre in any direction and must be retrieved \ncleanly, without touching the ground again. The player may face any \ndirection and use either foot. Provided it is at the Mark, the ball does \nnot have to be lifted from the ground prior to a Tap being taken.\nTeam\nA group of players constituting one (1) side in a competition match.\nTFA\nTouch Football Australia Limited\nTouch\nAny contact between the player in Possession and a defending \nplayer. A Touch includes contact on the ball, hair or clothing and may \nbe made by a defending player or by the player in Possession.\nTouch Count\nThe progressive number of Touches that each Team has before a \nChange of Possession, from zero (0) to six (6).\nTry\nThe result of any attacking player, except the Half, placing the ball on \nor over the Team’s Attacking Try Line before being Touched.\nTry Lines\nThe lines separating the In-Goal Areas from the Field of Play. See \nAppendix 1.\nVoluntary Rollball\nThe player in Possession performs a Rollball before a Touch is made \nwith a defending player.\nWing\nThe player outside the Link player.\nWinner\nThe Team that scores the most Tries during the match.\nFIT Playing Rules - 5th Edition\n4\nCOPYRIGHT © Touch Football Australia 2020\n  Rules of Play  \n  Mode of Play    \nThe object of the game of Touch is for each Team to score Tries and to prevent the \nopposition from scoring. The ball may be passed, knocked or handed between players \nof the Attacking Team who may in turn run',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
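
Beyond pairwise similarity, the embeddings support retrieval-style semantic search. A minimal sketch, with a made-up query and corpus standing in for real documents:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Trelis/all-MiniLM-L12-v2-ft-triplets-10Qs")

# Hypothetical query and corpus (any rule-book passages would do)
query = "How many times may a player interchange?"
corpus = [
    "17.2 There is no limit to the number of times a player may Interchange.",
    "The Tap is taken by placing the ball on the ground at or behind the Mark.",
]

query_emb = model.encode(query)    # shape (384,)
corpus_emb = model.encode(corpus)  # shape (2, 384)

# Cosine similarity between the query and every corpus entry
scores = model.similarity(query_emb, corpus_emb)  # torch tensor, shape [1, 2]
best = scores.argmax().item()
print(corpus[best])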

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 0.0001
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.3
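
The model name and the TripletLoss citation below indicate training on (anchor, positive, negative) triplets like the one shown under Usage. Below is a minimal sketch of a run that reproduces these non-default hyperparameters with the Sentence Transformers 3.x trainer; the triplets and output directory are placeholders, not the actual training data:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

# Placeholder triplets; the real data appears to be Touch Football rule-book Q&A chunks
triplets = {
    "anchor": ["What is the ruling if the referee causes obstruction?"],
    "positive": ["If the Referee causes Obstruction, play should pause and recommence with a Rollball."],
    "negative": ["The Tap is taken by placing the ball on the ground at or behind the Mark."],
}
train_dataset = Dataset.from_dict(triplets)
eval_dataset = Dataset.from_dict(triplets)  # placeholder; use held-out triplets in practice

loss = TripletLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="all-MiniLM-L12-v2-ft-triplets",  # hypothetical path
    num_train_epochs=5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.3,
    eval_strategy="steps",
)

trainer = SentenceTrainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()

Note that TripletLoss defaults to Euclidean distance with a margin of 5 (consistent with the scale of the logged losses below); whether the original run changed these defaults is not recorded in this card.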

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 0.0001
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.3
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step  Training Loss  Validation Loss
0.1667     2         4.8893                -
0.3333     4         4.9073                -
0.5        6         4.8582                -
0.6667     8         4.8634           4.8319
0.8333    10         4.8100                -
1.0       12         4.8214                -
1.1667    14         4.6917                -
1.3333    16         4.5710           4.6944
1.5       18         4.5726                -
1.6667    20         4.6054                -
1.8333    22         4.4568                -
2.0       24         4.5025           4.5390
2.1667    26         4.3231                -
2.3333    28         4.1362                -
2.5       30         4.3427                -
2.6667    32         4.2574           4.4695
2.8333    34         4.3008                -
3.0       36         4.1244                -
3.1667    38         4.0408                -
3.3333    40         4.1497           4.3349
3.5       42         4.0795                -
3.6667    44         3.8948                -
3.8333    46         4.1476                -
4.0       48         4.0925           4.2929
4.1667    50         3.7692                -
4.3333    52         4.0580                -
4.5       54         3.8418                -
4.6667    56         4.0490           4.3185
4.8333    58         4.1840                -
5.0       60         4.0321                -

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.1+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.17.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification}, 
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}