
SentenceTransformer based on FacebookAI/xlm-roberta-base

This is a sentence-transformers model finetuned from FacebookAI/xlm-roberta-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: FacebookAI/xlm-roberta-base
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 278M parameters (F32, safetensors)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
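
The printout above can be reproduced from the loaded model, which is also the easiest way to confirm the sequence length and embedding size (a minimal sketch using the standard Sentence Transformers accessors):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Stern5497/nir-2024-xlm-roberta-base")
print(model)                                     # prints the architecture shown above
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 768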

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Stern5497/nir-2024-xlm-roberta-base")
# Run inference
sentences = [
    'who plays oz in the wizard of oz',
    'where did the wizard of oz come from',
    'when did brazil win the soccer world cup',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
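
Beyond pairwise similarity, the embeddings support retrieval-style scoring, which matches how the model was trained (queries against passages; see Training Details below). A minimal sketch, reusing the model object from above with made-up passages:

# Score candidate passages against a single query (semantic search)
query_embedding = model.encode(["who wrote treat you better by shawn mendes"])
passage_embeddings = model.encode([
    "Treat You Better is a song recorded by Canadian singer Shawn Mendes.",
    "The Tanami Desert is a desert in northern Australia.",
])
scores = model.similarity(query_embedding, passage_embeddings)
print(scores.shape)
# torch.Size([1, 2]); the higher score marks the better-matching passage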

Training Details

Training Dataset

Unnamed Dataset

  • Size: 164,848 training samples
  • Columns: sentence_0, sentence_1, and sentence_2
  • Approximate statistics based on the first 1000 samples (see the tokenizer sketch after this list):
    • sentence_0: string; min: 10 tokens, mean: 13.41 tokens, max: 27 tokens
    • sentence_1: string; min: 136 tokens, mean: 164.07 tokens, max: 239 tokens
    • sentence_2: string; min: 133 tokens, mean: 165.13 tokens, max: 256 tokens
  • Samples (sentence_0 is the query; sentence_1 and sentence_2 are passages, used by the loss below as the positive and the hard negative, respectively):

    sentence_0: who wrote treat you better by shawn mendes
    sentence_1: {'title': '', 'text': 'Treat You Better "Treat You Better" is a song recorded by Canadian singer and songwriter Shawn Mendes. It was co-written by Mendes with Teddy Geiger, and Scott Harris. It was released on June 3, 2016 through Island Records as the lead single from his second studio album, "Illuminate" (2016). The music video was released on July 12, 2016 and features a storyline about an abusive relationship. The song peaked at number six on the US "Billboard" Hot 100, making it Mendes' second top 10 single. In Canada, the song has peaked at number seven on the Canadian Hot 100. The'}
    sentence_2: {'title': '', 'text': 'Scott Harris (songwriter) Scott Harris Friedman is an American multi-platinum, Grammy nominated songwriter, producer, and musician best known for his work with Shawn Mendes and co-writing Grammy winning song, "Don't Let Me Down" by The Chainsmokers featuring Daya, which reached #1 on the US Mainstream Top 40 chart in 2016. Harris has most recently written 13 songs on the self-titled third album Shawn Mendes (album), which debuted at #1 on the Billboard 200 chart, in addition to 10 songs on Shawn Mendes' sophomore album "Illuminate" including the lead single "Treat You Better" which reached the top 3 at the US'}

    sentence_0: where is the tanami desert located in australia
    sentence_1: {'title': '', 'text': 'zone. Tanami Desert The Tanami Desert is a desert in northern Australia situated in the Northern Territory and Western Australia. It has a rocky terrain with small hills. The Tanami was the Northern Territory's final frontier and was not fully explored by Australians of European descent until well into the twentieth century. It is traversed by the Tanami Track. The name "Tanami" is thought to be a corruption of the Walpiri name for the area, "Chanamee", meaning "never die". This referred to certain rock holes in the desert which were said never to run dry. Under the name "Tanami", the'}
    sentence_2: {'title': '', 'text': '("glomerata") is from the Latin "glomeratus", meaning "heaped" or "form into a ball". Desert tea-tree occurs in the arid parts of Australia including the far north west of New South Wales, South Australia including the Flinders Ranges, the Northern Territory and Western Australia. In the latter state it has been recorded from the Carnarvon, Central Kimberley, Central Ranges, Dampierland, Gascoyne, Gibson Desert, Great Sandy Desert, Great Victoria Desert, Little Sandy Desert, Murchison, Ord Victoria Plain, Pilbara and Tanami biogeographic areas. It grows in red sand, clay and sandy loam in rocky river beds, shallow depressions and sandy flats. "Melaleuca globifera"'}

    sentence_0: who won the us open men s and women s singles in 2017
    sentence_1: {'title': '', 'text': "that ended his season, while Kerber lost in the first round to Naomi Osaka. The men's singles tournament concluded with Rafael Nadal defeating Kevin Anderson in the final, while the women's singles tournament concluded with Sloane Stephens defeating Madison Keys in the final. The 2017 US Open was the 137th edition of the tournament and took place at the USTA Billie Jean King National Tennis Center in Flushing Meadows–Corona Park of Queens in New York City, New York, United States. The tournament was held on 14 DecoTurf hard courts. The tournament was an event run by the International Tennis Federation"}
    sentence_2: {'title': '', 'text': "2017 US Open – Women's Singles Angelique Kerber was the defending champion, but was defeated in the first round by Naomi Osaka. Kerber became the second US Open defending champion to lose in the first round after Svetlana Kuznetsova in 2005. Sloane Stephens won her first Grand Slam title, defeating Madison Keys in the final, 6–3, 6–0. It was the first all-American women's final at the US Open since 2002, and the second time in three years that the final featured two first-time Grand Slam singles finalists from the same country. Stephens became the second unseeded woman in the Open"}
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    
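
The token statistics in the dataset summary above can be reproduced with the model's tokenizer. A minimal sketch; the texts list stands in for the first 1000 values of one dataset column:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Stern5497/nir-2024-xlm-roberta-base")

# Stand-in for one column of the training data (e.g. sentence_0)
texts = ["who wrote treat you better by shawn mendes"]

lengths = [len(model.tokenizer(t)["input_ids"]) for t in texts]
print(min(lengths), sum(lengths) / len(lengths), max(lengths))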

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 1
  • fp16: True
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: round_robin
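
Together with the loss parameters listed under Training Dataset, these hyperparameters map onto the Sentence Transformers v3 training API roughly as follows. This is a minimal sketch, not the author's actual script: the toy train_dataset and the output_dir are assumptions, since the card names neither the dataset nor the output path.

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses
from sentence_transformers.training_args import (
    BatchSamplers,
    MultiDatasetBatchSamplers,
    SentenceTransformerTrainingArguments,
)

model = SentenceTransformer("FacebookAI/xlm-roberta-base")

# Assumption: a datasets.Dataset with the three columns described above
# (sentence_0 = query, sentence_1 = positive passage, sentence_2 = hard negative).
train_dataset = Dataset.from_dict({
    "sentence_0": ["who wrote treat you better by shawn mendes"],
    "sentence_1": ["Treat You Better is a song recorded by Shawn Mendes."],
    "sentence_2": ["Scott Harris Friedman is an American songwriter."],
})

# scale=20.0 and cosine similarity are the loss parameters reported in the card
# (they are also the library defaults).
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # assumption: not stated in the card
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    fp16=True,  # assumes a CUDA device
    batch_sampler=BatchSamplers.NO_DUPLICATES,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()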

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.0485 500 1.6163
0.0971 1000 0.8086
0.1456 1500 0.6766
0.1941 2000 0.6124
0.2426 2500 0.5374
0.2912 3000 0.5115
0.3397 3500 0.4823
0.3882 4000 0.4268
0.4368 4500 0.422
0.4853 5000 0.4014
0.5338 5500 0.3765
0.5824 6000 0.3689
0.6309 6500 0.3551
0.6794 7000 0.3359
0.7279 7500 0.326
0.7765 8000 0.3158
0.8250 8500 0.2945
0.8735 9000 0.2836
0.9221 9500 0.3043
0.9706 10000 0.2761
1.0 10303 -

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.0
  • Transformers: 4.40.2
  • PyTorch: 2.3.0+cu118
  • Accelerate: 0.29.3
  • Datasets: 2.19.0
  • Tokenizers: 0.19.1
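
To reproduce this environment, the listed versions can be pinned at install time. Standard pip usage; the CUDA 11.8 build of PyTorch needs the matching wheel index, and your machine may require a different one:

pip install sentence-transformers==3.0.0 transformers==4.40.2 accelerate==0.29.3 datasets==2.19.0 tokenizers==0.19.1
pip install torch==2.3.0 --index-url https://download.pytorch.org/whl/cu118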

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}