
SentenceTransformer based on google/electra-large-discriminator

This is a sentence-transformers model finetuned from google/electra-large-discriminator on the PiC/phrase_similarity dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: google/electra-large-discriminator
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: PiC/phrase_similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: ElectraModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
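
The pooling configuration above means each sentence embedding is the mean of the ELECTRA token embeddings. A minimal sketch for verifying these properties after loading (the load call mirrors the Usage section below):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Deehan1866/finetuned-notrain10-electra-large-discriminator")
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 1024
print(model[1].pooling_mode_mean_tokens)         # True: mean pooling over token embeddings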

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Deehan1866/finetuned-notrain10-electra-large-discriminator")
# Run inference
sentences = [
    'He also played with the Turkish 2nd Division team Pertevniyal, which was at the time the farm team of Efes, via a dual license.',
    'He also played with the Turkish 2nd Division team Pertevniyal, which was at the time the farm team of Efes, via a two-part authorization.',
    'Storage/centre tracks are found in the vicinity of the following stations:\nOther song highlights.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
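
Because the model was trained as a binary phrase-similarity classifier, a similarity cutoff turns scores into labels. A sketch using the cosine F1 threshold reported under Evaluation below (0.8711); the cutoff was tuned on this dataset, so treat it as a starting point elsewhere:

# Compare the original phrase with its candidate paraphrase (the first two sentences above)
emb = model.encode(sentences[:2])
score = model.similarity(emb[0:1], emb[1:2]).item()
label = int(score >= 0.8711)  # 1 = phrase matches in context, 0 = it does not
print(score, label)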

Evaluation

Metrics

Binary Classification

Metric               cosine     dot        manhattan   euclidean   max
accuracy             0.747      0.686      0.749       0.749       0.749
accuracy_threshold   0.9788     295.4237   78.4084     4.0771      295.4237
f1                   0.7606     0.7375     0.758       0.7571      0.7606
f1_threshold         0.8711     272.2527   192.5798    9.8659      272.2527
precision            0.6831     0.6173     0.6827      0.6799      0.6831
recall               0.858      0.916      0.852       0.854       0.916
ap                   0.7829     0.6362     0.7776      0.7825      0.7829
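
These numbers are what a binary-classification evaluator produces over labeled sentence pairs; the max column takes the best value across the four similarity functions. A minimal sketch of running such an evaluation with BinaryClassificationEvaluator, assuming the dataset's validation split and the column names listed under Training Details:

from datasets import load_dataset
from sentence_transformers.evaluation import BinaryClassificationEvaluator

eval_ds = load_dataset("PiC/phrase_similarity", split="validation")
evaluator = BinaryClassificationEvaluator(
    sentences1=eval_ds["sentence1"],
    sentences2=eval_ds["sentence2"],
    labels=eval_ds["label"],
    name="phrase-similarity-dev",
)
results = evaluator(model)  # dict of accuracy, F1, precision, recall, and AP per similarity function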

Training Details

Training Dataset

PiC/phrase_similarity

  • Dataset: PiC/phrase_similarity at fc67ce7
  • Size: 7,004 training samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 1000 samples:

                sentence1            sentence2            label
    type        string               string               int
    details     min: 12 tokens       min: 12 tokens       0: ~48.80%
                mean: 26.35 tokens   mean: 26.89 tokens   1: ~51.20%
                max: 57 tokens       max: 58 tokens

  • Samples:
    • sentence1: newly formed camp is released from the membrane and diffuses across the intracellular space where it serves to activate pka.
      sentence2: recently made encampment is released from the membrane and diffuses across the intracellular space where it serves to activate pka.
      label: 0
    • sentence1: According to one data, in 1910, on others – in 1915, the mansion became Natalya Dmitriyevna Shchuchkina's property.
      sentence2: According to a particular statistic, in 1910, on others – in 1915, the mansion became Natalya Dmitriyevna Shchuchkina's property.
      label: 1
    • sentence1: Note that Fact 1 does not assume any particular structure on the set formula_65.
      sentence2: Note that Fact 1 does not assume any specific edifice on the set formula_65.
      label: 0
  • Loss: SoftmaxLoss
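
SoftmaxLoss is the Sentence-BERT classification objective: it concatenates the two sentence embeddings u and v with |u - v| and trains a softmax classifier over the labels. A minimal sketch of how it would be constructed for this model (2 labels, 1024-dimensional embeddings):

from sentence_transformers import losses

train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),  # 1024
    num_labels=2,  # binary labels in PiC/phrase_similarity
)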

Evaluation Dataset

PiC/phrase_similarity

  • Dataset: PiC/phrase_similarity at fc67ce7
  • Size: 1,000 evaluation samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 1000 samples:

                sentence1            sentence2            label
    type        string               string               int
    details     min: 9 tokens        min: 10 tokens       0: ~50.00%
                mean: 26.21 tokens   mean: 26.8 tokens    1: ~50.00%
                max: 61 tokens       max: 61 tokens

  • Samples:
    • sentence1: after theo's apparent death, she decides to leave first colony and ends up traveling with the apostles.
      sentence2: after theo's apparent death, she decides to leave original settlement and ends up traveling with the apostles.
      label: 0
    • sentence1: The guard assigned to Vivian leaves her to prevent the robbery, allowing her to connect to the bank's network.
      sentence2: The guard assigned to Vivian leaves her to prevent the robbery, allowing her to connect to the bank's locations.
      label: 0
    • sentence1: Two days later Louis XVI banished Necker by a "lettre de cachet" for his very public exchange of pamphlets.
      sentence2: Two days later Louis XVI banished Necker by a "lettre de cachet" for his very free forum of pamphlets.
      label: 0
  • Loss: SoftmaxLoss
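
Both splits load directly from the Hub. A minimal sketch, assuming a validation split and keeping only the three columns the loss consumes (the pinned revision above is abbreviated, so it is omitted here):

from datasets import load_dataset

dataset = load_dataset("PiC/phrase_similarity")
train_ds = dataset["train"].select_columns(["sentence1", "sentence2", "label"])      # 7,004 pairs
eval_ds = dataset["validation"].select_columns(["sentence1", "sentence2", "label"])  # 1,000 pairs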

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 10
  • warmup_ratio: 0.1
  • load_best_model_at_end: True
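
A minimal sketch of wiring these values into the Sentence Transformers v3 trainer; output_dir is a placeholder, and model, train_ds, eval_ds, and train_loss follow the sketches above:

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=10,
    warmup_ratio=0.1,
    load_best_model_at_end=True,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    loss=train_loss,
)
trainer.train()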

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch     Step   Training Loss   Validation Loss   quora-duplicates-dev_max_ap
0         0      -               -                 0.6721
0.2283    100    -               0.6879            0.6801
0.4566    200    -               0.5565            0.7941
0.6849 *  300    -               0.5463            0.7829
0.9132    400    -               0.5501            0.7628
1.1416    500    0.6308          0.6947            0.5911
1.3699    600    -               0.6935            0.5532
1.5982    700    -               0.6960            0.5518
1.8265    800    -               0.6936            0.5543
2.0548    900    -               0.6958            0.5529
2.2831    1000   0.6944          0.6933            0.5545
2.5114    1100   -               0.6956            0.5574
2.7397    1200   -               0.6931            0.5582
2.9680    1300   -               0.6932            0.5593
3.1963    1400   -               0.6932            0.5573
3.4247    1500   0.6939          0.6934            0.5581
3.6530    1600   -               0.6932            0.5572
3.8813    1700   -               0.6932            0.5589
4.1096    1800   -               0.6967            0.5594
4.3379    1900   -               0.6945            0.5622
4.5662    2000   0.6944          0.6940            0.5615
4.7945    2100   -               0.6945            0.5637
5.0228    2200   -               0.6931            0.5730
5.2511    2300   -               0.6933            0.5811
5.4795    2400   -               0.6969            0.5872
5.7078    2500   0.6939          0.6940            0.5902
5.9361    2600   -               0.6936            0.5934
6.1644    2700   -               0.6932            0.5913
6.3927    2800   -               0.6935            0.5861
6.6210    2900   -               0.6947            0.5856
6.8493    3000   0.6936          0.6941            0.5978
7.0776    3100   -               0.6934            0.5955
7.3059    3200   -               0.6931            0.5910
7.5342    3300   -               0.6933            0.5963
7.7626    3400   -               0.6931            0.5966
7.9909    3500   0.6936          0.6932            0.6085
8.2192    3600   -               0.6931            0.6298
8.4475    3700   -               0.6934            0.6341
8.6758    3800   -               0.6931            0.6256
8.9041    3900   -               0.6931            0.6155
9.1324    4000   0.6936          0.6931            0.6215
9.3607    4100   -               0.6931            0.6214
9.5890    4200   -               0.6931            0.6339
9.8174    4300   -               0.6931            0.6440
10.0      4380   -               -                 0.7829

  • The starred row (step 300) denotes the saved checkpoint: with load_best_model_at_end enabled it has the lowest validation loss, and its max_ap (0.7829) matches the final evaluation in the last row.

Framework Versions

  • Python: 3.10.10
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.3
  • PyTorch: 2.2.1+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers and SoftmaxLoss

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
