---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:46716
  - loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
  - source_sentence: Structurally, diplomonads have two equal-sized what and multiple flagella?
    sentences:
      - >-
        deciding when to buy or sell a stock is not an easy task because the
        market is hard to predict, being influenced by political and economic
        factors. thus, methodologies based on computational intelligence have
        been applied to this challenging problem. in this work, every day the
        stocks are ranked by technique for order preference by similarity to
        ideal solution ( topsis ) using technical analysis criteria, and the
        most suitable stock is selected for purchase. even so, it may occur that
        the market is not favorable to purchase on certain days, or even, the
        topsis make an incorrect selection. to improve the selection, another
        method should be used. so, a hybrid
      - >-
        we present the analysis of the brightest flare that was recorded in the
        \ emph { insight } - hmxt data set, in a broad energy range ( 2 $ - $
        200 kev ) from the microquasar grs ~ 1915 + 105 during an unusual low -
        luminosity state. this flare was detected by \ emph { insight } - hxmt
        among a series of flares during 2 june 2019 utc 16 : 37 : 06 to 20 : 11
        : 36, with a 2 - 200 kev luminosity of 3. 4 $ - $ 7. 27 $ \ times10 ^ {
        38 } $ er
      - nuclei
  - source_sentence: >-
      What instruments used in guidance systems to indicate directions in space
      must have an angular momentum that does not change in direction?
    sentences:
      - >-
        magnetic and transport properties of near - stoichiometric metastable
        fexmnygaz alloys ( 46 < x < 52, 17 < y25, 26 < z < 30 ) with face -
        centered cubic ( fcc ), body - centered cubic ( bcc ), and two - phase (
        fcc + bcc ) structures are investigated. the experimental results are
        analyzed in terms of first - principles calculations of stoichiometric
        fe2mnga alloy with the l21, l12, and the tetragonally distorted l21
        structural orderings. it is shown that the pure bcc and fcc phases have
        distinct magnetic
      - >-
        k nearest neighbor ( knn ) joins are used in scientific domains for data
        analysis, and are building blocks of several well - known algorithms.
        knn - joins find the knn of all points in a dataset. this paper focuses
        on a hybrid cpu / gpu approach for low - dimensional knn - joins, where
        the gpu may not yield substantial performance gains over parallel cpu
        algorithms. we utilize a work queue that prioritizes computing data
        points in high density regions on the gpu, and low density regions on
        the cpu, thereby taking advantage of each architecture ' s relative
        strengths. our approach, hybridknn - join,
      - >-
        the fact that these states are effectively decoupled from propagating
        photons. we prove that scattering of a parity - invariant single photon
        on a qubit pair, combined with a properly engineered time variation of
        the qubit detuning, is not only feasible, but also more effective than
        strategies based on the relaxation of the excited states of the qubits.
        the use of tensor network methods to simulate the proposed scheme
        enables to include photon delays in collision models, thus opening the
        possibility to follow the time evolution of the full quantum system,
        including qubits and field, and to efficiently implement and
        characterize the dynamics in non
  - source_sentence: >-
      If pollination and fertilization occur, a diploid zygote forms within an
      ovule, located where
    sentences:
      - >-
        while cosmic rays $ ( e \ gtrsim 1 \, \ mathrm { gev } ) $ are well
        coupled to a galaxy ' s interstellar medium ( ism ) at scales of $ l >
        100 \, \ mathrm { pc } $, adjusting stratification and driving outflows,
        their impact on small scales is less clear. based on calculations of the
        cosmic ray diffusion coefficient from observations of the grammage in
        the milky way, cosmic rays have little time to dynamically impact the
        ism on those small scales. using numerical simulations, we explore how
        more complex cosmic ray transport could allow cosmic rays to couple
      - centripetal force
      - >-
        derived simple analytical expressions for the maximum growth rate,
        corresponding to the most unstable mode of the system. these expressions
        provide the explicit dependence of the growth rate on the various
        equilibrium parameters. for small angles the growth time is linearly
        proportional to the shear angle, and in this regime the single interface
        problem and the slab problem tend to the same result. on the contrary,
        in the limit of large angles and for the interface problem the growth
        time is essentially independent of the shear angle. in this regime we
        have also been able to calculate an approximate expression for the
        growth time for the slab configuration. magnetic shear can have a strong
        effect on the growth rates
  - source_sentence: >-
      When the hydrogen is nearly used up, the star can fuse which element into
      heavier elements?
    sentences:
      - >-
        the 50 ~ kton iron calorimeter ( ical ) detector at the underground
        india based neutrino observatory ( ino ) will make measurements on
        atmospheric neutrinos. muons produced in charged current ( cc )
        interactions of muon neutrinos with the iron are tracked spatially and
        temporally through the signals that they produce in the resistive plate
        chambers ~ ( rpcs ) that are interleaved with iron layers. since the
        rpcs will be operated in the avalanche mode the signal rise - time is $
        \ sim ~ 1 ~ \ rm { nsec } $ resulting in a fast time response
      - magnesium in air
      - >-
        pbhs. after discussing pbh formation as well as several inflation models
        leading to pbh production, we summarize various existing and future
        observational constraints. we then present topics on formation of pbh
        binaries, gravitational waves from pbh binaries, various observational
        tests of pbhs by using gravitational waves.
  - source_sentence: How many different main types of diabetes are there?
    sentences:
      - skin
      - two
      - >-
        a connection between relativistic quantum mechanics in the foldy -
        wouthuysen representation and the paraxial equations is established for
        a dirac particle in external fields. the paraxial form of the landau
        eigenfunction for a relativistic electron in a uniform magnetic field is
        determined. the obtained wave function contains the gouy phase and
        significantly approaches to the paraxial wave function for a free
        electron.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: sciq eval
          type: sciq-eval
        metrics:
          - type: cosine_accuracy@1
            value: 0.084
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.192
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.26
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.367
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.084
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.064
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.052
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.0367
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.084
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.192
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.26
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.367
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.20773622543165599
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.1588857142857143
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.17411230071844377
            name: Cosine Map@100
---

SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
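
The three stages above can be reproduced with the transformers library directly. The following is a minimal sketch, assuming the model id NilsML/fine_tuned_miniLM (inferred from this repository): take the BertModel token embeddings, mean-pool them over the attention mask, then L2-normalize.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "NilsML/fine_tuned_miniLM"  # assumption: id inferred from this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)

sentences = ["How many different main types of diabetes are there?", "two"]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state  # (batch, seq_len, 384)

# (1) Pooling: mean over non-padding tokens (pooling_mode_mean_tokens=True)
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# (2) Normalize: unit-length output, so dot products equal cosine similarities
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 384])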

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub (model id inferred from this repository)
model = SentenceTransformer("NilsML/fine_tuned_miniLM")
# Run inference
sentences = [
    'How many different main types of diabetes are there?',
    'two',
    'a connection between relativistic quantum mechanics in the foldy - wouthuysen representation and the paraxial equations is established for a dirac particle in external fields. the paraxial form of the landau eigenfunction for a relativistic electron in a uniform magnetic field is determined. the obtained wave function contains the gouy phase and significantly approaches to the paraxial wave function for a free electron.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
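
Since the embeddings are unit-normalized (the Normalize() module), ranking a small corpus against a query is a single similarity call. A short sketch reusing the model loaded above, with a toy corpus:

query_embedding = model.encode(["When the hydrogen is nearly used up, the star can fuse which element into heavier elements?"])
corpus_embeddings = model.encode(["magnesium in air", "two", "skin"])  # toy corpus
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 3]
print(scores.argmax(dim=1))  # index of the best-scoring document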

Evaluation

Metrics

Information Retrieval

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.084  |
| cosine_accuracy@3   | 0.192  |
| cosine_accuracy@5   | 0.26   |
| cosine_accuracy@10  | 0.367  |
| cosine_precision@1  | 0.084  |
| cosine_precision@3  | 0.064  |
| cosine_precision@5  | 0.052  |
| cosine_precision@10 | 0.0367 |
| cosine_recall@1     | 0.084  |
| cosine_recall@3     | 0.192  |
| cosine_recall@5     | 0.26   |
| cosine_recall@10    | 0.367  |
| cosine_ndcg@10      | 0.2077 |
| cosine_mrr@10       | 0.1589 |
| cosine_map@100      | 0.1741 |
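
These figures come from the sentence-transformers InformationRetrievalEvaluator, whose output keys match the "sciq-eval_cosine_ndcg@10" column in the training logs below. A minimal sketch of how such an evaluation is wired up; the queries, corpus, and relevant_docs here are placeholders, since the exact sciq-eval split is not distributed with this card:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder evaluation data (the real run used the sciq-eval split)
queries = {"q1": "How many different main types of diabetes are there?"}
corpus = {"d1": "two", "d2": "skin", "d3": "nuclei"}
relevant_docs = {"q1": {"d1"}}  # which corpus ids answer each query

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="sciq-eval")
model = SentenceTransformer("NilsML/fine_tuned_miniLM")  # model id inferred from this repository
results = evaluator(model)
print(results["sciq-eval_cosine_ndcg@10"])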

Training Details

Training Dataset

Unnamed Dataset

  • Size: 46,716 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    |         | sentence_0                                        | sentence_1                                         | label                          |
    |:--------|:--------------------------------------------------|:---------------------------------------------------|:-------------------------------|
    | type    | string                                            | string                                             | float                          |
    | details | min: 7 tokens, mean: 17.62 tokens, max: 63 tokens | min: 3 tokens, mean: 92.89 tokens, max: 133 tokens | min: 0.0, mean: 0.23, max: 1.0 |
  • Samples:
    | sentence_0 | sentence_1 | label |
    |:-----------|:-----------|:------|
    | Unlike plants, animal species rely almost exclusively on what type of reproduction? | 0. 4, \ sim 4, and \ sim 300 \ mum are stronger than 10 ^ 5, 10 ^ 8, and 10 ^ 4 times those of the local interstellar radiation field ( isrf ). below these values, the chemical pumping is the dominant source of excitation of the j > 1 levels, even at high kinetic temperatures ( \ sim 1000 k ). the far - infrared emission lines of ch + observed in the orion bar and the ngc 7027 pdrs are consistent with the predictions of our excitation model assuming an incident far - ultraviolet ( fuv ) radiation field of \ sim 3 \ times 10 | 0.0 |
    | What type of energy occurs by splitting the nuclei of radioactive uranium? | we study the potential of future electron - ion collider ( eic ) data to probe four - fermion operators in the standard model effective field theory ( smeft ). the ability to perform measurements with both polarized electron and proton beams at the eic provides a powerful tool that can disentangle the effects from different smeft operators. we compare the potential constraints from an eic with those obtained from drell - yan data at the large hadron collider. we show that eic data plays an important complementary role since it probes combinations of wilson coefficients not accessible through available drell - yan measurements. | 0.0 |
    | What element, which often forms polymers, has a unique ability to form covalent bonds with many other atoms? | some divergent series $ f $. the convergence sets on $ \ gamma : = \ { [ 1 : z : \ psi ( z ) ] : z \ in \ mathbb { c } \ } \ subset \ mathbb { c } ^ 2 \ subset \ mathbb { p } ^ 2 $, where $ \ psi $ is a transcendental entire holomorphic function, are also studied and we obtain that a subset on $ \ gamma $ is a convergence set in $ \ mathbb { p } ^ 2 $ if and only if it is a countable union of compact projectively convex sets, and | 0.0 |
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
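
For reference, a minimal sketch of instantiating this loss with the listed parameters; with in-batch negatives, every other sentence_1 in a batch acts as a negative for a given sentence_0.

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# scale=20.0 and cos_sim match the parameters listed above
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)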
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin
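
A minimal sketch of how these non-default values map onto SentenceTransformerTrainingArguments (output_dir is a hypothetical path):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="fine_tuned_miniLM",  # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=1,
    multi_dataset_batch_sampler="round_robin",
)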

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

| Epoch  | Step | Training Loss | sciq-eval_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:------------------------:|
| 0.0685 | 100  | -             | 0.1200                   |
| 0.1370 | 200  | -             | 0.1562                   |
| 0.2055 | 300  | -             | 0.1780                   |
| 0.2740 | 400  | -             | 0.1811                   |
| 0.3425 | 500  | 3.1705        | 0.1909                   |
| 0.4110 | 600  | -             | 0.1904                   |
| 0.4795 | 700  | -             | 0.1955                   |
| 0.5479 | 800  | -             | 0.2031                   |
| 0.6164 | 900  | -             | 0.2014                   |
| 0.6849 | 1000 | 2.9054        | 0.2002                   |
| 0.7534 | 1100 | -             | 0.2058                   |
| 0.8219 | 1200 | -             | 0.2083                   |
| 0.8904 | 1300 | -             | 0.2084                   |
| 0.9589 | 1400 | -             | 0.2076                   |
| 1.0    | 1460 | -             | 0.2077                   |

Framework Versions

  • Python: 3.12.8
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0
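
To recreate this environment, the versions above can be pinned at install time; a sketch, assuming a CUDA 12.4 build of PyTorch is available from the default index:

pip install "sentence-transformers==3.4.1" "transformers==4.51.3" "torch==2.5.1" "accelerate==1.3.0" "datasets==3.2.0" "tokenizers==0.21.0"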

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}