
SentenceTransformer based on mixedbread-ai/mxbai-embed-large-v1

This is a sentence-transformers model finetuned from mixedbread-ai/mxbai-embed-large-v1. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: mixedbread-ai/mxbai-embed-large-v1
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 335M parameters (F32, Safetensors)

Model Sources

  • Documentation: https://sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
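
The stack above is a BertModel encoder followed by mean pooling over the token embeddings (CLS, max, and weighted-mean pooling are disabled). The following is a minimal sketch of the equivalent computation with the plain transformers API, assuming the repository exposes the underlying BERT weights in the usual Hugging Face layout (as Sentence Transformers checkpoints typically do); the input sentence is only a placeholder:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Daxtra/onet_sbert")
bert = AutoModel.from_pretrained("Daxtra/onet_sbert")

encoded = tokenizer(
    ["Install electrical components, equipment, or systems."],
    padding=True, truncation=True, max_length=128, return_tensors="pt",
)

with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state  # (batch, seq_len, 1024)

# Mean pooling over non-padding tokens, mirroring the Pooling module above
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embeddings.shape)  # torch.Size([1, 1024])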

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Daxtra/onet_sbert")
# Run inference
sentences = [
    'Distribute materials, supplies, or subassemblies to work areas.\nComplete, review, or maintain production, time, or component waste reports.\nAdjust, repair, or replace electrical or electronic components to correct defects and to ensure conformance to specifications.\nAssemble electrical or electronic systems or support structures and install components, units, subassemblies, wiring, or assembly casings, using rivets, bolts, soldering or micro-welding equipment.\nFabricate or form parts, coils, or structures according to specifications, using drills, calipers, cutters, or saws.',
    'Mark products, workpieces, or equipment with identifying information.\nPackage products for storage or shipment.\nRepair parts or assemblies.\nInstruct workers to use equipment or perform technical procedures.\nExchange information with colleagues.\nDrill holes in parts, equipment, or materials.\nReview blueprints or other instructions to determine operational methods or sequences.\nDistribute supplies to workers.\nAssemble electrical or electronic equipment.\nConfer with others to resolve production problems or equipment malfunctions.\nTest electrical equipment or systems to ensure proper functioning.\nAdjust flow of electricity to tools or production equipment.\nOperate welding equipment.\nClean workpieces or finished products.\nRead work orders or other instructions to determine product specifications or materials requirements.\nRecord operational or production data.',
    'Supervise service workers.\nManage budgets for personal services operations.\nArrange items for use or display.\nDeliver items.\nPrepare operational reports or records.\nCollaborate with others to determine production details.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
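
Since the similarity function is cosine similarity, the same encode and similarity calls also cover the semantic-search use case, for example ranking candidate activity statements against a single query. A short sketch, reusing statements from the example above (the ranking shown is illustrative only):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Daxtra/onet_sbert")

query = "Assemble electrical or electronic systems and install components, units, subassemblies, wiring, or assembly casings."
candidates = [
    "Assemble electrical or electronic equipment.",
    "Supervise service workers.",
    "Operate welding equipment.",
]

# Encode the query and the candidates, then rank candidates by cosine similarity
query_embedding = model.encode([query])
candidate_embeddings = model.encode(candidates)
scores = model.similarity(query_embedding, candidate_embeddings)  # shape [1, 3]

best = int(scores[0].argmax())
print(candidates[best], float(scores[0][best]))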

Training Details

Training Dataset

Unnamed Dataset

  • Size: 4,488 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: type string; min: 22 tokens, mean: 81.13 tokens, max: 128 tokens
    • sentence_1: type string; min: 15 tokens, mean: 76.36 tokens, max: 128 tokens
  • Samples (first three pairs):
    • Sample 1
      sentence_0:
      Perform business management duties, such as maintaining records or files, preparing reports, or ordering supplies or equipment.
      Maintain current electrician's license or identification card to meet governmental regulations.
      Inspect electrical systems, equipment, or components to identify hazards, defects, or the need for adjustment or repair, and to ensure compliance with codes.
      Perform physically demanding tasks, such as digging trenches to lay conduit or moving or lifting heavy objects.
      Construct or fabricate parts, using hand tools, according to specifications.
      sentence_1:
      Dig holes or trenches.
      Communicate with other construction or extraction personnel to discuss project details.
      Update job related knowledge or skills.
      Thread wire or cable through ducts or conduits.
      Repair electrical equipment.
      Assist skilled construction or extraction personnel.
      Test electrical equipment or systems to ensure proper functioning.
      Train construction or extraction personnel.
      Direct construction or extraction personnel.
      Fabricate parts or components.
      Plan layout of construction, installation, or repairs.
      Create construction or installation diagrams.
      Install electrical components, equipment, or systems.
    • Sample 2
      sentence_0:
      Direct the work of nurses, residents, or other staff to provide patient care.
      Treat lower urinary tract dysfunctions using equipment such as diathermy machines, catheters, cystoscopes, or radium emanation tubes.
      Treat urologic disorders using alternatives to traditional surgery such as extracorporeal shock wave lithotripsy, laparoscopy, or laser techniques.
      Document or review patients' histories.
      Perform brachytherapy, cryotherapy, high intensity focused ultrasound (HIFU), or photodynamic therapy to treat prostate or other cancers.
      Provide urology consultation to physicians or other health care professionals.
      sentence_1:
      Train medical providers.
      Operate diagnostic imaging equipment.
      Gather medical information from patient histories.
      Advise medical personnel regarding healthcare issues.
      Administer non-intravenous medications.
      Supervise patient care personnel.
      Record patient medical histories.
      Prescribe medications.
      Diagnose medical conditions.
    • Sample 3
      sentence_0:
      Examine roadway and clear obstructions from the path of travel.
      Observe hand signals, grade stakes, or other markings when operating machines.
      Read written instructions or confer with supervisors about schedules and materials to be moved.
      Measure, weigh, or verify levels of rock, gravel, or other excavated material to prevent equipment overloads.
      Monitor loading processes to ensure that materials are loaded according to specifications.
      sentence_1:
      Connect cables or electrical lines.
      Remove debris or damaged materials.
      Operate conveyors or other industrial material moving equipment.
  • Loss: MultipleNegativesRankingLoss (see the sketch after this list) with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
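
As a rough, hypothetical reconstruction of how pairs with these columns feed that loss in the Sentence Transformers v3 training API (the single pair below is copied from the samples above; this is not the actual training script):

from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")

# Hypothetical (sentence_0, sentence_1) pairs in the column layout described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["Construct or fabricate parts, using hand tools, according to specifications."],
    "sentence_1": ["Fabricate parts or components."],
})

# In-batch negatives ranking loss with the parameters listed above
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)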
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 24
  • per_device_eval_batch_size: 24
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin
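
A hedged sketch of how these non-default values map onto SentenceTransformerTrainingArguments, continuing from the dataset and loss sketch above (output_dir is a hypothetical path and the eval split is a placeholder, since the card does not publish it):

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="onet_sbert",                    # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",  # only matters when training on multiple datasets
)

trainer = SentenceTransformerTrainer(
    model=model,                  # base model from the sketch above
    args=args,
    train_dataset=train_dataset,  # pair dataset from the sketch above
    eval_dataset=train_dataset,   # placeholder; the actual evaluation split is not published
    loss=loss,
)
trainer.train()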

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 24
  • per_device_eval_batch_size: 24
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step
0.0963 18
0.1925 36
0.2888 54
0.3850 72
0.4813 90
0.5775 108
0.6738 126
0.7701 144
0.8663 162
0.9626 180
1.0 187

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.2.0
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 3.0.1
  • Tokenizers: 0.19.1
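
To approximate this environment, the versions above can be pinned at install time (a convenience command inferred from the list, not part of the original card):

pip install sentence-transformers==3.2.0 transformers==4.44.2 torch==2.4.1 accelerate==0.34.2 datasets==3.0.1 tokenizers==0.19.1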

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}