
embed-andegpt-H384

This is a sentence-transformers model finetuned from nreimers/MiniLM-L6-H384-uncased. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nreimers/MiniLM-L6-H384-uncased
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Language: es
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
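
The Pooling module above uses mean pooling over token embeddings (pooling_mode_mean_tokens: True). For illustration, here is a minimal sketch of the equivalent computation with plain 🤗 Transformers; the mean_pooling helper and the example sentences are ours, not part of the model card:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("enpaiva/embed-andegpt-H384")
model = AutoModel.from_pretrained("enpaiva/embed-andegpt-H384")

def mean_pooling(token_embeddings, attention_mask):
    # Average token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

sentences = ["Esta es una frase de ejemplo.", "Cada frase se convierte en un vector."]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)
embeddings = mean_pooling(output.last_hidden_state, encoded["attention_mask"])
print(embeddings.shape)  # torch.Size([2, 384])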

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("enpaiva/embed-andegpt-H384")
# Run inference
sentences = [
    '¿Cuál es el nombre del reglamento que se menciona en la información proporcionada?',
    'Reglamento de Baja Tensión de la ANDE: El 10- trata sobre Partes de que se compone una instalación eléctrica: y tiene las siguientes sub-secciones: <sub-section>10.1</sub-section>',
    'Reglamento de Baja Tensión de la ANDE: El 37- trata sobre Soldadura eléctrica: y tiene las siguientes sub-secciones: <sub-section>37.1</sub-section>, <sub-section>37.2</sub-section>, <sub-section>37.3</sub-section>, <sub-section>37.4</sub-section>',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
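
The same embeddings support semantic search over a document collection. Below is a small sketch using sentence_transformers.util.semantic_search; the corpus and query are illustrative, modeled on the regulation-section examples above:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("enpaiva/embed-andegpt-H384")

# Illustrative corpus: section summaries like those in the examples above.
corpus = [
    "Reglamento de Baja Tensión de la ANDE: El 10- trata sobre Partes de que se compone una instalación eléctrica",
    "Reglamento de Baja Tensión de la ANDE: El 37- trata sobre Soldadura eléctrica",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = "¿Qué sección trata sobre soldadura eléctrica?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus entries by cosine similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}  {corpus[hit['corpus_id']]}")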

Evaluation

Metrics

Triplet (andegpt-dev)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy     | 0.9983 |
| dot_accuracy        | 0.0022 |
| manhattan_accuracy  | 0.9985 |
| euclidean_accuracy  | 0.9983 |
| max_accuracy        | 0.9985 |

Triplet (andegpt-test)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy     | 0.9973 |
| dot_accuracy        | 0.0024 |
| manhattan_accuracy  | 0.9971 |
| euclidean_accuracy  | 0.9973 |
| max_accuracy        | 0.9973 |
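
These accuracies measure how often an anchor sentence embeds closer to its positive than to its negative under each distance. A hedged sketch of how such numbers can be reproduced with TripletEvaluator; the triplets below are placeholders, not the actual andegpt dev/test splits:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("enpaiva/embed-andegpt-H384")

# Placeholder triplets; the reported metrics used the andegpt dev/test sets.
anchors   = ["¿Qué sección trata sobre soldadura eléctrica?"]
positives = ["Reglamento de Baja Tensión de la ANDE: El 37- trata sobre Soldadura eléctrica"]
negatives = ["Reglamento de Baja Tensión de la ANDE: El 10- trata sobre Partes de que se compone una instalación eléctrica"]

evaluator = TripletEvaluator(anchors, positives, negatives, name="andegpt-dev")
results = evaluator(model)
print(results)  # e.g. {'andegpt-dev_cosine_accuracy': ..., ...}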

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • prediction_loss_only: False
  • per_device_train_batch_size: 32
  • learning_rate: 2e-05
  • lr_scheduler_type: cosine
  • log_level_replica: passive
  • log_on_each_node: False
  • logging_nan_inf_filter: False
  • bf16: True
  • batch_sampler: no_duplicates
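
Combined with the MultipleNegativesRankingLoss cited at the end of this card, the non-default settings above correspond roughly to the following training setup. This is a sketch under assumptions: the dataset is a placeholder, not the actual training data.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("nreimers/MiniLM-L6-H384-uncased")

# Placeholder (anchor, positive) pairs; the loss uses in-batch negatives.
train_dataset = Dataset.from_dict({
    "anchor": ["¿Qué sección trata sobre soldadura eléctrica?"],
    "positive": ["Reglamento de Baja Tensión de la ANDE: El 37- trata sobre Soldadura eléctrica"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="embed-andegpt-H384",
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    bf16=True,
    # Avoid duplicate positives within a batch (they would act as false negatives).
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()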

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • prediction_loss_only: False
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: passive
  • log_on_each_node: False
  • logging_nan_inf_filter: False
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch  | Step | Training Loss | Validation Loss | andegpt-dev_max_accuracy | andegpt-test_max_accuracy |
|:-------|:-----|:--------------|:----------------|:-------------------------|:--------------------------|
| 0      | 0    | -             | -               | 0.5920                   | -                         |
| 0.1079 | 250  | 2.3094        | 0.7200          | 0.9597                   | -                         |
| 0.2158 | 500  | 0.7952        | 0.3598          | 0.9813                   | -                         |
| 0.3237 | 750  | 0.4862        | 0.2162          | 0.9910                   | -                         |
| 0.4316 | 1000 | 0.3304        | 0.1558          | 0.9927                   | -                         |
| 0.5395 | 1250 | 0.2527        | 0.1140          | 0.9961                   | -                         |
| 0.6474 | 1500 | 0.1987        | 0.0859          | 0.9964                   | -                         |
| 0.7553 | 1750 | 0.1617        | 0.0729          | 0.9959                   | -                         |
| 0.8632 | 2000 | 0.1419        | 0.0562          | 0.9966                   | -                         |
| 0.9711 | 2250 | 0.1132        | 0.0495          | 0.9968                   | -                         |
| 1.0790 | 2500 | 0.1043        | 0.0429          | 0.9971                   | -                         |
| 1.1869 | 2750 | 0.0947        | 0.0368          | 0.9978                   | -                         |
| 1.2948 | 3000 | 0.0736        | 0.0367          | 0.9976                   | -                         |
| 1.4027 | 3250 | 0.0661        | 0.0296          | 0.9978                   | -                         |
| 1.5106 | 3500 | 0.0613        | 0.0279          | 0.9985                   | -                         |
| 1.6185 | 3750 | 0.0607        | 0.0264          | 0.9983                   | -                         |
| 1.7264 | 4000 | 0.0521        | 0.0238          | 0.9985                   | -                         |
| 1.8343 | 4250 | 0.0495        | 0.0216          | 0.9985                   | -                         |
| 1.9422 | 4500 | 0.0425        | 0.0211          | 0.9983                   | -                         |
| 2.0501 | 4750 | 0.0428        | 0.0200          | 0.9983                   | -                         |
| 2.1580 | 5000 | 0.0435        | 0.0190          | 0.9985                   | -                         |
| 2.2659 | 5250 | 0.0393        | 0.0188          | 0.9983                   | -                         |
| 2.3738 | 5500 | 0.0356        | 0.0182          | 0.9983                   | -                         |
| 2.4817 | 5750 | 0.0351        | 0.0180          | 0.9988                   | -                         |
| 2.5896 | 6000 | 0.0394        | 0.0181          | 0.9985                   | -                         |
| 2.5973 | 6018 | -             | -               | -                        | 0.9973                    |

Framework Versions

  • Python: 3.11.0
  • Sentence Transformers: 3.0.1
  • Transformers: 4.39.3
  • PyTorch: 2.2.0+cu121
  • Accelerate: 0.28.0
  • Datasets: 2.20.0
  • Tokenizers: 0.15.2
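
To reproduce this environment, the library versions above can be pinned at install time, for example (the CUDA-specific torch build, 2.2.0+cu121, may require the matching PyTorch package index):

pip install sentence-transformers==3.0.1 transformers==4.39.3 torch==2.2.0 accelerate==0.28.0 datasets==2.20.0 tokenizers==0.15.2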

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}