
SentenceTransformer based on indobenchmark/indobert-base-p2

This is a sentence-transformers model finetuned from indobenchmark/indobert-base-p2 on the afaji/indonli dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: indobenchmark/indobert-base-p2
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: afaji/indonli
  • Language: id

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
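
The Pooling module averages IndoBERT's token embeddings into a single 768-dimensional sentence vector. As an illustration, here is a minimal sketch of the same computation done with the transformers library directly; it assumes the cassador/2bs4lr2 checkpoint is available on the Hub:

# Sketch: reproduce Transformer + mean Pooling by hand with transformers.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cassador/2bs4lr2")
bert = AutoModel.from_pretrained("cassador/2bs4lr2")

encoded = tokenizer(
    ["Contoh kalimat."], padding=True, truncation=True,
    max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state  # (1, seq_len, 768)

# Mean pooling over non-padding tokens, matching
# pooling_mode_mean_tokens=True in the Pooling module above.
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])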

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("cassador/2bs4lr2")
# Run inference
sentences = [
    'Tujuan dari acara dengar pendapat CRTC adalah untuk mengumpulkan respons dari pada pemangku kepentingan industri ini dan dari masyarakat umum.',
    'Masyarakat umum dilibatkan untuk memberikan respon dalam acara dengar pendapat CRTC.',
    'Pembuat Rooms hanya bisa membuat meeting yang terbuka.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
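
Building on the snippet above (reusing model, sentences, and embeddings), here is a hedged sketch of semantic search with util.semantic_search from sentence-transformers; the query sentence is invented for illustration:

# Rank the encoded corpus against a new query and print the top matches.
from sentence_transformers import util

query_embedding = model.encode(["Siapa yang memberi respons dalam dengar pendapat CRTC?"])
hits = util.semantic_search(query_embedding, embeddings, top_k=2)
for hit in hits[0]:
    print(round(hit["score"], 4), sentences[hit["corpus_id"]])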

Evaluation

Metrics

Semantic Similarity (sts-dev)

| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.5893 |
| spearman_cosine    | 0.5802 |
| pearson_manhattan  | 0.5822 |
| spearman_manhattan | 0.5749 |
| pearson_euclidean  | 0.5862 |
| spearman_euclidean | 0.5765 |
| pearson_dot        | 0.591  |
| spearman_dot       | 0.5847 |
| pearson_max        | 0.591  |
| spearman_max       | 0.5847 |

Semantic Similarity (sts-test)

| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.2927 |
| spearman_cosine    | 0.267  |
| pearson_manhattan  | 0.2598 |
| spearman_manhattan | 0.2546 |
| pearson_euclidean  | 0.2669 |
| spearman_euclidean | 0.2563 |
| pearson_dot        | 0.3098 |
| spearman_dot       | 0.2953 |
| pearson_max        | 0.3098 |
| spearman_max       | 0.2953 |
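
Metric blocks like the ones above are typically produced with sentence-transformers' EmbeddingSimilarityEvaluator. The sketch below shows the general pattern; the sentence pairs and gold scores are placeholders, not the actual sts-dev or sts-test data:

# Evaluate embedding quality against gold similarity scores.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("cassador/2bs4lr2")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["Kucing tidur di sofa.", "Harga minyak naik."],
    sentences2=["Seekor kucing sedang tidur.", "Cuaca hari ini cerah."],
    scores=[0.9, 0.1],  # gold similarity scores in [0, 1]
    name="sts-dev",
)
results = evaluator(model)  # dict of pearson_/spearman_ metrics for cosine, manhattan, euclidean, dot
print(results)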

Training Details

Training Dataset

afaji/indonli

  • Dataset: afaji/indonli
  • Size: 6,915 training samples
  • Columns: premise, hypothesis, and label
  • Approximate statistics based on the first 1000 samples:

    |         | premise                                             | hypothesis                                        | label                  |
    |:--------|:----------------------------------------------------|:--------------------------------------------------|:-----------------------|
    | type    | string                                              | string                                            | int                    |
    | details | min: 12 tokens, mean: 29.26 tokens, max: 135 tokens | min: 6 tokens, mean: 12.13 tokens, max: 36 tokens | 0: ~51.00%, 1: ~49.00% |

  • Samples:

    | premise | hypothesis | label |
    |:--------|:-----------|:------|
    | Presiden Joko Widodo (Jokowi) menyampaikan prediksi bahwa wabah virus Corona (COVID-19) di Indonesia akan selesai akhir tahun ini. | Prediksi akhir wabah tidak disampaikan Jokowi. | 0 |
    | Meski biasanya hanya digunakan di fasilitas kesehatan, saat ini masker dan sarung tangan sekali pakai banyak dipakai di tingkat rumah tangga. | Masker sekali pakai banyak dipakai di tingkat rumah tangga. | 1 |
    | Seperti namanya, paket internet sahur Telkomsel ini ditujukan bagi pengguna yang menginginkan kuota ekstra, untuk menemani momen sahur sepanjang bulan puasa. | Paket internet sahur tidak ditujukan untuk saat sahur. | 0 |
  • Loss: SoftmaxLoss
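
As a rough illustration of this setup, the sketch below pairs SoftmaxLoss with the (premise, hypothesis, label) columns via SentenceTransformerTrainer. The load_dataset call and num_labels=2 (matching the ~51%/49% label split above) are assumptions; the card's 6,915-sample split may be a filtered subset of the full train split:

# Minimal SoftmaxLoss training sketch (names and splits illustrative).
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import SoftmaxLoss

model = SentenceTransformer("indobenchmark/indobert-base-p2")
train_dataset = load_dataset("afaji/indonli", split="train")

loss = SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=2,  # assumed from the 0/1 label distribution above
)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()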

Evaluation Dataset

afaji/indonli

  • Dataset: afaji/indonli
  • Size: 1,556 evaluation samples
  • Columns: premise, hypothesis, and label
  • Approximate statistics based on the first 1000 samples:

    |         | premise                                            | hypothesis                                        | label                  |
    |:--------|:---------------------------------------------------|:--------------------------------------------------|:-----------------------|
    | type    | string                                             | string                                            | int                    |
    | details | min: 9 tokens, mean: 28.07 tokens, max: 179 tokens | min: 6 tokens, mean: 12.15 tokens, max: 25 tokens | 0: ~47.90%, 1: ~52.10% |

  • Samples:

    | premise | hypothesis | label |
    |:--------|:-----------|:------|
    | Manuskrip tersebut berisi tiga catatan yang menceritakan bagaimana peristiwa jatuhnya meteorit serta laporan kematian akibat kejadian tersebut seperti dilansir dari Science Alert, Sabtu (25/4/2020). | Manuskrip tersebut tidak mencatat laporan kematian. | 0 |
    | Dilansir dari Business Insider, menurut observasi dari Mauna Loa Observatory di Hawaii pada karbon dioksida (CO2) di level mencapai 410 ppm tidak langsung memberikan efek pada pernapasan, karena tubuh manusia juga masih membutuhkan CO2 dalam kadar tertentu. | Tidak ada observasi yang pernah dilansir oleh Business Insider. | 0 |
    | Seorang wanita asal New York mengaku sangat benci air putih. | Tidak ada orang dari New York yang membenci air putih. | 0 |
  • Loss: SoftmaxLoss

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 4
  • learning_rate: 2e-05
  • num_train_epochs: 2
  • warmup_ratio: 0.1
  • fp16: True
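
These settings map directly onto SentenceTransformerTrainingArguments in sentence-transformers v3; a sketch, with output_dir as a placeholder:

# The non-default hyperparameters above, expressed as training arguments.
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    eval_strategy="epoch",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    learning_rate=2e-5,
    num_train_epochs=2,
    warmup_ratio=0.1,
    fp16=True,
)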

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 4
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch  | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:-------|:-----|:--------------|:----------------|:------------------------|:-------------------------|
| 0      | 0    | -             | -               | 0.1277                  | -                        |
| 0.0578 | 100  | 0.6725        | -               | -                       | -                        |
| 0.1157 | 200  | 0.5884        | -               | -                       | -                        |
| 0.1735 | 300  | 0.5397        | -               | -                       | -                        |
| 0.2313 | 400  | 0.583         | -               | -                       | -                        |
| 0.2892 | 500  | 0.6089        | -               | -                       | -                        |
| 0.3470 | 600  | 0.5719        | -               | -                       | -                        |
| 0.4049 | 700  | 0.6327        | -               | -                       | -                        |
| 0.4627 | 800  | 0.5983        | -               | -                       | -                        |
| 0.5205 | 900  | 0.5009        | -               | -                       | -                        |
| 0.5784 | 1000 | 0.6115        | -               | -                       | -                        |
| 0.6362 | 1100 | 0.5186        | -               | -                       | -                        |
| 0.6940 | 1200 | 0.5574        | -               | -                       | -                        |
| 0.7519 | 1300 | 0.5939        | -               | -                       | -                        |
| 0.8097 | 1400 | 0.5022        | -               | -                       | -                        |
| 0.8676 | 1500 | 0.5355        | -               | -                       | -                        |
| 0.9254 | 1600 | 0.532         | -               | -                       | -                        |
| 0.9832 | 1700 | 0.4894        | -               | -                       | -                        |
| 1.0    | 1729 | -             | 0.4545          | 0.5332                  | -                        |
| 1.0411 | 1800 | 0.4036        | -               | -                       | -                        |
| 1.0989 | 1900 | 0.4111        | -               | -                       | -                        |
| 1.1567 | 2000 | 0.3725        | -               | -                       | -                        |
| 1.2146 | 2100 | 0.4287        | -               | -                       | -                        |
| 1.2724 | 2200 | 0.3846        | -               | -                       | -                        |
| 1.3302 | 2300 | 0.387         | -               | -                       | -                        |
| 1.3881 | 2400 | 0.361         | -               | -                       | -                        |
| 1.4459 | 2500 | 0.4419        | -               | -                       | -                        |
| 1.5038 | 2600 | 0.3893        | -               | -                       | -                        |
| 1.5616 | 2700 | 0.4324        | -               | -                       | -                        |
| 1.6194 | 2800 | 0.3965        | -               | -                       | -                        |
| 1.6773 | 2900 | 0.4438        | -               | -                       | -                        |
| 1.7351 | 3000 | 0.3788        | -               | -                       | -                        |
| 1.7929 | 3100 | 0.4741        | -               | -                       | -                        |
| 1.8508 | 3200 | 0.27          | -               | -                       | -                        |
| 1.9086 | 3300 | 0.4175        | -               | -                       | -                        |
| 1.9665 | 3400 | 0.5599        | -               | -                       | -                        |
| 2.0    | 3458 | -             | 0.5981          | 0.5802                  | 0.2670                   |

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers and SoftmaxLoss

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}