SentenceTransformer based on indobenchmark/indobert-base-p2

This is a sentence-transformers model finetuned from indobenchmark/indobert-base-p2 on the afaji/indonli dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: indobenchmark/indobert-base-p2
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: afaji/indonli
  • Language: id

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
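
For readers who want to see how this module stack maps onto code, here is a minimal sketch of assembling the same architecture by hand with the models API. In practice, loading the published checkpoint with SentenceTransformer("cassador/2bs16lr2") is equivalent and simpler.

from sentence_transformers import SentenceTransformer, models

# Transformer module: IndoBERT backbone, truncating inputs at 512 tokens
word_embedding_model = models.Transformer(
    "indobenchmark/indobert-base-p2", max_seq_length=512
)
# Pooling module: mean pooling over token embeddings (768-dimensional output)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])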

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("cassador/2bs16lr2")
# Run inference
sentences = [
    'Tujuan dari acara dengar pendapat CRTC adalah untuk mengumpulkan respons dari pada pemangku kepentingan industri ini dan dari masyarakat umum.',
    'Masyarakat umum dilibatkan untuk memberikan respon dalam acara dengar pendapat CRTC.',
    'Pembuat Rooms hanya bisa membuat meeting yang terbuka.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
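
Because model.similarity defaults to cosine similarity, the same embeddings can also feed directly into semantic search. A small sketch, reusing model, sentences, and embeddings from the block above (the query string is an illustrative example, not from the card):

from sentence_transformers import util

query_embedding = model.encode("Siapa yang memberikan respons dalam dengar pendapat CRTC?")
# Rank the three sentences above against the query by cosine similarity
hits = util.semantic_search(query_embedding, embeddings, top_k=2)
print(hits[0])
# e.g. [{'corpus_id': 1, 'score': ...}, {'corpus_id': 0, 'score': ...}]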

Evaluation

Metrics

Semantic Similarity (sts-dev)

Metric              Value
pearson_cosine      0.6041
spearman_cosine     0.5791
pearson_manhattan   0.5899
spearman_manhattan  0.5683
pearson_euclidean   0.5942
spearman_euclidean  0.5703
pearson_dot         0.5987
spearman_dot        0.5879
pearson_max         0.6041
spearman_max        0.5879

Semantic Similarity (sts-test)

Metric              Value
pearson_cosine      0.284
spearman_cosine     0.2627
pearson_manhattan   0.2433
spearman_manhattan  0.2476
pearson_euclidean   0.2489
spearman_euclidean  0.2485
pearson_dot         0.302
spearman_dot        0.291
pearson_max         0.302
spearman_max        0.291
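
Metrics like these are typically produced with the EmbeddingSimilarityEvaluator, which correlates the model's similarity scores with gold labels. A minimal sketch of how such numbers are computed; the sentence pairs and gold scores here are placeholders, not the actual dev/test data:

from sentence_transformers import SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Placeholder pairs with gold similarity scores in [0, 1]
dev_evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["Masyarakat umum dilibatkan.", "Rapat itu tertutup."],
    sentences2=["Publik ikut memberikan respons.", "Cuaca hari ini cerah."],
    scores=[1.0, 0.0],
    main_similarity=SimilarityFunction.COSINE,
    name="sts-dev",
)
results = dev_evaluator(model)  # dict of pearson/spearman metrics per distance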

Training Details

Training Dataset

afaji/indonli

  • Dataset: afaji/indonli
  • Size: 6,915 training samples
  • Columns: premise, hypothesis, and label
  • Approximate statistics based on the first 1000 samples:

                premise               hypothesis            label
    type        string                string                int
    details     min: 12 tokens        min: 6 tokens         0: ~51.00%
                mean: 29.26 tokens    mean: 12.13 tokens    1: ~49.00%
                max: 135 tokens       max: 36 tokens
  • Samples:
    • premise: Presiden Joko Widodo (Jokowi) menyampaikan prediksi bahwa wabah virus Corona (COVID-19) di Indonesia akan selesai akhir tahun ini.
      hypothesis: Prediksi akhir wabah tidak disampaikan Jokowi.
      label: 0
    • premise: Meski biasanya hanya digunakan di fasilitas kesehatan, saat ini masker dan sarung tangan sekali pakai banyak dipakai di tingkat rumah tangga.
      hypothesis: Masker sekali pakai banyak dipakai di tingkat rumah tangga.
      label: 1
    • premise: Seperti namanya, paket internet sahur Telkomsel ini ditujukan bagi pengguna yang menginginkan kuota ekstra, untuk menemani momen sahur sepanjang bulan puasa.
      hypothesis: Paket internet sahur tidak ditujukan untuk saat sahur.
      label: 0
  • Loss: SoftmaxLoss
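
Putting the dataset and loss together, a sketch of the corresponding setup. The split name and the reduction to two label classes (IndoNLI natively has three) are assumptions based on the statistics above:

from datasets import load_dataset
from sentence_transformers import losses

# Columns: premise, hypothesis, label — as described above
train_dataset = load_dataset("afaji/indonli", split="train")

# SoftmaxLoss with two classes, matching the 0/1 labels in the statistics
train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),  # 768
    num_labels=2,
)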

Evaluation Dataset

afaji/indonli

  • Dataset: afaji/indonli
  • Size: 1,556 evaluation samples
  • Columns: premise, hypothesis, and label
  • Approximate statistics based on the first 1000 samples:

                premise               hypothesis            label
    type        string                string                int
    details     min: 9 tokens         min: 6 tokens         0: ~47.90%
                mean: 28.07 tokens    mean: 12.15 tokens    1: ~52.10%
                max: 179 tokens       max: 25 tokens
  • Samples:
    • premise: Manuskrip tersebut berisi tiga catatan yang menceritakan bagaimana peristiwa jatuhnya meteorit serta laporan kematian akibat kejadian tersebut seperti dilansir dari Science Alert, Sabtu (25/4/2020).
      hypothesis: Manuskrip tersebut tidak mencatat laporan kematian.
      label: 0
    • premise: Dilansir dari Business Insider, menurut observasi dari Mauna Loa Observatory di Hawaii pada karbon dioksida (CO2) di level mencapai 410 ppm tidak langsung memberikan efek pada pernapasan, karena tubuh manusia juga masih membutuhkan CO2 dalam kadar tertentu.
      hypothesis: Tidak ada observasi yang pernah dilansir oleh Business Insider.
      label: 0
    • premise: Seorang wanita asal New York mengaku sangat benci air putih.
      hypothesis: Tidak ada orang dari New York yang membenci air putih.
      label: 0
  • Loss: SoftmaxLoss

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 2
  • warmup_ratio: 0.1
  • fp16: True
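
Expressed as Sentence Transformers v3 training arguments, the non-default values above would look roughly like this (output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    eval_strategy="epoch",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=2,
    warmup_ratio=0.1,
    fp16=True,
)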

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
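
Tying the pieces together, a sketch of a training run with the v3 Trainer API, reusing model, args, and train_loss from the sketches above. The evaluation split name is an assumption:

from datasets import load_dataset
from sentence_transformers import SentenceTransformerTrainer

eval_dataset = load_dataset("afaji/indonli", split="validation")

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=train_loss,
)
trainer.train()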

Training Logs

Epoch   Step  Training Loss  Validation Loss  sts-dev_spearman_cosine  sts-test_spearman_cosine
0       0     -              -                0.1277                   -
0.2309  100   0.5892         -                -                        -
0.4619  200   0.5039         -                -                        -
0.6928  300   0.4807         -                -                        -
0.9238  400   0.4558         -                -                        -
1.0     433   -              0.4203           0.5319                   -
1.1547  500   0.3877         -                -                        -
1.3857  600   0.3367         -                -                        -
1.6166  700   0.3359         -                -                        -
1.8476  800   0.3232         -                -                        -
2.0     866   -              0.4346           0.5791                   0.2627

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers and SoftmaxLoss

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}