---
language:
- id
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10330
- loss:MultipleNegativesRankingLoss
base_model: indobenchmark/indobert-base-p2
datasets: []
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
widget:
- source_sentence: Pura Ulun Danu terletak sekitar 56 kilometer dari Kota Denpasar.
  sentences:
  - Dalam tujuh bulan kehamilan, organ tubuh bayi sudah sempurna.
  - Dokter Adeline menjelaskan aturan-aturan agar diabetisi aman berpuasa.
  - Pura Ulun Danu terletak sekitar satu jam perjalanan dari Kota Denpasar.
- source_sentence: Di luar ujung barat laut, taiga dominan, mencakup bagian besar dari seluruh Siberia.
  sentences:
  - Banyak keraguan mengenai tanggal kelahiran Gaudapa.
  - Sebagian besar Siberia terletak di ujung barat laut.
  - Maia menyaksikan balapan tanpa alasan.
- source_sentence: Widodo Cahyono Putro adalah seorang pelatih dan pemain sepak bola legendaris Indonesia.
  sentences:
  - Ia berjanji untuk jatuh di lubang yang sama.
  - Pemain sepak bola legendaris pasti menjadi pelatih sepak bola.
  - Nazaruddin menegaskan bahwa mantan Wakil Ketua Komisi II DPR itu menerima uang dari proyek e-KTP sebesar $500 ribu.
- source_sentence: Salah satunya seorang lelaki yang sedang memakan permen karet yang dengan paksa dikeluarkan dari mulutnya.
  sentences:
  - Charles Leclerc gagal menjadi juara dunia F2.
  - Pendukung pembrontakan Cina sudah tidak ada.
  - Lelaki itu bukan salah satunya.
- source_sentence: Tumenggung Wirapraja setelah mangkat dimakamkan di Kebon Alas Warudoyong, Kecamatan Panumbangan, Kabupaten Ciamis.
  sentences:
  - Peristiwa Pemberontakan Besar di Minahasa memiliki dampak besar pada tentara Sekutu.
  - Di hari libur ini, Pengunjung semua taman nasional tidak dibebaskan biaya.
  - Tumenggung Wirapraja dikremasi setelah dipastikan mangkat dan abunya kemudian dilarungkan ke Pantai Laut Selatan.
pipeline_tag: sentence-similarity
model-index:
- name: SentenceTransformer based on indobenchmark/indobert-base-p2
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: sts dev
      type: sts-dev
    metrics:
    - type: pearson_cosine
      value: -0.05296221890135024
      name: Pearson Cosine
    - type: spearman_cosine
      value: -0.06107163627723088
      name: Spearman Cosine
    - type: pearson_manhattan
      value: -0.06399377304712585
      name: Pearson Manhattan
    - type: spearman_manhattan
      value: -0.06835801919486152
      name: Spearman Manhattan
    - type: pearson_euclidean
      value: -0.0642574675392147
      name: Pearson Euclidean
    - type: spearman_euclidean
      value: -0.06906447787846218
      name: Spearman Euclidean
    - type: pearson_dot
      value: -0.024528943319169508
      name: Pearson Dot
    - type: spearman_dot
      value: -0.024236369255517205
      name: Spearman Dot
    - type: pearson_max
      value: -0.024528943319169508
      name: Pearson Max
    - type: spearman_max
      value: -0.024236369255517205
      name: Spearman Max
---

# SentenceTransformer based on indobenchmark/indobert-base-p2

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2). It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
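As a quick illustration of the semantic-search use case, the sketch below ranks a small corpus against a query. The corpus sentences are reused from the widget examples above; the query string and the `top_k` choice are hypothetical.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Hvare/Athena-indobert-finetuned-indonli")

# Corpus sentences reused from the widget examples; the query is hypothetical.
corpus = [
    "Pura Ulun Danu terletak sekitar 56 kilometer dari Kota Denpasar.",
    "Dokter Adeline menjelaskan aturan-aturan agar diabetisi aman berpuasa.",
]
query = "Berapa jarak Pura Ulun Danu dari Denpasar?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# util.semantic_search returns, for each query, a ranked list of
# {"corpus_id": ..., "score": ...} dicts.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit["corpus_id"]], round(hit["score"], 4))
```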
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2)
- **Maximum Sequence Length:** 75 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Hvare/Athena-indobert-finetuned-indonli")
# Run inference
sentences = [
    'Tumenggung Wirapraja setelah mangkat dimakamkan di Kebon Alas Warudoyong, Kecamatan Panumbangan, Kabupaten Ciamis.',
    'Tumenggung Wirapraja dikremasi setelah dipastikan mangkat dan abunya kemudian dilarungkan ke Pantai Laut Selatan.',
    'Di hari libur ini, Pengunjung semua taman nasional tidak dibebaskan biaya.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

## Evaluation

### Metrics

#### Semantic Similarity

* Dataset: `sts-dev`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric             | Value       |
|:-------------------|:------------|
| pearson_cosine     | -0.053      |
| spearman_cosine    | -0.0611     |
| pearson_manhattan  | -0.064      |
| spearman_manhattan | -0.0684     |
| pearson_euclidean  | -0.0643     |
| spearman_euclidean | -0.0691    |
| pearson_dot        | -0.0245     |
| spearman_dot       | -0.0242     |
| pearson_max        | -0.0245     |
| **spearman_max**   | **-0.0242** |
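The scores above can be reproduced with the same evaluator class. The sketch below uses hypothetical sentence pairs and gold scores, since the actual `sts-dev` split is not bundled with this card; substitute the real pairs and scores to recompute the table.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Hvare/Athena-indobert-finetuned-indonli")

# Hypothetical pairs with gold similarity scores in [0, 1];
# replace these with the real sts-dev sentences and scores.
sentences1 = ["Kucing itu duduk di atas tikar.", "Dia pergi ke pasar pagi ini."]
sentences2 = ["Seekor kucing sedang duduk di tikar.", "Cuaca hari ini sangat panas."]
gold_scores = [0.9, 0.1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="sts-dev")
metrics = evaluator(model)  # dict of Pearson/Spearman scores, e.g. "sts-dev_spearman_cosine"
print(metrics)
```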
## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 10,330 training samples
* Columns: `sentence_0`, `sentence_1`, and `label`
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 | label |
  |:--------|:-----------|:-----------|:------|
  | type    | string     | string     | int   |
  | details |            |            |       |
* Samples:
  | sentence_0 | sentence_1 | label |
  |:-----------|:-----------|:------|
  | "Akan ada protes dan hal-hal lain, semua nya sudah direncanakan," ungkap oposisi kepada El Mundo. | Protes dan hal-hal lain sudah direncanakan. | 0 |
  | Tak jarang, bangun kesiangan pun jadi alasan untuk tak berolahraga. | Salah satu alasan tidak berolahraga adalah bangun kesiangan. | 0 |
  | Namun, saingannya Prabowo Subianto juga mendeklarasikan kemenangan, membuat orang Indonesia bingung. | Prabowo menerima bahwa Dia kalah. | 2 |
* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
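For reference, the loss configuration above corresponds to the following construction. This is a sketch, not the original training script; `MultipleNegativesRankingLoss` treats every other in-batch `sentence_1` as a negative for a given `sentence_0`, so the integer `label` column is not used by the loss itself.

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("indobenchmark/indobert-base-p2")

# In-batch-negatives loss with the parameters listed above:
# `scale` multiplies the similarity scores before the cross-entropy step.
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,  # "cos_sim" in the JSON block above
)
```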
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
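Putting the loss and the non-default hyperparameters together, a training run might look like the sketch below. The two-row dataset stands in for the actual 10,330-sample training set, `output_dir` is hypothetical, and `eval_strategy="steps"` is omitted because it would require an evaluation dataset or evaluator.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("indobenchmark/indobert-base-p2")

# Tiny stand-in for the 10,330-sample pair dataset described above.
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "Tak jarang, bangun kesiangan pun jadi alasan untuk tak berolahraga.",
        '"Akan ada protes dan hal-hal lain, semua nya sudah direncanakan," ungkap oposisi kepada El Mundo.',
    ],
    "sentence_1": [
        "Salah satu alasan tidak berolahraga adalah bangun kesiangan.",
        "Protes dan hal-hal lain sudah direncanakan.",
    ],
})

args = SentenceTransformerTrainingArguments(
    output_dir="athena-indobert-finetuned-indonli",  # hypothetical output path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```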
### Training Logs

| Epoch  | Step | Training Loss | sts-dev_spearman_max |
|:------:|:----:|:-------------:|:--------------------:|
| 0.0991 | 64   | -             | -0.0411              |
| 0.1981 | 128  | -             | -0.0426              |
| 0.2972 | 192  | -             | -0.0419              |
| 0.3963 | 256  | -             | -0.0425              |
| 0.4954 | 320  | -             | -0.0384              |
| 0.5944 | 384  | -             | -0.0260              |
| 0.6935 | 448  | -             | -0.0216              |
| 0.7740 | 500  | 0.0531        | -                    |
| 0.7926 | 512  | -             | -0.0243              |
| 0.8916 | 576  | -             | -0.0241              |
| 0.9907 | 640  | -             | -0.0242              |
| 1.0    | 646  | -             | -0.0242              |

### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.2
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```