SentenceTransformer based on jinaai/jina-embeddings-v2-base-code

This is a sentence-transformers model finetuned from jinaai/jina-embeddings-v2-base-code. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: jinaai/jina-embeddings-v2-base-code
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
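
The same stack can be rebuilt by hand from the sentence_transformers.models building blocks, which makes the mean-pooling setup explicit. This is a minimal sketch rather than the saved configuration; the trust_remote_code model argument is an assumption, needed because the Jina v2 backbone ships custom modeling code.

from sentence_transformers import SentenceTransformer, models

# Transformer module: Jina v2 code backbone with an 8192-token window
word_embedding = models.Transformer(
    "jinaai/jina-embeddings-v2-base-code",
    max_seq_length=8192,
    model_args={"trust_remote_code": True},  # assumption: required for the custom Jina BERT code
)

# Pooling module: mean over token embeddings -> one 768-dimensional vector per input
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)

model = SentenceTransformer(modules=[word_embedding, pooling])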

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Nutanix/jina-embeddings-v2-base-code-mbpp")
# Run inference
sentences = [
    'Write a function to find sum and average of first n natural numbers.',
    'def sum_average(number):\r\n total = 0\r\n for value in range(1, number + 1):\r\n    total = total + value\r\n average = total / number\r\n return (total,average)',
    'def long_words(n, str):\r\n    word_len = []\r\n    txt = str.split(" ")\r\n    for x in txt:\r\n        if len(x) > n:\r\n            word_len.append(x)\r\n    return word_len\t',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
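
Because prompts and Python implementations share the same embedding space, a natural follow-up is retrieval: rank candidate code snippets against a natural-language query. A minimal sketch that reuses the model and the sentences list from the example above (the query is illustrative):

# Rank the two code snippets against a natural-language query
query = "Write a function to find sum and average of first n natural numbers."
candidates = sentences[1:]  # the two code snippets from the example above

query_embedding = model.encode([query])
candidate_embeddings = model.encode(candidates)

scores = model.similarity(query_embedding, candidate_embeddings)  # cosine similarities, shape [1, 2]
best = scores.argmax().item()
print(candidates[best])  # ideally the sum_average implementation ranks first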

Evaluation

Metrics

Triplet

Metric               Value
cosine_accuracy      0.4795
dot_accuracy         0.3189
manhattan_accuracy   0.4905
euclidean_accuracy   0.4795
max_accuracy         0.4905
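
These figures come from a triplet evaluation: a prediction counts as correct when the anchor is closer to the positive than to the negative under the given distance function. A hedged sketch of how such scores can be reproduced with the library's TripletEvaluator; the triplets below are placeholders, not the actual evaluation split:

from sentence_transformers.evaluation import TripletEvaluator

# Placeholder triplets: anchor prompts, matching code, non-matching code
anchors   = ["Write a function to find sum and average of first n natural numbers."]
positives = ["def sum_average(n):\n    total = sum(range(1, n + 1))\n    return total, total / n"]
negatives = ["def long_words(n, s):\n    return [w for w in s.split() if len(w) > n]"]

evaluator = TripletEvaluator(anchors, positives, negatives, name="sts-dev")
results = evaluator(model)  # dict of accuracies for cosine, dot, Manhattan, and Euclidean distances
print(results)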

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
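
The citation section below points to TripletLoss as the training objective. A minimal sketch of how a comparable run could be set up with these non-default values via the SentenceTransformerTrainer API; the triplet dataset construction and the output path are assumptions, since the card does not include the data-preparation code:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import TripletLoss
from sentence_transformers.training_args import SentenceTransformerTrainingArguments, BatchSamplers

model = SentenceTransformer("jinaai/jina-embeddings-v2-base-code", trust_remote_code=True)

# Assumption: (prompt, matching code, non-matching code) triplets, e.g. built from MBPP
train_dataset = Dataset.from_dict({
    "anchor": ["Write a function to find sum and average of first n natural numbers."],
    "positive": ["def sum_average(n):\n    total = sum(range(1, n + 1))\n    return total, total / n"],
    "negative": ["def long_words(n, s):\n    return [w for w in s.split() if len(w) > n]"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="jina-embeddings-v2-base-code-mbpp",  # assumption
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=TripletLoss(model),
)
trainer.train()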

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss sts-dev_max_accuracy
0 0 - 0.5027
0.0050 100 5.0 -
0.0101 200 5.0 -
0.0151 300 4.9999 -
0.0202 400 5.0001 -
0.0252 500 5.0 -
0.0302 600 5.0 -
0.0353 700 4.9999 -
0.0403 800 5.0001 -
0.0453 900 5.0 -
0.0504 1000 5.0001 -
0.0554 1100 4.9999 -
0.0605 1200 5.0 -
0.0655 1300 5.0 -
0.0705 1400 4.9999 -
0.0756 1500 5.0 -
0.0806 1600 4.9999 -
0.0857 1700 5.0001 -
0.0907 1800 5.0001 -
0.0957 1900 5.0 -
0.1008 2000 5.0001 -
0.1058 2100 5.0 -
0.1109 2200 4.9999 -
0.1159 2300 4.9999 -
0.1209 2400 5.0 -
0.1260 2500 5.0 -
0.1310 2600 5.0001 -
0.1360 2700 4.9999 -
0.1411 2800 5.0001 -
0.1461 2900 5.0001 -
0.1512 3000 5.0 -
0.1562 3100 5.0001 -
0.1612 3200 4.9999 -
0.1663 3300 5.0001 -
0.1713 3400 4.9999 -
0.1764 3500 4.9999 -
0.1814 3600 4.9999 -
0.1864 3700 5.0 -
0.1915 3800 4.9999 -
0.1965 3900 5.0 -
0.2016 4000 5.0 -
0.2066 4100 5.0 -
0.2116 4200 5.0002 -
0.2167 4300 5.0002 -
0.2217 4400 5.0 -
0.2267 4500 5.0001 -
0.2318 4600 5.0001 -
0.2368 4700 5.0001 -
0.2419 4800 4.9998 -
0.2469 4900 5.0 -
0.2519 5000 4.9999 -
0.2570 5100 4.9999 -
0.2620 5200 5.0001 -
0.2671 5300 5.0001 -
0.2721 5400 4.9999 -
0.2771 5500 5.0 -
0.2822 5600 5.0002 -
0.2872 5700 5.0002 -
0.2923 5800 4.9999 -
0.2973 5900 5.0 -
0.3023 6000 5.0001 -
0.3074 6100 4.9999 -
0.3124 6200 4.9997 -
0.3174 6300 4.9999 -
0.3225 6400 5.0 -
0.3275 6500 4.9998 -
0.3326 6600 5.0 -
0.3376 6700 4.9998 -
0.3426 6800 5.0001 -
0.3477 6900 5.0002 -
0.3527 7000 5.0 -
0.3578 7100 4.9998 -
0.3628 7200 5.0003 -
0.3678 7300 5.0 -
0.3729 7400 5.0002 -
0.3779 7500 5.0 -
0.3829 7600 5.0001 -
0.3880 7700 5.0002 -
0.3930 7800 5.0001 -
0.3981 7900 5.0001 -
0.4031 8000 5.0 -
0.4081 8100 4.9998 -
0.4132 8200 4.9999 -
0.4182 8300 5.0001 -
0.4233 8400 5.0001 -
0.4283 8500 5.0 -
0.4333 8600 5.0002 -
0.4384 8700 5.0001 -
0.4434 8800 5.0 -
0.4485 8900 4.9996 -
0.4535 9000 4.9999 -
0.4585 9100 5.0 -
0.4636 9200 4.9999 -
0.4686 9300 4.9999 -
0.4736 9400 4.9998 -
0.4787 9500 5.0001 -
0.4837 9600 4.9998 -
0.4888 9700 4.9999 -
0.4938 9800 5.0 -
0.4988 9900 4.9998 -
0.5039 10000 5.0 -
0.5089 10100 5.0002 -
0.5140 10200 5.0003 -
0.5190 10300 4.9998 -
0.5240 10400 4.9999 -
0.5291 10500 5.0 -
0.5341 10600 4.9999 -
0.5392 10700 5.0 -
0.5442 10800 5.0001 -
0.5492 10900 4.9999 -
0.5543 11000 5.0 -
0.5593 11100 4.9999 -
0.5643 11200 5.0 -
0.5694 11300 4.9999 -
0.5744 11400 4.9997 -
0.5795 11500 5.0002 -
0.5845 11600 4.9999 -
0.5895 11700 5.0001 -
0.5946 11800 5.0001 -
0.5996 11900 5.0004 -
0.6047 12000 4.9998 -
0.6097 12100 5.0002 -
0.6147 12200 4.9998 -
0.6198 12300 5.0001 -
0.6248 12400 5.0001 -
0.6298 12500 5.0001 -
0.6349 12600 4.9999 -
0.6399 12700 5.0001 -
0.6450 12800 4.9999 -
0.6500 12900 5.0001 -
0.6550 13000 4.9999 -
0.6601 13100 5.0002 -
0.6651 13200 5.0001 -
0.6702 13300 5.0002 -
0.6752 13400 4.9997 -
0.6802 13500 5.0001 -
0.6853 13600 4.9996 -
0.6903 13700 4.9999 -
0.6954 13800 5.0002 -
0.7004 13900 4.9997 -
0.7054 14000 5.0 -
0.7105 14100 5.0001 -
0.7155 14200 5.0001 -
0.7205 14300 4.9999 -
0.7256 14400 4.9999 -
0.7306 14500 4.9998 -
0.7357 14600 5.0 -
0.7407 14700 5.0002 -
0.7457 14800 5.0001 -
0.7508 14900 4.9998 -
0.7558 15000 5.0002 -
0.7609 15100 5.0002 -
0.7659 15200 5.0 -
0.7709 15300 5.0002 -
0.7760 15400 5.0 -
0.7810 15500 5.0001 -
0.7861 15600 5.0 -
0.7911 15700 5.0004 -
0.7961 15800 5.0 -
0.8012 15900 5.0001 -
0.8062 16000 5.0003 -
0.8112 16100 4.9999 -
0.8163 16200 5.0 -
0.8213 16300 4.9999 -
0.8264 16400 5.0 -
0.8314 16500 4.9999 -
0.8364 16600 4.9998 -
0.8415 16700 4.9998 -
0.8465 16800 5.0002 -
0.8516 16900 4.9999 -
0.8566 17000 4.9999 -
0.8616 17100 4.9997 -
0.8667 17200 5.0001 -
0.8717 17300 4.9999 -
0.8768 17400 5.0001 -
0.8818 17500 4.9999 -
0.8868 17600 5.0001 -
0.8919 17700 5.0001 -
0.8969 17800 5.0001 -
0.9019 17900 4.9996 -
0.9070 18000 5.0001 -
0.9120 18100 4.9997 -
0.9171 18200 5.0001 -
0.9221 18300 4.9998 -
0.9271 18400 4.9997 -
0.9322 18500 4.9999 -
0.9372 18600 5.0001 -
0.9423 18700 5.0004 -
0.9473 18800 4.9997 -
0.9523 18900 4.9999 -
0.9574 19000 5.0001 -
0.9624 19100 4.9999 -
0.9674 19200 5.0 -
0.9725 19300 4.9999 -
0.9775 19400 4.9999 -
0.9826 19500 4.9999 -
0.9876 19600 4.9998 -
0.9926 19700 5.0 -
0.9977 19800 4.9999 -
1.0 19846 - 0.4905

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.0.1
  • Transformers: 4.40.0
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.33.0
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification}, 
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}