SentenceTransformer based on jinaai/jina-embeddings-v3
This is a sentence-transformers model finetuned from jinaai/jina-embeddings-v3. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: jinaai/jina-embeddings-v3
- Maximum Sequence Length: 8194 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
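These properties can be verified directly after loading the model (a minimal sketch; passing trust_remote_code=True is an assumption here, included because the XLMRobertaLoRA backbone relies on custom modeling code and some environments require it; the flag is harmless if unneeded):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer(
    "ELVISIO/jina_embeddings_v3_finetuned_online_contrastive_imdb_2",
    trust_remote_code=True,  # assumption: the custom XLM-RoBERTa LoRA backbone may require it
)
print(model.get_sentence_embedding_dimension())  # 1024
print(model.max_seq_length)                      # 8194
print(model.similarity_fn_name)                  # cosine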
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
  (transformer): Transformer(
    (auto_model): XLMRobertaLoRA(
      (roberta): XLMRobertaModel(
        (embeddings): XLMRobertaEmbeddings(
          (word_embeddings): ParametrizedEmbedding(
            250002, 1024, padding_idx=1
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
          (token_type_embeddings): ParametrizedEmbedding(
            1, 1024
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
        )
        (emb_drop): Dropout(p=0.1, inplace=False)
        (emb_ln): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        (encoder): XLMRobertaEncoder(
          (layers): ModuleList(
            (0-23): 24 x Block(
              (mixer): MHA(
                (rotary_emb): RotaryEmbedding()
                (Wqkv): ParametrizedLinearResidual(
                  in_features=1024, out_features=3072, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
                (inner_attn): FlashSelfAttention(
                  (drop): Dropout(p=0.1, inplace=False)
                )
                (inner_cross_attn): FlashCrossAttention(
                  (drop): Dropout(p=0.1, inplace=False)
                )
                (out_proj): ParametrizedLinear(
                  in_features=1024, out_features=1024, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
              )
              (dropout1): Dropout(p=0.1, inplace=False)
              (drop_path1): StochasticDepth(p=0.0, mode=row)
              (norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
              (mlp): Mlp(
                (fc1): ParametrizedLinear(
                  in_features=1024, out_features=4096, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
                (fc2): ParametrizedLinear(
                  in_features=4096, out_features=1024, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
              )
              (dropout2): Dropout(p=0.1, inplace=False)
              (drop_path2): StochasticDepth(p=0.0, mode=row)
              (norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
            )
          )
        )
        (pooler): XLMRobertaPooler(
          (dense): ParametrizedLinear(
            in_features=1024, out_features=1024, bias=True
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
          (activation): Tanh()
        )
      )
    )
  )
  (pooler): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (normalizer): Normalize()
)
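The final two modules turn token embeddings into a sentence embedding: the Pooling module is configured for mean pooling over non-padding tokens (pooling_mode_mean_tokens: True), and Normalize rescales the result to unit length, so dot products equal cosine similarities. A minimal PyTorch sketch of the equivalent computation (the function name and shapes are illustrative, not part of the library API):

import torch

def mean_pool_and_normalize(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 1024); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)    # sum over non-padding tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)         # number of real tokens per sentence
    mean_pooled = summed / counts                    # mean pooling
    # L2-normalize so that dot product == cosine similarity
    return torch.nn.functional.normalize(mean_pooled, p=2, dim=1)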
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ELVISIO/jina_embeddings_v3_finetuned_online_contrastive_imdb_2")
# Run inference
sentences = [
"Saw this on cable back in the early 90's and loved it. Never saw it again until it showed up on cable again recently. Still find it a great Vietnam movie. Not sure why its not higher rated. I found everything about this film compelling. As a vet (not from Vietnam) I can relate to the situations brought by both Harris and De Niro. I can only imagine this film being more poignant now with our situation in Iraq. I wish this would be offered on cable more often for people to see. The human toll on our soldiers isn't left on the battlefield. Its brought home for the rest of there lives. And this film is one of many that brings that home in a very hard way. Excellent film.",
'positive positive positive positive',
'negative negative negative negative',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
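Given the label-anchor sentences used during finetuning ("positive positive positive positive" / "negative negative negative negative"; see Training Details below), one plausible way to use the model as a sentiment classifier is to compare a review against both anchors and pick the closer one. This is a sketch of that idea, not an official recipe:

# Continuing from the snippet above: classify a review by its nearest label anchor.
anchors = ["positive positive positive positive", "negative negative negative negative"]
anchor_embeddings = model.encode(anchors)

review_embedding = model.encode(["A moving, beautifully acted film."])
scores = model.similarity(review_embedding, anchor_embeddings)  # shape: (1, 2)
prediction = anchors[int(scores.argmax())].split()[0]           # "positive" or "negative"
print(prediction)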
Training Details
Training Dataset
Unnamed Dataset
- Size: 16,000 training samples
- Columns: sentence1, sentence2, and label
- Approximate statistics based on the first 1000 samples:

|  | sentence1 | sentence2 | label |
|---|---|---|---|
| type | string | string | float |
| details | min: 39 tokens, mean: 173.59 tokens, max: 291 tokens | min: 6 tokens, mean: 6.0 tokens, max: 6 tokens | min: 0.0, mean: 0.5, max: 1.0 |
- Samples:

| sentence1 | sentence2 | label |
|---|---|---|
| There are two kinds of 1950s musicals. First you have the glossy MGM productions with big names and great music. And then you have the minor league with a less famous cast, less famous music and second rate directors. 'The Girl Can't Help It' belongs to the latter category. Neither Tom Ewell or Edmond O'Brien became famous and Jayne Mansfield was famous for her... well, never mind. Seems like every decade has its share of Bo Dereks or Pamela Andersons. The plot itself is thin as a razorblade and one can't help suspect that it is mostly an attempt to sell records for Fats Domino, Little Richard or others of the 1950s rock acts that appear in the movie. If that music appeals to you this is worth watching. If not, don't bother. | negative negative negative negative | 1.0 |
| There are two kinds of 1950s musicals. First you have the glossy MGM productions with big names and great music. And then you have the minor league with a less famous cast, less famous music and second rate directors. 'The Girl Can't Help It' belongs to the latter category. Neither Tom Ewell or Edmond O'Brien became famous and Jayne Mansfield was famous for her... well, never mind. Seems like every decade has its share of Bo Dereks or Pamela Andersons. The plot itself is thin as a razorblade and one can't help suspect that it is mostly an attempt to sell records for Fats Domino, Little Richard or others of the 1950s rock acts that appear in the movie. If that music appeals to you this is worth watching. If not, don't bother. | positive positive positive positive | 0.0 |
| Thankfully as a student I have been able to watch "Diagnosis Murder" for a number of years now. It is basically about a doctor who solves murders with the help of his LAPD son, a young doctor and a pathologist. DM provided 8 seasons of exceptional entertainment. What made it different from the many other cop shows and worth watching many times over was its cast and quality of writing. The main cast gave good performances and Dick Van Dyke's entertainer roots shone through with the use of magic, dance and humor. The best aspects of DM was the fast pace, witty scripts and of course the toe tapping score. Sadly it has been unfairly compared to "Murder, She Wrote". DM is far superior boasting more difficult mysteries to solve and more variety. Now it is gone TV is a worse place. Gone are the days of feelgood, family friendly cop shows. Now there is just depressing 'gritty' ones. | positive positive positive positive | 1.0 |
- Loss: OnlineContrastiveLoss
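OnlineContrastiveLoss is the classic contrastive loss, restricted at each step to the hard pairs in the batch: positive pairs that are farther apart than the closest negative, and negative pairs that are closer than the farthest positive. A simplified sketch of the idea (edge cases such as batches lacking one pair type are omitted; 0.5 is the default margin in sentence-transformers):

import torch

def online_contrastive_loss(distances: torch.Tensor, labels: torch.Tensor, margin: float = 0.5) -> torch.Tensor:
    # distances: one distance per (sentence1, sentence2) pair; labels: 1.0 = similar, 0.0 = dissimilar
    pos = distances[labels == 1]
    neg = distances[labels == 0]
    # Keep only the hard cases: positives farther apart than the closest negative,
    # negatives closer together than the farthest positive.
    hard_pos = pos[pos > neg.min()]
    hard_neg = neg[neg < pos.max()]
    # Classic contrastive loss, computed on the hard pairs only.
    return hard_pos.pow(2).sum() + torch.relu(margin - hard_neg).pow(2).sum()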
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 3.0
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
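Since only the batch sizes deviate from the defaults, a run of this shape could plausibly be reproduced along the following lines. This is a hedged sketch: the IMDB source dataset, the 8,000-review subset, the anchor-pairing scheme, and the output_dir name are all inferred from the dataset samples and statistics above, not documented by the author.

from datasets import Dataset, load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import OnlineContrastiveLoss

# Assumed pair construction (inferred from the samples above): each review is
# paired with both label-anchor strings; label 1.0 marks the matching anchor.
imdb = load_dataset("imdb", split="train").shuffle(seed=42).select(range(8000))
anchors = {1: "positive positive positive positive", 0: "negative negative negative negative"}
pairs = [
    {"sentence1": ex["text"], "sentence2": text, "label": float(ex["label"] == lab)}
    for ex in imdb
    for lab, text in anchors.items()
]  # 8,000 reviews x 2 anchors = 16,000 pairs, matching the stated dataset size

model = SentenceTransformer("jinaai/jina-embeddings-v3", trust_remote_code=True)
args = SentenceTransformerTrainingArguments(
    output_dir="jina_embeddings_v3_finetuned_online_contrastive_imdb_2",  # hypothetical
    num_train_epochs=3,              # 16,000 / 64 = 250 steps per epoch -> 750 steps, as logged
    per_device_train_batch_size=64,  # the only non-default hyperparameters
    per_device_eval_batch_size=64,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=Dataset.from_list(pairs),
    loss=OnlineContrastiveLoss(model),
)
trainer.train()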
Training Logs
Epoch | Step | Training Loss |
---|---|---|
0.2 | 50 | 2.9875 |
0.4 | 100 | 0.9284 |
0.6 | 150 | 0.7744 |
0.8 | 200 | 0.7551 |
1.0 | 250 | 0.6899 |
1.2 | 300 | 0.6892 |
1.4 | 350 | 0.6208 |
1.6 | 400 | 0.6831 |
1.8 | 450 | 0.6417 |
2.0 | 500 | 0.7181 |
2.2 | 550 | 0.7638 |
2.4 | 600 | 0.7152 |
2.6 | 650 | 0.6103 |
2.8 | 700 | 0.6801 |
3.0 | 750 | 0.5981 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu121
- Accelerate: 1.1.1
- Datasets: 2.21.0
- Tokenizers: 0.20.3
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}