SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a sentence-transformers model fine-tuned from Snowflake/snowflake-arctic-embed-l. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-l
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
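The same three-module stack can be rebuilt by hand from sentence-transformers building blocks. A minimal sketch (module classes are from the library; it loads the base model rather than this fine-tune):

```python
from sentence_transformers import SentenceTransformer, models

# (0) Transformer backbone (BertModel) with a 512-token window
word_embedding = models.Transformer(
    "Snowflake/snowflake-arctic-embed-l", max_seq_length=512
)
# (1) CLS-token pooling, matching pooling_mode_cls_token=True above
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(), pooling_mode="cls"
)
# (2) L2 normalization as the final module
model = SentenceTransformer(modules=[word_embedding, pooling, models.Normalize()])
```

Because of the final Normalize module, dot-product and cosine-similarity scores coincide for these embeddings.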
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("don-unagi/finetuned_arctic_ft_kg")
# Run inference
sentences = [
'What additional ingredients are suggested to increase protein content in the context?',
'cocoa and maybe some ground flax or whatever is lying around) for an extra 40 grams of protein.',
'Thanks for this timely article! In the midst of the March Challenge; was trying to determine the next item to tackle- and groceries was it! How’d you know it was $1000? Hmmm….psychic.\nI FINALLY updated all the spending on Quicken last month to make myself stare it in the face. No surprises; not ugly, but not very pretty either. The most valuable outcome of the exercise was showing my husband that his hard efforts are appreciated, and I’m stepping up!',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
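For retrieval, the usual pattern is to embed a query and a candidate corpus separately and rank candidates by similarity. A small sketch with made-up strings, reusing the model loaded above:

```python
# Hypothetical query and corpus, for illustration only
query = "How can a family cut its grocery spending?"
corpus = [
    "Killing your $1000 Grocery Bill",
    "cocoa and maybe some ground flax for an extra 40 grams of protein.",
    "The weather was pleasant for most of March.",
]

# Encode query and corpus separately, then score every query-document pair
scores = model.similarity(model.encode([query]), model.encode(corpus))  # shape [1, 3]
best = scores.argmax().item()
print(corpus[best])  # expected: the grocery-bill passage
```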
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.7582 |
cosine_accuracy@3 | 0.9121 |
cosine_accuracy@5 | 0.9451 |
cosine_accuracy@10 | 0.9725 |
cosine_precision@1 | 0.7582 |
cosine_precision@3 | 0.304 |
cosine_precision@5 | 0.189 |
cosine_precision@10 | 0.0973 |
cosine_recall@1 | 0.7582 |
cosine_recall@3 | 0.9121 |
cosine_recall@5 | 0.9451 |
cosine_recall@10 | 0.9725 |
cosine_ndcg@10 | 0.8709 |
cosine_mrr@10 | 0.8376 |
cosine_map@100 | 0.8396 |
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.66 |
cosine_accuracy@3 | 0.76 |
cosine_accuracy@5 | 0.88 |
cosine_accuracy@10 | 0.9 |
cosine_precision@1 | 0.66 |
cosine_precision@3 | 0.2533 |
cosine_precision@5 | 0.176 |
cosine_precision@10 | 0.09 |
cosine_recall@1 | 0.66 |
cosine_recall@3 | 0.76 |
cosine_recall@5 | 0.88 |
cosine_recall@10 | 0.9 |
cosine_ndcg@10 | 0.7736 |
cosine_mrr@10 | 0.7329 |
cosine_map@100 | 0.7377 |
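Scores like these come from the library's InformationRetrievalEvaluator, which takes query and corpus dictionaries plus a query-to-relevant-documents mapping. A minimal sketch with hypothetical IDs and texts (the actual held-out evaluation sets are not published with this card):

```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Toy stand-ins for the real held-out question/context pairs
queries = {"q1": "What additional ingredients are suggested to increase protein content?"}
corpus = {
    "d1": "cocoa and maybe some ground flax for an extra 40 grams of protein.",
    "d2": "Killing your $1000 Grocery Bill",
}
relevant_docs = {"q1": {"d1"}}  # q1's only relevant document is d1

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)  # dict of metrics, e.g. cosine_ndcg@10
print(results)
```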
Training Details
Training Dataset
Unnamed Dataset
- Size: 100 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 100 samples:

 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 8 tokens, mean: 16.78 tokens, max: 35 tokens | min: 13 tokens, mean: 132.1 tokens, max: 195 tokens |
- Samples (a sketch of the dataset as code follows the table):

sentence_0 | sentence_1 |
---|---|
What is the significance of the date Mar 29, 2012, in relation to grocery expenses? | Killing your $1000 Grocery Bill Home Media Contact Email RSS Start Here About Random MMM Recommends Forum MMM Classics Mr. Money Mustache View: Fancy Magazine |
Wut do u think about spendin eighty dolars a week on food for a family? | “Eighty dollars a week on food for the three of you? That’s IT??”, said a friend, “We spend more than three times that amount!!” “Whoa”, I replied, “I guess I’m not as spendy as I thought”. Of course, the person telling me about her high food bill was more of a typical high-income spender in many ways. Her family also took out loans to buy new cars, had at least one $2500 carbon fiber road bike gleaming in the garage, and hired out the household chores to allow them to conveniently work a double-career-with-kids while still taking plenty of short vacations involving air travel. Looking back, I probably could have predicted a non-Mustachian grocery bill. |
What factors contribute to the varying cost of living in the United States, and how can individuals make choices to manage their spending effectively? | But the experience still reminded me of the amazing variety of spending levels we all have available to us here in the United States. It is simultaneously one of the cheapest industrialized countries in the world to live in, and the most expensive. It all depends on the choices you make in your shopping, because everything in the world is available right here for your buying convenience. |
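Since the training data is plain (sentence_0, sentence_1) pairs, it can be held in an ordinary datasets.Dataset. A sketch with placeholder rows drawn from the samples above (the real set has 100 such pairs):

```python
from datasets import Dataset

# Placeholder rows; the full dataset holds 100 question/context pairs
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "What is the significance of the date Mar 29, 2012, in relation to grocery expenses?",
    ],
    "sentence_1": [
        "Killing your $1000 Grocery Bill",
    ],
})
```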
- Loss: MatryoshkaLoss with these parameters: {"loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [768, 512, 256, 128, 64], "matryoshka_weights": [1, 1, 1, 1, 1], "n_dims_per_step": -1}
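In code, that configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss; a sketch using the parameters listed above:

```python
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

base_loss = MultipleNegativesRankingLoss(model)  # in-batch negatives over the pairs
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # train all listed dimensionalities at every step
)
```

MatryoshkaLoss keeps the leading dimensions useful on their own, so the embeddings can later be truncated (for example by loading with SentenceTransformer(..., truncate_dim=256)) to trade quality for size.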
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- num_train_epochs: 5
- multi_dataset_batch_sampler: round_robin
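These map directly onto SentenceTransformerTrainingArguments. A minimal training sketch (output_dir is a placeholder) that reuses the model, dataset, loss, and evaluator sketched in earlier sections:

```python
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned_arctic",  # placeholder path
    num_train_epochs=5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,              # the base model from the architecture sketch
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=evaluator,      # supplies eval scores for eval_strategy="steps"
)
trainer.train()
```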
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
The log below contains two blocks of epoch-level scores, one per evaluation set; their final cosine_ndcg@10 values (0.8709 and 0.7736) match the metric tables above.
Epoch | Step | cosine_ndcg@10 |
---|---|---|
1.0 | 10 | 0.8684 |
2.0 | 20 | 0.8698 |
3.0 | 30 | 0.8699 |
4.0 | 40 | 0.8706 |
5.0 | 50 | 0.8709 |
1.0 | 5 | 0.7269 |
2.0 | 10 | 0.7437 |
3.0 | 15 | 0.7539 |
4.0 | 20 | 0.7727 |
5.0 | 25 | 0.7736 |
Framework Versions
- Python: 3.13.1
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.6.0
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}