SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-m
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
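The stack uses CLS-token pooling followed by a `Normalize()` module, so every embedding is unit-length and cosine similarity coincides with the dot product (which is why the `cosine_*` and `dot_*` metrics below are identical). A minimal sketch to verify this:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Mr-Cool/midterm-finetuned-embedding")
emb = model.encode(["What are the characteristics of trustworthy AI?"])

# Normalize() makes each vector unit-length, so dot product == cosine similarity.
print(np.linalg.norm(emb[0]))  # ~1.0
```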
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Mr-Cool/midterm-finetuned-embedding")
# Run inference
sentences = [
    'What processes should be updated for GAI acquisition and procurement vendor assessments?',
    'Inventory all third-party entities with access to organizational content and \\\\\nestablish approved GAI technology and service provider lists. \\\\\n\\end{tabular} & \\begin{tabular}{l}\nValue Chain and Component \\\\\nIntegration \\\\\n\\end{tabular} \\\\\n\\hline\nGV-6.1-008 & \\begin{tabular}{l}\nMaintain records of changes to content made by third parties to promote content \\\\\nprovenance, including sources, timestamps, metadata. \\\\\n\\end{tabular} & \\begin{tabular}{l}\nInformation Integrity; Value Chain \\\\\nand Component Integration; \\\\\nIntellectual Property \\\\\n\\end{tabular} \\\\\n\\hline\nGV-6.1-009 & \\begin{tabular}{l}\nUpdate and integrate due diligence processes for GAI acquisition and \\\\\nprocurement vendor assessments to include intellectual property, data privacy, \\\\\nsecurity, and other risks. For example, update processes to: Address solutions that \\\\\nmay rely on embedded GAI technologies; Address ongoing monitoring, \\\\\nassessments, and alerting, dynamic risk assessments, and real-time reporting \\\\',
    'Evaluation data; Ethical considerations; Legal and regulatory requirements. \\\\\n\\end{tabular} & \\begin{tabular}{l}\nInformation Integrity; Harmful Bias \\\\\nand Homogenization \\\\\n\\end{tabular} \\\\\n\\hline\nAI Actor Tasks: Al Deployment, Al Impact Assessment, Domain Experts, End-Users, Operation and Monitoring, TEVV & & \\\\\n\\hline\n\\end{tabular}\n\\end{center}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
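Beyond pairwise similarity, the model can rank candidate passages for a query. A minimal retrieval sketch (the query and passages here are illustrative, not drawn from the training set):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Mr-Cool/midterm-finetuned-embedding")

query = "How should organizations manage third-party GAI vendors?"
passages = [
    "Update due diligence processes for GAI acquisition and procurement vendor assessments.",
    "The characteristics of trustworthy AI are integrated into organizational policies.",
    "An unrelated passage about office furniture.",
]

# Encode the query and the candidate passages separately.
q_emb = model.encode([query])
p_emb = model.encode(passages)

# similarity() uses the model's configured function (cosine); shape: [1, 3].
scores = model.similarity(q_emb, p_emb)
best = scores.argmax().item()
print(passages[best], scores[0, best].item())
```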
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.8851 |
cosine_accuracy@3 | 0.954 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.8851 |
cosine_precision@3 | 0.318 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.0246 |
cosine_recall@3 | 0.0265 |
cosine_recall@5 | 0.0278 |
cosine_recall@10 | 0.0278 |
cosine_ndcg@10 | 0.2082 |
cosine_mrr@10 | 0.928 |
cosine_map@100 | 0.0258 |
dot_accuracy@1 | 0.8851 |
dot_accuracy@3 | 0.954 |
dot_accuracy@5 | 1.0 |
dot_accuracy@10 | 1.0 |
dot_precision@1 | 0.8851 |
dot_precision@3 | 0.318 |
dot_precision@5 | 0.2 |
dot_precision@10 | 0.1 |
dot_recall@1 | 0.0246 |
dot_recall@3 | 0.0265 |
dot_recall@5 | 0.0278 |
dot_recall@10 | 0.0278 |
dot_ndcg@10 | 0.2082 |
dot_mrr@10 | 0.928 |
dot_map@100 | 0.0258 |
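These figures were produced with Sentence Transformers' `InformationRetrievalEvaluator`, which takes query and corpus dictionaries plus a mapping of relevant document IDs. A minimal sketch of how such an evaluation is wired up, with hypothetical query and corpus entries:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Mr-Cool/midterm-finetuned-embedding")

# Hypothetical evaluation data: query IDs -> text, corpus IDs -> text,
# and the set of relevant corpus IDs for each query.
queries = {"q1": "What are the characteristics of trustworthy AI?"}
corpus = {
    "d1": "GOVERN 1.2: The characteristics of trustworthy AI are integrated "
          "into organizational policies, processes, procedures, and practices.",
    "d2": "An unrelated passage.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)  # dict of metrics, e.g. accuracy@k, precision@k, MAP@100
print(results)
```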
Training Details
Training Dataset
Unnamed Dataset
- Size: 678 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|---|---|---|
| type | string | string |
| details | min: 7 tokens, mean: 18.37 tokens, max: 36 tokens | min: 7 tokens, mean: 188.5 tokens, max: 396 tokens |
- Samples:
| sentence_0 | sentence_1 |
|---|---|
| What are the characteristics of trustworthy AI? | GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. |
| How are the characteristics of trustworthy AI integrated into organizational policies? | GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. |
| Why is it important to integrate trustworthy AI characteristics into organizational processes? | GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. |
- Loss: MatryoshkaLoss with these parameters:

```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
```
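Because MatryoshkaLoss trains the leading sub-dimensions (512, 256, 128, 64) to work as standalone embeddings, quality can be traded for speed and storage by truncating. A minimal sketch using the `truncate_dim` option (available since Sentence Transformers 2.7, so covered by the 3.0.1 version listed below):

```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings.
model = SentenceTransformer("Mr-Cool/midterm-finetuned-embedding", truncate_dim=256)
emb = model.encode(["trustworthy AI"])
print(emb.shape)  # (1, 256)
```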
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 20
- per_device_eval_batch_size: 20
- num_train_epochs: 5
- multi_dataset_batch_sampler: round_robin
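A minimal sketch of how a comparable run could be launched with these overrides, using the Sentence Transformers v3 trainer API. The single training pair and output directory are hypothetical, and the step-based evaluation (the InformationRetrievalEvaluator runs shown in the training logs below) is omitted for brevity:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# Hypothetical (question, passage) pair; the real dataset has 678 such rows.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What are the characteristics of trustworthy AI?"],
    "sentence_1": ["GOVERN 1.2: The characteristics of trustworthy AI are integrated "
                   "into organizational policies, processes, procedures, and practices."],
})

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss, matching the card above.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="midterm-finetuned-embedding",  # hypothetical output path
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    num_train_epochs=5,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```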
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 20
- per_device_eval_batch_size: 20
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | cosine_map@100 |
---|---|---|
1.0 | 34 | 0.0250 |
1.4706 | 50 | 0.0258 |
2.0 | 68 | 0.0257 |
2.9412 | 100 | 0.0258 |
3.0 | 102 | 0.0258 |
4.0 | 136 | 0.0258 |
4.4118 | 150 | 0.0258 |
5.0 | 170 | 0.0258 |
Framework Versions
- Python: 3.12.3
- Sentence Transformers: 3.0.1
- Transformers: 4.44.2
- PyTorch: 2.6.0.dev20240922+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```