SentenceTransformer based on avsolatorio/GIST-small-Embedding-v0
This is a sentence-transformers model finetuned from avsolatorio/GIST-small-Embedding-v0. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: avsolatorio/GIST-small-Embedding-v0
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
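The same pipeline can be assembled by hand from these three modules: a BERT encoder, CLS-token pooling, and L2 normalization of the 384-dimensional output. A minimal sketch, assuming the base checkpoint avsolatorio/GIST-small-Embedding-v0 is reachable on the Hub; the finetuned model above already bundles this configuration, so this is only illustrative.

```python
from sentence_transformers import SentenceTransformer, models

# BERT encoder, truncating inputs at 512 tokens and lowercasing as in the architecture above
transformer = models.Transformer("avsolatorio/GIST-small-Embedding-v0", max_seq_length=512, do_lower_case=True)
# CLS-token pooling over the 384-dimensional token embeddings
pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="cls")
# L2 normalization, so cosine similarity reduces to a dot product
normalize = models.Normalize()

model = SentenceTransformer(modules=[transformer, pooling, normalize])
print(model.get_sentence_embedding_dimension())  # 384
```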
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("GIST-small-Embedding-v0-4_batch_10_epoch_all_data_en_unique_split")
# Run inference
sentences = [
    'A security incident must inform all affected internal and external bodies in a timely manner. It is necessary to check whether the Data Protection Officer, the Works and Staff Council and employees from the Legal Department need to be involved. Similarly, the reporting requirements for authorities and regulated sectors must be taken into account. It is also necessary to ensure that relevant bodies are informed of the necessary measures.',
    'The organization shall plan and prepare for managing information security incidents by defining, establishing and communicating information security incident management processes, roles and responsibilities.',
    'A.5.24',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
Evaluation
Metrics
Triplet
- Dataset: `GIST-small-Embedding-v0-4_batch_10_epoch_all_data_en_unique_split_robustness_42_eval`
- Evaluated with `TripletEvaluator`
| Metric | Value |
|:--------------------|:-------|
| cosine_accuracy | 0.8762 |
| dot_accuracy | 0.095 |
| manhattan_accuracy | 0.8698 |
| euclidean_accuracy | 0.8762 |
| max_accuracy | 0.8762 |
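These numbers come from `TripletEvaluator`, which measures how often the anchor is embedded closer to its positive than to its negative. A minimal sketch of how such an evaluation can be reproduced; the triples below are placeholders, not the held-out evaluation split used for the scores above.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("GIST-small-Embedding-v0-4_batch_10_epoch_all_data_en_unique_split")

# Placeholder triples; substitute the actual (anchor, positive, negative) evaluation data
anchors = ["Security incidents must be reported to all affected parties in a timely manner."]
positives = ["The organization shall plan and prepare for managing information security incidents."]
negatives = ["Information processing facilities shall be implemented with sufficient redundancy."]

evaluator = TripletEvaluator(anchors=anchors, positives=positives, negatives=negatives, name="triplet-eval")
results = evaluator(model)
print(results)  # includes e.g. 'triplet-eval_cosine_accuracy'
```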
Training Details
Training Dataset
Unnamed Dataset
- Size: 3,414 training samples
- Columns: `anchor`, `positive`, `ISO_ID`, and `negative`
- Approximate statistics based on the first 1000 samples:

| | anchor | positive | ISO_ID | negative |
|:--------|:--------|:---------|:-------|:---------|
| type | string | string | string | string |
| details | min: 3 tokens, mean: 79.84 tokens, max: 512 tokens | min: 10 tokens, mean: 23.34 tokens, max: 192 tokens | min: 5 tokens, mean: 6.99 tokens, max: 7 tokens | min: 10 tokens, mean: 22.91 tokens, max: 154 tokens |
- Samples:

| anchor | positive | ISO_ID | negative |
|:-------|:---------|:-------|:---------|
| System components in the area of responsibility of the Cloud Service Provider for the provision of the cloud service are automatically checked for known vulnerabilities at least once a month in accordance with the policies for handling vulnerabilities (cf. OPS-18), the severity is assessed in accordance with defined criteria and measures for timely remediation or mitigation are initiated within defined time windows. | Information about technical vulnerabilities of information systems in use shall be obtained, the organization’s exposure to such vulnerabilities shall be evaluated and appropriate measures shall be taken. | A.8.8 | Information processing facilities shall be implemented with redundancy sufficient to meet availability requirements. |
| System components in the area of responsibility of the Cloud Service Provider for the provision of the cloud service are automatically checked for known vulnerabilities at least once a month in accordance with the policies for handling vulnerabilities (cf. OPS-18), the severity is assessed in accordance with defined criteria and measures for timely remediation or mitigation are initiated within defined time windows. | Changes to information processing facilities and information systems shall be subject to change management procedures. | A.8.32 | Rules for the effective use of cryptography, including cryptographic key management, shall be defined and implemented. |
| The Cloud Service Provider retains the generated log data and keeps these in an appropriate, unchangeable and aggregated form, regardless of the source of such data, so that a central, authorised evaluation of the data is possible. Log data is deleted if it is no longer required for the purpose for which they were collected. Between logging servers and the assets to be logged, authentication takes place to protect the integrity and authenticity of the information transmitted and stored. The transfer takes place using state-of-the-art encryption or a dedicated administration network (out-of-band management). | Logs that record activities, exceptions, faults and other relevant events shall be produced, stored, protected and analysed. | A.8.15 | Configurations, including security configurations, of hardware, software, services and networks shall be established, documented, implemented, monitored and reviewed. |
- Loss: `MultipleNegativesRankingLoss` with these parameters:

```json
{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
```
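A minimal sketch of how this loss is typically constructed in Sentence Transformers with the parameters listed above. With in-batch negatives, each (anchor, positive) pair treats the other positives in the batch as negatives, and the explicit `negative` column supplies one additional hard negative per anchor.

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("avsolatorio/GIST-small-Embedding-v0")

# Cosine similarity scaled by 20, matching the parameters above
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```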
Evaluation Dataset
Unnamed Dataset
- Size: 937 evaluation samples
- Columns: `anchor`, `positive`, `ISO_ID`, and `negative`
- Approximate statistics based on the first 937 samples:

| | anchor | positive | ISO_ID | negative |
|:--------|:--------|:---------|:-------|:---------|
| type | string | string | string | string |
| details | min: 12 tokens, mean: 76.9 tokens, max: 512 tokens | min: 10 tokens, mean: 41.55 tokens, max: 495 tokens | min: 5 tokens, mean: 6.91 tokens, max: 7 tokens | min: 10 tokens, mean: 40.68 tokens, max: 495 tokens |
- Samples:

| anchor | positive | ISO_ID | negative |
|:-------|:---------|:-------|:---------|
| The Cloud Service Provider's internal and external employees are required by the employment terms and conditions to comply with applicable policies and instructions relating to information security. The information security policy, and the policies and instructions based on it, are to be acknowledged by the internal and external personnel in a documented form before access is granted to any cloud customer data or system components under the responsibility of the Cloud Service Provider used to provide the cloud service in the production environment. | The employment contractual agreements shall state the personnel’s and the organization’s responsibilities for information security. | A.6.2 | The organization shall establish and implement procedures for the identification, collection, acquisition and preservation of evidence related to information security events. |
| The Cloud Service Provider has established procedures for inventorying assets. The inventory is performed automatically and/or by the people or teams responsible for the assets to ensure complete, accurate, valid and consistent inventory throughout the asset lifecycle. Assets are recorded with the information needed to apply the Risk Management Procedure (Cf. OIS-07), including the measures taken to manage these risks throughout the asset lifecycle. Changes to this information are logged. | An inventory of information and other associated assets, including owners, shall be developed and maintained. | A.5.9 | Access rights to information and other associated assets shall be provisioned, reviewed, modified and removed in accordance with the organization’s topic-specific policy on and rules for access control. |
| The Cloud Service Provider provides a training program for regular, target group-oriented security training and awareness for internal and external employees on standards and methods of secure software development and provision as well as on how to use the tools used for this purpose. The program is regularly reviewed and updated with regard to the applicable policies and instructions, the assigned roles and responsibilities and the tools used. | The organization shall: a) determine the necessary competence of person(s) doing work under its control that affects its information security performance; b) ensure that these persons are competent on the basis of appropriate education, training, or experience; c) where applicable, take actions to acquire the necessary competence, and evaluate the effectiveness of the actions taken; and d) retain appropriate documented information as evidence of competence. NOTE Applicable actions can include, for example: the provision of training to, the mentoring of, or the re-assignment of current employees; or the hiring or contracting of competent persons. | 7.2 | Knowledge gained from information security incidents shall be used to strengthen and improve the information security controls. |
- Loss: `MultipleNegativesRankingLoss` with these parameters:

```json
{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
```
Training Hyperparameters
Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `bf16`: True
- `ddp_find_unused_parameters`: True
- `batch_sampler`: no_duplicates

A sketch that reproduces this configuration with the Sentence Transformers trainer follows the full hyperparameter list below.
All Hyperparameters
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: True
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
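Putting the non-default hyperparameters above together, the fine-tuning run can be reproduced roughly as follows. This is a hedged sketch, not the exact training script: the dataset files and loading via `load_dataset(...)` are assumptions, only the hyperparameters listed in this card are set explicitly, and the output directory name is hypothetical.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses
from sentence_transformers.training_args import BatchSamplers, SentenceTransformerTrainingArguments

model = SentenceTransformer("avsolatorio/GIST-small-Embedding-v0")

# Hypothetical dataset files with anchor / positive / ISO_ID / negative columns; the ISO_ID
# column is dropped because MultipleNegativesRankingLoss expects only text columns
train_dataset = load_dataset("csv", data_files="train.csv", split="train").remove_columns("ISO_ID")
eval_dataset = load_dataset("csv", data_files="eval.csv", split="train").remove_columns("ISO_ID")

loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="GIST-small-Embedding-v0-finetuned",
    num_train_epochs=10,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="epoch",
    ddp_find_unused_parameters=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate texts within a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```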
Training Logs
| Epoch | Step | Training Loss | loss | GIST-small-Embedding-v0-4_batch_10_epoch_all_data_en_unique_split_robustness_42_eval_cosine_accuracy |
|:------|:-----|:--------------|:-------|:--------|
| 0.9977 | 425 | 1.7795 | 1.4178 | 0.8036 |
| 1.9977 | 850 | 1.2852 | 1.1081 | 0.8591 |
| 2.9977 | 1275 | 1.0536 | 1.0428 | 0.8698 |
| 3.9977 | 1700 | 0.9389 | 1.0188 | 0.8741 |
| 4.9977 | 2125 | 0.8879 | 1.0129 | 0.8709 |
| 5.9977 | 2550 | 0.8557 | 1.0079 | 0.8698 |
| 6.9977 | 2975 | 0.8355 | 1.0076 | 0.8719 |
| 7.9977 | 3400 | 0.8151 | 1.0067 | 0.8751 |
| 8.9977 | 3825 | 0.8228 | 1.0065 | 0.8751 |
| 9.9977 | 4250 | 0.8174 | 1.0067 | 0.8762 |
Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.1.0
- Transformers: 4.45.1
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.20.0
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```