--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1999 - loss:MultipleNegativesRankingLoss widget: - source_sentence: Which area code corresponds to the city listed as Richmond in the context, and in which state is Richmond located? sentences: - ple, suppose an organization has two network interfaces. suppose both interfaces are assigned to a single bandwidth policy. the virtual interface for the organization will represent both network interfaces - field, sl1 automatically calculates the width of the text field l height. the height of the text field, including the label text, in pixels. if you do not supply a value in this - ) [1] => array ( [state] => va [city] => richmond [area_code] => 804 ) [2] => array ( [state] => dc - source_sentence: What connection details should you set in the `connect()` function of your `main.go` file after running the command `go mod tidy`? sentences: - run go mod tidy go mod tidy set your connection details earlier you looked up your connection details. set them in main.go in the connect() function - 7. to leave an annotation for a status policy or rule, click the annotation icon ( ) next to the rule or tab. type your annotation text in the annotation window and click [save]. the annotation icon now displays as solid blue, while empty annotation icons contain a plus sign - the destination takes as its value the path where the le or directory is mounted in the container. can be speci ed as destination , dst , or target - source_sentence: What is the total memory limit for the steprunner in the xlqueue, and how much more memory is allocated compared to the standard steprunner? sentences: - noise reduction is the percentage of alerts that did not become events in sl1. 
a mature, tuned sl1 system will have a high noise reduction percentage, as sl1 is sharing only the events that matter to your environment - settings > thresholds) - source_sentence: Which statement is true regarding the configuration of SL1 systems to use Powerflow for syncing with third-party applications like ServiceNow or Cherwell? sentences: - l you can configure one or more sl1 systems to use powerflow to sync with a single instance of a third-party application like servicenow or cherwell. you cannot configure one sl1 system to use powerflow to sync with multiple instances of a third-party application like servicenow or cherwell. the relationship between sl1 and the third-party application can be either one-to-one or many-to-one, but - l apply_64gb_override and verify_64gb_override the following command is an example of a pfctl command to apply the 32 gb override - ( provisioning a customer (provision_customer.php) 229 [0] => array ( ['start_ip'] => ['end_ip'] => --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained on 1,999 sentence pairs. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
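Because the architecture listed under Model Details below ends with a `Normalize()` module and the similarity function is cosine similarity, scoring a query against a corpus reduces to a dot product of unit vectors. A minimal, model-free sketch of that semantic-search step, using random 384-dimensional stand-ins for `model.encode()` output (all names and shapes here are illustrative, not part of this model's API):

```python
import numpy as np

def normalize(v):
    # Unit-normalize each row, mirroring the Normalize() module in the model.
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Toy stand-ins for model.encode() output (real embeddings are 384-dim).
rng = np.random.default_rng(0)
corpus_emb = normalize(rng.standard_normal((5, 384)))
query_emb = normalize(rng.standard_normal((1, 384)))

# With unit-normalized rows, cosine similarity is just a matrix product.
scores = query_emb @ corpus_emb.T           # shape (1, 5), values in [-1, 1]
best = int(np.argmax(scores))               # index of the best-matching passage
print(scores.shape, best)
```

In real use, `corpus_emb` and `query_emb` would come from `model.encode(...)` as shown in the Usage section, and `model.similarity(...)` performs the equivalent comparison.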
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("david-scilo/test_upload_10_17_2024") # Run inference sentences = [ 'Which statement is true regarding the configuration of SL1 systems to use Powerflow for syncing with third-party applications like ServiceNow or Cherwell?', 'l you can configure one or more sl1 systems to use powerflow to sync with a single instance of a third-party application like servicenow or cherwell. you cannot configure one sl1 system to use powerflow to sync with multiple instances of a third-party application like servicenow or cherwell.
the relationship between sl1 and the third-party application can be either one-to-one or many-to-one, but', "( provisioning a customer (provision_customer.php) 229 [0] => array ( ['start_ip'] => ['end_ip'] =>", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 1,999 training samples * Columns: sentence_0 and sentence_1 * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | | | * Samples: | sentence_0 | sentence_1 | |:--------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Which software ignores the authentication method field for users who are authenticated with single sign-on (SSO) in the context of EM7? | note: for users who are authenticated with single sign on (sso), em7 ignores the authentication method field. for details on configuring sl1 to use single sign on (sso) authentication, see the manual on using using single sign on | | Which command would you use to enable the ol8_appstream repository for package management on a Linux system using DNF? 
| sudo dnf install yum-utils sudo dnf config-manager --enable ol8_baseos_latest sudo dnf config-manager --enable ol8_appstream | | Which process should you consider using if you encounter environmental problems during the automated upgrade scripts for AWS environments? | the automated upgrade scripts will likely work for aws environments, but due to potential environmental differences between chosen amis, there might be other package updates or requirements. if you encounter environmental problems, you should consider using the back up, re-install, and restore process instead | * Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `num_train_epochs`: 1 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters
- `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 10 - `per_device_eval_batch_size`: 10 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: 
adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin
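The `MultipleNegativesRankingLoss` configured above (`scale: 20.0`, `similarity_fct: cos_sim`) treats every other positive in a batch as a negative for each anchor: it is cross-entropy over the scaled in-batch cosine-similarity matrix, with the diagonal (each anchor's true positive) as the target. A toy numpy sketch of that objective on random unit vectors, using the batch size of 10 from the hyperparameters; this is illustrative only, not the library implementation:

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    # anchors, positives: (batch, dim) unit-normalized embeddings.
    # Entry (i, j) of sims scores anchor i against positive j; with unit
    # vectors the product is cosine similarity, scaled as configured.
    sims = scale * (anchors @ positives.T)
    # Log-softmax over each row, then pick the diagonal (the true pair).
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def unit(v):
    return v / np.linalg.norm(v, axis=1, keepdims=True)

rng = np.random.default_rng(0)
a = unit(rng.standard_normal((10, 384)))   # batch size 10, as configured above

# Positives near their anchors should score a lower loss than random ones.
p_good = unit(a + 0.1 * rng.standard_normal((10, 384)))
p_rand = unit(rng.standard_normal((10, 384)))
print(mnr_loss(a, p_good), mnr_loss(a, p_rand))
```

This in-batch-negatives formulation is why larger batches tend to make the task harder (more negatives per anchor) and why the dataset pairs each question with its supporting passage.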
### Framework Versions - Python: 3.11.9 - Sentence Transformers: 3.1.1 - Transformers: 4.43.3 - PyTorch: 2.3.1+cu121 - Accelerate: 0.30.1 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```