---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6552
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-small-en-v1.5
widget:
- source_sentence: >-
    What property is denoted as the M→M property in the queueing network
    literature?
  sentences:
  - >-
    The LOFAR system introduces two additional levels in the beam hierarchy:
    the compound (tile) beam and the station beam.
  - >-
    The desired pseudonoise sequence in a CDMA system has the
    characteristics that the fraction of 0's and 1's is almost half-and-half
    over the period, and the shifted versions of the pseudonoise sequence
    are nearly orthogonal to each other. If the shift of the pseudonoise
    sequence is randomized, it becomes a random process.
  - >-
    The M→M property in the queueing network literature denotes the
    independence of individual queues in the long term.
- source_sentence: >-
    Which type of channel condition has better path loss exponent (PLE) in
    terms of AA (air to air) and AG (air to ground) propagation channels?
  sentences:
  - >-
    The goal of the Fixed Access Information API is to provide access
    network related information for the multitude of fixed access
    technologies.
  - >-
    Error mitigation is a technique to reduce the impact of errors in
    near-term quantum systems without requiring full fault-tolerant quantum
    codes.
  - >-
    From the document, it is mentioned that the AA channel has better
    conditions than the AG channel in terms of path loss exponent (PLE).
- source_sentence: What is the goal of a functionality extraction attack?
  sentences:
  - >-
    Deep learning can automatically extract high-level features from data,
    reducing the need for manual feature engineering.
  - >-
    The goal of a functionality extraction attack is to create knock-off
    models that mimic the behavior of an existing machine learning model.
  - >-
    The main advantage of using wind turbine towers for communication is
    that they already have a reliable power grid connection.
- source_sentence: What is MTU?
  sentences:
  - >-
    The worst-case complexity of average consensus is exponential in the
    number of nodes, but it can be reduced to linear if an upper bound on
    the total number of nodes is known.
  - >-
    In a normally clad fiber, at long wavelengths, the MFD is large compared
    to the core diameter and the electric field extends far into the
    cladding region.
  - >-
    MTU (Maximum Transmission Unit) represents the largest size of a data
    packet that can be sent over a network without fragmentation.
- source_sentence: >-
    What should the AP or PCP do if it is not decentralized AP or PCP
    clustering capable or a decentralized AP or PCP cluster is not present?
  sentences:
  - >-
    When a Data, Management or Extension frame is received, a STA inserts it
    in an appropriate cache.
  - >-
    If the AP or PCP is not decentralized AP or PCP clustering capable or a
    decentralized AP or PCP cluster is not present, it should set its
    Cluster Member Role to 0 (not currently participating in a cluster) and
    remain unclustered.
  - >-
    Analog beamforming based on slowly-varying second order statistics of
    the CSI reduces the dimension of the effective instantaneous CSI for
    digital beamforming within each coherent fading block, which helps to
    relieve the signaling overhead.
datasets:
- dinho1597/Telecom-QA-MultipleChoice
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_recall@1
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on BAAI/bge-small-en-v1.5
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: telecom ir eval
      type: telecom-ir-eval
    metrics:
    - type: cosine_accuracy@1
      value: 0.965675057208238
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.992372234935164
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.9931350114416476
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9938977879481312
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.965675057208238
      name: Cosine Precision@1
    - type: cosine_recall@1
      value: 0.965675057208238
      name: Cosine Recall@1
    - type: cosine_ndcg@10
      value: 0.9824027787882591
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.9784334023464457
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.9786169716375667
      name: Cosine Map@100
---
SentenceTransformer based on BAAI/bge-small-en-v1.5
This is a sentence-transformers model finetuned from BAAI/bge-small-en-v1.5 on the telecom-qa-multiple_choice dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-small-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: telecom-qa-multiple_choice
Model Sources
- Documentation: Sentence Transformers Documentation (https://www.sbert.net)
- Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
- Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
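For reference, the same three-module pipeline can be assembled by hand. This is a minimal sketch, not the saved checkpoint: it instantiates the base model's weights, whereas loading the published model (as in the Usage section below) gives you the finetuned weights.

```python
from sentence_transformers import SentenceTransformer, models

# Rebuild the architecture above: BERT encoder -> CLS pooling -> L2 normalization
word = models.Transformer("BAAI/bge-small-en-v1.5", max_seq_length=512)
pooling = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="cls")
normalize = models.Normalize()
model = SentenceTransformer(modules=[word, pooling, normalize])
```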
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub (replace the placeholder with this model's Hub ID)
model = SentenceTransformer("sentence_transformers_model_id")

# Run inference
sentences = [
    'What should the AP or PCP do if it is not decentralized AP or PCP clustering capable or a decentralized AP or PCP cluster is not present?',
    'If the AP or PCP is not decentralized AP or PCP clustering capable or a decentralized AP or PCP cluster is not present, it should set its Cluster Member Role to 0 (not currently participating in a cluster) and remain unclustered.',
    'When a Data, Management or Extension frame is received, a STA inserts it in an appropriate cache.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
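The same embeddings support semantic search directly. A minimal sketch reusing the sentences above as a toy corpus (the query here is only illustrative):

```python
# Rank the toy corpus against a new query
query_embedding = model.encode(["How does an unclustered AP or PCP behave?"])
scores = model.similarity(query_embedding, embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(sentences[best])
```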
Evaluation
Metrics
Information Retrieval
- Dataset: telecom-ir-eval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.9657 |
cosine_accuracy@3 | 0.9924 |
cosine_accuracy@5 | 0.9931 |
cosine_accuracy@10 | 0.9939 |
cosine_precision@1 | 0.9657 |
cosine_recall@1 | 0.9657 |
cosine_ndcg@10 | 0.9824 |
cosine_mrr@10 | 0.9784 |
cosine_map@100 | 0.9786 |
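These scores were produced with the library's InformationRetrievalEvaluator. A minimal sketch of running the same evaluation yourself; the query and corpus dictionaries below are toy stand-ins, not the actual telecom-ir-eval split, and the model ID is a placeholder:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder ID

# queries and corpus map IDs to text; relevant_docs maps each query ID
# to the set of corpus IDs that count as correct hits.
queries = {"q1": "What is MTU?"}
corpus = {
    "d1": "MTU (Maximum Transmission Unit) represents the largest size of a data "
          "packet that can be sent over a network without fragmentation.",
    "d2": "Doubling the number of antennas yields a 3-dB power gain.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="telecom-ir-eval")
print(evaluator(model))  # keys like 'telecom-ir-eval_cosine_accuracy@1', 'telecom-ir-eval_cosine_ndcg@10', ...
```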
Training Details
Training Dataset
telecom-qa-multiple_choice
- Dataset: telecom-qa-multiple_choice at 73aebbb
- Size: 6,552 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:

| | anchor | positive |
|---|---|---|
| type | string | string |
| details | min: 4 tokens; mean: 18.95 tokens; max: 49 tokens | min: 9 tokens; mean: 29.33 tokens; max: 112 tokens |
- Samples:

| anchor | positive |
|---|---|
| What is the goal of a jammer in a mobile edge caching system? | The goal of a jammer in a mobile edge caching system is to interrupt ongoing radio transmissions of the edge node with cached chunks or caching users and prevent access to cached content. Additionally, jammers aim to deplete the resources of edge nodes, caching users, and sensors during failed communication attempts. |
| Which type of DRL uses DNNs (Deep Neural Networks) to fit action values and employs experience replay and target networks to ensure stable training convergence? | Value-based DRL, such as Deep Q-Learning (DQL), uses DNNs to fit action values and employs experience replay and target networks to ensure stable training convergence. |
| What is the relationship between the curvature of the decision boundary and the robustness of a network? | The lower the curvature of the decision boundaries, the more robust the network. |
- Loss: MultipleNegativesRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim" }
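With these parameters, the loss multiplies cosine similarities by 20 before a cross-entropy over in-batch negatives, i.e. every other positive in the batch serves as a negative for each anchor. A minimal construction sketch:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
# scale=20.0 and cos_sim match the parameters listed above
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```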
Evaluation Dataset
telecom-qa-multiple_choice
- Dataset: telecom-qa-multiple_choice at 73aebbb
- Size: 6,552 evaluation samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:

| | anchor | positive |
|---|---|---|
| type | string | string |
| details | min: 4 tokens; mean: 18.87 tokens; max: 56 tokens | min: 8 tokens; mean: 29.45 tokens; max: 91 tokens |
- Samples:
anchor positive Which forward error correction (FEC) codes are available for the THz single carrier mode?
The THz single carrier mode (THz-SC PHY) in the IEEE 802.15.3d standard supports two low-density parity-check (LDPC) codes: 14/15 LDPC (1440,1344) and 11/15 LDPC (1440,1056).
Which multiple access technique allows users to access the channel simultaneously using the same frequency and time resources, with different power levels?
Non-Orthogonal Multiple Access (NOMA) allows users to access the channel simultaneously using the same frequency and time resources, but with different power levels.
What is the power gain when doubling the number of antennas?
Doubling the number of antennas yields a 3-dB power gain.
- Loss: MultipleNegativesRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim" }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 512
- per_device_eval_batch_size: 512
- weight_decay: 0.01
- num_train_epochs: 15
- lr_scheduler_type: cosine_with_restarts
- warmup_ratio: 0.1
- fp16: True
- load_best_model_at_end: True
- batch_sampler: no_duplicates
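As a minimal sketch, these non-default values map onto SentenceTransformerTrainingArguments as below; output_dir is a hypothetical placeholder, and save_strategy is added only because load_best_model_at_end requires saving and evaluation to align:

```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    BatchSamplers,
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-small-telecom-qa",  # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=512,
    per_device_eval_batch_size=512,
    weight_decay=0.01,
    num_train_epochs=15,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    save_strategy="steps",  # assumption: must match eval_strategy for best-model loading
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```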
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 512
- per_device_eval_batch_size: 512
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 15
- max_steps: -1
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
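Putting the pieces together, a hedged end-to-end sketch of the training loop. The 90/10 split below is an assumption (the original train/eval split is not documented here), and args, loss, and evaluator refer to the earlier sketches:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
# Assumes the loaded split exposes the anchor/positive columns described above
dataset = load_dataset("dinho1597/Telecom-QA-MultipleChoice", split="train")
splits = dataset.train_test_split(test_size=0.1, seed=42)  # assumed split

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,                      # SentenceTransformerTrainingArguments sketch above
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    loss=loss,                      # MultipleNegativesRankingLoss sketch above
    evaluator=evaluator,            # InformationRetrievalEvaluator sketch above
)
trainer.train()
```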
Training Logs
Epoch | Step | Training Loss | Validation Loss | telecom-ir-eval_cosine_ndcg@10 |
---|---|---|---|---|
1.2727 | 15 | 1.0332 | 0.0968 | 0.9725 |
2.5455 | 30 | 0.2091 | 0.0518 | 0.9808 |
3.8182 | 45 | 0.0997 | 0.0470 | 0.9824 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```