SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-m. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-m
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
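The three modules are a BERT encoder, CLS-token pooling (pooling_mode_cls_token: True), and L2 normalization, so every embedding has unit length and cosine similarity reduces to a dot product. A minimal sketch of what this pipeline computes, assuming the Hub repo exposes the underlying BertModel weights at its root, as sentence-transformers repos normally do:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rgtlai/ai-policy-ft")
encoder = AutoModel.from_pretrained("rgtlai/ai-policy-ft")

batch = tokenizer(["example sentence"], padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    token_states = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)
cls_embedding = token_states[:, 0]                     # CLS pooling: first token per sequence
embedding = F.normalize(cls_embedding, p=2, dim=1)     # Normalize(): unit-length vectors
print(embedding.shape)  # torch.Size([1, 768])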
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("rgtlai/ai-policy-ft")
# Run inference
sentences = [
'What proactive steps should be taken during the design phase of automated systems to assess equity and prevent algorithmic discrimination?',
' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAny automated system should be tested to help ensure it is free from algorithmic discrimination before it can be \nsold or used. Protection against algorithmic discrimination should include designing to ensure equity, broadly \nconstrued. Some algorithmic discrimination is already prohibited under existing anti-discrimination law. The \nexpectations set out below describe proactive technical and policy steps that can be taken to not only \nreinforce those legal protections but extend beyond them to ensure equity for underserved communities48 \neven in circumstances where a specific legal protection may not be clearly established. These protections \nshould be instituted throughout the design, development, and deployment process and are described below \nroughly in the order in which they would be instituted. \nProtect the public from algorithmic discrimination in a proactive and ongoing manner \nProactive assessment of equity in design. Those responsible for the development, use, or oversight of \nautomated systems should conduct proactive equity assessments in the design phase of the technology \nresearch and development or during its acquisition to review potential input data, associated historical \ncontext, accessibility for people with disabilities, and societal goals to identify potential discrimination and \neffects on equity resulting from the introduction of the technology. The assessed groups should be as inclusive \nas possible of the underserved communities mentioned in the equity definition: Black, Latino, and Indigenous \nand Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of \nreligious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and inter-\nsex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons \notherwise adversely affected by persistent poverty or inequality. Assessment could include both qualitative \nand quantitative evaluations of the system. This equity assessment should also be considered a core part of the \ngoals of the consultation conducted as part of the safety and efficacy review. \nRepresentative and robust data. Any data used as part of system development or assessment should be \nrepresentative of local communities based on the planned deployment setting and should be reviewed for bias \nbased on the historical and societal context of the data. Such data should be sufficiently robust to identify and \nhelp to mitigate biases and potential harms. \nGuarding against proxies. Directly using demographic information in the design, development, or \ndeployment of an automated system (for purposes other than evaluating a system for discrimination or using \na system to counter discrimination) runs a high risk of leading to algorithmic discrimination and should be \navoided. In many cases, attributes that are highly correlated with demographic features, known as proxies, can \ncontribute to algorithmic discrimination. In cases where use of the demographic features themselves would \nlead to illegal algorithmic discrimination, reliance on such proxies in decision-making (such as that facilitated \nby an algorithm) may also be prohibited by law. 
Proactive testing should be performed to identify proxies by \ntesting for correlation between demographic information and attributes in any data used as part of system \ndesign, development, or use. If a proxy is identified, designers, developers, and deployers should remove the \nproxy; if needed, it may be possible to identify alternative attributes that can be used instead. At a minimum, \norganizations should ensure a proxy feature is not given undue weight and should monitor the system closely \nfor any resulting algorithmic discrimination. \n26\nAlgorithmic \nDiscrimination \nProtections \n',
' \n \n \nApplying The Blueprint for an AI Bill of Rights \nSENSITIVE DATA: Data and metadata are sensitive if they pertain to an individual in a sensitive domain \n(defined below); are generated by technologies used in a sensitive domain; can be used to infer data from a \nsensitive domain or sensitive data about an individual (such as disability-related data, genomic data, biometric \ndata, behavioral data, geolocation data, data related to interaction with the criminal justice system, relationship \nhistory and legal status such as custody and divorce information, and home, work, or school environmental \ndata); or have the reasonable potential to be used in ways that are likely to expose individuals to meaningful \nharm, such as a loss of privacy or financial harm due to identity theft. Data and metadata generated by or about \nthose who are not yet legal adults is also sensitive, even if not related to a sensitive domain. Such data includes, \nbut is not limited to, numerical, text, image, audio, or video data. \nSENSITIVE DOMAINS: “Sensitive domains” are those in which activities being conducted can cause material \nharms, including significant adverse effects on human rights such as autonomy and dignity, as well as civil liber\xad\nties and civil rights. Domains that have historically been singled out as deserving of enhanced data protections \nor where such enhanced protections are reasonably expected by the public include, but are not limited to, \nhealth, family planning and care, employment, education, criminal justice, and personal finance. In the context \nof this framework, such domains are considered sensitive whether or not the specifics of a system context \nwould necessitate coverage under existing law, and domains and data that are considered sensitive are under\xad\nstood to change over time based on societal norms and context. \nSURVEILLANCE TECHNOLOGY: “Surveillance technology” refers to products or services marketed for \nor that can be lawfully used to detect, monitor, intercept, collect, exploit, preserve, protect, transmit, and/or \nretain data, identifying information, or communications concerning individuals or groups. This framework \nlimits its focus to both government and commercial use of surveillance technologies when juxtaposed with \nreal-time or subsequent automated analysis and when such systems have a potential for meaningful impact \non individuals’ or communities’ rights, opportunities, or access. \nUNDERSERVED COMMUNITIES: The term “underserved communities” refers to communities that have \nbeen systematically denied a full opportunity to participate in aspects of economic, social, and civic life, as \nexemplified by the list in the preceding definition of “equity.” \n11\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
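Because the embeddings are unit-normalized and the similarity function is cosine, the same two calls support ad-hoc semantic search over a small corpus. A brief sketch; the query and passages are illustrative, not drawn from the training data:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("rgtlai/ai-policy-ft")

query = "Which communities should equity assessments cover?"
passages = [
    "Proactive equity assessments should be as inclusive as possible of underserved communities.",
    "OSTP leads interagency science and technology policy coordination efforts.",
]

query_emb = model.encode([query])
passage_embs = model.encode(passages)
scores = model.similarity(query_emb, passage_embs)  # cosine scores, shape (1, 2)
best = scores.argmax().item()
print(passages[best], float(scores[0, best]))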
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator (a usage sketch follows the metrics table)
Metric | Value |
---|---|
cosine_accuracy@1 | 0.7 |
cosine_accuracy@3 | 0.9 |
cosine_accuracy@5 | 0.9667 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.7 |
cosine_precision@3 | 0.3 |
cosine_precision@5 | 0.1933 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.7 |
cosine_recall@3 | 0.9 |
cosine_recall@5 | 0.9667 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.8479 |
cosine_mrr@10 | 0.7983 |
cosine_map@100 | 0.7983 |
dot_accuracy@1 | 0.7 |
dot_accuracy@3 | 0.9 |
dot_accuracy@5 | 0.9667 |
dot_accuracy@10 | 1.0 |
dot_precision@1 | 0.7 |
dot_precision@3 | 0.3 |
dot_precision@5 | 0.1933 |
dot_precision@10 | 0.1 |
dot_recall@1 | 0.7 |
dot_recall@3 | 0.9 |
dot_recall@5 | 0.9667 |
dot_recall@10 | 1.0 |
dot_ndcg@10 | 0.8479 |
dot_mrr@10 | 0.7983 |
dot_map@100 | 0.7983 |
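These metrics come from sentence-transformers' InformationRetrievalEvaluator, which embeds a query set and a corpus, ranks the corpus for each query, and scores the ranking against labeled relevant documents. A minimal sketch of running the same evaluator; the queries, corpus, and relevance labels below are toy placeholders, since the card does not publish its held-out split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("rgtlai/ai-policy-ft")

# Toy relevance data for illustration only.
queries = {"q1": "What data counts as sensitive under the framework?"}
corpus = {
    "d1": "SENSITIVE DATA: Data and metadata are sensitive if they pertain to an individual in a sensitive domain...",
    "d2": "BLUEPRINT FOR AN AI BILL OF RIGHTS MAKING AUTOMATED SYSTEMS WORK FOR THE AMERICAN PEOPLE",
}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="toy-ir")
print(evaluator(model))  # dict of accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100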
Training Details
Training Dataset
Unnamed Dataset
- Size: 200 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 200 samples:

 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 12 tokens, mean: 22.34 tokens, max: 38 tokens | min: 21 tokens, mean: 447.96 tokens, max: 512 tokens |
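These token statistics are presumably computed with the model's own tokenizer over each column. A small sketch of how such numbers can be reproduced; the two example texts stand in for a real column:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("rgtlai/ai-policy-ft")
column = [
    "What is the purpose of the AI Bill of Rights mentioned in the context?",
    "When was the Blueprint for an AI Bill of Rights published?",
]

lengths = [len(model.tokenizer(text)["input_ids"]) for text in column]
print(min(lengths), sum(lengths) / len(lengths), max(lengths))  # min / mean / max tokens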
- Samples (the first three pairs are shown):

Sample 1
- sentence_0: What is the purpose of the AI Bill of Rights mentioned in the context?
- sentence_1: BLUEPRINT FOR AN AI BILL OF RIGHTS MAKING AUTOMATED SYSTEMS WORK FOR THE AMERICAN PEOPLE OCTOBER 2022

Sample 2
- sentence_0: When was the Blueprint for an AI Bill of Rights published?
- sentence_1: The same cover-page passage as in Sample 1.

Sample 3
- sentence_0: What is the purpose of the Blueprint for an AI Bill of Rights as published by the White House Office of Science and Technology Policy?
- sentence_1:

About this Document

The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was published by the White House Office of Science and Technology Policy in October 2022. This framework was released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered world.” Its release follows a year of public engagement to inform this initiative. The framework is available online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights

About the Office of Science and Technology Policy

The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology Policy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office of the President with advice on the scientific, engineering, and technological aspects of the economy, national security, health, foreign relations, the environment, and the technological recovery and use of resources, among other topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of Management and Budget (OMB) with an annual review and analysis of Federal research and development in budgets, and serves as a source of scientific and technological analysis and judgment for the President with respect to major policies, plans, and programs of the Federal Government.

Legal Disclaimer

The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People is a white paper published by the White House Office of Science and Technology Policy. It is intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.

The Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or international instrument. It does not constitute binding guidance for the public or Federal agencies and therefore does not require compliance with the principles described herein. It also is not determinative of what the U.S. government’s position will be in any international negotiation. Adoption of these principles may not meet the requirements of existing statutes, regulations, policies, or international instruments, or the requirements of the Federal agencies that enforce them. These principles are not intended to, and do not, prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or intelligence activities.

The appropriate application of the principles set forth in this white paper depends significantly on the context in which automated systems are being utilized. In some circumstances, application of these principles in whole or in part may not be appropriate given the intended use of automated systems to achieve government agency missions. Future sector-specific guidance will likely be necessary and important for guiding the use of automated systems in certain settings such as AI systems used as part of school building security or automated health diagnostic systems.

The Blueprint for an AI Bill of Rights recognizes that law enforcement activities require a balancing of equities, for example, between the protection of sensitive law enforcement information and the principle of notice; as such, notice may not be appropriate, or may need to be adjusted to protect sources, methods, and other law enforcement equities. Even in contexts where these principles may not apply in whole or in part, federal departments and agencies remain subject to judicial, privacy, and civil liberties oversight as well as existing policies and safeguards that govern automated systems, including, for example, Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020).

This white paper recognizes that national security (which includes certain law enforcement and homeland security activities) and defense activities are of increased sensitivity and interest to our nation’s adversaries and are often subject to special requirements, such as those governing classified information and other protected data. Such activities require alternative, compatible safeguards through existing policies that govern automated systems and AI, such as the Department of Defense (DOD) AI Ethical Principles and Responsible AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles and Framework. The implementation of these policies to national security and defense activities can be informed by the Blueprint for an AI Bill of Rights where feasible.

The Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or defense, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person, nor does it constitute a waiver of sovereign immunity.

Copyright Information

This document is a work of the United States Government and is in the public domain (see 17 U.S.C. §105).
- Loss: MatryoshkaLoss with these parameters:

{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
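Because the model was trained with MatryoshkaLoss over dimensions 768, 512, 256, 128, and 64, embeddings can be truncated to any of those prefix lengths with modest quality loss. A sketch using the library's truncate_dim option; 256 is chosen here only for illustration:

from sentence_transformers import SentenceTransformer

# Any of the trained Matryoshka dimensions (768, 512, 256, 128, 64) is reasonable.
model_256 = SentenceTransformer("rgtlai/ai-policy-ft", truncate_dim=256)
emb = model_256.encode(["algorithmic discrimination protections"])
print(emb.shape)  # (1, 256)

Note that truncated vectors are no longer unit length, so compare them with cosine similarity (the model's default) rather than raw dot products.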
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 5
- multi_dataset_batch_sampler: round_robin
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
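For reproduction, the non-default settings above map onto the sentence-transformers v3 trainer API. A sketch under stated assumptions: the pairs.jsonl file name and loading path are hypothetical, since the training dataset is unnamed and not published:

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Start from the base model named in this card.
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m")

# Hypothetical local file holding the 200 question/passage pairs as
# {"sentence_0": ..., "sentence_1": ...} JSON lines.
train_dataset = load_dataset("json", data_files="pairs.jsonl", split="train")

# MatryoshkaLoss wraps MultipleNegativesRankingLoss, as configured above.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="ai-policy-ft",
    num_train_epochs=5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    multi_dataset_batch_sampler="round_robin",
    # The card also used eval_strategy="steps" with the IR evaluator;
    # omitted here because no eval split is published.
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()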
Training Logs
Epoch | Step | cosine_map@100 |
---|---|---|
1.0 | 13 | 0.7303 |
2.0 | 26 | 0.7356 |
3.0 | 39 | 0.7828 |
3.8462 | 50 | 0.7817 |
4.0 | 52 | 0.7817 |
5.0 | 65 | 0.7983 |
Framework Versions
- Python: 3.11.10
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}