SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
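These properties can also be read back from the loaded model itself. The short sketch below assumes the repository id shown in the Usage section and simply prints each value for confirmation.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("danicafisher/dfisher-sentence-transformer-fine-tuned2")
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384
print(model.similarity_fn_name)                  # "cosine"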
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
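For readers who want to see what the Transformer → Pooling → Normalize stack does step by step, here is a rough, illustrative equivalent written directly against the base checkpoint with 🤗 Transformers (mean pooling over non-padding tokens, followed by L2 normalization). This is a sketch for explanation only, not how the model is normally used.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
encoder = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

encoded = tokenizer(["example sentence"], padding=True, truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**encoded).last_hidden_state  # (batch, seq_len, 384)

# Pooling: mean over non-padding tokens (pooling_mode_mean_tokens=True)
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# Normalize: unit-length vectors, so dot product equals cosine similarity
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print(sentence_embeddings.shape)  # torch.Size([1, 384])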
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("danicafisher/dfisher-sentence-transformer-fine-tuned2")
# Run inference
sentences = [
'What is the title of the publication related to Artificial Intelligence Risk Management by NIST?',
'NIST Trustworthy and Responsible AI \nNIST AI 600-1 \nArtificial Intelligence Risk Management \nFramework: Generative Artificial \nIntelligence Profile \n \n \n \nThis publication is available free of charge from: \nhttps://doi.org/10.6028/NIST.AI.600-1',
'HUMAN ALTERNATIVES, \nCONSIDERATION, AND \nFALLBACK \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAn automated system should provide demonstrably effective mechanisms to opt out in favor of a human alterna\xad\ntive, where appropriate, as well as timely human consideration and remedy by a fallback system, with additional \nhuman oversight and safeguards for systems used in sensitive domains, and with training and assessment for any \nhuman-based portions of the system to ensure effectiveness. \nProvide a mechanism to conveniently opt out from automated systems in favor of a human \nalternative, where appropriate \nBrief, clear, accessible notice and instructions. Those impacted by an automated system should be \ngiven a brief, clear notice that they are entitled to opt-out, along with clear instructions for how to opt-out. \nInstructions should be provided in an accessible form and should be easily findable by those impacted by the \nautomated system. The brevity, clarity, and accessibility of the notice and instructions should be assessed (e.g., \nvia user experience research). \nHuman alternatives provided when appropriate. In many scenarios, there is a reasonable expectation \nof human involvement in attaining rights, opportunities, or access. When automated systems make up part of \nthe attainment process, alternative timely human-driven processes should be provided. The use of a human \nalternative should be triggered by an opt-out process. \nTimely and not burdensome human alternative. Opting out should be timely and not unreasonably \nburdensome in both the process of requesting to opt-out and the human-driven alternative provided. \nProvide timely human consideration and remedy by a fallback and escalation system in the \nevent that an automated system fails, produces error, or you would like to appeal or con\xad\ntest its impacts on you \nProportionate. The availability of human consideration and fallback, along with associated training and \nsafeguards against human bias, should be proportionate to the potential of the automated system to meaning\xad\nfully impact rights, opportunities, or access. Automated systems that have greater control over outcomes, \nprovide input to high-stakes decisions, relate to sensitive domains, or otherwise have a greater potential to \nmeaningfully impact rights, opportunities, or access should have greater availability (e.g., staffing) and over\xad\nsight of human consideration and fallback mechanisms. \nAccessible. Mechanisms for human consideration and fallback, whether in-person, on paper, by phone, or \notherwise provided, should be easy to find and use. These mechanisms should be tested to ensure that users \nwho have trouble with the automated system are able to use human consideration and fallback, with the under\xad\nstanding that it may be these users who are most likely to need the human assistance. Similarly, it should be \ntested to ensure that users with disabilities are able to find and use human consideration and fallback and also \nrequest reasonable accommodations or modifications. \nConvenient. Mechanisms for human consideration and fallback should not be unreasonably burdensome as \ncompared to the automated system’s equivalent. \n49',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
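Beyond pairwise similarity, the embeddings can be used directly for semantic search. The following is a small sketch with a made-up corpus and query (not taken from the training data), using the ranking helper from sentence_transformers.util.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("danicafisher/dfisher-sentence-transformer-fine-tuned2")

# Illustrative corpus and query
corpus = [
    "NIST AI 600-1 is the Generative AI profile of the AI Risk Management Framework.",
    "Automated systems should provide a human alternative and fallback where appropriate.",
]
query = "Which NIST publication covers generative AI risk management?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank the corpus by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), corpus[hit["corpus_id"]])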
Training Details
Training Dataset
Unnamed Dataset
- Size: 180 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 180 samples:
|         | sentence_0 | sentence_1 |
|---------|------------|------------|
| type    | string     | string     |
| details | min: 12 tokens, mean: 22.28 tokens, max: 36 tokens | min: 21 tokens, mean: 241.8 tokens, max: 256 tokens |
- Samples (three examples of sentence_0 / sentence_1 pairs):

sentence_0: What concerns have been raised regarding the use of facial recognition technology in public housing?
sentence_1:
65. See, e.g., Scott Ikeda. Major Data Broker Exposes 235 Million Social Media Profiles in Data Lead: Info
Appears to Have Been Scraped Without Permission. CPO Magazine. Aug. 28, 2020. https://
www.cpomagazine.com/cyber-security/major-data-broker-exposes-235-million-social-media-profiles
in-data-leak/; Lily Hay Newman. 1.2 Billion Records Found Exposed Online in a Single Server. WIRED,
Nov. 22, 2019. https://www.wired.com/story/billion-records-exposed-online/
66. Lola Fadulu. Facial Recognition Technology in Public Housing Prompts Backlash. New York Times.
Sept. 24, 2019.
https://www.nytimes.com/2019/09/24/us/politics/facial-recognition-technology-housing.html
67. Jo Constantz. ‘They Were Spying On Us’: Amazon, Walmart, Use Surveillance Technology to Bust
Unions. Newsweek. Dec. 13, 2021.
https://www.newsweek.com/they-were-spying-us-amazon-walmart-use-surveillance-technology-bust
unions-1658603
68. See, e.g., enforcement actions by the FTC against the photo storage app Everalbaum
(https://www.ftc.gov/legal-library/browse/cases-proceedings/192-3172-everalbum-inc-matter), and
against Weight Watchers and their subsidiary Kurbo
(https://www.ftc.gov/legal-library/browse/cases-proceedings/1923228-weight-watchersww)
69. See, e.g., HIPAA, Pub. L 104-191 (1996); Fair Debt Collection Practices Act (FDCPA), Pub. L. 95-109
(1977); Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. § 1232g), Children's Online
Privacy Protection Act of 1998, 15 U.S.C. 6501–6505, and Confidential Information Protection and
Statistical Efficiency Act (CIPSEA) (116 Stat. 2899)
70. Marshall Allen. You Snooze, You Lose: Insurers Make The Old Adage Literally True. ProPublica. Nov.
21, 2018.
https://www.propublica.org/article/you-snooze-you-lose-insurers-make-the-old-adage-literally-true
71. Charles Duhigg. How Companies Learn Your Secrets. The New York Times. Feb. 16, 2012.
https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html
72. Jack Gillum and Jeff Kao. Aggression Detectors: The Unproven, Invasive Surveillance Technology
Schools are Using to Monitor Students. ProPublica. Jun. 25, 2019.
https://features.propublica.org/aggression-detector/the-unproven-invasive-surveillance-technology
schools-are-using-to-monitor-students/
73. Drew Harwell. Cheating-detection companies made millions during the pandemic. Now students are
fighting back. Washington Post. Nov. 12, 2020.
https://www.washingtonpost.com/technology/2020/11/12/test-monitoring-student-revolt/
74. See, e.g., Heather Morrison. Virtual Testing Puts Disabled Students at a Disadvantage. Government
Technology. May 24, 2022.
https://www.govtech.com/education/k-12/virtual-testing-puts-disabled-students-at-a-disadvantage;
Lydia X. Z. Brown, Ridhi Shetty, Matt Scherer, and Andrew Crawford. Ableism And Disability
Discrimination In New Surveillance Technologies: How new surveillance technologies in education,
policing, health care, and the workplace disproportionately harm disabled people. Center for Democracy
and Technology Report. May 24, 2022.
https://cdt.org/insights/ableism-and-disability-discrimination-in-new-surveillance-technologies-how
new-surveillance-technologies-in-education-policing-health-care-and-the-workplace
disproportionately-harm-disabled-people/
69

sentence_0: What are the potential consequences of automated systems making decisions without providing notice or explanations to affected individuals?
sentence_1:
NOTICE &
EXPLANATION
WHY THIS PRINCIPLE IS IMPORTANT
This section provides a brief summary of the problems which the principle seeks to address and protect
against, including illustrative examples.
Automated systems now determine opportunities, from employment to credit, and directly shape the American
public’s experiences, from the courtroom to online classrooms, in ways that profoundly impact people’s lives. But this
expansive impact is not always visible. An applicant might not know whether a person rejected their resume or a
hiring algorithm moved them to the bottom of the list. A defendant in the courtroom might not know if a judge deny
ing their bail is informed by an automated system that labeled them “high risk.” From correcting errors to contesting
decisions, people are often denied the knowledge they need to address the impact of automated systems on their lives.
Notice and explanations also serve an important safety and efficacy purpose, allowing experts to verify the reasonable
ness of a recommendation before enacting it.
In order to guard against potential harms, the American public needs to know if an automated system is being used.
Clear, brief, and understandable notice is a prerequisite for achieving the other protections in this framework. Like
wise, the public is often unable to ascertain how or why an automated system has made a decision or contributed to a
particular outcome. The decision-making processes of automated systems tend to be opaque, complex, and, therefore,
unaccountable, whether by design or by omission. These factors can make explanations both more challenging and
more important, and should not be used as a pretext to avoid explaining important decisions to the people impacted
by those choices. In the context of automated systems, clear and valid explanations should be recognized as a baseline
requirement.
Providing notice has long been a standard practice, and in many cases is a legal requirement, when, for example,
making a video recording of someone (outside of a law enforcement or national security context). In some cases, such
as credit, lenders are required to provide notice and explanation to consumers. Techniques used to automate the
process of explaining such systems are under active research and improvement and such explanations can take many
forms. Innovative companies and researchers are rising to the challenge and creating and deploying explanatory
systems that can help the public better understand decisions that impact them.
While notice and explanation requirements are already in place in some sectors or situations, the American public
deserve to know consistently and across sectors if an automated system is being used in a way that impacts their rights,
opportunities, or access. This knowledge should provide confidence in how the public is being treated, and trust in the
validity and reasonable use of automated systems.
•
A lawyer representing an older client with disabilities who had been cut off from Medicaid-funded home
health-care assistance couldn't determine why, especially since the decision went against historical access
practices. In a court hearing, the lawyer learned from a witness that the state in which the older client
lived had recently adopted a new algorithm to determine eligibility.83 The lack of a timely explanation made it
harder to understand and contest the decision.
•
A formal child welfare investigation is opened against a parent based on an algorithm and without the parent
ever being notified that data was being collected and used as part of an algorithmic child maltreatment
risk assessment.84 The lack of notice or an explanation makes it harder for those performing child
maltreatment assessments to validate the risk assessment and denies parents knowledge that could help them
contest a decision.
41

sentence_0: How has the Supreme Court's decision to overturn Roe v. Wade been addressed by President Biden?
sentence_1:
ENDNOTES
1.The Executive Order On Advancing Racial Equity and Support for Underserved Communities Through the
Federal Government. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive
order-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/
2. The White House. Remarks by President Biden on the Supreme Court Decision to Overturn Roe v. Wade. Jun.
24, 2022. https://www.whitehouse.gov/briefing-room/speeches-remarks/2022/06/24/remarks-by-president
biden-on-the-supreme-court-decision-to-overturn-roe-v-wade/
3. The White House. Join the Effort to Create A Bill of Rights for an Automated Society. Nov. 10, 2021. https://
www.whitehouse.gov/ostp/news-updates/2021/11/10/join-the-effort-to-create-a-bill-of-rights-for-an
automated-society/
4. U.S. Dept. of Health, Educ. & Welfare, Report of the Sec’y’s Advisory Comm. on Automated Pers. Data Sys.,
Records, Computers, and the Rights of Citizens (July 1973). https://www.justice.gov/opcl/docs/rec-com
rights.pdf.
5. See, e.g., Office of Mgmt. & Budget, Exec. Office of the President, Circular A-130, Managing Information as a
Strategic Resource, app. II § 3 (July 28, 2016); Org. of Econ. Co-Operation & Dev., Revision of the
Recommendation of the Council Concerning Guidelines Governing the Protection of Privacy and Transborder
Flows of Personal Data, Annex Part Two (June 20, 2013). https://one.oecd.org/document/C(2013)79/en/pdf.
6. Andrew Wong et al. External validation of a widely implemented proprietary sepsis prediction model in
hospitalized patients. JAMA Intern Med. 2021; 181(8):1065-1070. doi:10.1001/jamainternmed.2021.2626
7. Jessica Guynn. Facebook while black: Users call it getting 'Zucked,' say talking about racism is censored as hate
speech. USA Today. Apr. 24, 2019. https://www.usatoday.com/story/news/2019/04/24/facebook-while-black
zucked-users-say-they-get-blocked-racism-discussion/2859593002/
8. See, e.g., Michael Levitt. AirTags are being used to track people and cars. Here's what is being done about it.
NPR. Feb. 18, 2022. https://www.npr.org/2022/02/18/1080944193/apple-airtags-theft-stalking-privacy-tech;
Samantha Cole. Police Records Show Women Are Being Stalked With Apple AirTags Across the Country.
Motherboard. Apr. 6, 2022. https://www.vice.com/en/article/y3vj3y/apple-airtags-police-reports-stalking
harassment
9. Kristian Lum and William Isaac. To Predict and Serve? Significance. Vol. 13, No. 5, p. 14-19. Oct. 7, 2016.
https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016.00960.x; Aaron Sankin, Dhruv Mehrotra,
Surya Mattu, and Annie Gilbertson. Crime Prediction Software Promised to Be Free of Biases. New Data Shows
It Perpetuates Them. The Markup and Gizmodo. Dec. 2, 2021. https://themarkup.org/prediction
bias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates
them
10. Samantha Cole. This Horrifying App Undresses a Photo of Any Woman With a Single Click. Motherboard.
June 26, 2019. https://www.vice.com/en/article/kzm59x/deepnude-app-creates-fake-nudes-of-any-woman
11. Lauren Kaori Gurley. Amazon’s AI Cameras Are Punishing Drivers for Mistakes They Didn’t Make.
Motherboard. Sep. 20, 2021. https://www.vice.com/en/article/88npjv/amazons-ai-cameras-are-punishing
drivers-for-mistakes-they-didnt-make
63

- Loss: MultipleNegativesRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim" }
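The training script itself is not included in this card. The following is a minimal sketch of how a setup matching the dataset, loss, and non-default hyperparameters listed below could look with Sentence Transformers 3.x. The train_pairs dataset, its single example row, and the output_dir name are illustrative placeholders, not the actual 180-sample data.

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Placeholder (sentence_0, sentence_1) pairs standing in for the 180 question/passage samples
train_pairs = Dataset.from_dict({
    "sentence_0": ["What is the title of the publication related to Artificial Intelligence Risk Management by NIST?"],
    "sentence_1": ["NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile"],
})

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = MultipleNegativesRankingLoss(model, scale=20.0)  # cosine similarity is the default similarity_fct

args = SentenceTransformerTrainingArguments(
    output_dir="dfisher-sentence-transformer-fine-tuned2",  # illustrative output directory
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_pairs, loss=loss)
trainer.train()

With in-batch negatives, the sentence_1 passages from the other pairs in a batch act as negatives for a given sentence_0 question, which is why this loss needs only positive pairs.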
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- multi_dataset_batch_sampler: round_robin
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- eval_use_gather_object: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}