# Model Details

## Model Description

- Developed by: Princeton NLP
- Model type: Feature Extraction
- Related Models:
  - Parent Model: RoBERTa-large

# Uses

## Direct Use

This model can be used for the task of feature extraction, i.e., producing sentence embeddings for downstream use.

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

# Training Details

## Training Data

The model creators note in the GitHub repository:

We train unsupervised SimCSE on 10^6 randomly sampled sentences from English Wikipedia, and train supervised SimCSE on the combination of MNLI and SNLI datasets (314k).
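For context, the supervised objective described in the SimCSE paper contrasts each premise with its entailment hypothesis (positive) and its contradiction hypothesis (hard negative). A sketch of the per-example loss in the paper's notation, with h the sentence embeddings, sim cosine similarity, τ a temperature, and N the batch size (see the paper for the exact formulation):

$$
\ell_i = -\log \frac{e^{\,\mathrm{sim}(h_i,\, h_i^{+})/\tau}}{\sum_{j=1}^{N} \left( e^{\,\mathrm{sim}(h_i,\, h_j^{+})/\tau} + e^{\,\mathrm{sim}(h_i,\, h_j^{-})/\tau} \right)}
$$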

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

The model creators note in the associated paper:

Our evaluation code for sentence embeddings is based on a modified version of SentEval. It evaluates sentence embeddings on semantic textual similarity (STS) tasks and downstream transfer tasks. For STS tasks, the evaluation uses the "all" setting and reports Spearman's correlation. See the associated paper (Appendix B) for evaluation details.
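The full protocol is implemented in the SentEval-based evaluation code. As an illustrative sketch only (the embeddings and gold scores below are hypothetical placeholders, not the actual evaluation pipeline), the scoring step of comparing pairwise cosine similarities against gold ratings with Spearman's correlation could look like this:

```python
import numpy as np
from scipy.stats import spearmanr

def sts_spearman(emb_a: np.ndarray, emb_b: np.ndarray, gold: np.ndarray) -> float:
    """Score sentence-pair embeddings against gold STS ratings.

    emb_a, emb_b: (n_pairs, dim) embeddings of the two sentences in each pair.
    gold:         (n_pairs,) human similarity ratings (e.g. 0-5).
    """
    # Cosine similarity for each pair.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cos = (a * b).sum(axis=1)
    # Spearman's rank correlation between predicted and gold similarities.
    return spearmanr(cos, gold).correlation
```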

# Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

# Citation

BibTeX:

```bibtex
@inproceedings{gao2021simcse,
  title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
  author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
  booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
  year={2021}
}
```


# More Information [optional]

If you have any questions related to the code or the paper, feel free to email Tianyu (tianyug@cs.princeton.edu) and Xingcheng (yxc18@mails.tsinghua.edu.cn). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please try to specify the problem with details so we can help you better and quicker!

# Model Card Authors [optional]

Princeton NLP group in collaboration with Ezi Ozoani and the Hugging Face team

# How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/sup-simcse-roberta-large")
model = AutoModel.from_pretrained("princeton-nlp/sup-simcse-roberta-large")
```
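
Continuing from the snippet above, a minimal sketch of embedding sentences and comparing them. The pooling choice and the example sentences are assumptions for illustration; see the SimCSE repository for the authors' recommended pooling setting.

```python
import torch

# Arbitrary placeholder sentences.
sentences = [
    "A man is playing a guitar.",
    "Someone is playing an instrument.",
]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    # Pooling choice is an assumption here: the pooler output ([CLS] passed through
    # the MLP head) is one common option for supervised SimCSE checkpoints.
    embeddings = outputs.pooler_output

# Cosine similarity between the two sentence embeddings.
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```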