princeton-nlp nazneen committed on
Commit
218baf6
1 Parent(s): 6504ae0

model documentation (#3)


- model documentation (239ed6a6ffa0ab1adc7e50ef162c1dd8cb95c951)


Co-authored-by: Nazneen Rajani <nazneen@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +167 -0
README.md ADDED
---
tags:
- feature-extraction
- bert
---

# Model Card for unsup-simcse-bert-base-uncased

# Model Details

## Model Description

More information needed

- **Developed by:** Princeton NLP group
- **Shared by [Optional]:** Hugging Face
- **Model type:** Feature Extraction
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Related Models:**
  - **Parent Model:** BERT
- **Resources for more information:**
  - [GitHub Repo](https://github.com/princeton-nlp/SimCSE)
  - [Model Space](https://huggingface.co/spaces/mteb/leaderboard)
  - [Associated Paper](https://arxiv.org/abs/2104.08821)

# Uses

## Direct Use

This model can be used for the task of feature extraction.

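As a quick illustration, the snippet below extracts token-level features with the `transformers` feature-extraction pipeline. This is a minimal sketch rather than code from the SimCSE repository, and the example sentence is illustrative.

```python
from transformers import pipeline

# "feature-extraction" returns raw hidden states rather than
# task-specific predictions.
extractor = pipeline(
    "feature-extraction",
    model="princeton-nlp/unsup-simcse-bert-base-uncased",
)

features = extractor("A kid is skateboarding.")
# features has shape [1, num_tokens, hidden_size]; for BERT-base,
# hidden_size is 768.
print(len(features[0]), len(features[0][0]))
```
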
## Downstream Use [Optional]

More information needed

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

# Training Details

## Training Data

The model creators note in the [GitHub repository](https://github.com/princeton-nlp/SimCSE/blob/main/README.md):

> We train unsupervised SimCSE on 10^6 randomly sampled sentences from English Wikipedia, and train supervised SimCSE on the combination of MNLI and SNLI datasets (314k).

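For context, the unsupervised objective described in the associated paper passes each sentence through the encoder twice; because dropout is active during training, the two passes yield slightly different embeddings that form a positive pair, while the other sentences in the batch serve as negatives. The sketch below illustrates that contrastive loss. The temperature of 0.05 follows the paper, but the function itself is an illustrative simplification (the paper adds an MLP head over the `[CLS]` representation during training), not code from the SimCSE repository.

```python
import torch
import torch.nn.functional as F

def unsup_simcse_loss(encoder, batch_inputs, temperature=0.05):
    """Contrastive loss with dropout noise as the only augmentation.

    `encoder` must be in train mode so dropout is active; each forward
    pass then produces a slightly different embedding of the same input.
    """
    z1 = encoder(**batch_inputs).pooler_output  # first pass,  (N, H)
    z2 = encoder(**batch_inputs).pooler_output  # second pass, (N, H)

    # Pairwise cosine similarities between the two views, scaled by the
    # temperature: shape (N, N).
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / temperature

    # The positive for sentence i is its own second encoding (the
    # diagonal); every other sentence in the batch is a negative.
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)
```
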
## Training Procedure

### Preprocessing

More information needed

### Speeds, Sizes, Times

More information needed

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

The model creators note in the [associated paper](https://arxiv.org/pdf/2104.08821.pdf):

> Our evaluation code for sentence embeddings is based on a modified version of [SentEval](https://github.com/facebookresearch/SentEval). It evaluates sentence embeddings on semantic textual similarity (STS) tasks and downstream transfer tasks. For STS tasks, our evaluation takes the "all" setting, and report Spearman's correlation. See [associated paper](https://arxiv.org/pdf/2104.08821.pdf) (Appendix B) for evaluation details.

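For readers unfamiliar with SentEval, evaluating a frozen sentence encoder follows roughly the pattern below. This is a hedged sketch of SentEval's standard `prepare`/`batcher` interface; the data path and task list are illustrative, `encode` is a hypothetical helper wrapping the model, and the SimCSE repository ships its own modified evaluation script.

```python
import numpy as np
import senteval  # https://github.com/facebookresearch/SentEval

def prepare(params, samples):
    # Nothing to fit for a frozen encoder.
    return

def batcher(params, batch):
    # `batch` is a list of tokenized sentences (lists of words).
    sentences = [" ".join(words) for words in batch]
    return np.asarray(encode(sentences))  # encode(): hypothetical wrapper around the model

params = {"task_path": "path/to/SentEval/data", "usepytorch": True, "kfold": 10}
se = senteval.engine.SE(params, batcher, prepare)
results = se.eval(["STSBenchmark", "SICKRelatedness"])
```
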
### Factors

More information needed

### Metrics

More information needed

## Results

More information needed

# Model Examination

The model creators note in the [associated paper](https://arxiv.org/pdf/2104.08821.pdf):

> **Uniformity and alignment.**
> We also observe that (1) though pre-trained embeddings have good alignment, their uniformity is poor (i.e., the embeddings are highly anisotropic); (2) post-processing methods like BERT-flow and BERT-whitening greatly improve uniformity but also suffer a degeneration in alignment; (3) unsupervised SimCSE effectively improves uniformity of pre-trained embeddings whereas keeping a good alignment; (4) incorporating supervised data in SimCSE further amends alignment.

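Alignment and uniformity are the two embedding-quality measures of Wang and Isola (2020) that the paper uses for this analysis: alignment is the expected distance between embeddings of positive pairs, and uniformity measures how evenly embeddings spread over the unit hypersphere. A minimal sketch of the standard definitions (following the reference formulation; variable names are illustrative):

```python
import torch

def align_loss(x, y, alpha=2):
    # x, y: L2-normalized embeddings of positive pairs, shape (N, H).
    # Lower is better: positive pairs should sit close together.
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    # x: L2-normalized embeddings, shape (N, H).
    # Lower is better: embeddings should spread uniformly on the sphere.
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
```
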
# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Nvidia 3090 GPUs with CUDA 11
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications [optional]

## Model Architecture and Objective

More information needed

## Compute Infrastructure

More information needed

### Hardware

More information needed

### Software

More information needed

# Citation

**BibTeX:**

```bibtex
@inproceedings{gao2021simcse,
  title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
  author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
  booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
  year={2021}
}
```

# Glossary [optional]

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]

Princeton NLP group in collaboration with Ezi Ozoani and the Hugging Face team

# Model Card Contact

If you have any questions related to the code or the paper, feel free to email Tianyu (`tianyug@cs.princeton.edu`) and Xingcheng (`yxc18@mails.tsinghua.edu.cn`). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please describe the problem in detail so we can help you better and more quickly!

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModel

# Load the SimCSE tokenizer and encoder from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/unsup-simcse-bert-base-uncased")
model = AutoModel.from_pretrained("princeton-nlp/unsup-simcse-bert-base-uncased")
```

</details>
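
Once loaded, the model can be used to embed sentences and compare them. The sketch below follows the embedding-and-similarity pattern from the SimCSE repository's usage example; the sentences are illustrative, and using `pooler_output` as the sentence embedding matches that example.

```python
import torch

sentences = [
    "There's a kid on a skateboard.",
    "A kid is skateboarding.",
]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

# Embed both sentences; no gradients are needed for inference.
with torch.no_grad():
    embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output

# Cosine similarity between the two sentence embeddings.
similarity = torch.nn.functional.cosine_similarity(
    embeddings[0], embeddings[1], dim=0
)
print(f"Cosine similarity: {similarity.item():.4f}")
```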