voidism committed
Commit 24c3fc5
Parent: fc6e2e4

update README

Files changed (1): README.md (+205, −0)

---
license: apache-2.0
---
# DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings

[![GitHub Stars](https://img.shields.io/github/stars/voidism/DiffCSE?style=social)](https://github.com/voidism/DiffCSE/)

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)

arXiv link: https://arxiv.org/abs/2204.10298
To be published in [**NAACL 2022**](https://2022.naacl.org/)

Authors:
[Yung-Sung Chuang](https://people.csail.mit.edu/yungsung/),
[Rumen Dangovski](http://super-ms.mit.edu/rumen.html),
[Hongyin Luo](http://people.csail.mit.edu/hyluo/),
[Yang Zhang](https://mitibmwatsonailab.mit.edu/people/yang-zhang/),
[Shiyu Chang](https://code-terminator.github.io/),
[Marin Soljačić](http://www.mit.edu/~soljacic/marin.html),
[Shang-Wen Li](https://swdanielli.github.io/),
[Scott Wen-tau Yih](https://scottyih.org/),
[Yoon Kim](https://people.csail.mit.edu/yoonkim/),
[James Glass](http://groups.csail.mit.edu/sls/people/glass.shtml)

Our code is mainly based on the code of [SimCSE](https://arxiv.org/abs/2104.08821). Please refer to their [repository](https://github.com/princeton-nlp/SimCSE) for more detailed information.

## Overview
![DiffCSE](https://github.com/voidism/DiffCSE/raw/master/diffcse.png)

We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning [(Dangovski et al., 2021)](https://arxiv.org/abs/2111.00899), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks.

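To make the edit operation concrete, here is a minimal sketch (ours, not the actual training code) of the generator step: stochastically mask tokens and sample replacements from an off-the-shelf MLM. The generator name and the 0.30 masking ratio are taken from the training command below.

```python
# Minimal sketch of producing an "edited" sentence: randomly mask ~30% of the
# tokens, then sample replacements from an MLM generator. Not the training code.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
generator = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

sentence = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(sentence, return_tensors="pt")
ids = inputs["input_ids"].clone()

# Stochastically mask non-special tokens (0.30 matches --masking_ratio below).
special = torch.tensor(
    tokenizer.get_special_tokens_mask(ids[0].tolist(), already_has_special_tokens=True)
).bool()
mask = (torch.rand(ids.shape[1]) < 0.30) & ~special
ids[0, mask] = tokenizer.mask_token_id

with torch.no_grad():
    logits = generator(input_ids=ids, attention_mask=inputs["attention_mask"]).logits

# Sample (rather than argmax) replacement tokens at the masked positions.
sampled = torch.multinomial(logits[0, mask].softmax(dim=-1), num_samples=1).squeeze(-1)
edited = ids.clone()
edited[0, mask] = sampled
print(tokenizer.decode(edited[0], skip_special_tokens=True))
```
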
## Setup

[![Python](https://img.shields.io/badge/python-3.9.5-blue?logo=python&logoColor=FED643)](https://www.python.org/downloads/release/python-395/)

### Requirements
* Python 3.9.5

### Install our customized Transformers package
```bash
cd transformers-4.2.1
pip install .
```
> If you have already installed `transformers==4.2.1` via pip, you need to put `modeling_bert.py` into `<your_python_env>/site-packages/transformers/models/bert/modeling_bert.py` and `modeling_roberta.py` into `<your_python_env>/site-packages/transformers/models/roberta/modeling_roberta.py`.
> We modify these two files in the package so that we can perform _conditional_ pretraining tasks with BERT/RoBERTa. If possible, please install our customized Transformers package directly via pip.

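If you take the copy-file route from the note above, one way to find where your installed `transformers` package lives (a convenience sketch of ours, not part of the original instructions):

```python
# Convenience sketch: locate the installed transformers package so you know
# where to place the two modified modeling files. Paths depend on your env.
import os
import transformers

pkg_root = os.path.dirname(transformers.__file__)
print(os.path.join(pkg_root, "models", "bert", "modeling_bert.py"))
print(os.path.join(pkg_root, "models", "roberta", "modeling_roberta.py"))
```
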
### Install other packages
```bash
pip install -r requirements.txt
```

### Download the pretraining dataset
```bash
cd data
bash download_wiki.sh
```

### Download the downstream dataset
```bash
cd SentEval/data/downstream/
bash download_dataset.sh
```

## Training
(The same as `run_diffcse.sh`.)
```bash
python train.py \
    --model_name_or_path bert-base-uncased \
    --generator_name distilbert-base-uncased \
    --train_file data/wiki1m_for_simcse.txt \
    --output_dir <your_output_model_dir> \
    --num_train_epochs 2 \
    --per_device_train_batch_size 64 \
    --learning_rate 7e-6 \
    --max_seq_length 32 \
    --evaluation_strategy steps \
    --metric_for_best_model stsb_spearman \
    --load_best_model_at_end \
    --eval_steps 125 \
    --pooler_type cls \
    --mlp_only_train \
    --overwrite_output_dir \
    --logging_first_step \
    --logging_dir <your_logging_dir> \
    --temp 0.05 \
    --do_train \
    --do_eval \
    --batchnorm \
    --lambda_weight 0.005 \
    --fp16 --masking_ratio 0.30
```

Our new arguments:
* `--lambda_weight`: the lambda coefficient mentioned in Section 3 of our paper (see the sketch after this list).
* `--masking_ratio`: the masking ratio for the MLM generator to randomly replace tokens.
* `--generator_name`: the model name of the generator. For `bert-base-uncased`, we use `distilbert-base-uncased`. For `roberta-base`, we use `distilroberta-base`.

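For intuition, here is a schematic sketch (ours, simplified from Section 3 of the paper; not the actual `train.py`) of how `--lambda_weight` and `--temp` enter the objective: the total loss is a SimCSE-style contrastive loss plus lambda times the conditional replaced-token-detection (RTD) loss from the discriminator.

```python
# Schematic sketch of the combined objective; the real discriminator is
# conditioned on the sentence embedding, which is omitted here.
import torch
import torch.nn.functional as F

def total_loss(z1, z2, rtd_logits, rtd_labels, temp=0.05, lambda_weight=0.005):
    """z1, z2: (batch, dim) embeddings of two views of the same sentences.
    rtd_logits/rtd_labels: per-token predictions/targets of which tokens
    the MLM generator replaced."""
    # InfoNCE over in-batch negatives (SimCSE-style), temperature = --temp.
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / temp
    contrastive = F.cross_entropy(sim, torch.arange(z1.size(0), device=z1.device))
    # Binary replaced-token-detection loss on the edited sentence.
    rtd = F.binary_cross_entropy_with_logits(rtd_logits, rtd_labels.float())
    # --lambda_weight balances the two terms.
    return contrastive + lambda_weight * rtd
```
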
Arguments from [SimCSE](https://github.com/princeton-nlp/SimCSE):
* `--train_file`: Training file path (`data/wiki1m_for_simcse.txt`).
* `--model_name_or_path`: Pre-trained checkpoints to start with, such as BERT-based models (`bert-base-uncased`, `bert-large-uncased`, etc.) and RoBERTa-based models (`roberta-base`, `roberta-large`).
* `--temp`: Temperature for the contrastive loss. We always use `0.05`.
* `--pooler_type`: Pooling method.
* `--mlp_only_train`: For unsupervised SimCSE or DiffCSE, it works better to train the model with the MLP layer but test the model without it. You should use this argument when training unsupervised SimCSE/DiffCSE models.

For the results in our paper, we use an NVIDIA 2080 Ti GPU with CUDA 11.2. Using different types of devices or different versions of CUDA/Python/PyTorch may lead to slightly different performance.

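Once training finishes, you can sanity-check the encoder before running the full evaluation below. This is a generic sketch of ours using plain `transformers` (`<your_output_model_dir>` is the `--output_dir` from the training command); it takes the raw `[CLS]` hidden state, matching the `cls_before_pooler` setting used in evaluation.

```python
# Quick sanity check: embed two related sentences with the trained encoder
# and compare their cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("<your_output_model_dir>")
model = AutoModel.from_pretrained("<your_output_model_dir>")

def embed(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Take the raw [CLS] hidden state (cls_before_pooler).
        return model(**inputs).last_hidden_state[:, 0]

a = embed("A man is playing guitar.")
b = embed("Someone plays an instrument.")
print(torch.nn.functional.cosine_similarity(a, b).item())
```
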
## Evaluation

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)

We provide a simple Colab notebook to reproduce our results easily. We can also run the commands below for evaluation:

```bash
python evaluation.py \
    --model_name_or_path <your_output_model_dir> \
    --pooler cls_before_pooler \
    --task_set <sts|transfer|full> \
    --mode test
```

To evaluate our pretrained DiffCSE checkpoints, we can use the following scripts:

### BERT
#### STS

```bash
python evaluation.py \
    --model_name_or_path voidism/diffcse-bert-base-uncased-sts \
    --pooler cls_before_pooler \
    --task_set sts \
    --mode test
```

#### Transfer Tasks

```bash
python evaluation.py \
    --model_name_or_path voidism/diffcse-bert-base-uncased-trans \
    --pooler cls_before_pooler \
    --task_set transfer \
    --mode test
```

### RoBERTa
#### STS

```bash
python evaluation.py \
    --model_name_or_path voidism/diffcse-roberta-base-sts \
    --pooler cls_before_pooler \
    --task_set sts \
    --mode test
```

#### Transfer Tasks

```bash
python evaluation.py \
    --model_name_or_path voidism/diffcse-roberta-base-trans \
    --pooler cls_before_pooler \
    --task_set transfer \
    --mode test
```

For more detailed information, please check [SimCSE's GitHub repo](https://github.com/princeton-nlp/SimCSE).

## Pretrained models

[![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97-Models-yellow)](https://huggingface.co/voidism)

* DiffCSE-BERT-base (STS): https://huggingface.co/voidism/diffcse-bert-base-uncased-sts
* DiffCSE-BERT-base (transfer tasks): https://huggingface.co/voidism/diffcse-bert-base-uncased-trans
* DiffCSE-RoBERTa-base (STS): https://huggingface.co/voidism/diffcse-roberta-base-sts
* DiffCSE-RoBERTa-base (transfer tasks): https://huggingface.co/voidism/diffcse-roberta-base-trans

We can load the models using the API provided by [SimCSE](https://github.com/princeton-nlp/SimCSE).
See [Getting Started](https://github.com/princeton-nlp/SimCSE#getting-started) for more information.

```python
from diffcse import DiffCSE

model_bert_sts = DiffCSE("voidism/diffcse-bert-base-uncased-sts")
model_bert_trans = DiffCSE("voidism/diffcse-bert-base-uncased-trans")
model_roberta_sts = DiffCSE("voidism/diffcse-roberta-base-sts")
model_roberta_trans = DiffCSE("voidism/diffcse-roberta-base-trans")
```

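Since the `diffcse` wrapper follows SimCSE's tool API, a typical workflow looks like the sketch below. The method names (`encode`, `similarity`, `build_index`, `search`) are taken from SimCSE's Getting Started guide and assumed to carry over; treat this as a sketch rather than a verified `diffcse` reference.

```python
# Usage sketch, assuming the diffcse wrapper mirrors SimCSE's API.
from diffcse import DiffCSE

model = DiffCSE("voidism/diffcse-bert-base-uncased-sts")

# Embed sentences and score the similarity of a pair.
embeddings = model.encode(["A woman is reading.", "A man is playing guitar."])
print(model.similarity("A woman is reading.", "A woman is making a photo."))

# Build an index over a small corpus and retrieve the nearest sentences.
sentences = [
    "A dog runs in the park.",
    "The stock market fell today.",
    "A puppy plays outside.",
]
model.build_index(sentences)
print(model.search("A dog is playing outdoors."))
```
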
## Citations

[![DOI](https://img.shields.io/badge/DOI-10.48550/arXiv.2204.10298-green?color=FF8000?color=009922)](https://doi.org/10.48550/arXiv.2204.10298)

Please cite our paper and the SimCSE paper if they are helpful to your work!

```bibtex
@inproceedings{chuang2022diffcse,
    title={{DiffCSE}: Difference-based Contrastive Learning for Sentence Embeddings},
    author={Chuang, Yung-Sung and Dangovski, Rumen and Luo, Hongyin and Zhang, Yang and Chang, Shiyu and Soljacic, Marin and Li, Shang-Wen and Yih, Wen-tau and Kim, Yoon and Glass, James},
    booktitle={Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
    year={2022}
}

@inproceedings{gao2021simcse,
    title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
    author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
    booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
    year={2021}
}
```