# kwang2049/TSDAE-cqadupstack2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was trained with only the TSDAE objective on cqadupstack in an unsupervised manner. The training procedure of this model:

1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on cqadupstack with the TSDAE objective.

The pooling method is CLS-pooling.
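CLS-pooling means the hidden state of the first ([CLS]) token is taken as the sentence embedding, rather than averaging over all token states. A minimal sketch of the operation on a dummy batch of token embeddings (plain PyTorch, for illustration only):

```python
import torch

# Dummy encoder output: (batch_size, seq_len, hidden_dim),
# e.g. 2 sentences, 16 tokens each, BERT-base width 768
token_embeddings = torch.randn(2, 16, 768)

# CLS-pooling: keep only the hidden state at position 0 ([CLS]) per sentence
sentence_embeddings = token_embeddings[:, 0, :]

print(sentence_embeddings.shape)  # torch.Size([2, 768])
```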

## Usage
A convenient way to use this model is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
Then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models

model_name_or_path = 'kwang2049/TSDAE-cqadupstack2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls')  # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
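`model.encode` returns plain NumPy arrays, so downstream similarity scoring needs no extra machinery. For instance, the two embeddings above can be compared with cosine similarity; the small helper below is our own illustration, not part of the library:

```python
import numpy as np

def cos_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# With the embeddings from above:
# score = cos_sim(sentence_embeddings[0], sentence_embeddings[1])
```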

## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb  # Or git clone and pip install .
python -m useb.downloading all  # Download both training and evaluation data
```
Then run the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on

dataset = 'cqadupstack'
model_name_or_path = 'kwang2049/TSDAE-cqadupstack2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls')  # Note this model uses CLS-pooling

@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
    return torch.Tensor(model.encode(sentences, show_progress_bar=False))

result = run_on(
    dataset,
    semb_fn=semb_fn,
    eval_type='test',
    data_eval_path='data-eval'
)
```

## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.

## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
    title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
    author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
    journal = "arXiv preprint arXiv:2104.06979",
    month = "4",
    year = "2021",
    url = "https://arxiv.org/abs/2104.06979",
}
```