---
language: 
  - en
tags:
- classification
license: "mit"
datasets:
- SetFit/qqp
models:
- microsoft/deberta-v3-base
metrics:
- accuracy
- loss
widget:
- text: How is the life of a math student? Could you describe your own experiences?
  context: Which level of preparation is enough for the exam jlpt5?
  example_title: "Classification"
---

A fine-tuned model based on Microsoft's **DeBERTaV3** base model, trained on **GLUE QQP** (Quora Question Pairs). It detects the linguistic similarity between two questions and classifies whether they are duplicates of each other.
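
A minimal usage sketch, assuming the fine-tuned checkpoint is published on the Hub (the repo id below is a placeholder) and that label `1` corresponds to the duplicate class, as in GLUE QQP:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Placeholder repo id -- substitute the actual Hub path of this fine-tuned checkpoint.
model_id = "your-username/deberta-v3-base-qqp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

q1 = "How is the life of a math student? Could you describe your own experiences?"
q2 = "Which level of preparation is enough for the exam jlpt5?"

# Encode the two questions as a single sentence pair, as in GLUE QQP.
inputs = tokenizer(q1, q2, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Assumption: label 1 = duplicate, label 0 = not duplicate.
print("duplicate" if logits.argmax(dim=-1).item() == 1 else "not duplicate")
```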

## Model Hyperparameters

```python
num_train_epochs=4
per_device_train_batch_size=32
per_device_eval_batch_size=16
learning_rate=2e-5
weight_decay=1e-2
gradient_checkpointing=True
gradient_accumulation_steps=8
```
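
These values map directly onto `transformers.TrainingArguments`; a sketch of that wiring is below, where `output_dir` is an assumed placeholder rather than a value reported by this card.

```python
from transformers import TrainingArguments

# How the hyperparameters above plug into TrainingArguments;
# output_dir is a placeholder, not a value reported by this card.
training_args = TrainingArguments(
    output_dir="deberta-v3-base-qqp",
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    weight_decay=1e-2,
    gradient_checkpointing=True,
    gradient_accumulation_steps=8,
)
```
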
## Model Performance

```json
{"Training Loss": 0.132400,
 "Validation Loss": 0.217410,
 "Validation Accuracy": 0.917969
}
```

## Model Dependencies

```json
{"Main Model": "microsoft/deberta-v3-base",
 "Dataset": "SetFit/qqp"
}
```
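
Both dependencies can be pulled from the Hub to reproduce this setup. The sketch below assumes a binary classification head (`num_labels=2`, duplicate vs. not duplicate), which is not stated explicitly in this card but follows from the QQP task.

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Question pairs mirrored from GLUE QQP.
dataset = load_dataset("SetFit/qqp")

# Base encoder with a fresh classification head.
# num_labels=2 is an assumption (duplicate / not duplicate).
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2
)
```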

## Citation

```bibtex
@inproceedings{he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```