---
language: en
thumbnail: https://github.com/junnyu
tags:
- pytorch
- electra
license: mit
datasets:
- openwebtext
---
# 1. An ELECTRA-small model I trained on the openwebtext dataset

# 2. Reproduced results (dev dataset)
|Model|CoLA|SST|MRPC|STS|QQP|MNLI|QNLI|RTE|Avg.|
|---|---|---|---|---|---|---|---|---|---|
|Metrics|MCC|Acc|Acc|Spearman|Acc|Acc|Acc|Acc||
|ELECTRA-Small-OWT(original)|56.8|88.3|87.4|86.8|88.3|78.9|87.9|68.5|80.36|
|**ELECTRA-Small-OWT (this)**| 55.82 |89.67|87.0|86.96|89.28|80.08|87.50|66.07|80.30|
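
The scores above come from fine-tuning this checkpoint on each GLUE task and evaluating on the dev sets. As a minimal sketch (not the exact script or hyperparameters behind the table; the epoch count and output directory below are illustrative assumptions), fine-tuning on one task such as MRPC with the `transformers` Trainer might look like this:

```python
import numpy as np
from datasets import load_dataset
from transformers import (
    ElectraForSequenceClassification,
    ElectraTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = ElectraTokenizer.from_pretrained("junnyu/electra_small_discriminator")
model = ElectraForSequenceClassification.from_pretrained(
    "junnyu/electra_small_discriminator", num_labels=2
)

# MRPC: sentence-pair paraphrase classification, evaluated here by accuracy.
raw = load_dataset("glue", "mrpc")

def tokenize(batch):
    return tokenizer(batch["sentence1"], batch["sentence2"],
                     truncation=True, max_length=128)

encoded = raw.map(tokenize, batched=True)

def accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="electra-small-mrpc", num_train_epochs=3),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate())
```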

# 3. Training details
- Dataset: openwebtext
- Training batch size: 256
- Learning rate: 5e-4
- Maximum sequence length (max_seqlen): 128
- Total training steps: 625,000
- GPU: RTX 3090
- Total training time: 2.5 days
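
For reference, the same hyperparameters collected into a plain Python dict; the key names are illustrative only and are not tied to any particular pretraining script's API:

```python
# Pretraining hyperparameters from the list above, as a plain dict.
# Key names are illustrative; they do not match a specific training script.
pretrain_config = {
    "dataset": "openwebtext",
    "train_batch_size": 256,
    "learning_rate": 5e-4,
    "max_seq_length": 128,
    "total_train_steps": 625_000,  # "62.5W" = 62.5 * 10,000 steps
    "hardware": "RTX 3090",
}
```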

# 4. Usage
```python
import torch
from transformers import ElectraModel, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("junnyu/electra_small_discriminator")
model = ElectraModel.from_pretrained("junnyu/electra_small_discriminator")

inputs = tokenizer("Beijing is the capital of China.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
    # Last-layer hidden states: (batch_size, seq_len, hidden_size)
    print(outputs.last_hidden_state.shape)
```
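
Since this checkpoint is the discriminator, it can also be loaded with `ElectraForPreTraining` to score each token as original vs. replaced (a positive logit means the token is predicted to be replaced). A small illustration, where "rained" is deliberately substituted into the sentence:

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("junnyu/electra_small_discriminator")
discriminator = ElectraForPreTraining.from_pretrained(
    "junnyu/electra_small_discriminator"
)

# "rained" stands in for the original word, so the discriminator should flag it.
sentence = "Beijing is the rained of China."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = discriminator(**inputs).logits  # (batch_size, seq_len)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
flags = (logits[0] > 0).long().tolist()  # 1 = predicted "replaced"
print(list(zip(tokens, flags)))
```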