Writing logs to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:mrpc-2020-06-29-12:04/log.txt.
Loading nlp dataset glue, subset mrpc, split train.
Loading nlp dataset glue, subset mrpc, split validation.
Loaded dataset. Found: 2 labels: ([0, 1])
Loading transformers AutoModelForSequenceClassification: bert-base-uncased
Tokenizing training data. (len: 3668)
Tokenizing eval data (len: 408)
Loaded data and tokenized in 12.476295709609985s
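For reference, the loading and tokenization steps above use the HuggingFace nlp library (since renamed to datasets) and transformers under the hood; TextAttack's own AutoTokenizer wrapper sits on top of the transformers tokenizer. A minimal sketch of the equivalent calls, assuming the 2020-era nlp API:

    import nlp
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # GLUE MRPC: the same splits the log reports (3,668 train / 408 validation examples).
    train_data = nlp.load_dataset("glue", "mrpc", split="train")
    eval_data = nlp.load_dataset("glue", "mrpc", split="validation")

    # MRPC is binary paraphrase classification, hence the 2 labels ([0, 1]) found above.
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")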
Training model across 4 GPUs
***** Running training *****
	Num examples = 3668
	Batch size = 16
	Max sequence length = 256
	Num steps = 1145
	Num epochs = 5
	Learning rate = 2e-05
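Note that the step count is consistent with a global batch of 16: floor(3668 / 16) = 229 optimizer steps per epoch, and 229 x 5 epochs = 1145, which suggests the batch is split across the 4 GPUs rather than multiplied by them. A run like this would be launched with TextAttack's train command; the flag names below are assumptions inferred from the logged values, not verified spellings (check textattack train --help):

    textattack train \
        --model bert-base-uncased \
        --dataset glue:mrpc \
        --batch-size 16 \
        --max-length 256 \
        --epochs 5 \
        --learning-rate 2e-5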
Eval accuracy: 84.31372549019608%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:mrpc-2020-06-29-12:04/.
Eval accuracy: 87.74509803921569%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:mrpc-2020-06-29-12:04/.
Eval accuracy: 86.02941176470588%
Eval accuracy: 85.7843137254902%
Eval accuracy: 85.04901960784314%
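The save pattern across the five epochs above (checkpoints written only after epochs 1 and 2, where eval accuracy improved on the previous best) is standard best-model checkpointing. A minimal sketch of that logic; train_one_epoch, evaluate, and save_model are hypothetical helpers, not TextAttack functions:

    best_acc = 0.0
    for epoch in range(num_epochs):
        train_one_epoch(model)                  # hypothetical: one pass over the training data
        acc = evaluate(model, eval_dataloader)  # hypothetical: returns eval accuracy in percent
        print(f"Eval accuracy: {acc}%")
        if acc > best_acc:                      # save only on improvement, as in the log
            best_acc = acc
            save_model(model, output_dir)       # hypothetical: writes weights/config to disk
            print(f"Best acc found. Saved model to {output_dir}.")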
Saved tokenizer <textattack.models.tokenizers.auto_tokenizer.AutoTokenizer object at 0x7f3a1d1a5d00> to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:mrpc-2020-06-29-12:04/.
Wrote README to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:mrpc-2020-06-29-12:04/README.md.
Wrote training args to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:mrpc-2020-06-29-12:04/train_args.json.
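Since the output directory ends up holding standard transformers weight/config files plus the tokenizer, the trained model can be reloaded directly from it; a sketch, assuming the usual saved-file layout:

    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    output_dir = "/p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:mrpc-2020-06-29-12:04/"
    model = AutoModelForSequenceClassification.from_pretrained(output_dir)
    tokenizer = AutoTokenizer.from_pretrained(output_dir)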