system HF staff committed on
Commit
e2a66d4
1 Parent(s): d159715

Update log.txt

Files changed (1)
  1. log.txt +44 -0
log.txt ADDED
@@ -0,0 +1,44 @@
+ Writing logs to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/log.txt.
+ Loading nlp dataset glue, subset rte, split train.
+ Loading nlp dataset glue, subset rte, split validation.
+ Loaded dataset. Found: 2 labels: ([0, 1])
+ Loading transformers AutoModelForSequenceClassification: bert-base-uncased
+ Tokenizing training data. (len: 2490)
+ Tokenizing eval data (len: 277)
+ Loaded data and tokenized in 14.295648097991943s
+ Training model across 1 GPUs
+ ***** Running training *****
+ Num examples = 2490
+ Batch size = 128
+ Max sequence length = 128
+ Num steps = 95
+ Num epochs = 5
+ Learning rate = 3e-05
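The step counts in both runs are consistent with `steps = floor(num_examples / batch_size) * num_epochs`, i.e. the trailing partial batch appears to be dropped. A minimal sketch of that arithmetic (the function name is ours, not TextAttack's):

```python
def num_training_steps(num_examples: int, batch_size: int, num_epochs: int) -> int:
    # Full batches per epoch (any trailing partial batch is dropped),
    # multiplied by the number of epochs.
    return (num_examples // batch_size) * num_epochs

# First run in the log: 2490 examples, batch size 128, 5 epochs.
print(num_training_steps(2490, 128, 5))  # -> 95
# Second run: same data, batch size 8.
print(num_training_steps(2490, 8, 5))    # -> 1555
```

Both values match the logged "Num steps" lines exactly.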
+ Failed to predict with model <class 'transformers.modeling_bert.BertForSequenceClassification'>. Check tokenizer configuration.
+ Writing logs to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/log.txt.
+ Loading nlp dataset glue, subset rte, split train.
+ Loading nlp dataset glue, subset rte, split validation.
+ Loaded dataset. Found: 2 labels: ([0, 1])
+ Loading transformers AutoModelForSequenceClassification: bert-base-uncased
+ Tokenizing training data. (len: 2490)
+ Tokenizing eval data (len: 277)
+ Loaded data and tokenized in 13.395596742630005s
+ Training model across 1 GPUs
+ ***** Running training *****
+ Num examples = 2490
+ Batch size = 8
+ Max sequence length = 128
+ Num steps = 1555
+ Num epochs = 5
+ Learning rate = 2e-05
+ Eval accuracy: 68.23104693140795%
+ Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/.
+ Eval accuracy: 70.03610108303249%
+ Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/.
+ Eval accuracy: 72.56317689530685%
+ Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/.
+ Eval accuracy: 69.67509025270758%
+ Eval accuracy: 69.31407942238266%
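Each of the five eval accuracies corresponds exactly to a count of correct predictions over the 277 validation examples (189, 194, 201, 193 and 192 respectively), and a checkpoint is saved only when accuracy improves. A hedged reconstruction of that best-accuracy bookkeeping (counts inferred from the log; this is not TextAttack's actual code):

```python
EVAL_SIZE = 277  # size of the glue/rte validation split, per the log
correct_counts = [189, 194, 201, 193, 192]  # inferred from the logged percentages

best_acc = 0.0
for correct in correct_counts:
    acc = 100.0 * correct / EVAL_SIZE
    print(f"Eval accuracy: {acc}%")
    if acc > best_acc:
        best_acc = acc
        # The real run saves the model to the output directory here.
        print("Best acc found.")
```

Run over these counts, "Best acc found." fires on the first three epochs and not the last two, matching the log above.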
+ Saved tokenizer <textattack.models.tokenizers.auto_tokenizer.AutoTokenizer object at 0x7fc688911c10> to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/.
+ Wrote README to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/README.md.
+ Wrote training args to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/train_args.json.