yoshitomo-matsubara committed
Commit df1c058
1 Parent(s): 4f27b13

initial commit

README.md ADDED
@@ -0,0 +1,18 @@
+ ---
+ language: en
+ tags:
+ - bert
+ - sst2
+ - glue
+ - kd
+ - torchdistill
+ license: apache-2.0
+ datasets:
+ - sst2
+ metrics:
+ - accuracy
+ ---
+
+ `bert-base-uncased` fine-tuned on the SST-2 dataset via knowledge distillation, using a fine-tuned `bert-large-uncased` as the teacher model, with [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb).
+ The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/sst2/kd/bert_base_uncased_from_bert_large_uncased.yaml).
+ I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
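
For reference, a minimal inference sketch with 🤗 Transformers. It assumes it is run from a local clone of this repository; the example sentence is arbitrary.

```python
# Minimal sketch: SST-2 sentiment classification with this checkpoint.
# Assumes a local clone of this repo; replace "." with the model's Hub id
# to load remotely.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained(".")
model = AutoModelForSequenceClassification.from_pretrained(".")
model.eval()

inputs = tokenizer("a gorgeous, witty, seductive movie.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 1 = positive, 0 = negative for SST-2
```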
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+ "_name_or_path": "bert-base-uncased",
+ "architectures": [
+ "BertForSequenceClassification"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "finetuning_task": "sst2",
+ "gradient_checkpointing": false,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "problem_type": "single_label_classification",
+ "transformers_version": "4.6.1",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 30522
+ }
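
The config describes the standard 12-layer BERT-base student (768 hidden units, 12 attention heads), in contrast to the 24-layer `bert-large-uncased` teacher. A quick sketch for sanity-checking those fields, assuming a local clone of this repo:

```python
# Sketch: inspect the student config from a local clone of this repo.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(".")
assert config.num_hidden_layers == 12 and config.hidden_size == 768
print(config.architectures, config.finetuning_task)
# ['BertForSequenceClassification'] sst2
```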
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:44e16fdc2f4d92ed6d9208b8a5c990589ea69b3529b0340bc66a114e62148c6f
+ size 438024457
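
`pytorch_model.bin` is stored as a Git LFS pointer: `oid` is the SHA-256 of the actual weights file and `size` is its byte count. A sketch for verifying a downloaded copy against this pointer, assuming the file is present locally after `git lfs pull`:

```python
# Sketch: verify the downloaded weights against the LFS pointer above.
import hashlib
import os

path = "pytorch_model.bin"  # assumed local path after `git lfs pull`
expected_oid = "44e16fdc2f4d92ed6d9208b8a5c990589ea69b3529b0340bc66a114e62148c6f"
expected_size = 438024457

assert os.path.getsize(path) == expected_size
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == expected_oid
print("pytorch_model.bin matches the LFS pointer")
```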
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "do_lower": true, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "bert-base-uncased"}
training.log ADDED
@@ -0,0 +1,51 @@
+ 2021-05-31 05:47:27,758 INFO __main__ Namespace(adjust_lr=False, config='torchdistill/configs/sample/glue/sst2/kd/bert_base_uncased_from_bert_large_uncased.yaml', log='log/glue/sst2/kd/bert_base_uncased_from_bert_large_uncased.txt', private_output='leaderboard/glue/kd/bert_base_uncased_from_bert_large_uncased/', seed=None, student_only=False, task_name='sst2', test_only=False, world_size=1)
+ 2021-05-31 05:47:27,789 INFO __main__ Distributed environment: NO
+ Num processes: 1
+ Process index: 0
+ Local process index: 0
+ Device: cuda
+ Use FP16 precision: True
+
+ 2021-05-31 05:47:35,040 WARNING datasets.builder Reusing dataset glue (/root/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
+ 2021-05-31 05:47:38,503 INFO __main__ Start training
+ 2021-05-31 05:47:38,503 INFO torchdistill.models.util [teacher model]
+ 2021-05-31 05:47:38,503 INFO torchdistill.models.util Using the original teacher model
+ 2021-05-31 05:47:38,503 INFO torchdistill.models.util [student model]
+ 2021-05-31 05:47:38,504 INFO torchdistill.models.util Using the original student model
+ 2021-05-31 05:47:38,504 INFO torchdistill.core.distillation Loss = 1.0 * OrgLoss
+ 2021-05-31 05:47:38,504 INFO torchdistill.core.distillation Freezing the whole teacher model
+ 2021-05-31 05:47:42,142 INFO torchdistill.misc.log Epoch: [0] [ 0/2105] eta: 0:10:14 lr: 9.998416468725257e-05 sample/s: 14.042895693690195 loss: 0.3713 (0.3713) time: 0.2918 data: 0.0070 max mem: 1909
+ 2021-05-31 05:50:16,780 INFO torchdistill.misc.log Epoch: [0] [ 500/2105] eta: 0:08:16 lr: 9.20665083135392e-05 sample/s: 11.948494724127714 loss: 0.0681 (0.0908) time: 0.3081 data: 0.0032 max mem: 3637
+ 2021-05-31 05:52:52,257 INFO torchdistill.misc.log Epoch: [0] [1000/2105] eta: 0:05:42 lr: 8.414885193982581e-05 sample/s: 11.96046261450051 loss: 0.0413 (0.0710) time: 0.3093 data: 0.0033 max mem: 3637
+ 2021-05-31 05:55:28,379 INFO torchdistill.misc.log Epoch: [0] [1500/2105] eta: 0:03:08 lr: 7.623119556611244e-05 sample/s: 11.926241231549644 loss: 0.0231 (0.0601) time: 0.3108 data: 0.0032 max mem: 3798
+ 2021-05-31 05:58:04,665 INFO torchdistill.misc.log Epoch: [0] [2000/2105] eta: 0:00:32 lr: 6.831353919239906e-05 sample/s: 16.632809944065624 loss: 0.0248 (0.0530) time: 0.3105 data: 0.0033 max mem: 3802
+ 2021-05-31 05:58:37,981 INFO torchdistill.misc.log Epoch: [0] Total time: 0:10:56
+ 2021-05-31 05:58:39,659 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/sst2/default_experiment-1-0.arrow
+ 2021-05-31 05:58:39,659 INFO __main__ Validation: accuracy = 0.9151376146788991
+ 2021-05-31 05:58:39,659 INFO __main__ Updating ckpt at ./resource/ckpt/glue/sst2/kd/sst2-bert-base-uncased_from_bert-large-uncased
+ 2021-05-31 05:58:40,894 INFO torchdistill.misc.log Epoch: [1] [ 0/2105] eta: 0:10:41 lr: 6.665083135391924e-05 sample/s: 13.684057874084145 loss: 0.0106 (0.0106) time: 0.3047 data: 0.0124 max mem: 3802
+ 2021-05-31 06:01:18,103 INFO torchdistill.misc.log Epoch: [1] [ 500/2105] eta: 0:08:22 lr: 5.8733174980205864e-05 sample/s: 11.957572794359765 loss: 0.0071 (0.0139) time: 0.3221 data: 0.0031 max mem: 3802
+ 2021-05-31 06:03:54,053 INFO torchdistill.misc.log Epoch: [1] [1000/2105] eta: 0:05:45 lr: 5.081551860649249e-05 sample/s: 11.940883530020505 loss: 0.0065 (0.0137) time: 0.3034 data: 0.0031 max mem: 3802
+ 2021-05-31 06:06:29,428 INFO torchdistill.misc.log Epoch: [1] [1500/2105] eta: 0:03:08 lr: 4.28978622327791e-05 sample/s: 13.916417959969143 loss: 0.0063 (0.0133) time: 0.3077 data: 0.0032 max mem: 3802
+ 2021-05-31 06:09:03,728 INFO torchdistill.misc.log Epoch: [1] [2000/2105] eta: 0:00:32 lr: 3.4980205859065716e-05 sample/s: 11.953755940463552 loss: 0.0084 (0.0129) time: 0.3038 data: 0.0032 max mem: 3802
+ 2021-05-31 06:09:35,385 INFO torchdistill.misc.log Epoch: [1] Total time: 0:10:54
+ 2021-05-31 06:09:37,066 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/sst2/default_experiment-1-0.arrow
+ 2021-05-31 06:09:37,066 INFO __main__ Validation: accuracy = 0.9243119266055045
+ 2021-05-31 06:09:37,067 INFO __main__ Updating ckpt at ./resource/ckpt/glue/sst2/kd/sst2-bert-base-uncased_from_bert-large-uncased
+ 2021-05-31 06:09:38,580 INFO torchdistill.misc.log Epoch: [2] [ 0/2105] eta: 0:10:29 lr: 3.3317498020585904e-05 sample/s: 13.716311384138436 loss: 0.0019 (0.0019) time: 0.2992 data: 0.0076 max mem: 3802
+ 2021-05-31 06:12:12,846 INFO torchdistill.misc.log Epoch: [2] [ 500/2105] eta: 0:08:15 lr: 2.5399841646872525e-05 sample/s: 13.906025093019228 loss: 0.0049 (0.0055) time: 0.2998 data: 0.0031 max mem: 3802
+ 2021-05-31 06:14:49,099 INFO torchdistill.misc.log Epoch: [2] [1000/2105] eta: 0:05:43 lr: 1.7482185273159146e-05 sample/s: 10.724082628769473 loss: 0.0038 (0.0054) time: 0.3174 data: 0.0031 max mem: 3802
+ 2021-05-31 06:17:26,529 INFO torchdistill.misc.log Epoch: [2] [1500/2105] eta: 0:03:08 lr: 9.564528899445763e-06 sample/s: 11.955757780905381 loss: 0.0028 (0.0053) time: 0.3269 data: 0.0032 max mem: 3802
+ 2021-05-31 06:20:02,249 INFO torchdistill.misc.log Epoch: [2] [2000/2105] eta: 0:00:32 lr: 1.6468725257323833e-06 sample/s: 11.92430011293717 loss: 0.0027 (0.0053) time: 0.2978 data: 0.0031 max mem: 3802
+ 2021-05-31 06:20:34,146 INFO torchdistill.misc.log Epoch: [2] Total time: 0:10:55
+ 2021-05-31 06:20:35,823 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/sst2/default_experiment-1-0.arrow
+ 2021-05-31 06:20:35,823 INFO __main__ Validation: accuracy = 0.9288990825688074
+ 2021-05-31 06:20:35,823 INFO __main__ Updating ckpt at ./resource/ckpt/glue/sst2/kd/sst2-bert-base-uncased_from_bert-large-uncased
+ 2021-05-31 06:20:36,974 INFO __main__ [Teacher: bert-large-uncased]
+ 2021-05-31 06:20:41,563 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/sst2/default_experiment-1-0.arrow
+ 2021-05-31 06:20:41,563 INFO __main__ Test: accuracy = 0.9346330275229358
+ 2021-05-31 06:20:43,462 INFO __main__ [Student: bert-base-uncased]
+ 2021-05-31 06:20:45,138 INFO /usr/local/lib/python3.7/dist-packages/datasets/metric.py Removing /root/.cache/huggingface/metrics/glue/sst2/default_experiment-1-0.arrow
+ 2021-05-31 06:20:45,138 INFO __main__ Test: accuracy = 0.9288990825688074
+ 2021-05-31 06:20:45,138 INFO __main__ Start prediction for private dataset(s)
+ 2021-05-31 06:20:45,139 INFO __main__ sst2/test: 1821 samples
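
The log records one validation pass per epoch (accuracy 0.9151 → 0.9243 → 0.9289, with the checkpoint updated after each improvement) under a linearly decaying learning rate starting near 1e-4. A throwaway sketch for pulling those numbers out of a log like this one; the local filename is an assumption:

```python
# Sketch: extract per-epoch validation accuracy from a torchdistill-style log.
import re

accuracies = []
with open("training.log") as f:  # assumed local filename
    for line in f:
        m = re.search(r"Validation: accuracy = ([0-9.]+)", line)
        if m:
            accuracies.append(float(m.group(1)))
print(accuracies)  # [0.9151..., 0.9243..., 0.9288...]
```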
vocab.txt ADDED
The diff for this file is too large to render. See raw diff