blizrys committed
Commit d9b50da · 1 Parent(s): 0b09b43

BATCH_SIZE=8
LEARNING_RATE=0.003
MAX_LENGTH=512
FOLD=0

.gitignore ADDED
@@ -0,0 +1 @@
+ checkpoint-*/
README.md ADDED
@@ -0,0 +1,76 @@
+ ---
+ tags:
+ - generated_from_trainer
+ datasets:
+ - null
+ metrics:
+ - accuracy
+ model_index:
+ - name: biobert-v1.1-finetuned-pubmedqa-adapter
+   results:
+   - task:
+       name: Text Classification
+       type: text-classification
+     metric:
+       name: Accuracy
+       type: accuracy
+       value: 0.54
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # biobert-v1.1-finetuned-pubmedqa-adapter
+
+ This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.9581
+ - Accuracy: 0.54
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.003
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | No log        | 1.0   | 57   | 1.3413          | 0.36     |
+ | No log        | 2.0   | 114  | 1.1254          | 0.54     |
+ | No log        | 3.0   | 171  | 1.0072          | 0.54     |
+ | No log        | 4.0   | 228  | 1.1034          | 0.54     |
+ | No log        | 5.0   | 285  | 0.9814          | 0.54     |
+ | No log        | 6.0   | 342  | 0.9629          | 0.54     |
+ | No log        | 7.0   | 399  | 0.9859          | 0.54     |
+ | No log        | 8.0   | 456  | 0.9761          | 0.54     |
+ | 1.0093        | 9.0   | 513  | 0.9574          | 0.54     |
+ | 1.0093        | 10.0  | 570  | 0.9581          | 0.54     |
+
+
+ ### Framework versions
+
+ - Transformers 4.8.2
+ - Pytorch 1.9.0+cu102
+ - Datasets 1.11.0
+ - Tokenizers 0.10.3
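The schedule implied by the hyperparameters and results table above can be sketched in plain Python. This is an illustrative reconstruction, not code from the repository: it assumes zero warmup steps (the Trainer default when none is configured) and reads 57 optimizer steps per epoch off the results table, which at batch size 8 suggests roughly 450-456 training examples.

```python
# Linear decay schedule implied by the model card:
# base LR 0.003, lr_scheduler_type "linear", 10 epochs,
# 57 optimizer steps per epoch (570 total, per the results table).
# Assumption: no warmup, the Trainer default when none is set.

BASE_LR = 0.003
STEPS_PER_EPOCH = 57   # results table: epoch 1.0 ends at step 57
NUM_EPOCHS = 10
TOTAL_STEPS = STEPS_PER_EPOCH * NUM_EPOCHS  # 570

def linear_lr(step: int) -> float:
    """Learning rate after `step` optimizer steps, decaying linearly to zero."""
    remaining = max(0, TOTAL_STEPS - step)
    return BASE_LR * remaining / TOTAL_STEPS

print(linear_lr(0))    # start of training: 0.003
print(linear_lr(285))  # halfway (epoch 5): 0.0015
print(linear_lr(570))  # end of training: 0.0
```

The flat 0.54 accuracy from epoch 2 onward, despite the decaying rate, is consistent with the head collapsing to a majority-class prediction at this fairly aggressive 0.003 starting rate.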
pubmedqa/adapter_config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "config": {
+     "adapter_residual_before_ln": false,
+     "cross_adapter": false,
+     "inv_adapter": null,
+     "inv_adapter_reduction_factor": null,
+     "leave_out": [],
+     "ln_after": false,
+     "ln_before": false,
+     "mh_adapter": false,
+     "non_linearity": "relu",
+     "original_ln_after": true,
+     "original_ln_before": true,
+     "output_adapter": true,
+     "reduction_factor": 16,
+     "residual_before_ln": true
+   },
+   "config_id": "9076f36a74755ac4",
+   "hidden_size": 768,
+   "model_class": "BertForSequenceClassification",
+   "model_name": "dmis-lab/biobert-v1.1",
+   "model_type": "bert",
+   "name": "pubmedqa"
+ }
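The config above pins down the size of each bottleneck adapter. As a rough sanity check (an illustrative sketch, not part of the repository), the parameter count implied by `hidden_size: 768` and `reduction_factor: 16` lines up with the ~3.6 MB `pytorch_adapter.bin` stored in this commit, assuming one output adapter per transformer layer and BERT-base's 12 layers (the layer count is an assumption, not stated in the config):

```python
# Illustrative parameter count for the adapter in adapter_config.json:
# hidden_size=768, reduction_factor=16, output adapter only
# ("mh_adapter": false). LAYERS=12 is assumed from BERT-base,
# the backbone of dmis-lab/biobert-v1.1.

HIDDEN = 768
REDUCTION = 16
LAYERS = 12  # assumption: BERT-base depth

bottleneck = HIDDEN // REDUCTION  # 48

# One adapter module = down-projection (W + b) then up-projection (W + b).
per_adapter = (HIDDEN * bottleneck + bottleneck) + (bottleneck * HIDDEN + HIDDEN)
total_params = per_adapter * LAYERS

print(bottleneck)        # 48
print(per_adapter)       # 74544
print(total_params)      # 894528
print(total_params * 4)  # 3578112 bytes of float32
```

At 4 bytes per float32 parameter that is about 3.58 MB, close to the 3,594,133-byte `pytorch_adapter.bin`; the remainder is serialization overhead.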
pubmedqa/head_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "config": null,
+   "hidden_size": 768,
+   "label2id": {
+     "LABEL_0": 0,
+     "LABEL_1": 1,
+     "LABEL_2": 2
+   },
+   "model_class": "BertForSequenceClassification",
+   "model_name": "dmis-lab/biobert-v1.1",
+   "model_type": "bert",
+   "name": null,
+   "num_labels": 3
+ }
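The head config implies a very small classifier: 768 hidden units mapped to 3 labels. Assuming a single linear layer (weight plus bias), as in a standard `BertForSequenceClassification` head, its size matches the ~10 KB `pytorch_model_head.bin` in this commit:

```python
# Illustrative size check for the head in head_config.json:
# hidden_size=768, num_labels=3. A single linear layer is assumed.

HIDDEN = 768
NUM_LABELS = 3

head_params = HIDDEN * NUM_LABELS + NUM_LABELS  # weight + bias
print(head_params)      # 2307
print(head_params * 4)  # 9228 bytes of float32
```

9,228 bytes of float32 weights, plus serialization overhead, accounts for the 10,215-byte file.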
pubmedqa/pytorch_adapter.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:268ce50ab86d370bfe891da3194eac6aaae9f996820b30a0848aca27895624e2
+ size 3594133
pubmedqa/pytorch_model_head.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a5118681492f5b2d1b769db44e903ff42f7b062d2d42c34a76dc7cd40734282c
+ size 10215
runs/Sep13_12-27-51_c775b922d43c/1631536084.834492/events.out.tfevents.1631536084.c775b922d43c.75.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:217d1e3119c10e0db1f84faab41ebb5217f0e31a04c5869d0f26838550a7f5d5
+ size 4291
runs/Sep13_12-27-51_c775b922d43c/events.out.tfevents.1631536084.c775b922d43c.75.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:049b66f08394893c982aa3bb4a92d32f31d84839360dbe719c12ddad64308b86
+ size 6985
runs/Sep13_12-42-51_c775b922d43c/1631536979.7077425/events.out.tfevents.1631536979.c775b922d43c.891.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20c177c87c4cf1df059649bdc5402c7207435176f0b0ed52dce54ad241d18a03
+ size 4291
runs/Sep13_12-42-51_c775b922d43c/events.out.tfevents.1631536979.c775b922d43c.891.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e800e1c204d17602020dc5eb91b7342f2afe899adc3f0a58c5a6fba27a2e5bb
+ size 6985
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": "/root/.cache/huggingface/transformers/118da8438a7854000cfcf052566f83ae4f4159ac25796e49e16c3b18746041b4.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d", "name_or_path": "dmis-lab/biobert-v1.1", "do_basic_tokenize": true, "never_split": null, "tokenizer_class": "BertTokenizer"}
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14c491c7a0392233feb4585975200ab3e97cdd09c24018fbf9ed488a6c322796
+ size 2735
vocab.txt ADDED
The diff for this file is too large to render. See raw diff