namvandy committed on
Commit
9fef58f
1 Parent(s): 2c26acf

Upload 10 files


Uploaded manually because of a local push_to_hub issue

.gitignore ADDED
@@ -0,0 +1 @@
+ checkpoint-*/
README.md ADDED
@@ -0,0 +1,93 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ license: cc-by-sa-4.0
3
+ tags:
4
+ - generated_from_trainer
5
+ datasets:
6
+ - klue
7
+ metrics:
8
+ - pearsonr
9
+ model-index:
10
+ - name: bert-base-finetuned-sts-v3
11
+ results:
12
+ - task:
13
+ name: Text Classification
14
+ type: text-classification
15
+ dataset:
16
+ name: klue
17
+ type: klue
18
+ config: sts
19
+ split: train
20
+ args: sts
21
+ metrics:
22
+ - name: Pearsonr
23
+ type: pearsonr
24
+ value: 0.9172194083849969
25
+ ---
26
+
27
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
28
+ should probably proofread and complete it, then remove this comment. -->
29
+
30
+ # bert-base-finetuned-sts-v3
31
+
32
+ This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
33
+ It achieves the following results on the evaluation set:
34
+ - Loss: 0.3716
35
+ - Pearsonr: 0.9172
36
+
37
+ ## Model description
38
+
39
+ More information needed
40
+
41
+ ## Intended uses & limitations
42
+
43
+ More information needed
44
+
45
+ ## Training and evaluation data
46
+
47
+ More information needed
48
+
49
+ ## Training procedure
50
+
51
+ ### Training hyperparameters
52
+
53
+ The following hyperparameters were used during training:
54
+ - learning_rate: 2e-05
55
+ - train_batch_size: 4
56
+ - eval_batch_size: 4
57
+ - seed: 42
58
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
59
+ - lr_scheduler_type: linear
60
+ - num_epochs: 20
61
+
62
+ ### Training results
63
+
64
+ | Training Loss | Epoch | Step | Validation Loss | Pearsonr |
65
+ |:-------------:|:-----:|:-----:|:---------------:|:--------:|
66
+ | 0.2265 | 1.0 | 2917 | 0.4886 | 0.8933 |
67
+ | 0.1504 | 2.0 | 5834 | 0.4374 | 0.8948 |
68
+ | 0.0982 | 3.0 | 8751 | 0.5246 | 0.8957 |
69
+ | 0.0832 | 4.0 | 11668 | 0.4387 | 0.9006 |
70
+ | 0.0751 | 5.0 | 14585 | 0.4036 | 0.9049 |
71
+ | 0.0564 | 6.0 | 17502 | 0.3828 | 0.9133 |
72
+ | 0.0488 | 7.0 | 20419 | 0.3716 | 0.9172 |
73
+ | 0.0384 | 8.0 | 23336 | 0.4060 | 0.9093 |
74
+ | 0.0365 | 9.0 | 26253 | 0.3939 | 0.9065 |
75
+ | 0.0319 | 10.0 | 29170 | 0.3953 | 0.9106 |
76
+ | 0.0262 | 11.0 | 32087 | 0.3885 | 0.9109 |
77
+ | 0.0219 | 12.0 | 35004 | 0.3724 | 0.9154 |
78
+ | 0.0188 | 13.0 | 37921 | 0.3827 | 0.9111 |
79
+ | 0.0175 | 14.0 | 40838 | 0.4103 | 0.9099 |
80
+ | 0.0144 | 15.0 | 43755 | 0.3768 | 0.9152 |
81
+ | 0.0132 | 16.0 | 46672 | 0.3868 | 0.9151 |
82
+ | 0.0125 | 17.0 | 49589 | 0.3981 | 0.9103 |
83
+ | 0.0106 | 18.0 | 52506 | 0.3808 | 0.9138 |
84
+ | 0.0095 | 19.0 | 55423 | 0.3904 | 0.9128 |
85
+ | 0.0089 | 20.0 | 58340 | 0.3885 | 0.9137 |
86
+
87
+
88
+ ### Framework versions
89
+
90
+ - Transformers 4.25.1
91
+ - Pytorch 1.13.0
92
+ - Datasets 2.7.1
93
+ - Tokenizers 0.13.2
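The model card above leaves usage "More information needed", so here is a minimal inference sketch. The Hub repo id `namvandy/bert-base-finetuned-sts-v3` is an assumption pieced together from the committer and model names, and the example sentences are placeholders; the `clamp_sts_score` helper is hypothetical, added only because KLUE STS gold labels live in the range [0, 5] while a regression head can emit values outside it.

```python
# Hypothetical inference sketch for bert-base-finetuned-sts-v3.
# The repo id below is an assumption; adjust it to the actual Hub path.

def clamp_sts_score(raw: float, low: float = 0.0, high: float = 5.0) -> float:
    """Clamp a raw regression output into the KLUE STS label range [0, 5]."""
    return max(low, min(high, raw))

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    repo_id = "namvandy/bert-base-finetuned-sts-v3"  # assumed Hub path
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForSequenceClassification.from_pretrained(repo_id)

    # config.json sets problem_type="regression" with a single label, so
    # the model emits one continuous similarity score per sentence pair.
    inputs = tokenizer("첫 번째 문장", "두 번째 문장", return_tensors="pt")
    with torch.no_grad():
        raw = model(**inputs).logits.squeeze().item()
    print(clamp_sts_score(raw))
```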
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+ "_name_or_path": "klue/bert-base",
+ "architectures": [
+ "BertForSequenceClassification"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "classifier_dropout": null,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "id2label": {
+ "0": "LABEL_0"
+ },
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "label2id": {
+ "LABEL_0": 0
+ },
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "problem_type": "regression",
+ "torch_dtype": "float32",
+ "transformers_version": "4.25.1",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 32000
+ }
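Two config entries carry the STS-specific setup: `problem_type` is `"regression"` and `id2label` has a single entry, which together mean the classification head has one output neuron producing a continuous score rather than class logits. A small self-contained sketch (values copied from the config above, parsed with the standard-library `json` module) makes the derived quantities explicit:

```python
import json

# A fragment of the config.json above, inlined so the sketch is
# self-contained (values copied verbatim from the diff).
config_text = """
{
  "problem_type": "regression",
  "id2label": {"0": "LABEL_0"},
  "num_attention_heads": 12,
  "hidden_size": 768
}
"""

config = json.loads(config_text)

# One id2label entry + problem_type "regression" => a single-neuron
# head that outputs one continuous similarity score.
num_labels = len(config["id2label"])

# Per-head attention dimension: hidden_size split across the heads.
head_dim = config["hidden_size"] // config["num_attention_heads"]

print(num_labels)  # 1
print(head_dim)    # 64
```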
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a797efc70e368eeb67244bdf2b5f6522f70b3a605d0333b24ea895fc61cc5e27
+ size 442545269
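What got committed here is not the 442 MB weight file itself but a Git LFS pointer: three `key value` lines naming the spec version, the SHA-256 object id, and the byte size. A minimal parser (a sketch, not part of this repo) shows how little the pointer contains:

```python
# pytorch_model.bin is stored as a Git LFS pointer, not the weights.
# A pointer is a handful of "key value" lines; this hypothetical
# helper splits each line on its first space.

def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The exact pointer content from the diff above.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:a797efc70e368eeb67244bdf2b5f6522f70b3a605d0333b24ea895fc61cc5e27\n"
    "size 442545269\n"
)

info = parse_lfs_pointer(pointer)
print(info["size"])  # 442545269 bytes, i.e. roughly 442 MB of float32 weights
```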
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "cls_token": "[CLS]",
+ "mask_token": "[MASK]",
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,16 @@
+ {
+ "cls_token": "[CLS]",
+ "do_basic_tokenize": true,
+ "do_lower_case": false,
+ "mask_token": "[MASK]",
+ "model_max_length": 512,
+ "name_or_path": "klue/bert-base",
+ "never_split": null,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "special_tokens_map_file": "C:\\Users\\indj/.cache\\huggingface\\hub\\models--klue--bert-base\\snapshots\\34b965303f98bc5214daca7f76b7fb82d2dc6183\\special_tokens_map.json",
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "unk_token": "[UNK]"
+ }
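For an STS task the tokenizer encodes each sentence pair in BERT's standard `[CLS] A [SEP] B [SEP]` layout, using the special tokens declared above. A pure-Python sketch of that layout (the `build_pair_input` helper is hypothetical, for illustration only; the real `BertTokenizer` does this internally):

```python
# Hypothetical helper illustrating BERT's sentence-pair layout for STS,
# built from the special tokens in special_tokens_map.json.

def build_pair_input(tokens_a: list, tokens_b: list) -> tuple:
    """Return (tokens, token_type_ids) for a BERT sentence pair."""
    tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    # Segment 0 covers [CLS] + A + the first [SEP];
    # segment 1 covers B + the trailing [SEP].
    type_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    return tokens, type_ids

tokens, type_ids = build_pair_input(["hello"], ["world", "!"])
print(tokens)    # ['[CLS]', 'hello', '[SEP]', 'world', '!', '[SEP]']
print(type_ids)  # [0, 0, 0, 1, 1, 1]
```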
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b1c6b7d79a17d3674c1c886665ba2f11ebc149be79bc7a3b405ed7b529576280
+ size 3451
vocab.txt ADDED
The diff for this file is too large to render. See raw diff