VityaVitalich committed
Commit 81fb41a · 1 Parent(s): 9a48631

bert-tiny-sst2

Files changed (4)
  1. README.md +78 -0
  2. config.json +28 -0
  3. pytorch_model.bin +3 -0
  4. training_args.bin +3 -0
README.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ base_model: M-FAC/bert-tiny-finetuned-sst2
+ tags:
+ - generated_from_trainer
+ datasets:
+ - sst2
+ metrics:
+ - accuracy
+ model-index:
+ - name: results
+   results:
+   - task:
+       name: Text Classification
+       type: text-classification
+     dataset:
+       name: sst2
+       type: sst2
+       config: default
+       split: validation
+       args: default
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.8279816513761468
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # results
+
+ This model is a fine-tuned version of [M-FAC/bert-tiny-finetuned-sst2](https://huggingface.co/M-FAC/bert-tiny-finetuned-sst2) on the sst2 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4771
+ - Accuracy: 0.8280
+
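A minimal inference sketch for this checkpoint is shown below. It is not part of the commit: the repo id `VityaVitalich/bert-tiny-sst2` is an assumption taken from the commit message, and the tokenizer is loaded from the base model `M-FAC/bert-tiny-finetuned-sst2` because this commit adds no tokenizer files.

```python
# Hedged usage sketch, not part of this commit.
# REPO_ID is an assumption based on the commit message; a local path holding the
# committed config.json / pytorch_model.bin works the same way.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

REPO_ID = "VityaVitalich/bert-tiny-sst2"   # hypothetical repo id
BASE = "M-FAC/bert-tiny-finetuned-sst2"    # tokenizer source (no tokenizer files in this commit)

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForSequenceClassification.from_pretrained(REPO_ID)
model.eval()

inputs = tokenizer("a charming and affecting journey", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 0/1 class index; config.json defines no id2label mapping
```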
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 3e-05
+ - train_batch_size: 128
+ - eval_batch_size: 128
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 5
+
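The hyperparameters listed above roughly correspond to the `TrainingArguments` sketched below. This is a reconstruction for illustration, not the contents of the committed `training_args.bin`: the output directory and evaluation strategy are assumptions, and the Adam betas/epsilon match the Trainer defaults.

```python
# Hedged reconstruction of the training setup from the card; anything not
# listed above (output_dir, evaluation_strategy, ...) is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results",              # assumption, mirrors the model-index name
    learning_rate=3e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",       # assumption: the results table shows one eval per epoch
)
```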
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 0.2313        | 1.0   | 527  | 0.4771          | 0.8280   |
+ | 0.2057        | 2.0   | 1054 | 0.4937          | 0.8257   |
+ | 0.1949        | 3.0   | 1581 | 0.5121          | 0.8177   |
+ | 0.1904        | 4.0   | 2108 | 0.5100          | 0.8200   |
+ | 0.1879        | 5.0   | 2635 | 0.5137          | 0.8211   |
+
+
+ ### Framework versions
+
+ - Transformers 4.34.0.dev0
+ - Pytorch 2.0.1+cu117
+ - Datasets 2.14.5
+ - Tokenizers 0.14.0
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "M-FAC/bert-tiny-finetuned-sst2",
+   "architectures": [
+     "BertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "finetuning_task": "sst2",
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 128,
+   "initializer_range": 0.02,
+   "intermediate_size": 512,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 2,
+   "num_hidden_layers": 2,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "problem_type": "single_label_classification",
+   "torch_dtype": "float16",
+   "transformers_version": "4.34.0.dev0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
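As a rough sanity check (an editor's sketch, not part of the commit), instantiating a model from this config and counting parameters shows why the `pytorch_model.bin` recorded below is only about 8.8 MB at float16.

```python
# Build an uninitialized BertForSequenceClassification from the config above
# and count its parameters; at float16 (2 bytes each) the total roughly
# matches the ~8.8 MB pytorch_model.bin in the next file.
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig(
    vocab_size=30522,
    hidden_size=128,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=512,
    num_labels=2,
)
model = BertForSequenceClassification(config)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters, ~{n_params * 2 / 1e6:.1f} MB at float16")
```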
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c7bf9547da061b1868afb1a2609c9f88913eca5c4b41dc90aa7e28c19167670
+ size 8785836
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:650627c715c48a10e613895418dbc5644f7224d3de7b226af5fdeb3b0965a650
+ size 4027
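To recover the exact training configuration rather than the reconstruction sketched earlier, note that `training_args.bin` is a pickled `TrainingArguments` object saved by the Trainer and can be inspected directly, as in the hedged sketch below (it needs a compatible `transformers` installed so the object can unpickle).

```python
# Hedged sketch: inspect the pickled TrainingArguments saved by the Trainer.
# transformers must be importable, since the pickle resolves to
# transformers.TrainingArguments.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```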