gokulsrinivasagan committed
Commit 9d2e016
1 Parent(s): c4dd659

Model save
README.md CHANGED
@@ -1,32 +1,14 @@
 ---
 library_name: transformers
-language:
-- en
 base_model: gokulsrinivasagan/bert_tiny_lda_20_v1
 tags:
 - generated_from_trainer
-datasets:
-- glue
 metrics:
 - matthews_correlation
 - accuracy
 model-index:
 - name: bert_tiny_lda_20_v1_cola
-  results:
-  - task:
-      name: Text Classification
-      type: text-classification
-    dataset:
-      name: GLUE COLA
-      type: glue
-      args: cola
-    metrics:
-    - name: Matthews Correlation
-      type: matthews_correlation
-      value: 0.0
-    - name: Accuracy
-      type: accuracy
-      value: 0.6912751793861389
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -34,11 +16,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # bert_tiny_lda_20_v1_cola
 
-This model is a fine-tuned version of [gokulsrinivasagan/bert_tiny_lda_20_v1](https://huggingface.co/gokulsrinivasagan/bert_tiny_lda_20_v1) on the GLUE COLA dataset.
+This model is a fine-tuned version of [gokulsrinivasagan/bert_tiny_lda_20_v1](https://huggingface.co/gokulsrinivasagan/bert_tiny_lda_20_v1) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6172
-- Matthews Correlation: 0.0
-- Accuracy: 0.6913
+- Loss: 0.7010
+- Matthews Correlation: 0.0997
+- Accuracy: 0.6663
 
 ## Model description
 
@@ -57,7 +39,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.001
+- learning_rate: 5e-05
 - train_batch_size: 256
 - eval_batch_size: 256
 - seed: 10
@@ -69,13 +51,12 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
-| 0.644 | 1.0 | 34 | 0.6201 | 0.0 | 0.6913 |
-| 0.6095 | 2.0 | 68 | 0.6172 | 0.0 | 0.6913 |
-| 0.61 | 3.0 | 102 | 0.6226 | 0.0 | 0.6913 |
-| 0.6103 | 4.0 | 136 | 0.6194 | 0.0 | 0.6913 |
-| 0.6102 | 5.0 | 170 | 0.6184 | 0.0 | 0.6913 |
-| 0.6093 | 6.0 | 204 | 0.6192 | 0.0 | 0.6913 |
-| 0.6103 | 7.0 | 238 | 0.6197 | 0.0 | 0.6913 |
+| 0.6144 | 1.0 | 34 | 0.6166 | 0.0 | 0.6913 |
+| 0.605 | 2.0 | 68 | 0.6190 | 0.0213 | 0.6903 |
+| 0.5934 | 3.0 | 102 | 0.6171 | 0.0043 | 0.6759 |
+| 0.567 | 4.0 | 136 | 0.6516 | 0.0362 | 0.6836 |
+| 0.518 | 5.0 | 170 | 0.6389 | 0.0675 | 0.6692 |
+| 0.4781 | 6.0 | 204 | 0.7010 | 0.0997 | 0.6663 |
 
 
 ### Framework versions
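A side note on the metrics in the diff above: the earlier card reported a Matthews correlation of exactly 0.0 together with an accuracy of about 0.6913 at every epoch, which is the signature of a classifier that always predicts the majority class (it scores the majority-class rate on accuracy while MCC collapses to zero). A minimal, self-contained sketch with an illustrative ~69/31 label split (the CoLA validation split is roughly this imbalanced; the exact split here is an assumption):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Binary Matthews correlation from the confusion matrix; 0.0 when undefined."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # With constant predictions one factor of the denominator is 0,
    # so MCC is conventionally reported as 0.0.
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Illustrative labels (~69% positive) and a degenerate classifier
# that always predicts the positive ("acceptable") class:
y_true = [1] * 69 + [0] * 31
y_pred = [1] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)                           # 0.69
print(matthews_corrcoef(y_true, y_pred))  # 0.0
```

This is why the updated run, despite a higher loss (0.7010), is arguably more informative: its nonzero MCC (0.0997) shows the model has started discriminating between classes rather than collapsing to the majority label.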
logs/events.out.tfevents.1733323175.ki-g0008.1207389.18 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4c1e87fa1a2520c883e53abfb74725b1c7928010a3b9710244aaf08b8694c3a9
-size 5670
+oid sha256:3e785f98e78b982d46c9f8d348844fe59a5f1ea4c209b487f51f1887935ec618
+size 8992