gokulsrinivasagan committed
Commit c10f828
1 Parent(s): 9cdbf4c

Model save

Files changed (2)
  1. README.md +12 -32
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,28 +1,13 @@
 ---
 library_name: transformers
-language:
-- en
 base_model: gokulsrinivasagan/distilbert_lda_5_v1
 tags:
 - generated_from_trainer
-datasets:
-- glue
 metrics:
 - accuracy
 model-index:
 - name: distilbert_lda_5_v1_qnli
-  results:
-  - task:
-      name: Text Classification
-      type: text-classification
-    dataset:
-      name: GLUE QNLI
-      type: glue
-      args: qnli
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.5053999633900788
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -30,10 +15,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # distilbert_lda_5_v1_qnli
 
-This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_5_v1](https://huggingface.co/gokulsrinivasagan/distilbert_lda_5_v1) on the GLUE QNLI dataset.
+This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_5_v1](https://huggingface.co/gokulsrinivasagan/distilbert_lda_5_v1) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6914
-- Accuracy: 0.5054
+- Loss: 0.6610
+- Accuracy: 0.8122
 
 ## Model description
 
@@ -52,7 +37,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.001
+- learning_rate: 5e-05
 - train_batch_size: 256
 - eval_batch_size: 256
 - seed: 10
@@ -64,18 +49,13 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.7218 | 1.0 | 410 | 0.6934 | 0.4946 |
-| 0.6932 | 2.0 | 820 | 0.6935 | 0.4946 |
-| 0.6932 | 3.0 | 1230 | 0.6933 | 0.5054 |
-| 0.6931 | 4.0 | 1640 | 0.6933 | 0.5054 |
-| 0.6932 | 5.0 | 2050 | 0.6934 | 0.4946 |
-| 0.6932 | 6.0 | 2460 | 0.6933 | 0.5054 |
-| 0.6931 | 7.0 | 2870 | 0.6914 | 0.5054 |
-| 0.693 | 8.0 | 3280 | 0.6934 | 0.4946 |
-| 0.6932 | 9.0 | 3690 | 0.6933 | 0.5054 |
-| 0.6932 | 10.0 | 4100 | 0.6934 | 0.4946 |
-| 0.6932 | 11.0 | 4510 | 0.6933 | 0.5054 |
-| 0.6932 | 12.0 | 4920 | 0.6934 | 0.4946 |
+| 0.497 | 1.0 | 410 | 0.3871 | 0.8303 |
+| 0.3635 | 2.0 | 820 | 0.3706 | 0.8435 |
+| 0.2745 | 3.0 | 1230 | 0.3750 | 0.8376 |
+| 0.1968 | 4.0 | 1640 | 0.4567 | 0.8309 |
+| 0.1394 | 5.0 | 2050 | 0.5239 | 0.8321 |
+| 0.1001 | 6.0 | 2460 | 0.5522 | 0.8342 |
+| 0.0764 | 7.0 | 2870 | 0.6610 | 0.8122 |
 
 
 ### Framework versions
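
For context on what the updated card describes, here is a minimal sketch of loading this checkpoint for sentence-pair classification with `transformers`. It assumes the repository id `gokulsrinivasagan/distilbert_lda_5_v1_qnli` and QNLI-style question/sentence inputs, as the model name suggests; the label names in the sketch are hypothetical and should be read from `model.config.id2label` on the actual checkpoint.

```python
# Minimal sketch: load the fine-tuned checkpoint and classify a question/sentence pair.
# Repository id and label mapping are assumptions based on the model name, not the card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "gokulsrinivasagan/distilbert_lda_5_v1_qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "What is the Grotto at Notre Dame?"
sentence = "It is a Marian place of prayer and reflection."

inputs = tokenizer(question, sentence, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Hypothetical label order; check model.config.id2label on the real checkpoint.
labels = ["entailment", "not_entailment"]
print(labels[logits.argmax(dim=-1).item()])
```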
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:79e198e5b9eb4ad7d3303fdbd20518d5ea11c28bb4fe0633e9c5da63bebbcc7d
+oid sha256:7838861b411c871a6ccb1d094da838d8a6fb166be22230293f381c4c76972301
 size 267832560
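
The `model.safetensors` entry is a Git LFS pointer file: only the object id (a SHA-256 digest of the weight file) and its byte size are stored in the repository, while the tensor data itself lives in LFS storage. A minimal sketch for checking a downloaded copy against the new pointer follows; the local file path is a placeholder.

```python
# Minimal sketch: verify a downloaded model.safetensors against the new LFS pointer.
# The path is a placeholder; expected digest and size are taken from the pointer above.
import hashlib
import os

path = "model.safetensors"  # hypothetical local path to the downloaded weights
expected_oid = "7838861b411c871a6ccb1d094da838d8a6fb166be22230293f381c4c76972301"
expected_size = 267832560

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert sha256.hexdigest() == expected_oid, "sha256 mismatch"
print("pointer matches downloaded file")
```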