mohammadtaghizadeh committed on
Commit 803f62c
1 Parent(s): 887480e

Update README.md

Files changed (1)
  1. README.md +19 -6
README.md CHANGED
```diff
@@ -4,9 +4,25 @@ tags:
 - generated_from_trainer
 metrics:
 - f1
+- accuracy
 model-index:
 - name: flan-t5-base-imdb-text-classification
-  results: []
+  results:
+  - task:
+      name: Sequence-to-sequence Language Modeling
+      type: text2text-generation
+    dataset:
+      name: imdb
+      type: imdb
+      config: imdb
+      split: test
+      args: imdb
+    metrics:
+    - name: Accuracy
+      type: accuracy
+      value: 93.0000
+datasets:
+- imdb
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -14,7 +30,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # flan-t5-base-imdb-text-classification
 
-This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
+This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the imdb dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.0767
 - F1: 95.084
@@ -24,9 +40,6 @@ It achieves the following results on the evaluation set:
 
 More information needed
 
-## Intended uses & limitations
-
-More information needed
 
 ## Training and evaluation data
 
@@ -54,4 +67,4 @@ The following hyperparameters were used during training:
 - Transformers 4.28.1
 - Pytorch 2.0.0+cu118
 - Datasets 2.12.0
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
```
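The updated card reports two metrics for this binary IMDB sentiment classifier: F1 (95.084) and accuracy (93.0). As a reminder of how the two relate, here is a minimal sketch computing both from binary confusion-matrix counts. The counts below are made-up illustrations, not the model's actual IMDB evaluation results:

```python
def f1_and_accuracy(tp, fp, fn, tn):
    """Compute binary F1 and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return f1, accuracy

# Illustrative counts only -- not taken from the actual evaluation run.
f1, acc = f1_and_accuracy(tp=470, fp=30, fn=20, tn=480)
print(f"F1: {f1 * 100:.3f}  Accuracy: {acc * 100:.3f}")
```

Because F1 ignores true negatives while accuracy weights all four cells equally, the two scores generally differ, which is why the card lists both.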