egosumkira committed
Commit b150e52 (1 parent: b1b4fde)

Update README.md

Files changed (1): README.md (+14, −15)
README.md CHANGED
@@ -1,11 +1,14 @@
 ---
 license: mit
-tags:
-- generated_from_keras_callback
 base_model: gpt2
 model-index:
 - name: gpt2-fantasy
   results: []
+language:
+- en
+metrics:
+- accuracy
+library_name: transformers
 ---
 
 <!-- This model card has been generated automatically according to the information Keras had access to. You should
@@ -13,29 +16,25 @@ probably proofread and complete it, then remove this comment. -->
 
 # gpt2-fantasy
 
-This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
-It achieves the following results on the evaluation set:
+This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an IMDB fantasy-synopsis dataset.
 
 
 ## Model description
 
-More information needed
+This model was fine-tuned with the intention of generating short fantasy stories based on given keywords.
 
-## Intended uses & limitations
+## Training data
 
-More information needed
+Training data was parsed from the IMDB website and consists of keyword-synopsis pairs. The method of encoding the data was inspired by [this repo](https://github.com/minimaxir/gpt-2-keyword-generation).
 
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
 
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: None
-- training_precision: float32
+- optimizer: Adam
+- dropout: 0.2
+- learning rate schedule: exponential decay
+- epochs: 4
 
 ### Training results
 
@@ -45,4 +44,4 @@ The following hyperparameters were used during training:
 
 - Transformers 4.29.2
 - TensorFlow 2.12.0
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
 
README.md (after):

---
license: mit
base_model: gpt2
model-index:
- name: gpt2-fantasy
  results: []
language:
- en
metrics:
- accuracy
library_name: transformers
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# gpt2-fantasy

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an IMDB fantasy-synopsis dataset.

## Model description

This model was fine-tuned with the intention of generating short fantasy stories based on given keywords.
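Since the card declares `library_name: transformers`, the model should load through the standard `text-generation` pipeline. A minimal sketch, assuming the repo id `egosumkira/gpt2-fantasy` (inferred from the commit author and model name) and a plain comma-separated keyword prompt; the exact prompt format this model expects is not documented here:

```python
from transformers import pipeline

# framework="tf" because the card lists TensorFlow 2.12.0 under its versions.
generator = pipeline("text-generation", model="egosumkira/gpt2-fantasy", framework="tf")

# The comma-separated keyword prompt is an assumption -- see the encoding
# notes in the Training data section below.
prompt = "dragon, sorcerer, lost kingdom"
out = generator(prompt, max_new_tokens=120, do_sample=True, temperature=0.9)
print(out[0]["generated_text"])
```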

## Training data

Training data was parsed from the IMDB website and consists of keyword-synopsis pairs. The method of encoding the data was inspired by [this repo](https://github.com/minimaxir/gpt-2-keyword-generation).
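The general idea in that repo is to flatten each keywords-text pair into a single delimited training string. The card does not spell out the exact delimiters used here, so the sketch below is a hypothetical reconstruction; `KEYWORD_SEP`, `TEXT_DELIM`, and `encode_example` are illustrative names, not the model's documented format:

```python
# Hypothetical encoding of one keywords-synopsis pair into a flat training
# string; the actual delimiters used for this model are not documented.
KEYWORD_SEP = ", "       # assumed separator between keywords
TEXT_DELIM = "\n~\n"     # assumed keywords/synopsis delimiter
EOS = "<|endoftext|>"    # GPT-2's end-of-text token

def encode_example(keywords: list[str], synopsis: str) -> str:
    """Flatten a keywords-synopsis pair for language-model fine-tuning."""
    return KEYWORD_SEP.join(keywords) + TEXT_DELIM + synopsis + EOS

print(encode_example(["dragon", "sorcerer", "lost kingdom"],
                     "An exiled knight must wake the last dragon..."))
```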

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: Adam
- dropout: 0.2
- learning rate schedule: exponential decay
- epochs: 4
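A minimal Keras sketch of how these settings could fit together. The initial learning rate, decay steps, and decay rate are placeholders (the card does not give them), and mapping `dropout: 0.2` onto GPT-2's three dropout fields is an assumption:

```python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel

# Apply the card's dropout of 0.2 to GPT-2's dropout fields (assumed mapping).
model = TFGPT2LMHeadModel.from_pretrained(
    "gpt2", resid_pdrop=0.2, embd_pdrop=0.2, attn_pdrop=0.2
)

# Exponential-decay learning rate schedule; all three numbers are placeholders.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=5e-5,
    decay_steps=1_000,
    decay_rate=0.9,
)
# Without an explicit loss, the model's internal LM loss is used.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule))

# model.fit(train_dataset, epochs=4)  # 4 epochs, per the card
```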

### Training results

### Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3