bprateek committed on
Commit dd3d79e
1 Parent(s): c41bdb0

update model card README.md

Files changed (1)
  1. README.md +56 -22
README.md CHANGED
@@ -1,33 +1,67 @@
  ---
- language: en
- tags:
- - text-generation
  license: apache-2.0
- widget:
- - text: "Maximize your bedroom space without sacrificing style with the storage bed."
- - text: "Handcrafted of solid acacia in weathered gray, our round Jozy drop-leaf dining table is a space-saving."
- - text: "Our plush and luxurious Emmett modular sofa brings custom comfort to your living space."
  ---

- ## GPT2-Home

- This model is fine-tuned using GPT-2 on amazon home products metadata.
- It can generate descriptions for your **home** products by getting a text prompt.

- ### Model description

- [GPT-2](https://openai.com/blog/better-language-models/) is a large [transformer](https://arxiv.org/abs/1706.03762)-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.

- ### How to use
- For best experience and clean outputs, you can use Live Demo mentioned above, also you can use the notebook mentioned in my [GitHub](https://github.com/HamidRezaAttar/GPT2-Home)

- You can use this model directly with a pipeline for text generation.
- ```python
- >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
- >>> tokenizer = AutoTokenizer.from_pretrained("bprateek/product_description_generator")
- >>> model = AutoModelForCausalLM.from_pretrained("bprateek/product_description_generator")
- >>> generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':100})
- >>> generated_text = generator("This bed is very comfortable.")
- ```

  ---
  license: apache-2.0
+ tags:
+ - generated_from_trainer
+ metrics:
+ - rouge
+ model-index:
+ - name: product_description_generator
+   results: []
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # product_description_generator
+
+ This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 4.0241
+ - Rouge1: 0.1639
+ - Rouge2: 0.0
+ - Rougel: 0.1337
+ - Rougelsum: 0.1357
+ - Gen Len: 11.4
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 4
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
+ | No log | 1.0 | 6 | 4.2311 | 0.1365 | 0.0 | 0.1103 | 0.1102 | 12.1 |
+ | No log | 2.0 | 12 | 4.1437 | 0.1668 | 0.0 | 0.1321 | 0.1332 | 13.2 |
+ | No log | 3.0 | 18 | 4.0572 | 0.143 | 0.0 | 0.1152 | 0.1152 | 11.8 |
+ | No log | 4.0 | 24 | 4.0241 | 0.1639 | 0.0 | 0.1337 | 0.1357 | 11.4 |
+
+ ### Framework versions
+
+ - Transformers 4.28.1
+ - Pytorch 2.0.0+cu118
+ - Datasets 2.12.0
+ - Tokenizers 0.13.3
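
The regenerated card above drops the usage snippet that the old README carried. As a point of reference, here is a minimal sketch of how a t5-small fine-tune like this is typically called through a `text2text-generation` pipeline. The repository id is taken from the card's model-index name and the removed snippet; the prompt and `max_new_tokens` value are illustrative, not values from the commit.

```python
# Minimal usage sketch; the checkpoint id is assumed from the card's
# model-index name, and the generation settings are illustrative only.
from transformers import pipeline

# T5 checkpoints are sequence-to-sequence, so the text2text-generation task
# applies here rather than the text-generation task used in the removed
# GPT-2 snippet.
generator = pipeline(
    "text2text-generation",
    model="bprateek/product_description_generator",
)

result = generator(
    "Maximize your bedroom space without sacrificing style with the storage bed.",
    max_new_tokens=60,  # illustrative cap on output length
)
print(result[0]["generated_text"])
```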
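The hyperparameter list in the card mirrors the standard Trainer arguments that `generated_from_trainer` cards are produced from. The sketch below is a hedged reconstruction under that assumption: the output path and evaluation strategy are placeholders, and the dataset and preprocessing are not described anywhere in this commit.

```python
# Hedged reconstruction of the training arguments implied by the card's
# hyperparameter list; nothing beyond those listed values is specified here.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="product_description_generator",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                     # "Native AMP" mixed precision
    evaluation_strategy="epoch",   # assumption, consistent with the per-epoch results table
    predict_with_generate=True,    # needed for ROUGE and generation-length metrics
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults,
    # matching the optimizer line in the card.
)
```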