merve (HF staff) committed on
Commit 07e01b0
1 Parent(s): 47ad418

Update README.md

Files changed (1)
  1. README.md +14 -12
README.md CHANGED
@@ -5,30 +5,32 @@ tags:
  model-index:
  - name: chatgpt-prompts-bart-long
    results: []
+ datasets:
+ - fka/awesome-chatgpt-prompts
  ---
 
- <!-- This model card has been generated automatically according to the information Keras had access to. You should
- probably proofread and complete it, then remove this comment. -->
 
- # chatgpt-prompts-bart-long
+ # ChatGPT Prompt Generator
 
- This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
+ This model is a fine-tuned version of [BART-large](https://huggingface.co/facebook/bart-large) on a ChatGPT prompts dataset.
  It achieves the following results on the evaluation set:
  - Train Loss: 2.8329
  - Validation Loss: 2.5015
  - Epoch: 4
 
- ## Model description
-
- More information needed
-
  ## Intended uses & limitations
 
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
+ You can use this to generate ChatGPT personas. Simply input a persona like below:
+
+ ```
+ from transformers import BartForConditionalGeneration, BartTokenizer
+
+ example_english_phrase = "photographer"
+ batch = tokenizer(example_english_phrase, return_tensors="pt")
+ generated_ids = model.generate(batch["input_ids"], max_new_tokens=150)
+ output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
+ ```
+
+
  ## Training procedure
 
@@ -54,4 +56,4 @@ The following hyperparameters were used during training:
  - Transformers 4.26.0
  - TensorFlow 2.9.2
  - Datasets 2.8.0
- - Tokenizers 0.13.2
+ - Tokenizers 0.13.2
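
Note that the snippet added in this commit imports `BartForConditionalGeneration` and `BartTokenizer` but never instantiates `model` or `tokenizer`. A minimal runnable sketch that fills that gap is shown below; the Hub checkpoint id is an assumption inferred from the model card name (`chatgpt-prompts-bart-long` under the `merve` namespace) and is not stated in the diff itself.

```
# Minimal end-to-end sketch completing the README snippet above.
from transformers import BartForConditionalGeneration, BartTokenizer

# Assumed Hub repo id, inferred from the card name; substitute the actual id if it differs.
checkpoint = "merve/chatgpt-prompts-bart-long"
tokenizer = BartTokenizer.from_pretrained(checkpoint)
model = BartForConditionalGeneration.from_pretrained(checkpoint)

# Encode a persona keyword and generate a ChatGPT-style prompt for it.
example_english_phrase = "photographer"
batch = tokenizer(example_english_phrase, return_tensors="pt")
generated_ids = model.generate(batch["input_ids"], max_new_tokens=150)
output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(output[0])  # the generated persona prompt
```

`output[0]` holds the generated persona prompt; `max_new_tokens` bounds how long the generated prompt can be.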