Update README.md
@@ -2,19 +2,30 @@
license: llama2
---

This model is the secret weapon behind the [Aurelian](https://huggingface.co/grimulkan/aurelian-alpha0.1-70b-rope8-32K-fp16) series of models. It takes a chunk of story as input and generates the prompt that could have produced it. Basically, it converts a story into Q&A format, for the purpose of training another model for instruct-tuned story writing.

I made it for internal use, and it is not very user-friendly. But it's what I've got until I get the compute to re-train it.

A 6-bit EXL2 quantization is available [here](https://huggingface.co/grimulkan/story-reverse-prompt-70b-rope8-32K-6bpw_h6_exl2).

The steps to use this model are:
- Get a plaintext version of a story.
  - Hopefully human-written; that would be the main point of using a model like this.
- Divide the story into chunks (see the chunking sketch after this list).
  - Typically less than 3000 tokens per chunk.
  - Try to break on chapter boundaries.
  - Try to strip any non-story text, like chapter numbers, page numbers, etc.
- Set up the initial prompt of the model (see below), and pass it the first chunk.
- The model produces a prompt that could generate it (see the example below).
- Concatenate the previous output cumulatively (context history, see the examples below), and pass the combined input along with the next story chunk, just like a normal chat conversation.
- The model produces the writing prompt corresponding to the 2nd chunk.
- The reason the model accepts a conversation history and has 32K of context is to keep the writing prompts sounding natural and to pay attention to prior story context.
  - For instance, let's say the character meets an elf in the new chunk, but the elf was already introduced some chunks earlier in the story. When writing the prompt, the model would correctly refer to 'the elf', rather than 'an elf', since it knows the prior story context.
  - This is the main advantage of using this model vs. trying to generate a standalone summary per story chunk. Standalone summaries end up sounding very inconsistent over a long story.
- Depending on how much VRAM you have, you may need to limit your context length (or truncate at 32K anyway, leaving room for the output), just like any other chat conversation. Most clients will do this automatically (e.g., oobabooga). You will lose prior story context, but 32K is pretty big if you can use all of it.
- Continue this process until your entire story text is converted into a series of writing prompts and corresponding story chunks as output.
- Then you can convert the Q&A pairs to, e.g., fastchat format in a .json for SFT (see the sketch after this list).
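
The chunking step is not provided here, so below is a minimal sketch of one way to do it in Python. The tokenizer name and the chapter-heading regex are assumptions; adjust both for your own story files.

```python
# Rough sketch (not part of this repo): split a plaintext story into
# chapter-aligned chunks of at most ~3000 tokens. The tokenizer name and the
# chapter-heading regex are assumptions; adjust both for your own files.
import re
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf")  # any Llama-2 tokenizer

def count_tokens(text: str) -> int:
    return len(tokenizer(text, add_special_tokens=False)["input_ids"])

def chunk_story(story_text: str, max_tokens: int = 3000) -> list[str]:
    # Split on chapter headings first (and drop the headings themselves),
    # then pack paragraphs into chunks without crossing a chapter boundary.
    chapters = re.split(r"\n\s*(?:CHAPTER|Chapter)\s+\S+[^\n]*\n", story_text)
    chunks, current = [], ""
    for chapter in chapters:
        for para in chapter.split("\n\n"):
            para = para.strip()
            if not para:
                continue
            candidate = f"{current}\n\n{para}" if current else para
            if current and count_tokens(candidate) > max_tokens:
                chunks.append(current)
                current = para
            else:
                current = candidate
        if current:
            chunks.append(current)  # always close the chunk at a chapter boundary
            current = ""
    return chunks
```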
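
For the conversion step at the end of the list, here is a hedged sketch that writes the (writing prompt, story chunk) pairs as a fastchat/ShareGPT-style .json. The field names follow the common `conversations` schema and may need adjusting for your training script; note the roles are deliberately flipped, so the generated writing prompt becomes the human turn and the original story chunk becomes the model's turn.

```python
# Sketch of the conversion step: write (writing_prompt, story_chunk) pairs as a
# fastchat/ShareGPT-style .json for SFT. Field names follow the common
# "conversations" schema; check what your training script actually expects.
import json

def to_fastchat(pairs: list[tuple[str, str]], story_id: str) -> dict:
    conversations = []
    for writing_prompt, story_chunk in pairs:
        conversations.append({"from": "human", "value": writing_prompt})
        conversations.append({"from": "gpt", "value": story_chunk})
    return {"id": story_id, "conversations": conversations}

# Example usage, assuming `pairs` came from the loop over chunks:
# with open("story_sft.json", "w") as f:
#     json.dump([to_fastchat(pairs, "story-0001")], f, ensure_ascii=False, indent=2)
```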
## Input format:
@@ -41,7 +52,7 @@ TASK: Write a detailed prompt that would generate the above section of the story
<s>ASSISTANT:
```

- The model will respond with the output (see an example below).
- Note that there is a blank space at the end of the above completion request (after `ASSISTANT:`).
- I used `<NO-LINE-BREAK>` to indicate that there is not supposed to be a line break there; I added a line break in the text above just for human readability.
- The first prompt (with the YES response) is basically a hard-coded way to add some background info, outside the system prompt. It is optional, but the model was trained to expect it (even if it only provides trivial information like 'This is a fictional story').
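
To make the turn structure concrete, here is a hypothetical sketch of the chunk-by-chunk loop. It reproduces only the pieces of the format that are visible above (the `TASK:` line, the `</s>` closure, the missing line break, and the trailing space after `ASSISTANT:`); the opening background/'YES' turn and the rest of the template are not shown in this excerpt, so `first_turn`, `generate`, and the exact joining of turns are placeholders.

```python
# Hypothetical sketch of the chunk-by-chunk loop. Only the pieces visible above
# are reproduced; `first_turn` (the background/"YES" opener built from the full
# template) and `generate` (your inference client, e.g. the oobabooga API or
# exllamav2) are placeholders you must supply yourself.
from typing import Callable

TASK_LINE = "TASK: Write a detailed prompt that would generate the above section of the story.</s>"

def build_request(history: str, chunk: str) -> str:
    # No line break between </s> and <s>, and note the trailing space after "ASSISTANT:".
    return history + chunk + "\n\n" + TASK_LINE + "<s>ASSISTANT: "

def reverse_prompt_story(chunks: list[str], first_turn: str,
                         generate: Callable[[str], str]) -> list[tuple[str, str]]:
    history = first_turn
    pairs = []
    for chunk in chunks:
        request = build_request(history, chunk)
        writing_prompt = generate(request).strip()
        history = request + writing_prompt + "</s>"  # fold the answer back into the context
        pairs.append((writing_prompt, chunk))
    return pairs
```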
@@ -71,11 +82,11 @@ TASK: Write a detailed prompt that would generate the above section of the story
TASK: Write a detailed prompt that would generate the above section of the story.</s><NO-LINE-BREAK>
<s>ASSISTANT:
```
and so on.

If you run out of context length, you'd need to drop the oldest chunks/prompts (most clients will do this automatically). Whether you drop or preserve the first general information/background prompt is up to you.
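
If you drive the model yourself rather than through a chat client, the trimming might look like the sketch below. `count_tokens` is the helper from the chunking sketch, `keep_first` pins the initial background prompt, and the budget numbers are illustrative.

```python
# Sketch of trimming the running context to a token budget. `turns` holds the
# accumulated request/response strings, oldest first; keep_first pins the
# initial background prompt if you want to preserve it.
def trim_history(turns: list[str], max_tokens: int = 32768,
                 reserve_for_output: int = 1024, keep_first: bool = True) -> list[str]:
    budget = max_tokens - reserve_for_output
    trimmed = list(turns)
    while trimmed and sum(count_tokens(t) for t in trimmed) > budget:
        drop_index = 1 if (keep_first and len(trimmed) > 1) else 0
        trimmed.pop(drop_index)  # drop the oldest non-pinned turn
    return trimmed
```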
## Example:
@@ -147,7 +158,7 @@ Amid this backdrop of half-imagined dystopia and unlikely vibrancy, crime ripple
TASK: Write a detailed prompt that would generate the above section of the story.</s><NO-LINE-BREAK>
<s>ASSISTANT:
```

**Model output (2nd chunk):**
@@ -157,6 +168,6 @@ Describe some of the horrid and grotesque creatures that stalk the shadows of th
Also mention the crime that is rampant in the city. Describe some of the types of illicit activities that are common in the city.</s>

and so on. As you can see, you can get varied responses: sometimes specific and detailed (like the 1st chunk output), sometimes short (leaving the story details more open-ended). This was done to mimic the different ways humans might ask for a story section to be written: sometimes they want to tell the model exactly what to write, and sometimes they want the model to choose.

Right now, there is no way to control which kind of output you get from this model, but you can regenerate until you get the desired length/level of detail if you like.
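
A rough sketch of that regenerate-until-satisfied loop, reusing the hypothetical `generate` callable and `count_tokens` helper from the sketches above; the length threshold and retry count are arbitrary.

```python
# Sketch of regenerating until the writing prompt is detailed enough for your
# taste. `generate` and `count_tokens` are the same stand-ins as above.
def generate_until(request: str, generate, min_tokens: int = 80, max_tries: int = 5) -> str:
    best = ""
    for _ in range(max_tries):
        candidate = generate(request).strip()
        if count_tokens(candidate) >= min_tokens:
            return candidate
        best = max(best, candidate, key=count_tokens)
    return best
```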