FPHam committed on
Commit
92c56f3
1 Parent(s): be23ac4

Update README.md

Files changed (1)
  1. README.md +42 -98
README.md CHANGED
@@ -2,114 +2,23 @@
inference: false
license: other
---

- <!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
- </div>
- <div style="display: flex; justify-content: space-between; width: 100%;">
- <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
- </div>
- <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
- </div>
- </div>
- <!-- header end -->

- # FPHam's Karen The Editor 13B GPTQ

- These files are GPTQ 4bit model files for [FPHam's Karen The Editor 13B](https://huggingface.co/FPHam/Karen_theEditor_13b_HF).

- It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Other repositories available

- * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGML)
- * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/FPHam/Karen_theEditor_13b_HF)
-
- ## Prompt template
-
- ```
- USER: Edit the following for spelling and grammar mistakes:
- ASSISTANT:
- ```
-
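As a minimal illustration (not part of the original card), the template can be filled with an ordinary f-string; `user_text` is a placeholder for the passage to edit:

```python
# Illustrative helper: wrap arbitrary text in Karen's USER/ASSISTANT editing template.
def build_prompt(user_text: str) -> str:
    return (
        "USER: Edit the following for spelling and grammar mistakes: "
        f"{user_text}\nASSISTANT:"
    )
```
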
- ## How to easily download and use this model in text-generation-webui
-
- ### Downloading the model
-
- 1. Click the **Model tab**.
- 2. Under **Download custom model or LoRA**, enter `TheBloke/Karen_theEditor_13B-GPTQ`.
- 3. Click **Download**.
- 4. Wait until it says it's finished downloading.
- 5. Untick "Autoload model".
- 6. Click the **Refresh** icon next to **Model** in the top left.
-
- ### To use with AutoGPTQ (if installed)
-
- 1. In the **Model drop-down**: choose the model you just downloaded, `Karen_theEditor_13B-GPTQ`.
- 2. Under **GPTQ**, tick **AutoGPTQ**.
- 3. Click **Save settings for this model** in the top right.
- 4. Click **Reload the Model** in the top right.
- 5. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
-
- ### To use with GPTQ-for-LLaMa
-
- 1. In the **Model drop-down**: choose the model you just downloaded, `Karen_theEditor_13B-GPTQ`.
- 2. If you see an error in the bottom right, ignore it - it's temporary.
- 3. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
- 4. Click **Save settings for this model** in the top right.
- 5. Click **Reload the Model** in the top right.
- 6. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
-
- ## Provided files
-
- **Karen-The-Editor-GPTQ-4bit-128g.no-act.order.safetensors**
-
- This will work with all versions of GPTQ-for-LLaMa, and with AutoGPTQ.
-
- It was created with the following settings:
-
- * `Karen-The-Editor-GPTQ-4bit-128g.no-act.order.safetensors`
- * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
- * Works with AutoGPTQ
- * Works with text-generation-webui one-click-installers
- * Parameters: Groupsize = 128. Act Order / desc_act = False.
-
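For reference, a rough sketch of loading this file programmatically with the AutoGPTQ Python library; the basename comes from the filename above, while the generation settings and prompt text are assumptions, not part of the original instructions:

```python
# Sketch only, assuming `pip install auto-gptq transformers` on a CUDA machine.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "TheBloke/Karen_theEditor_13B-GPTQ"
# Basename of the provided .safetensors file, without the extension.
model_basename = "Karen-The-Editor-GPTQ-4bit-128g.no-act.order"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    model_basename=model_basename,
    use_safetensors=True,
    device="cuda:0",
    use_triton=False,  # the CUDA path is enough for this no-act-order file
)

prompt = "USER: Edit the following for spelling and grammar mistakes: she dont like it\nASSISTANT:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output_ids = model.generate(input_ids=input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
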
- <!-- footer start -->
- ## Discord
-
- For further support, and discussions on these models and AI in general, join us at:
-
- [TheBloke AI's Discord server](https://discord.gg/UBgz4VXf)

- ## Thanks, and how to contribute.

- Thanks to the [chirper.ai](https://chirper.ai) team!
-
- I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
-
- If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
-
- Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
-
- * Patreon: https://patreon.com/TheBlokeAI
- * Ko-Fi: https://ko-fi.com/TheBlokeAI
-
- **Patreon special mentions**: Aemon Algiz; Dmitiry Samsonov; Jonathan Leane; Illia Dulskyi; Khalefa Al-Ahmad; Nikolai Manek; senxiiz; Talal Aujan; vamX; Eugene Pentland; Lone Striker; Luke Pendergrass; Johann-Peter Hartmann.
-
- Thank you to all my generous patrons and donaters.
- <!-- footer end -->
-
- # Original model card: FPHam's Karen The Editor 13B
-
- ## Karen is an editor for your fiction.
-
- She fixes grammar and wording issues, but doesn't necessarily start rewording everything into corporate speak the way ChatGPT does, so it should keep the style intact.
-
- Based on LLaMA 13b and a Wizard-Vicuna-uncensored finetune, then finetuned with about 20k grammar examples (bad grammar/good grammar).

Karen gets triggered by this prompt (pun intended):

@@ -132,3 +41,38 @@ context: "Assistant is a meticulous senior editor with a specialization in editi
I am not sure to what extent the context part has any effect. It doesn't hurt, though.

This is version 0.2, finetuned just so it retains the underlying Vicuna qualities.

inference: false
license: other
---
+ ## Karen is an editor for your fiction.
+
+ She fixes grammar and wording issues, but doesn't necessarily start rewording everything into corporate speak the way ChatGPT does, so it should keep the style intact.

+ Based on LLaMA 13b and a Wizard-Vicuna-uncensored finetune, then finetuned with about 20k grammar examples (bad grammar/good grammar).

+ # FPHam's Karen The Editor 13B GPTQ (quantized by TheBloke)

+ These files are GPTQ 4bit model files for [Karen The Editor 13B](https://huggingface.co/FPHam/Karen_theEditor_13b_HF).

## Other repositories available

* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/Karen_theEditor_13B-GGML)

+ ## Usage

Karen gets triggered by this prompt (pun intended):

I am not sure to what extent the context part has any effect. It doesn't hurt, though.

This is version 0.2, finetuned just so it retains the underlying Vicuna qualities.
+
+ ## Example:
+
+ Input
+ ```
+ USER: Edit the following for spelling and grammar mistakes:
+ unce upon time lived name John who have pet monkey Jackie.
+ one day while walkin down the street he saw his friend Tom coming to him.
+ "Hey Tom" said John "hows it going?"
+
+ Tom replyied "Not too bad how 'bout yourself?". as they was talk Jackie sudden jump onto Tom shoulder and started play with his hair.
+ Tom look at John and ask "Is that yours?"
+
+ John reply "Yeah, this crazy lil guy follow me everywhere" just then Jackie grab hold of Tom glasses and try to take thmm off.
+ Tom struggle to keep his balance as he laugh hystericaly.
+ ```
+ Output
+ ```
+ ASSISTANT: Once upon a time there lived a man named John who had a pet monkey called Jackie.
+ One day while walking down the street he saw his friend Tom approaching him.
+ "Hey Tom," said John. "How's it going?"
+
+ Tom replied, "Not too bad, how about yourself?" As they were talking, Jackie suddenly jumped onto Tom's shoulder and began playing with his hair.
+ Tom looked at John and asked, "Is that yours?"
+
+ John replied, "Yeah, this crazy little guy follows me everywhere." Just then Jackie grabbed hold of Tom's glasses and tried to take them off.
+ Tom struggled to keep his balance as he laughed hysterically.
+ ```
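A rough sketch of reproducing an edit like this outside the webui, using the unquantised fp16 checkpoint with plain transformers; the generation settings below are placeholders, not recommendations from the card:

```python
# Sketch only, assuming `pip install transformers accelerate` and a GPU with room for a 13B fp16 model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FPHam/Karen_theEditor_13b_HF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

story = (
    "unce upon time lived name John who have pet monkey Jackie.\n"
    "one day while walkin down the street he saw his friend Tom coming to him."
)
# Karen's template: USER asks for an edit, ASSISTANT answers with the corrected text.
prompt = f"USER: Edit the following for spelling and grammar mistakes: {story}\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Print only the newly generated tokens (the edited text).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
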
+
+ ## Goal: to create the best grammar checker you have ever seen
+
+ ## To do:
+ - train on an even larger dataset
+ - see if finetuning on plain LLaMA without Vicuna would work better or worse (the theory is that it would become very focused on editing and nothing else)
+ - explore what different settings (temperature, top_p, top_k) do for this type of finetune