TheBloke committed on
Commit 68cd9ae
1 Parent(s): 90e7dee

Initial GPTQ model commit.

Files changed (1): README.md added (+165, -0)

---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- ewof/code-alpaca-instruct-unfiltered
language:
- en
library_name: transformers
pipeline_tag: text-generation
license: other
inference: false
---

# Manticore 13B GPTQ

This repo contains 4-bit GPTQ format quantised models of [OpenAccess AI Collective's Manticore 13B Chat](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg).

It is the result of quantising to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GPTQ).
* [4-bit, 5-bit and 8-bit GGML models for llama.cpp CPU (+CUDA) inference](https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GGML).
* [OpenAccess AI Collective's original float16 HF format repo for GPU inference and further conversions](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg).

## How to easily download and use this model in text-generation-webui

Open the text-generation-webui UI as normal.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/manticore-13b-chat-pyg-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded, `manticore-13b-chat-pyg-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
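
If you'd rather script the download than use the UI, here is a minimal sketch using the `huggingface_hub` Python library. This is not part of the original instructions, and the local directory name is an arbitrary choice:

```python
# Minimal sketch (not from the original README): scripted download of this
# repo with huggingface_hub, as an alternative to step 2 above.
from huggingface_hub import snapshot_download

# Downloads all files in the repo to a local directory.
local_dir = snapshot_download(
    repo_id="TheBloke/manticore-13b-chat-pyg-GPTQ",
    local_dir="models/manticore-13b-chat-pyg-GPTQ",
)
print(f"Model files downloaded to: {local_dir}")
```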

## Provided files

**`Manticore-13B-Chat-Pyg-GPTQ-4bit-128g.no-act-order.safetensors`**

This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility.

It was created without `--act-order` to ensure compatibility with all UIs out there.

* `Manticore-13B-Chat-Pyg-GPTQ-4bit-128g.no-act-order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with text-generation-webui one-click-installers
  * Parameters: Groupsize = 128. No act-order.
  * Command used to create the GPTQ:
    ```
    python llama.py /workspace/models/openaccess-ai-collective_manticore-13b-chat-pyg wikitext2 --wbits 4 --true-sequential --groupsize 128 --save_safetensors /workspace/manticore-pyg/gptq/Manticore-13B-Chat-Pyg-GPTQ-4bit-128g.no-act-order.safetensors
    ```
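
As a hedged alternative to the webui flow (not covered in the original README), the provided `.safetensors` file can also be loaded from Python with the AutoGPTQ library. The quantisation settings below simply mirror the stated parameters (4-bit, groupsize 128, no act-order); the prompt and generation settings are illustrative only:

```python
# Hedged sketch (not from the original README): loading the quantised file
# with AutoGPTQ instead of text-generation-webui.
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

repo_id = "TheBloke/manticore-13b-chat-pyg-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)

# Mirrors the parameters listed above: 4-bit, groupsize 128, no act-order.
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    model_basename="Manticore-13B-Chat-Pyg-GPTQ-4bit-128g.no-act-order",
    use_safetensors=True,
    quantize_config=quantize_config,
    device="cuda:0",
)

prompt = "USER: tell me a joke about alpacas\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```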

# Original Manticore Chat 13B model card

# Manticore 13B Chat

Manticore 13B Chat builds on Manticore with new datasets, including a de-duped subset of the Pygmalion dataset. It also removes all Alpaca-style prompts using `###` in favor of chat-only prompts using `USER:`/`ASSISTANT:`, as well as [pygmalion/metharme prompting](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using `<|system|>`, `<|user|>` and `<|model|>` tokens. Examples of both prompt styles are shown below.

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/EqrvvehG) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org).
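
To make the two prompt styles concrete, here is a hedged sketch of each; the exact spacing and turn layout are assumptions, as the original card does not spell them out. Chat style:

```
USER: Write a short poem about alpacas.
ASSISTANT:
```

And metharme style, using the special tokens linked above:

```
<|system|>Enter assistant mode. Answer the user helpfully.<|user|>Write a short poem about alpacas.<|model|>
```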

# Training Datasets

Manticore 13B Chat is a Llama 13B model fine-tuned on the following datasets, along with the datasets from the original Manticore 13B.

**Manticore 13B Chat was trained on 25% of the datasets below. The datasets were merged, shuffled, and then sharded into 4 parts (see the sketch after these lists).**

- de-duped Pygmalion dataset, filtered down to RP data
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented
- hellaswag, updated for detailed explanations with 30K+ rows
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented
- [ewof/code-alpaca-instruct-unfiltered](https://huggingface.co/datasets/ewof/code-alpaca-instruct-unfiltered)
Manticore 13B
- [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) - based on a cleaned and de-duped subset
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [Wizard-Vicuna](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
- [GPT4-LLM-Cleaned](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K-row subset, instruct augmented for concise responses, derived from the `train` split
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization

Not added from Manticore 13B:
- mmlu - MMLU datasets were not added to this model, as the `test` split is used for benchmarks
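
For illustration, a minimal sketch of the merge/shuffle/shard step described above, where the 25% training subset is one of the 4 shards. This is not the project's actual pipeline; the helper function and toy stand-in data are assumptions:

```python
# Hedged sketch (not the actual training pipeline): merge the datasets,
# shuffle them, and shard into 4 parts, training on one part (25%).
import random

def merge_shuffle_shard(datasets, num_shards=4, seed=42):
    """Merge example lists, shuffle them, and split into num_shards parts."""
    merged = [example for ds in datasets for example in ds]
    random.Random(seed).shuffle(merged)
    return [merged[i::num_shards] for i in range(num_shards)]

# Toy stand-ins for the real datasets listed above.
datasets = [
    [{"text": f"riddle_sense example {i}"} for i in range(100)],
    [{"text": f"gsm8k example {i}"} for i in range(100)],
]
shards = merge_shuffle_shard(datasets)
train_subset = shards[0]  # ~25% of the merged data
print(len(train_subset), "examples in the training shard")
```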

# Shoutouts

Special thanks to Nanobit for helping with Axolotl, TheBloke for quantizing these models so they are more accessible to all, ehartford for cleaned datasets, and 0x000011b for the RP dataset.

# Demo

Try out the model in HF Spaces. The demo uses a quantized GGML version of the model to quickly return predictions on smaller GPUs (and even CPUs). Quantized GGML may have some minimal loss of model quality.
- https://huggingface.co/spaces/openaccess-ai-collective/manticore-13b-chat-pyg

## Release Notes

- https://wandb.ai/wing-lian/manticore-13b-v2/runs/hxr3aiiw

## Build

Manticore was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB GPUs.
- 3 epochs, taking approximately 8 hours. No further epochs will be released.
- The configuration to duplicate this build is provided in this repo's [/config folder](https://huggingface.co/openaccess-ai-collective/manticore-13b/tree/main/configs).

## Bias, Risks, and Limitations

Manticore has not been aligned to human preferences with techniques like RLHF, nor deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Manticore was fine-tuned from the base model LLaMA 13B; please refer to its model card's Limitations section for relevant information.

## Examples

```
USER: make up a sonnet in the style of shakespeare about a skier skiing in the glades on a bluebird day.
ASSISTANT: A skier doth glide 'neath sun’s glow, His edges sharp to cut the snow, Bluebird sky above, While he doth revel on the slopes below, The cold air biting his nose, But not his spirits, For he is filled with a sense of joy, As he carves through the powdery drifts. His body moves in graceful rhythm, And each turn he makes with ease, He feels like he could ski forever, With no care for the world outside these glades.
```

```
USER: tell me a joke about alpacas
ASSISTANT: Did you hear about the famous alpaca ballet dancer? She was a prima llama.
USER: okay, that is actually pretty good
ASSISTANT: What do you call an alpaca in a bowtie? A llama-cholla!
USER: why is that funny?
ASSISTANT: Because it’s a play on words. “Llama-cholla” sounds like “lame chola,” which is Spanish slang for someone who acts cool or pretentious. So, the joke is implying that alpacas are vain and act cool with their bowties.
```

```
USER: What is 44 + 77?
ASSISTANT: 121
USER: What is the sum of 33 and 77?
ASSISTANT: 110
USER: tell me 5 words that start with “ex” and the sum of 33 and 77.
ASSISTANT: exotic, expressive, explicit, experimental, and extravagant. The sum of 33 and 77 is 110.
USER: tell me 3 words that start with “egr” and end in “s”
ASSISTANT: egret, eggplant, and egrets.
```
- oof

```
USER: Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?
ASSISTANT: Sally has a total of 6 siblings, so she has a total of 6 - 2 = 4 sisters.
USER: Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have? Use multi-step reasoning to explain your answer.
ASSISTANT: Sally has a total of three brothers, so she has a total of three * 2 = six siblings. Her family consists of three boys and six girls in total. This means that Sally has three sisters since there are six female relatives and only three male ones.
USER: But when brothers have sisters, aren’t they shared or the same?
ASSISTANT: No, not every sibling shares their mother. For example, I have two brothers who share one mother but my third brother has a different mother from both of them.
USER: Let’s assume they all share the same mother and father. Now explain with multi-step reasoning.
ASSISTANT: If Sally has three brothers then that means she has six siblings altogether. Each brother has two sisters, which means that there are twelve female relatives in her family - one for each of her brothers and herself. Since there are only six females in the entire family, that means that Sally’s mother has no other children than her own three sons - so she can’t be the mother of any of the sisters. So all those sisters have to belong to one of the brothers, and since they are shared by two brothers, then each brother has a single sister. Therefore, there is only one sister for Sally, making the final answer 1.
```
- pretty sure that reasoning isn't correct, and the final answer of having 1 sister is a lucky guess. (For reference, the correct chain: each brother's 2 sisters are Sally plus one other girl, so Sally has exactly 1 sister.)