These files are GGML format model files for OpenAccess AI Collective's Manticore 13B Chat.
GGML files are for CPU + GPU inference using llama.cpp and libraries and UIs which support this format. The following repositories are available:
- 4-bit GPTQ models for GPU inference
- 2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference
- Unquantised fp16 model in pytorch format, for GPU inference and for further conversions
I have quantised these files using the 'original' quantisation methods with an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit
They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.
These new quantisation methods are only compatible with llama.cpp as of June 6th, commit
They will NOT be compatible with koboldcpp, text-generation-webui, and other UIs and libraries yet. Support is expected to come over the next few days.
The new methods available are:
- GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
- GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
- GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
- GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
- GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
- GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
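As a quick cross-check, the bpw figures above follow from the block structure each method describes: weight bits, plus per-block scale/min bits, plus a small per-super-block allowance. The sketch below reproduces them; the `super_bits` metadata allowance is my assumption, not the exact llama.cpp struct layout.

```python
# Back-of-the-envelope bpw check for the k-quant types listed above.
# super_bits is an assumed allowance for per-super-block fp16 metadata;
# the exact llama.cpp struct layouts may differ slightly.

def bpw(weight_bits, n_blocks, block_size, scale_bits, min_bits, super_bits):
    n_weights = n_blocks * block_size                    # weights per super-block (256 here)
    total_bits = (n_weights * weight_bits                # the quantized weights themselves
                  + n_blocks * (scale_bits + min_bits)   # per-block scales (and mins for type-1)
                  + super_bits)                          # per-super-block metadata (assumed)
    return total_bits / n_weights

kquants = {
    "Q2_K": (2, 16, 16, 4, 4, 16),  # type-1: 4-bit scales and mins
    "Q3_K": (3, 16, 16, 6, 0, 16),  # type-0: 6-bit scales only
    "Q4_K": (4, 8, 32, 6, 6, 32),   # type-1: 6-bit scales and mins
    "Q5_K": (5, 8, 32, 6, 6, 32),   # type-1: same super-block layout as Q4_K
    "Q6_K": (6, 16, 16, 8, 0, 16),  # type-0: 8-bit scales only
}

for name, args in kquants.items():
    print(f"{name}: {bpw(*args):.4f} bpw")
# -> 2.5625, 3.4375, 4.5000, 5.5000, 6.5625, matching the figures above
```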
Refer to the Provided Files table below to see what files use which methods, and how.
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| Manticore-13B-Chat-Pyg.ggmlv3.q2_K.bin | q2_K | 2 | 5.43 GB | 7.93 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| Manticore-13B-Chat-Pyg.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.87 GB | 9.37 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| Manticore-13B-Chat-Pyg.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.25 GB | 8.75 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| Manticore-13B-Chat-Pyg.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.59 GB | 8.09 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| Manticore-13B-Chat-Pyg.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
| Manticore-13B-Chat-Pyg.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| Manticore-13B-Chat-Pyg.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.82 GB | 10.32 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| Manticore-13B-Chat-Pyg.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.32 GB | 9.82 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| Manticore-13B-Chat-Pyg.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| Manticore-13B-Chat-Pyg.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| Manticore-13B-Chat-Pyg.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.21 GB | 11.71 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| Manticore-13B-Chat-Pyg.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.95 GB | 11.45 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| Manticore-13B-Chat-Pyg.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| Manticore-13B-Chat-Pyg.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
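One pattern worth noting: every "Max RAM required" figure above is exactly the file size plus 2.5 GB, so you can estimate the requirement for any file from its size alone. The 2.5 GB constant is just inferred from this table, not an official formula.

```python
# Rule of thumb inferred from the table above: max RAM ≈ file size + 2.5 GB.
# The 2.5 GB overhead is an observation from this table, not an official figure.
def est_max_ram_gb(file_size_gb: float, overhead_gb: float = 2.5) -> float:
    return file_size_gb + overhead_gb

print(est_max_ram_gb(5.43))   # 7.93 -> matches the q2_K row
print(est_max_ram_gb(13.83))  # 16.33 -> matches the q8_0 row
```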
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m Manticore-13B-Chat-Pyg.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
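If you would rather not hand-edit the flags, a small wrapper along these lines can assemble the command for you. It is only a sketch: it assumes the third-party `psutil` package for the physical-core count, and reuses the file name and flag values from the example above.

```python
# Hypothetical wrapper that assembles the ./main invocation shown above.
# Assumes psutil (pip install psutil) for the physical core count.
import subprocess
import psutil

MODEL = "Manticore-13B-Chat-Pyg.ggmlv3.q5_0.bin"

def build_cmd(chat=False, gpu_layers=32):
    threads = psutil.cpu_count(logical=False)  # physical cores, per the -t advice above
    cmd = ["./main", "-t", str(threads), "-m", MODEL,
           "--color", "-c", "2048", "--temp", "0.7",
           "--repeat_penalty", "1.1", "-n", "-1"]
    if gpu_layers > 0:          # omit -ngl entirely if you have no GPU acceleration
        cmd += ["-ngl", str(gpu_layers)]
    if chat:                    # interactive instruction mode for chat-style use
        cmd += ["-i", "-ins"]
    else:
        cmd += ["-p", "### Instruction: Write a story about llamas\\n### Response:"]
    return cmd

subprocess.run(build_cmd(chat=True))
```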
Further instructions here: text-generation-webui/docs/llama.cpp-models.md.
For further support, and discussions on these models and AI in general, join us at:
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
Special thanks to: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
Patreon special mentions: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
Manticore 13B Chat builds on Manticore with new datasets, including a de-duped subset of the Pygmalion dataset. It also removes all Alpaca style prompts using `###` in favor of chat-only style prompts using `USER:`/`ASSISTANT:`, as well as pygmalion/metharme prompting using `<|system|>`, `<|user|>` and `<|model|>` tokens.
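As a concrete illustration of those two formats, here is a minimal sketch; the exact spacing and newline conventions are my assumption, so compare against the example transcripts further down.

```python
# Sketch of the two prompt styles described above. Exact spacing/newlines
# are assumptions; check against the example transcripts below.

def chat_prompt(user_msg: str) -> str:
    # USER:/ASSISTANT: chat style
    return f"USER: {user_msg} ASSISTANT:"

def metharme_prompt(system: str, user_msg: str) -> str:
    # pygmalion/metharme style with special tokens
    return f"<|system|>{system}<|user|>{user_msg}<|model|>"

print(chat_prompt("tell me a joke about alpacas"))
print(metharme_prompt("Enter roleplay mode.", "Hello!"))
```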
Manticore 13B Chat is a Llama 13B model fine-tuned on the following datasets along with the datasets from the original Manticore 13B.
Manticore 13B Chat was trained on 25% of the datasets below. The datasets were merged, shuffled, and then sharded into 4 parts; a sketch of this step follows the dataset notes below.
- de-duped pygmalion dataset, filtered down to RP data
- riddle_sense - instruct augmented
- hellaswag, updated for detailed explanations with 30K+ rows
- gsm8k - instruct augmented
- ShareGPT - based on a cleaned and de-duped subset
- subset of QingyiSi/Alpaca-CoT for roleplay and CoT
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the
- hellaswag - 5K row subset of instruct augmented for concise responses, derived from the
- metaeval/ScienceQA_text_only - instruct for concise responses
- openai/summarize_from_feedback - instruct augmented tl;dr summarization
Not added from Manticore 13B:
- mmlu - mmlu datasets were not added to this model as the test split is used for benchmarks
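The merge/shuffle/shard step described above might look roughly like this with the Hugging Face `datasets` library; the placeholder datasets, seed, and shard index are all assumptions for illustration.

```python
# Minimal sketch of "merged, shuffled, then sharded into 4 parts; trained on 25%".
# The toy datasets, seed, and shard index below are placeholders, not the real recipe.
from datasets import Dataset, concatenate_datasets

# Stand-ins for the instruct-augmented datasets listed above.
parts = [Dataset.from_dict({"text": [f"example {i} from set {s}" for i in range(100)]})
         for s in range(10)]

merged = concatenate_datasets(parts)              # merge
shuffled = merged.shuffle(seed=42)                # shuffle (seed is an assumption)
quarter = shuffled.shard(num_shards=4, index=0)   # one of 4 shards = the 25% trained on
print(f"{len(quarter)} of {len(shuffled)} rows")  # -> 250 of 1000 rows
```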
Special thanks to Nanobit for helping with Axolotl, TheBloke for quantizing these models and making them more accessible to all, ehartford for cleaned datasets, and 0x000011b for the RP dataset.
Try out the model in HF Spaces. The demo uses a quantized GGML version of the model to quickly return predictions on smaller GPUs (and even CPUs). Quantized GGML may have some minimal loss of model quality.
Manticore was built with Axolotl on 8xA100 80GB
- 3 epochs taking approximately 8 hours. No further epochs will be released.
- The configuration to duplicate this build is provided in this repo's /config folder.
Manticore has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Manticore was fine-tuned from the base model LLaMA 13B; please refer to its model card's Limitations section for relevant information.
USER: make up a sonnet in the style of shakespeare about a skier skiing in the glades on a bluebird day. ASSISTANT: A skier doth glide 'neath sun’s glow, His edges sharp to cut the snow, Bluebird sky above, While he doth revel on the slopes below, The cold air biting his nose, But not his spirits, For he is filled with a sense of joy, As he carves through the powdery drifts. His body moves in graceful rhythm, And each turn he makes with ease, He feels like he could ski forever, With no care for the world outside these glades.
USER: tell me a joke about alpacas ASSISTANT: Did you hear about the famous alpaca ballet dancer? She was a prima llama. USER: okay, that is actually pretty good ASSISTANT: What do you call an alpaca in a bowtie? A llama-cholla! USER: why is that funny? ASSISTANT: Because it’s a play on words. “Llama-cholla” sounds like “lame chola,” which is Spanish slang for someone who acts cool or pretentious. So, the joke is implying that alpacas are vain and act cool with their bowties.
USER: What is 44 + 77? ASSISTANT: 121 USER: What is the sum of 33 and 77? ASSISTANT: 110 USER: tell me 5 words that start with “ex” and the sum of 33 and 77. ASSISTANT: exotic, expressive, explicit, experimental, and extravagant. The sum of 33 and 77 is 110. USER: tell me 3 words that start with “egr” and end in “s” ASSISTANT: egret, eggplant, and egrets.
USER: Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have? ASSISTANT: Sally has a total of 6 siblings, so she has a total of 6 - 2 = 4 sisters. USER: Sally has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have? Use multi-step reasoning to explain your answer. ASSISTANT: Sally has a total of three brothers, so she has a total of three * 2 = six siblings. Her family consists of three boys and six girls in total. This means that Sally has three sisters since there are six female relatives and only three male ones. USER: But when brothers have sisters, aren’t they shared or the same? ASSISTANT: No, not every sibling shares their mother. For example, I have two brothers who share one mother but my third brother has a different mother from both of them. USER: Let’s assume they all share the same mother and father. Now explain with multi-step reasoning. ASSISTANT: If Sally has three brothers then that means she has six siblings altogether. Each brother has two sisters, which means that there are twelve female relatives in her family - one for each of her brothers and herself. Since there are only six females in the entire family, that means that Sally’s mother has no other children than her own three sons - so she can’t be the mother of any of the sisters. So all those sisters have to belong to one of the brothers, and since they are shared by two brothers, then each brother has a single sister. Therefore, there is only one sister for Sally, making the final answer 1.
- pretty sure that reasoning isn't correct, and the final answer of having 1 sister is a lucky guess (each brother's two sisters are Sally plus one other girl, so Sally does have 1 sister)