---
datasets:
  - anon8231489123/ShareGPT_Vicuna_unfiltered
  - ehartford/wizard_vicuna_70k_unfiltered
  - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
  - QingyiSi/Alpaca-CoT
  - teknium/GPT4-LLM-Cleaned
  - teknium/GPTeacher-General-Instruct
  - metaeval/ScienceQA_text_only
  - hellaswag
  - tasksource/mmlu
  - openai/summarize_from_feedback
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

# Manticore 13B GGML

These are GGML format 4-bit, 5-bit and 8-bit quantised models of epoch 3 of OpenAccess AI Collective's Manticore 13B.

This repo is the result of quantising the model to 4-bit, 5-bit and 8-bit GGML for CPU (+CUDA) inference using llama.cpp.

## Repositories available

- 4-bit GPTQ models for GPU inference: https://huggingface.co/TheBloke/Manticore-13B-GPTQ
- 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference: https://huggingface.co/TheBloke/Manticore-13B-GGML
- OpenAccess AI Collective's original unquantised fp16 model in HF format: https://huggingface.co/openaccess-ai-collective/manticore-13b

**THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!**

llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit 2d5db48 or later) to use them.

For files compatible with the previous version of llama.cpp, please see branch previous_llama_ggmlv2.
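
If you need a compatible build of llama.cpp, the following is a minimal sketch (the commit hash comes from the note above; adjust build flags for your system):

```bash
# Clone llama.cpp and build it at (or after) the required commit.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 2d5db48   # or any later commit
make                   # add LLAMA_CUBLAS=1 to enable CUDA support
```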

## Epoch

The files in the main branch are from Epoch 3 of Manticore 13B, as of May 19th.

The files in the previous_llama_ggmlv2 branch are from Epoch 1.
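
To fetch the older GGMLv2 files, you can clone that branch directly. A sketch, assuming git-lfs is installed:

```bash
# Clone only the previous_llama_ggmlv2 branch of this repo (large files need git-lfs).
git lfs install
git clone --branch previous_llama_ggmlv2 https://huggingface.co/TheBloke/Manticore-13B-GGML
```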

## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ------------ | ---- | ---- | ------------ | -------- |
| manticore-13B.ggmlv3.q4_0.bin | q4_0 | 4bit | 8.14GB | 10.5GB | 4-bit. |
| manticore-13B.ggmlv3.q4_1.bin | q4_1 | 4bit | 8.14GB | 10.5GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, has quicker inference than q5 models. |
| manticore-13B.ggmlv3.q5_0.bin | q5_0 | 5bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| manticore-13B.ggmlv3.q5_1.bin | q5_1 | 5bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, resource usage and slower inference. |
| manticore-13B.ggmlv3.q8_0.bin | q8_0 | 8bit | 14.6GB | 17GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
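
You don't need to clone the whole repo to get one of these files; each can be fetched directly. A sketch using the q5_0 file as an example:

```bash
# Download a single quantised model file from the main branch.
wget https://huggingface.co/TheBloke/Manticore-13B-GGML/resolve/main/manticore-13B.ggmlv3.q5_0.bin
```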

## How to run in llama.cpp

I use the following command line; adjust for your tastes and needs:

```
./main -t 8 -m manticore-13B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
```

Change `-t 8` to the number of physical CPU cores you have.
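
If you'd rather have a chat-style conversation than a one-shot completion, you can use llama.cpp's interactive instruct mode instead of passing a prompt; a sketch:

```bash
# Replace the -p "<prompt>" argument with -i -ins for an interactive session.
./main -t 8 -m manticore-13B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -i -ins
```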

## How to run in text-generation-webui

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.

Further instructions here: text-generation-webui/docs/llama.cpp-models.md.
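
As a rough sketch of that setup, assuming the llama.cpp module in question is llama-cpp-python and the webui is installed alongside your model file:

```bash
# Install the llama.cpp Python bindings, then place the GGML file
# in the directory where text-generation-webui looks for models.
pip install llama-cpp-python
cp manticore-13B.ggmlv3.q5_0.bin text-generation-webui/models/
```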

## Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

## Thanks, and how to contribute

Thanks to the chirper.ai team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Patreon special mentions: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.

Thank you to all my generous patrons and donaters!

# Original Model Card: Manticore 13B - Preview Release (previously Wizard Mega)

Manticore 13B is a Llama 13B model fine-tuned on the following datasets:

- anon8231489123/ShareGPT_Vicuna_unfiltered
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- tasksource/mmlu
- openai/summarize_from_feedback

## Demo

Try out the model in HF Spaces. The demo uses a quantized GGML version of the model to quickly return predictions on smaller GPUs (and even CPUs). Quantized GGML may have some minimal loss of model quality.

## Release Notes

## Build

Manticore was built with Axolotl on 8xA100 80GB.

- Preview Release: 1 epoch taking 8 hours.
- The configuration to duplicate this build is provided in this repo's /config folder.
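
As a purely hypothetical sketch of reproducing the build (the script path and config filename below are assumptions; use the actual file from /config and consult the Axolotl documentation):

```bash
# Hypothetical: launch the fine-tune with Axolotl via accelerate.
# config/manticore-13b.yml stands in for whichever config is in /config.
accelerate launch scripts/finetune.py config/manticore-13b.yml
```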

## Bias, Risks, and Limitations

Manticore has not been aligned to human preferences with techniques like RLHF, or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Manticore was fine-tuned from the base model LLaMA 13B; please refer to its model card's Limitations section for relevant information.

## Examples

```
### Instruction: write Python code that returns the first n numbers of the Fibonacci sequence using memoization.

### Assistant:
```

```
### Instruction: Finish the joke, a mechanic and a car salesman walk into a bar...

### Assistant:
```