TheBloke committed
Commit 2d6998a
1 Parent(s): 7d8b2d2

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +118 -0
README.md ADDED
@@ -0,0 +1,118 @@
---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- tasksource/mmlu
- openai/summarize_from_feedback
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# Manticore 13B GPTQ

This repo contains 4bit GPTQ format quantised models of [OpenAccess AI Collective's Manticore 13B](https://huggingface.co/openaccess-ai-collective/manticore-13b).

It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Manticore-13B-GPTQ).
* [4-bit, 5-bit and 8-bit GGML models for llama.cpp CPU (+CUDA) inference](https://huggingface.co/TheBloke/Manticore-13B-GGML).
* [OpenAccess AI Collective's original float16 HF format repo for GPU inference and further conversions](https://huggingface.co/openaccess-ai-collective/manticore-13b).

## How to easily download and use this model in text-generation-webui

Open the text-generation-webui UI as normal.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Manticore-13B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded, `Manticore-13B-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!

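If you'd rather script the download than use the UI, a minimal sketch with `huggingface_hub` is shown below (assuming a recent version that supports `local_dir`; the target directory is just an illustrative choice for a text-generation-webui install):

```python
# Sketch: fetch the GPTQ repo programmatically with huggingface_hub.
# The local_dir path is illustrative - point it at your text-generation-webui models folder.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/Manticore-13B-GPTQ",
    local_dir="text-generation-webui/models/Manticore-13B-GPTQ",
)
```

However you download it, the GPTQ parameters from step 8 (`Bits = 4`, `Groupsize = 128`, `model_type = Llama`) still need to be set before loading the model in the UI.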

## Provided files

**`Manticore-13B-GPTQ-4bit-128g.no-act-order.safetensors`**

This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility.

It was created without `--act-order` to ensure compatibility with all UIs out there.

* `Manticore-13B-GPTQ-4bit-128g.no-act-order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with text-generation-webui one-click-installers
  * Parameters: Groupsize = 128. No act-order.
  * Command used to create the GPTQ:
    ```
    python llama.py /workspace/models/openaccess-ai-collective_manticore-13b/ wikitext2 --wbits 4 --true-sequential --groupsize 128 --save_safetensors /workspace/manticore-13b/gptq/Manticore-13B-GPTQ-4bit-128g.no-act-order.safetensors
    ```
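For loading the provided safetensors file from Python outside text-generation-webui, a rough sketch with the AutoGPTQ library follows (AutoGPTQ is an alternative loader, not the GPTQ-for-LLaMa code used to create the file, and exact argument names can differ between AutoGPTQ versions):

```python
# Sketch: load the no-act-order GPTQ file with AutoGPTQ (assumes auto-gptq, transformers and a CUDA GPU).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name = "TheBloke/Manticore-13B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name,
    model_basename="Manticore-13B-GPTQ-4bit-128g.no-act-order",  # file name minus the .safetensors extension
    use_safetensors=True,
    device="cuda:0",
)

# Prompt format taken from the Examples section below.
prompt = "### Instruction: Explain GPTQ quantisation in one paragraph.\n\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```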

# Original Model Card: Manticore 13B - Preview Release (previously Wizard Mega)

Manticore 13B is a Llama 13B model fine-tuned on the following datasets:
- [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) - based on a cleaned and de-duped subset
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [Wizard-Vicuna](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPT4-LLM-Cleaned](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses
- mmlu: instruct augmented for detailed responses, subset including
  - abstract_algebra
  - conceptual_physics
  - formal_logic
  - high_school_physics
  - logical_fallacies
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K row subset of instruct augmented for concise responses
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization

# Demo

Try out the model in HF Spaces. The demo uses a quantized GGML version of the model to quickly return predictions on smaller GPUs (and even CPUs). Quantized GGML may have some minimal loss of model quality.
- https://huggingface.co/spaces/openaccess-ai-collective/manticore-ggml

## Release Notes

- https://wandb.ai/wing-lian/manticore-13b/runs/nq3u3uoh/workspace

## Build

Manticore was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB
- Preview Release: 1 epoch taking 8 hours.
- The configuration to duplicate this build is provided in this repo's [/config folder](https://huggingface.co/openaccess-ai-collective/manticore-13b/tree/main/configs).

## Bias, Risks, and Limitations

Manticore has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Manticore was fine-tuned from the base model LLaMA 13B; please refer to its model card's Limitations section for relevant information.

## Examples

````
### Instruction: write Python code that returns the first n numbers of the Fibonacci sequence using memoization.

### Assistant:
````
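For reference, a memoized Fibonacci of the kind this prompt asks for might look like the short sketch below (an illustration only, not the model's output):

```python
# Illustrative only: return the first n Fibonacci numbers using memoization.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(i: int) -> int:
    return i if i < 2 else fib(i - 1) + fib(i - 2)

def first_n_fibonacci(n: int) -> list[int]:
    return [fib(i) for i in range(n)]

print(first_n_fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```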

```
### Instruction: Finish the joke, a mechanic and a car salesman walk into a bar...

### Assistant:
```
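Both examples use the same instruct format. A tiny helper for building prompts in that format (the function name is ours; only the `### Instruction:` / `### Assistant:` markers come from the examples above):

```python
# Sketch: build a prompt in the "### Instruction: ... ### Assistant:" format used above.
def build_prompt(instruction: str) -> str:
    return f"### Instruction: {instruction}\n\n### Assistant:"

print(build_prompt("Finish the joke, a mechanic and a car salesman walk into a bar..."))
```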