TheBloke committed on
Commit
f301ea4
1 Parent(s): 10a4221

Initial GPTQ model commit

Files changed (1): README.md ADDED (+153, -0)

---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Camel AI's CAMEL 13B Combined Data GPTQ

These files are GPTQ 4-bit model files for [Camel AI's CAMEL 13B Combined Data](https://huggingface.co/camel-ai/CAMEL-13B-Combined-Data).

They are the result of quantising the model to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-fp16)

## How to easily download and use this model in text-generation-webui

Please make sure you're using the latest version of text-generation-webui.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/CAMEL-13B-Combined-Data-GPTQ`.
3. Click **Download**.
4. The model will start downloading; once it finishes, it will be loaded automatically.
5. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
   * Note that you no longer need to set GPTQ parameters manually. These are set automatically from the file `quantize_config.json`.
6. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

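If you'd prefer to fetch the files yourself rather than through the web UI, the snippet below is a minimal sketch using `huggingface_hub` (not part of the original instructions); the `local_dir` path is only an example and should point wherever your setup expects models.

```python
# Minimal sketch: download all files from the GPTQ repo with huggingface_hub.
# Install first with: pip install huggingface_hub
# The local_dir below is an example path, not a requirement.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/CAMEL-13B-Combined-Data-GPTQ",
    local_dir="models/TheBloke_CAMEL-13B-Combined-Data-GPTQ",
    local_dir_use_symlinks=False,  # copy real files instead of symlinking into the HF cache
)
```
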
## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

`pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/CAMEL-13B-Combined-Data-GPTQ"
model_basename = "camel-30b-combined-GPTQ-4bit--1g.act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

# Download and load the quantised model; the GPTQ parameters are read from quantize_config.json
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

# Prompt template used by this model
prompt = "Tell me about AI"
prompt_template = f'''### Human: {prompt}
### Assistant:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
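The `### Human:` / `### Assistant:` prompt format also extends naturally to multi-turn use by appending each reply to the running prompt. The following is a rough sketch (not from the original card) that assumes the `model`, `tokenizer` and `output` objects from the example above:

```python
# Rough sketch of a follow-up turn, reusing the objects created in the example above.
previous_exchange = tokenizer.decode(output[0], skip_special_tokens=True)

followup = "Give me three concrete examples."
conversation = f'''{previous_exchange}
### Human: {followup}
### Assistant:'''

input_ids = tokenizer(conversation, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
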

## Provided files

**camel-30b-combined-GPTQ-4bit--1g.act.order.safetensors**

This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.

* `camel-30b-combined-GPTQ-4bit--1g.act.order.safetensors`
  * Works with AutoGPTQ in CUDA or Triton modes.
  * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
  * Works with text-generation-webui, including one-click-installers.
  * Parameters: Groupsize = -1. Act Order / desc_act = True.

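For reference, these are the parameters AutoGPTQ reads from the repo's `quantize_config.json`; passing `quantize_config=None` (as in the Python example above) lets it do that automatically. A minimal sketch of the equivalent explicit config, if you prefer to set it by hand, looks like this:

```python
# Sketch: explicit AutoGPTQ config matching the parameters listed above.
# Normally you can leave quantize_config=None and let AutoGPTQ read quantize_config.json.
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,         # 4-bit GPTQ
    group_size=-1,  # Groupsize = -1: no grouping, lower VRAM usage
    desc_act=True,  # Act Order / desc_act = True: better accuracy
)
```

If used, pass it to `AutoGPTQForCausalLM.from_quantized(..., quantize_config=quantize_config)` in place of `None`.
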
<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.

Thank you to all my generous patrons and donors!

<!-- footer end -->

# Original model card: Camel AI's CAMEL 13B Combined Data

CAMEL-13B-Combined-Data is a chat large language model obtained by finetuning the LLaMA-13B model on a total of 229K conversations collected through our [CAMEL](https://arxiv.org/abs/2303.17760) framework, 100K English public conversations from ShareGPT that can be found [here](https://github.com/lm-sys/FastChat/issues/90#issuecomment-1493250773), and 52K instructions from the Alpaca dataset that can be found [here](https://github.com/tatsu-lab/stanford_alpaca/blob/761dc5bfbdeeffa89b8bff5d038781a4055f796a/alpaca_data.json). We evaluate our model offline using EleutherAI's language model evaluation harness, as used by Huggingface's Open LLM Benchmark. CAMEL<sup>*</sup>-13B scores an average of **58.1**, outperforming LLaMA-30B (58.3), and on par with LLaMA-65B (58.1)!

| Model | Size | ARC-C (25 shots, acc_norm) | HellaSwag (10 shots, acc_norm) | MMLU (5 shots, acc_norm) | TruthfulQA (0 shot, mc2) | Average | Delta |
|-------------|:----:|:---------------------------:|:-------------------------------:|:-------------------------:|:-------------------------:|:-------:|-------|
| LLaMA | 13B | 50.8 | 78.9 | 37.7 | 39.9 | 51.8 | - |
| Vicuna | 13B | 47.4 | 75.2 | 39.6 | 49.8 | 53.7 | 1.9 |
| CAMEL<sup>*</sup> | 13B | 55.5 | 79.3 | 50.3 | 47.3 | 58.1 | 6.3 |
| LLaMA | 65B | 57.8 | 84.2 | 48.8 | 42.3 | **58.3** | 6.5 |