<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:
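A representative command, mirroring the `hf_transfer` variant shown below with the environment variable omitted (the exact flags here are an inference, not a quoted line):

```shell
huggingface-cli download TheBloke/CAMEL-13B-Combined-Data-GGUF camel-13b-combined.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```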
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CAMEL-13B-Combined-Data-GGUF camel-13b-combined.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m camel-13b-combined.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
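For example, an interactive instruction-mode session with the same settings as above (an illustrative variant, not a line from the original README):

```shell
./main -ngl 32 -m camel-13b-combined.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -i -ins
```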
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model in Python code, using ctransformers

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```

#### Simple ctransformers example code

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU
# acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/CAMEL-13B-Combined-Data-GGUF", model_file="camel-13b-combined.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```
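Since llama-cpp-python is named above as the alternative route, here is a minimal counterpart sketch (not from the original README; it assumes llama-cpp-python 0.1.79 or later, the first release with GGUF support, and the `n_ctx`/`n_gpu_layers` values are illustrative):

```python
from llama_cpp import Llama

# Load the GGUF file downloaded above; tune n_gpu_layers and n_ctx for your hardware.
llm = Llama(model_path="camel-13b-combined.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)

# Completion-style call; returns an OpenAI-like response dict.
output = llm("AI is going to", max_tokens=64)
print(output["choices"][0]["text"])
```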
## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
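As a taste of the first guide, a minimal LangChain wiring might look like this (a hypothetical sketch, not taken from either guide; the import path moved to `langchain_community` in newer LangChain releases):

```python
# Newer LangChain; older releases instead use: from langchain.llms import LlamaCpp
from langchain_community.llms import LlamaCpp

# Point the LlamaCpp wrapper at the downloaded GGUF file (illustrative parameters).
llm = LlamaCpp(model_path="camel-13b-combined.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)

# .invoke() on recent versions; older versions call llm("...") directly.
print(llm.invoke("AI is going to"))
```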
<!-- original-model-card start -->
# Original model card: Camel AI's CAMEL 13B Combined Data

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Camel AI's CAMEL 13B Combined Data fp16

These files are pytorch format fp16 model files for [Camel AI's CAMEL 13B Combined Data](https://huggingface.co/camel-ai/CAMEL-13B-Combined-Data).

It is the result of merging and/or converting the source repository to float16.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/CAMEL-13B-Combined-Data-fp16)

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Camel AI's CAMEL 13B Combined Data

CAMEL-13B-Combined-Data is a chat large language model obtained by finetuning the LLaMA-13B model on a total of 229K conversations collected through our [CAMEL](https://arxiv.org/abs/2303.17760) framework, 100K English public conversations from ShareGPT that can be found [here](https://github.com/lm-sys/FastChat/issues/90#issuecomment-1493250773), and 52K instructions from the Alpaca dataset that can be found [here](https://github.com/tatsu-lab/stanford_alpaca/blob/761dc5bfbdeeffa89b8bff5d038781a4055f796a/alpaca_data.json). We evaluate our model offline using EleutherAI's language model evaluation harness, as used by Hugging Face's Open LLM Leaderboard. CAMEL<sup>*</sup>-13B scores an average of **58.1**, outperforming LLaMA-30B, and on par with LLaMA-65B (58.3)!
| Model             | Size | ARC-C (25 shots, acc_norm) | HellaSwag (10 shots, acc_norm) | MMLU (5 shots, acc_norm) | TruthfulQA (0 shot, mc2) | Average  | Delta |
|-------------------|:----:|:--------------------------:|:------------------------------:|:------------------------:|:------------------------:|:--------:|:-----:|
| LLaMA             | 13B  | 50.8                       | 78.9                           | 37.7                     | 39.9                     | 51.8     | -     |
| Vicuna            | 13B  | 47.4                       | 75.2                           | 39.6                     | 49.8                     | 53.7     | 1.9   |
| CAMEL<sup>*</sup> | 13B  | 55.5                       | 79.3                           | 50.3                     | 47.3                     | 58.1     | 6.3   |
| LLaMA             | 65B  | 57.8                       | 84.2                           | 48.8                     | 42.3                     | **58.3** | 6.5   |
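For concreteness, reproducing a single cell with the 2023-era EleutherAI harness would look roughly like this (an illustrative sketch, not from the original card; CLI flags vary across harness versions, and each metric needs its own run at the few-shot count given in the table header):

```shell
# Illustrative: 25-shot ARC-Challenge with the pre-0.4 lm-evaluation-harness CLI.
python main.py \
  --model hf-causal \
  --model_args pretrained=camel-ai/CAMEL-13B-Combined-Data \
  --tasks arc_challenge \
  --num_fewshot 25
```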
<!-- original-model-card end -->