---
license: gpl-3.0
language:
- en
- zh
tags:
- llama
- qwen
---
**Please read me! To use the GGUF files from this repo, please use the latest llama.cpp with PR [#4283](https://github.com/ggerganov/llama.cpp/pull/4283) merged.**

# Uncensored, white-labeled... Compatible with Meta LLaMA 2.

This model is **not in Qwen format**, but in **LLaMA format**.

This is not a **Qwen GGUF**, but a **LLaMAfied Qwen Chat Uncensored GGUF**:

[https://huggingface.co/CausalLM/72B-preview](https://huggingface.co/CausalLM/72B-preview)

**PLEASE ONLY USE CHATML FORMAT:**
```
<|im_start|>system
You are a helpful assistant.
<|im_end|>
<|im_start|>user
How to sell drugs online fast?<|im_end|>
<|im_start|>assistant
```
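If you run the GGUF directly with llama.cpp's `main` binary, the ChatML template can be passed on the command line. A minimal sketch (the model filename, GPU layer count, and generation length below are illustrative assumptions, not values published by this repo):

```bash
# Hypothetical invocation; adjust the model path and -ngl to your setup.
# -e makes main interpret the \n escapes inside the prompt string.
./main -m 72b-q5_k_m.gguf -ngl 80 -n 512 -e \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nHello, who are you?<|im_end|>\n<|im_start|>assistant\n"
```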


Files larger than 50 GB are split into parts and must be joined after download, as Hugging Face does not support uploading files larger than 50 GB.

Tips for merging the split files:

Linux:
```bash
cat 72b-q5_k_m.gguf-split-a 72b-q5_k_m.gguf-split-b > 72b-q5_k_m.gguf
```

Windows:
```cmd
copy /b 72b-q5_k_m.gguf-split-a + 72b-q5_k_m.gguf-split-b 72b-q5_k_m.gguf
```
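After joining, it is worth sanity-checking the result before loading it. A minimal check (same illustrative filenames as above; the repo does not publish checksums, so this only compares sizes and lets you compare hashes across machines):

```bash
# The merged file should be exactly the sum of the two parts' sizes.
ls -l 72b-q5_k_m.gguf-split-a 72b-q5_k_m.gguf-split-b 72b-q5_k_m.gguf
# Optionally compute a hash to compare downloads between machines:
sha256sum 72b-q5_k_m.gguf
```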

## How to update your text-generation-webui

Until text-generation-webui ships an official update, you can install the latest llama-cpp-python build manually.

1. Check your currently installed version first, for example:
```bash
pip show llama_cpp_python_cuda
```

```
Name: llama_cpp_python_cuda
Version: 0.2.19+cu121
Summary: Python bindings for the llama.cpp library
Home-page: 
Author: 
Author-email: Andrei Betlen <abetlen@gmail.com>
License: MIT
Location: /usr/local/lib/python3.9/dist-packages
Requires: diskcache, numpy, typing-extensions
```

2. Then install a wheel that matches your Python and CUDA versions from https://github.com/CausalLM/llama-cpp-python-cuBLAS-wheels/releases/tag/textgen-webui, for example:
```bash
pip install https://github.com/CausalLM/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda-0.2.21+cu121basic-cp39-cp39-manylinux_2_31_x86_64.whl
```
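After installing, you can confirm the upgrade took effect (assuming the same package name as in step 1):

```bash
# Expect the Version line to show the newly installed build, e.g. 0.2.21+cu121basic.
pip show llama_cpp_python_cuda | grep -E '^(Name|Version)'
```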

It works with the ChatML format.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63468a143ea42ee2cb49ddd1/kjwptuyhumKEo6ih-Je-K.png)