raffr committed
Commit • 63cde80
0 Parent(s)

Duplicate from localmodels/LLM

Browse files
- .gitattributes +35 -0
- README.md +79 -0
- vicuna-13b-v1.3.0.ggmlv3.q2_K.bin +3 -0
- vicuna-13b-v1.3.0.ggmlv3.q3_K_L.bin +3 -0
- vicuna-13b-v1.3.0.ggmlv3.q3_K_M.bin +3 -0
- vicuna-13b-v1.3.0.ggmlv3.q3_K_S.bin +3 -0
- vicuna-13b-v1.3.0.ggmlv3.q4_0.bin +3 -0
- vicuna-13b-v1.3.0.ggmlv3.q4_1.bin +3 -0
- vicuna-13b-v1.3.0.ggmlv3.q4_K_M.bin +3 -0
- vicuna-13b-v1.3.0.ggmlv3.q4_K_S.bin +3 -0
- vicuna-13b-v1.3.0.ggmlv3.q5_0.bin +3 -0
- vicuna-13b-v1.3.0.ggmlv3.q5_1.bin +3 -0
- vicuna-13b-v1.3.0.ggmlv3.q5_K_M.bin +3 -0
- vicuna-13b-v1.3.0.ggmlv3.q5_K_S.bin +3 -0
- vicuna-13b-v1.3.0.ggmlv3.q6_K.bin +3 -0
- vicuna-13b-v1.3.0.ggmlv3.q8_0.bin +3 -0
.gitattributes
ADDED
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
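A quick way to confirm these rules take effect is `git check-attr`, run inside a clone of this repo (a sketch, assuming `git` is on your PATH; the filename is one of the model files added in this commit):

```shell
# Ask git which filter applies to one of the model files; *.bin is
# matched by the LFS pattern above, so the filter resolves to "lfs".
git check-attr filter -- vicuna-13b-v1.3.0.ggmlv3.q4_0.bin
# prints: vicuna-13b-v1.3.0.ggmlv3.q4_0.bin: filter: lfs
```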
README.md
ADDED
@@ -0,0 +1,79 @@
---
duplicated_from: localmodels/LLM
---

# Vicuna 13B v1.3 ggml

From LMSYS: https://huggingface.co/lmsys/vicuna-13b-v1.3

---

### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

Quantized using an older version of llama.cpp; compatible with llama.cpp as of the May 19 commit 2d5db48.

### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q5_K_M, q6_K`

Quantized using the k-quant methods; compatible with llama.cpp as of the June 6 commit 2d43387.

---

## Files
| Name | Quant method | Bits | Size | Max RAM required (no GPU offloading) | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| vicuna-13b-v1.3.0.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| vicuna-13b-v1.3.0.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| vicuna-13b-v1.3.0.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| vicuna-13b-v1.3.0.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| vicuna-13b-v1.3.0.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
| vicuna-13b-v1.3.0.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0; quicker inference than the q5 models. |
| vicuna-13b-v1.3.0.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| vicuna-13b-v1.3.0.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| vicuna-13b-v1.3.0.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
| vicuna-13b-v1.3.0.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and even slower inference. |
| vicuna-13b-v1.3.0.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| vicuna-13b-v1.3.0.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| vicuna-13b-v1.3.0.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K (6-bit quantization) for all tensors. |
| vicuna-13b-v1.3.0.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
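The RAM column follows a simple pattern: every figure is the file size plus about 2.5 GB of working memory for CPU-only inference. A minimal sketch of that rule of thumb (`max_ram_gb` is my own helper, not part of llama.cpp):

```python
# Rough peak-RAM estimate for running one of these GGML files fully on
# CPU. Assumption (taken from the table above): llama.cpp needs about
# the model file size plus ~2.5 GB of working memory for a 13B model
# with no GPU offloading.
OVERHEAD_GB = 2.5

def max_ram_gb(file_size_gb: float) -> float:
    """Approximate peak RAM in GB for a given quantized file size."""
    return round(file_size_gb + OVERHEAD_GB, 2)

print(max_ram_gb(7.87))   # q4_K_M -> 10.37
print(max_ram_gb(13.83))  # q8_0   -> 16.33
```

Actual usage also grows with context length, so treat these numbers as a floor rather than a guarantee.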

---

# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971)

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Hugging Face API): https://github.com/lm-sys/FastChat/tree/main#api
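For the command-line route with llama.cpp, the model expects the Vicuna v1.1+ conversation template. A minimal sketch of single-turn prompt formatting (the exact system text is my assumption based on FastChat's conventions; verify against `fastchat/conversation.py` before relying on it):

```python
# Sketch: build a single-turn Vicuna-style prompt suitable for passing
# to an inference backend (e.g. llama.cpp's -p flag). The system text
# below is an assumption, not quoted from this repo.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence "
    "assistant. The assistant gives helpful, detailed, and polite "
    "answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    """Format one user turn; generation continues after 'ASSISTANT:'."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("What is quantization?"))
```

The trailing `ASSISTANT:` is what cues the model to produce its reply; multi-turn chat appends each exchange before the next `USER:` segment.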

## Training Details

Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 140K conversations collected from ShareGPT.com. See the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf) for more details.

## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Difference between different versions of Vicuna

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).
vicuna-13b-v1.3.0.ggmlv3.q2_K.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bdc23e0e9e908913f0163a2463f27f7ba49360f666b3422151a3e0a672fdbd4c
size 5508521088
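Each LFS pointer records the SHA-256 digest and byte size of the real file, so a completed download can be checked against it. A minimal sketch (`verify_lfs_object` is my own helper name, not a Git LFS API):

```python
# Verify a downloaded file against the "oid sha256:..." and "size ..."
# fields of its Git LFS pointer. Streams in 1 MiB chunks so multi-GB
# model files don't need to fit in memory.
import hashlib

def verify_lfs_object(path: str, expected_oid: str, expected_size: int) -> bool:
    """Return True iff the file's sha256 and size match the pointer."""
    h = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
            size += len(chunk)
    return h.hexdigest() == expected_oid and size == expected_size
```

For example, the q2_K file above should hash to `bdc23e0e...` and weigh in at exactly 5508521088 bytes.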
vicuna-13b-v1.3.0.ggmlv3.q3_K_L.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:688162f8111cea513549e1565b7e291615ada5a0aab0986b39ef4384c3c50c68
size 6929269888
vicuna-13b-v1.3.0.ggmlv3.q3_K_M.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4767c77db1b80896b0b6441e784708ddd0d2dc4885e52867603c1da81fee1f36
size 6313231488
vicuna-13b-v1.3.0.ggmlv3.q3_K_S.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9b3b183eeb08ff8a56cf312db295304ef8407ee9fdb42bb87c746f62af119eda
size 5658690688
vicuna-13b-v1.3.0.ggmlv3.q4_0.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:86a5f845438de32de49312c5a290561b9b9455eddc760e401bf6e9a0f86d9736
size 7323305088
vicuna-13b-v1.3.0.ggmlv3.q4_1.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f2fe06060068d25dd8b4a7c8f9175f25add1b6361c6e25881ced1bbc65830ef1
size 8136770688
vicuna-13b-v1.3.0.ggmlv3.q4_K_M.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e32bfad8502da594ee0765eda2963ef431237d55255392251740176f909607f
size 7865666688
vicuna-13b-v1.3.0.ggmlv3.q4_K_S.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f036c88135b7ea997392c6964aba4c23641af9c623af634ee4a4e9568dcd67d1
size 7365545088
vicuna-13b-v1.3.0.ggmlv3.q5_0.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0318a1ae330b2007cceed2ac46406be802415c86fcea8e23b45e3b78251d3b10
size 8950236288
vicuna-13b-v1.3.0.ggmlv3.q5_1.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3f27b5543718da0d51a73caf17f48de2053cf5ae64dc6ae81e6d50407a1b4cce
size 9763701888
vicuna-13b-v1.3.0.ggmlv3.q5_K_M.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6376257149251e5fa1e1d9c091d76532c19bdb6df7c96af3d6ef3552e4da2ef6
size 9229634688
vicuna-13b-v1.3.0.ggmlv3.q5_K_S.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ee8f56cb2e02d4dbf8f0bccd2367adfc0dc3b37b240abf4f41f5bfeb1c172aa3
size 8971996288
vicuna-13b-v1.3.0.ggmlv3.q6_K.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b6b97e7f81a8ec8b773d05c468250be14f5198a074a0608bfc92c78cf7ccb906
size 10678850688
vicuna-13b-v1.3.0.ggmlv3.q8_0.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c341c4c97cde5d741cbd6faa8066809408f85d277414ce816f0c39a88dcf1eee
size 13831029888