---
inference: false
language:
  - zh
license: other
model_creator: lmsys
model_link: https://huggingface.co/lmsys/vicuna-33b-v1.3
model_name: vicuna-33b-v1.3
model_type: vicuna
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
  - text2text-generation
tags:
  - gguf
  - vicuna
  - chinese
---

## Provided files

| Name                        | Quant method | Size  |
| --------------------------- | ------------ | ----- |
| vicuna-33b-v1.3.Q2_K.gguf   | Q2_K         | 13 GB |
| vicuna-33b-v1.3.Q3_K.gguf   | Q3_K         | 15 GB |
| vicuna-33b-v1.3.Q3_K_L.gguf | Q3_K_L       | 16 GB |
| vicuna-33b-v1.3.Q3_K_S.gguf | Q3_K_S       | 13 GB |
| vicuna-33b-v1.3.Q4_0.gguf   | Q4_0         | 17 GB |
| vicuna-33b-v1.3.Q4_1.gguf   | Q4_1         | 19 GB |
| vicuna-33b-v1.3.Q4_K.gguf   | Q4_K         | 18 GB |
| vicuna-33b-v1.3.Q4_K_S.gguf | Q4_K_S       | 17 GB |
| vicuna-33b-v1.3.Q5_0.gguf   | Q5_0         | 21 GB |
| vicuna-33b-v1.3.Q5_1.gguf   | Q5_1         | 23 GB |
| vicuna-33b-v1.3.Q5_K.gguf   | Q5_K         | 21 GB |
| vicuna-33b-v1.3.Q5_K_S.gguf | Q5_K_S       | 21 GB |
| vicuna-33b-v1.3.Q6_K.gguf   | Q6_K         | 25 GB |
| vicuna-33b-v1.3.Q8_0.gguf   | Q8_0         | 32 GB |

Usage:

```bash
docker run --rm -it -p 8000:8000 \
  -v /path/to/models:/models \
  -e MODEL=/models/gguf-model-name.gguf \
  hubimage/llama-cpp-python:latest
```

## Provided images

| Name                                    | Quant method | Compressed Size |
| --------------------------------------- | ------------ | --------------- |
| `shaowenchen/vicuna-33b-v1.3-gguf:Q2_K` | Q2_K         | 12.78 GB        |
| `shaowenchen/vicuna-33b-v1.3-gguf:Q3_K` | Q3_K         | 14.81 GB        |
| `shaowenchen/vicuna-33b-v1.3-gguf:Q4_K` | Q4_K         | 18.24 GB        |
| `shaowenchen/vicuna-33b-v1.3-gguf:Q5_K` | Q5_K         | 21.72 GB        |
| `shaowenchen/vicuna-33b-v1.3-gguf:Q6_K` | Q6_K         | 25.05 GB        |
| `shaowenchen/vicuna-33b-v1.3-gguf:Q8_0` | Q8_0         | 31.34 GB        |
| `shaowenchen/vicuna-33b-v1.3-gguf:full` | full         | 56.07 GB        |

Usage:

```bash
docker run --rm -p 8000:8000 shaowenchen/vicuna-33b-v1.3-gguf:Q2_K
```

Then open http://localhost:8000/docs to view the Swagger UI.
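Once a container is up, you can smoke-test the server from the command line. The sketch below assumes the images run the stock `llama_cpp.server`, which exposes OpenAI-compatible endpoints such as `/v1/chat/completions`; the prompt and `max_tokens` value are illustrative.

```bash
# Hypothetical smoke test against the container started above.
# Assumes the stock llama-cpp-python server and its OpenAI-compatible API.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello, please introduce yourself."}],
    "max_tokens": 128
  }'
```

If the endpoint differs in your build, the Swagger UI at http://localhost:8000/docs lists the routes the server actually exposes.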