---
license: gpl-3.0
tags:
- vicuna
- ggml
pipeline_tag: conversational
language:
- en
- bg
- ca
- cs
- da
- de
- es
- fr
- hr
- hu
- it
- nl
- pl
- pt
- ro
- ru
- sl
- sr
- sv
- uk
library_name: adapter-transformers
---
Note: If you downloaded the q4_0 model before April 26th, 2023, you are using an outdated model; I suggest redownloading it for a better experience. See https://github.com/ggerganov/llama.cpp#quantization for details on the different quantization types.
This is a ggml version of Vicuna 7B and 13B. This is the censored model; a similar 1.0 uncensored 13B model can be found at https://huggingface.co/eachadea/ggml-vicuna-13b-1.1.
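
As a rough illustration of how a ggml file like this can be used outside the llama.cpp CLI, here is a minimal sketch with the llama-cpp-python bindings. The package, the file name `ggml-vicuna-13b-q4_0.bin`, and the prompt template are assumptions, not part of this repo; adapt them to the file you actually download.

```python
# Minimal sketch: load a ggml Vicuna file with llama-cpp-python (pip install llama-cpp-python).
# The model path and prompt format below are assumptions; adjust them to your download.
from llama_cpp import Llama

llm = Llama(model_path="./ggml-vicuna-13b-q4_0.bin")  # hypothetical file name

prompt = (
    "A chat between a curious user and an artificial intelligence assistant.\n"
    "USER: What is the capital of France?\n"
    "ASSISTANT:"
)

# Generate a short completion and print only the assistant's reply.
output = llm(prompt, max_tokens=64, stop=["USER:"])
print(output["choices"][0]["text"].strip())
```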