Quantization made by Richard Erkhov.
juud-Mistral-7B - GGUF
- Model creator: https://huggingface.co/AIJUUD/
- Original model: https://huggingface.co/AIJUUD/juud-Mistral-7B/
| Name | Quant method | Size |
| --- | --- | --- |
| juud-Mistral-7B.Q2_K.gguf | Q2_K | 2.53GB |
| juud-Mistral-7B.IQ3_XS.gguf | IQ3_XS | 2.81GB |
| juud-Mistral-7B.IQ3_S.gguf | IQ3_S | 2.96GB |
| juud-Mistral-7B.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| juud-Mistral-7B.IQ3_M.gguf | IQ3_M | 3.06GB |
| juud-Mistral-7B.Q3_K.gguf | Q3_K | 3.28GB |
| juud-Mistral-7B.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| juud-Mistral-7B.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| juud-Mistral-7B.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| juud-Mistral-7B.Q4_0.gguf | Q4_0 | 3.83GB |
| juud-Mistral-7B.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| juud-Mistral-7B.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| juud-Mistral-7B.Q4_K.gguf | Q4_K | 4.07GB |
| juud-Mistral-7B.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| juud-Mistral-7B.Q4_1.gguf | Q4_1 | 4.24GB |
| juud-Mistral-7B.Q5_0.gguf | Q5_0 | 4.65GB |
| juud-Mistral-7B.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| juud-Mistral-7B.Q5_K.gguf | Q5_K | 4.78GB |
| juud-Mistral-7B.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| juud-Mistral-7B.Q5_1.gguf | Q5_1 | 5.07GB |
| juud-Mistral-7B.Q6_K.gguf | Q6_K | 5.53GB |
| juud-Mistral-7B.Q8_0.gguf | Q8_0 | 7.17GB |
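As a rough sanity check on the table above, the effective bits per weight of each quant can be estimated from its file size and the model's parameter count (7.24B, per the model card). A minimal sketch, assuming the listed sizes are decimal gigabytes (10^9 bytes; the table does not say whether GB means GB or GiB):

```python
# Estimate effective bits per weight from the quant table above.
# Assumption: sizes are decimal gigabytes (10^9 bytes), not GiB.

PARAMS = 7.24e9  # parameter count reported on the model card

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Convert a file size in GB to average bits stored per parameter."""
    return size_gb * 1e9 * 8 / params

# A few rows from the table, smallest to largest quant.
sizes_gb = {"Q2_K": 2.53, "Q4_K_M": 4.07, "Q8_0": 7.17}

for name, gb in sizes_gb.items():
    print(f"{name}: ~{bits_per_weight(gb):.2f} bits/weight")
```

The numbers come out close to the nominal bit widths in the quant names (Q2_K near 2.8, Q8_0 near 7.9), since GGUF quants carry per-block scales and some tensors are kept at higher precision.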
Original model description:

library_name: transformers
license: apache-2.0
Model Card for Model ID

Welcome to BNK AI.

You are now seeing the beginnings of a huge AI team, much like the Macintosh garage company. Though the beginning seems humble, the future will be prosperous!

If you have a question, feel free to contact bnkaiteam@gmail.com
- Downloads last month: 4,608
- Model size: 7.24B params
- Architecture: llama