---
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About

weighted/imatrix quants of https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files. A minimal Python sketch for downloading and running one of these quants is included at the end of this card.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ1_S.gguf) | i1-IQ1_S | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.9 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q2_K.gguf) | i1-Q2_K | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.2 | fast, lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q3_K_XS.gguf) | i1-Q3_K_XS | 14.7 | IQ3-XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.5 | IQ3-XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.1 | fast, medium quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.2 | fast, medium quality |
| [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.9 | best weighted quant |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
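As a minimal sketch of how one of these single-file quants can be downloaded and run, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed (`pip install huggingface_hub llama-cpp-python`). The repo and file names match the i1-Q4_K_S entry in the table above; the context size and GPU layer count are illustrative values, not recommendations from this card.

```python
# Sketch: fetch one quant from this repo and run a short completion.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the i1-Q4_K_S file (cached under ~/.cache/huggingface).
model_path = hf_hub_download(
    repo_id="mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF",
    filename="Nous-Capybara-limarpv3-34B.i1-Q4_K_S.gguf",
)

# Load the GGUF file. n_ctx and n_gpu_layers are illustrative;
# tune them for your hardware (n_gpu_layers=-1 offloads all layers).
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```

The same file can also be run directly with the llama.cpp CLI tools; for multi-part files, concatenate the parts first as described in the README linked under Usage.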