mradermacher committed
Commit
808ab18
1 Parent(s): 6252ba5

Upload README.md with huggingface_hub

Files changed (1): README.md (+10 -1)
README.md CHANGED
@@ -4,9 +4,17 @@ language:
 library_name: transformers
 quantized_by: mradermacher
 ---
-weighted/imatrix quants of https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B
+## About
 
+weighted/imatrix quants of https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B
 <!-- provided-files -->
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
+
 ## Provided Quants
 
 | Link | Type | Size/GB | Notes |
@@ -24,4 +32,5 @@ weighted/imatrix quants of https://huggingface.co/Doctor-Shotgun/Nous-Capybara-l
 | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.2 | fast, medium quality |
 | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.9 | best weighted quant |
 
+
 <!-- end -->
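The Usage section added in this commit points to TheBloke's READMEs for joining multi-part GGUF files. For plainly split parts, joining is a byte-for-byte concatenation in part order; a minimal sketch with made-up filenames (this repo's actual part naming may differ, and files produced by llama.cpp's `gguf-split` tool should be merged with that tool instead of `cat`):

```shell
# Demonstration only: create two dummy "parts" standing in for
# hypothetical model.gguf.part1of2 / model.gguf.part2of2 downloads.
printf 'part-one-' > model.gguf.part1of2
printf 'part-two'  > model.gguf.part2of2

# Join the parts, in order, into a single GGUF file.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```

After joining, the single `model.gguf` can be loaded as usual; the individual part files can then be deleted.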