mradermacher committed
Commit 3570c21
Parent: e23e44c

Upload README.md with huggingface_hub

Files changed (1): README.md (+10 −1)
README.md CHANGED
@@ -7,9 +7,17 @@ tags:
 - llama
 - llama 2
 ---
-weighted/imatrix quants of https://huggingface.co/Doctor-Shotgun/lzlv-limarpv3-l2-70b
+## About
 
+weighted/imatrix quants of https://huggingface.co/Doctor-Shotgun/lzlv-limarpv3-l2-70b
 <!-- provided-files -->
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
+
 ## Provided Quants
 
 | Link | Type | Size/GB | Notes |
@@ -26,4 +34,5 @@ weighted/imatrix quants of https://huggingface.co/Doctor-Shotgun/lzlv-limarpv3-l2-70b
 | [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-i1-GGUF/resolve/main/lzlv-limarpv3-l2-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.6 | fast, medium quality |
 | [GGUF](https://huggingface.co/mradermacher/lzlv-limarpv3-l2-70b-i1-GGUF/resolve/main/lzlv-limarpv3-l2-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.7 | fast, medium quality |
 
+
 <!-- end -->
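The "concatenate multi-part files" step that the new Usage section points at amounts to a single `cat` of the parts in order. A minimal sketch, assuming a quant split with a ".partXofY" suffix; the filenames below are purely illustrative (the quants in this diff are single files and are not split):

```shell
# Illustrative part names; concatenate the pieces in order
# to reassemble one usable GGUF file.
cat model.i1-Q6_K.gguf.part1of2 model.i1-Q6_K.gguf.part2of2 > model.i1-Q6_K.gguf
```

The order of the arguments matters: the parts are raw byte ranges of the original file, so they must be joined part1, part2, … to reproduce it.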