mradermacher committed on
Commit 3574f5c
1 Parent(s): cdf85f9

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +10 -1
README.md CHANGED

@@ -5,9 +5,17 @@ library_name: transformers
 license: llama2
 quantized_by: mradermacher
 ---
-weighted/imatrix quants of https://huggingface.co/sophosympatheia/Aurora-Nights-70B-v1.0
+## About
 
+weighted/imatrix quants of https://huggingface.co/sophosympatheia/Aurora-Nights-70B-v1.0
 <!-- provided-files -->
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
+
 ## Provided Quants
 
 | Link | Type | Size/GB | Notes |
@@ -24,4 +32,5 @@ weighted/imatrix quants of https://huggingface.co/sophosympatheia/Aurora-Nights-
 | [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.6 | fast, medium quality |
 | [GGUF](https://huggingface.co/mradermacher/Aurora-Nights-70B-v1.0-i1-GGUF/resolve/main/Aurora-Nights-70B-v1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.7 | fast, medium quality |
 
+
 <!-- end -->
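The new Usage section points at TheBloke's READMEs for concatenating multi-part files. As a minimal sketch with stand-in file names (the real part names follow the repository's file listing), split GGUF downloads are plain byte slices, so rejoining them is a simple concatenation:

```shell
# Stand-in part files; actual names would come from the repo's file list.
printf 'AA' > model.gguf.part1of2   # first slice of the download
printf 'BB' > model.gguf.part2of2   # second slice of the download

# Concatenate the parts in order to restore the single GGUF file.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```

Note that files split with llama.cpp's `gguf-split` tool are a different format and should be merged with that tool rather than `cat`.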