mradermacher committed
Commit 93a8d57
1 Parent(s): ef15992

auto-patch README.md

Files changed (1)
README.md +6 -1
README.md CHANGED
@@ -1,8 +1,13 @@
 ---
 base_model: nbeerbower/Mahou-1.2a-llama3-8B
+datasets:
+- flammenai/FlameMix-DPO-v1
+- flammenai/Grill-preprod-v1_chatML
+- flammenai/Grill-preprod-v2_chatML
 language:
 - en
 library_name: transformers
+license: llama3
 quantized_by: mradermacher
 tags: []
 ---
@@ -15,7 +20,7 @@ tags: []
 static quants of https://huggingface.co/nbeerbower/Mahou-1.2a-llama3-8B

 <!-- provided-files -->
-weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
+weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mahou-1.2a-llama3-8B-i1-GGUF
 ## Usage

 If you are unsure how to use GGUF files, refer to one of [TheBloke's
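
For context on the README's Usage pointer, here is a minimal sketch of fetching a single static GGUF quant with huggingface_hub. The repo id and quant filename below are assumptions based on mradermacher's usual naming scheme, not something stated in this commit; check the repository's file list for the actual names.

```python
# Minimal sketch, not part of the commit: download one static GGUF quant file.
# Repo id and filename are assumed; adjust to the files actually published.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/Mahou-1.2a-llama3-8B-GGUF",  # assumed static-quant repo
    filename="Mahou-1.2a-llama3-8B.Q4_K_M.gguf",       # assumed quant filename
)
print(gguf_path)  # local path to the downloaded GGUF file
```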