mradermacher committed on
Commit
a7b2c62
1 Parent(s): f380afd

auto-patch README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED

@@ -2,6 +2,7 @@
 base_model: nitky/Swallow-70b-RP
 language:
 - en
+- ja
 library_name: transformers
 license: llama2
 model_type: llama
@@ -46,7 +47,6 @@ more details, including on how to concatenate multi-part files.
 | [PART 1](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q6_K.gguf.part2of2) | Q6_K | 56.8 | very good quality |
 | [PART 1](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Swallow-70b-RP-GGUF/resolve/main/Swallow-70b-RP.Q8_0.gguf.part2of2) | Q8_0 | 73.6 | fast, best quality |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
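The second hunk's context line mentions concatenating multi-part files. As a minimal sketch of that step (the stand-in file contents below are placeholders; in practice the `.part1of2`/`.part2of2` files are the downloaded parts from the table), plain byte concatenation with `cat` in part order rebuilds the single GGUF:

```shell
# Stand-in part files for demonstration; replace with the real downloads.
printf 'AAAA' > Swallow-70b-RP.Q6_K.gguf.part1of2
printf 'BBBB' > Swallow-70b-RP.Q6_K.gguf.part2of2

# Multi-part GGUF uploads are split byte-wise, so concatenating the
# parts in order restores the original single file.
cat Swallow-70b-RP.Q6_K.gguf.part1of2 \
    Swallow-70b-RP.Q6_K.gguf.part2of2 \
    > Swallow-70b-RP.Q6_K.gguf
```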