apepkuss79 committed
Commit 1419f28 · verified · 1 Parent(s): cfc6dee

Upload README.md with huggingface_hub

Files changed (1): README.md +2 -2
README.md CHANGED

@@ -82,7 +82,7 @@ tags:
      --ctx-size 128000 \
    ```
 
- ## Quantized GGUF Models
+ <!-- ## Quantized GGUF Models
 
  | Name | Quant method | Bits | Size | Use case |
  | ---- | ---- | ---- | ---- | ----- |
@@ -98,6 +98,6 @@ tags:
  | [Meta-Llama-3.1-8B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Meta-Llama-3.1-8B-Instruct/blob/main/Meta-Llama-3.1-8B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 5.6 GB| large, low quality loss - recommended |
  | [Meta-Llama-3.1-8B-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Meta-Llama-3.1-8B-Instruct/blob/main/Meta-Llama-3.1-8B-Instruct-Q6_K.gguf) | Q6_K | 6 | 6.6 GB| very large, extremely low quality loss |
  | [Meta-Llama-3.1-8B-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Meta-Llama-3.1-8B-Instruct/blob/main/Meta-Llama-3.1-8B-Instruct-Q8_0.gguf) | Q8_0 | 8 | 8.54 GB| very large, extremely low quality loss - not recommended |
- | [Meta-Llama-3.1-8B-Instruct-f16.gguf](https://huggingface.co/second-state/Meta-Llama-3.1-8B-Instruct/blob/main/Meta-Llama-3.1-8B-Instruct-f16.gguf) | f16 | 16 | 16.1 GB| |
+ | [Meta-Llama-3.1-8B-Instruct-f16.gguf](https://huggingface.co/second-state/Meta-Llama-3.1-8B-Instruct/blob/main/Meta-Llama-3.1-8B-Instruct-f16.gguf) | f16 | 16 | 16.1 GB| | -->
 
  *Quantized with llama.cpp b3445.*
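For reference, a minimal sketch of using one of the quants listed in the table above. The quant choice (Q5_K_S, the table's recommended pick) and the `--ctx-size 128000` value come from this diff; the `huggingface-cli` download step and the `llama-cli` invocation are assumptions not shown in this commit, not the README's documented workflow.

```bash
# Sketch only (assumed workflow, not part of this commit).
# Fetch the recommended Q5_K_S quant from the repo this README describes.
huggingface-cli download second-state/Meta-Llama-3.1-8B-Instruct \
  Meta-Llama-3.1-8B-Instruct-Q5_K_S.gguf --local-dir .

# Run it with llama.cpp, reusing the 128k context size from the
# command fragment visible in the diff; other flags are assumptions.
./llama-cli -m Meta-Llama-3.1-8B-Instruct-Q5_K_S.gguf \
  --ctx-size 128000 \
  -p "Hello"
```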