DeepMount00 committed
Commit b930777
1 Parent(s): 0fcf93a

Update README.md

Files changed (1): README.md +0 -1
README.md CHANGED
@@ -13,5 +13,4 @@ language:
 | [mistal-Ita-7b-q4_k_m.gguf](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF/blob/main/mistal-Ita-7b-q4_k_m.gguf) | Q4_K_M | 4 | 4.37 GB | medium, balanced quality - recommended |
 | [mistal-Ita-7b-q5_k_m.gguf](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF/blob/main/mistal-Ita-7b-q5_k_m.gguf) | Q5_K_M | 5 | 7.63 GB | large, very low quality loss - recommended |
 
-**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
 <!-- README_GGUF.md-provided-files end -->
 
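The note removed in this commit says the RAM figures assume no GPU offloading. As a rough back-of-the-envelope sketch (not part of the original README), one can approximate the CPU-side RAM by assuming weight memory scales linearly with the fraction of transformer layers kept on the CPU; the function name and the linear-scaling assumption are hypothetical, and real usage adds KV-cache and runtime overhead that this ignores:

```python
def estimate_cpu_ram_gb(file_size_gb: float, n_layers: int, n_gpu_layers: int) -> float:
    """Rough CPU RAM estimate (GB) for a GGUF model when n_gpu_layers
    of n_layers are offloaded to the GPU.

    Hypothetical linear model: weights for offloaded layers move to VRAM,
    so CPU RAM scales with the fraction of layers left on the CPU.
    Ignores KV-cache and runtime overhead.
    """
    if not 0 <= n_gpu_layers <= n_layers:
        raise ValueError("n_gpu_layers must be between 0 and n_layers")
    cpu_fraction = (n_layers - n_gpu_layers) / n_layers
    return file_size_gb * cpu_fraction


# Q4_K_M file size from the table above; Mistral-7B has 32 transformer layers.
print(estimate_cpu_ram_gb(4.37, 32, 0))   # no offloading: whole file in RAM
print(estimate_cpu_ram_gb(4.37, 32, 16))  # half the layers on GPU: roughly half the RAM
print(estimate_cpu_ram_gb(4.37, 32, 32))  # fully offloaded: weights live in VRAM
```

This only illustrates the trade-off the removed note described; for actual figures, measure with your runtime (e.g. llama.cpp reports per-device memory at load time).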