rahuldshetty committed
Commit d2da9d2
1 Parent(s): 77566be

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -12,7 +12,7 @@ license_link: https://ai.google.dev/gemma/terms
 GGUF Quantized version of [gemma-2b](https://huggingface.co/google/gemma-2b).
 
 | Name | Quant method | Bits | Size | Use case |
-| ---- | ---- | ---- | ---- | ---- | ----- |
+| ---- | ---- | ---- | ---- | ----- |
 | [gemma-2b-Q2_K.gguf](https://huggingface.co/rahuldshetty/gemma-2b-gguf-quantized/blob/main/gemma-2b-Q2_K.gguf) | Q2_K | 2 | 900 MB | smallest, significant quality loss - not recommended for most purposes |
 | [gemma-2b-Q4_K_M.gguf](https://huggingface.co/rahuldshetty/gemma-2b-gguf-quantized/blob/main/gemma-2b-Q4_K_M.gguf) | Q4_K_M | 4 | 1.5 GB | medium, balanced quality - recommended |
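
For context on how the files listed in the table above are typically used: below is a minimal sketch of loading one of the quantized GGUF files with the llama-cpp-python bindings. This example is not part of the diffed README; the local file path, `n_ctx` value, prompt, and `max_tokens` setting are illustrative assumptions, and the bindings must be installed separately (`pip install llama-cpp-python`).

```python
from llama_cpp import Llama

# Load a GGUF file from the table above (assumes it was downloaded locally).
# Q4_K_M is the "medium, balanced quality - recommended" variant.
llm = Llama(
    model_path="gemma-2b-Q4_K_M.gguf",  # local path is an assumption
    n_ctx=2048,                         # illustrative context window size
)

# Run a short completion and print the generated text.
output = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(output["choices"][0]["text"])
```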