Update README.md
README.md CHANGED
@@ -17,7 +17,7 @@ Quantized and unquantized embedding models in GGUF format for use with `llama.cpp`
 
 | Filename | Quantization | Size |
 |:-------- | ------------ | ---- |
-| [bge-large-zh-v1.5-f32.gguf](https://huggingface.co/CompendiumLabs/bge-large-zh-v1.5-gguf/blob/main/bge-large-zh-v1.5-f32.gguf) | F32 | 1.3
+| [bge-large-zh-v1.5-f32.gguf](https://huggingface.co/CompendiumLabs/bge-large-zh-v1.5-gguf/blob/main/bge-large-zh-v1.5-f32.gguf) | F32 | 1.3 GB |
 | [bge-large-zh-v1.5-f16.gguf](https://huggingface.co/CompendiumLabs/bge-large-zh-v1.5-gguf/blob/main/bge-large-zh-v1.5-f16.gguf) | F16 | 620 MB |
 | [bge-large-zh-v1.5-q8_0.gguf](https://huggingface.co/CompendiumLabs/bge-large-zh-v1.5-gguf/blob/main/bge-large-zh-v1.5-q8_0.gguf) | Q8_0 | 332 MB |
 | [bge-large-zh-v1.5-q4_k_m.gguf](https://huggingface.co/CompendiumLabs/bge-large-zh-v1.5-gguf/blob/main/bge-large-zh-v1.5-q4_k_m.gguf) | Q4_K_M | 193 MB |