apepkuss79 committed
Commit 223d19c
1 Parent(s): befaf99

Update README.md

Files changed (1): README.md (+13, -13)
--- a/README.md
+++ b/README.md
@@ -42,18 +42,18 @@ language: en
 
 | Name | Quant method | Bits | Size | Use case |
 | ---- | ---- | ---- | ---- | ----- |
-| [nomic-embed-text-v1.5-Q2_K.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5-Q2_K.gguf) | Q2_K | 2 | 60.9 MB | smallest, significant quality loss - not recommended for most purposes |
-| [nomic-embed-text-v1.5-Q3_K_L.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5-Q3_K_L.gguf) | Q3_K_L | 3 | 80.7 MB | small, substantial quality loss |
-| [nomic-embed-text-v1.5-Q3_K_M.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5-Q3_K_M.gguf) | Q3_K_M | 3 | 76.3 MB | very small, high quality loss |
-| [nomic-embed-text-v1.5-Q3_K_S.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5-Q3_K_S.gguf) | Q3_K_S | 3 | 68.8 MB | very small, high quality loss |
-| [nomic-embed-text-v1.5-Q4_0.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5-Q4_0.gguf) | Q4_0 | 4 | 84.8 MB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [nomic-embed-text-v1.5-Q4_K_M.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5-Q4_K_M.gguf) | Q4_K_M | 4 | 90.2 MB | medium, balanced quality - recommended |
-| [nomic-embed-text-v1.5-Q4_K_S.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5-Q4_K_S.gguf) | Q4_K_S | 4 | 84.1 MB | small, greater quality loss |
-| [nomic-embed-text-v1.5-Q5_0.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5-Q5_0.gguf) | Q5_0 | 5 | 98 MB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [nomic-embed-text-v1.5-Q5_K_M.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5-Q5_K_M.gguf) | Q5_K_M | 5 | 103 MB | large, very low quality loss - recommended |
-| [nomic-embed-text-v1.5-Q5_K_S.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5-Q5_K_S.gguf) | Q5_K_S | 5 | 98 MB | large, low quality loss - recommended |
-| [nomic-embed-text-v1.5-Q6_K.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5-Q6_K.gguf) | Q6_K | 6 | 113 MB | very large, extremely low quality loss |
-| [nomic-embed-text-v1.5-Q8_0.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5-Q8_0.gguf) | Q8_0 | 8 | 146 MB | very large, extremely low quality loss - not recommended |
-| [nomic-embed-text-v1.5-f16.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-GGUF/blob/main/nomic-embed-text-v1.5-f16.gguf) | f16 | 16 | 274 MB | very large, extremely low quality loss - not recommended |
+| [nomic-embed-text-v1.5-Q2_K.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q2_K.gguf) | Q2_K | 2 | 60.9 MB | smallest, significant quality loss - not recommended for most purposes |
+| [nomic-embed-text-v1.5-Q3_K_L.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q3_K_L.gguf) | Q3_K_L | 3 | 80.7 MB | small, substantial quality loss |
+| [nomic-embed-text-v1.5-Q3_K_M.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q3_K_M.gguf) | Q3_K_M | 3 | 76.3 MB | very small, high quality loss |
+| [nomic-embed-text-v1.5-Q3_K_S.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q3_K_S.gguf) | Q3_K_S | 3 | 68.8 MB | very small, high quality loss |
+| [nomic-embed-text-v1.5-Q4_0.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q4_0.gguf) | Q4_0 | 4 | 84.8 MB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [nomic-embed-text-v1.5-Q4_K_M.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q4_K_M.gguf) | Q4_K_M | 4 | 90.2 MB | medium, balanced quality - recommended |
+| [nomic-embed-text-v1.5-Q4_K_S.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q4_K_S.gguf) | Q4_K_S | 4 | 84.1 MB | small, greater quality loss |
+| [nomic-embed-text-v1.5-Q5_0.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q5_0.gguf) | Q5_0 | 5 | 98 MB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [nomic-embed-text-v1.5-Q5_K_M.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q5_K_M.gguf) | Q5_K_M | 5 | 103 MB | large, very low quality loss - recommended |
+| [nomic-embed-text-v1.5-Q5_K_S.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q5_K_S.gguf) | Q5_K_S | 5 | 98 MB | large, low quality loss - recommended |
+| [nomic-embed-text-v1.5-Q6_K.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q6_K.gguf) | Q6_K | 6 | 113 MB | very large, extremely low quality loss |
+| [nomic-embed-text-v1.5-Q8_0.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-Q8_0.gguf) | Q8_0 | 8 | 146 MB | very large, extremely low quality loss - not recommended |
+| [nomic-embed-text-v1.5-f16.gguf](https://huggingface.co/second-state/Nomic-embed-text-v1.5-Embedding-GGUF/blob/main/nomic-embed-text-v1.5-f16.gguf) | f16 | 16 | 274 MB | very large, extremely low quality loss - not recommended |
 
 *Quantized with llama.cpp b2636*
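
To try one of the files above, a minimal Python sketch using the `huggingface_hub` and `llama-cpp-python` packages (both are assumptions here, not part of this repo; any GGUF-compatible runtime such as llama.cpp or LlamaEdge works equally well):

```python
# Minimal sketch (assumed tooling): download a quantization from the table
# above and compute an embedding with llama-cpp-python.
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the recommended Q5_K_M file (~103 MB) listed in the table.
model_path = hf_hub_download(
    repo_id="second-state/Nomic-embed-text-v1.5-Embedding-GGUF",
    filename="nomic-embed-text-v1.5-Q5_K_M.gguf",
)

# Load the model in embedding mode. nomic-embed-text models expect a task
# prefix (e.g. "search_query: " or "search_document: ") on the input text.
llm = Llama(model_path=model_path, embedding=True)
vector = llm.embed("search_query: What is GGUF?")
print(len(vector))  # embedding dimension (768 for this model)
```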