- no more oom issues (possibly) 💫💻🥊

## eligible model example

- use **cow-mistral3small** [7.73GB](https://huggingface.co/chatpig/flux2-dev-gguf/blob/main/cow-mistral3-small-q2_k.gguf) for [flux2-dev](https://huggingface.co/chatpig/flux2-dev-gguf)
- use **cow-gemma2** [2.33GB](https://huggingface.co/calcuis/cow-encoder/blob/main/cow-gemma2-2b-q4_0.gguf) for [lumina](https://huggingface.co/calcuis/lumina-gguf)
- use **cow-umt5base** [451MB](https://huggingface.co/calcuis/cow-encoder/blob/main/cow-umt5base-iq4_nl.gguf) for [ace-audio](https://huggingface.co/calcuis/ace-gguf)
- use **cow-umt5xxl** [3.67GB](https://huggingface.co/calcuis/cow-encoder/blob/main/cow-umt5xxl-q2_k.gguf) for [wan-s2v](https://huggingface.co/calcuis/wan-s2v-gguf) or any wan video model
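The links above point at the file *blob* pages; to fetch a file directly you can swap `blob` for `resolve` in the URL, which is the standard Hugging Face raw-download endpoint. A minimal sketch (the helper name `gguf_download_url` is ours, not part of any library):

```python
# Build the direct-download ("resolve") URL for a GGUF file hosted on
# Hugging Face, given its repo id and filename. The blob links in the
# list above become downloadable by replacing /blob/ with /resolve/.
def gguf_download_url(repo_id: str, filename: str) -> str:
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"


if __name__ == "__main__":
    # e.g. the cow-gemma2 encoder listed above
    print(gguf_download_url("calcuis/cow-encoder", "cow-gemma2-2b-q4_0.gguf"))
```

If you already use `huggingface_hub`, `hf_hub_download(repo_id=..., filename=...)` does the same fetch (with caching) without building the URL by hand.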